ninjaBuildPhase() {
    runHook preBuild

    local buildCores=1

    # Parallel building is enabled by default.
    if [ "${enableParallelBuilding-1}" ]; then
        buildCores="$NIX_BUILD_CORES"
    fi

    # Note: -l$NIX_BUILD_CORES is deliberately not passed; a load limit sized
    # for a single build would cap the load of the whole machine when several
    # builds run concurrently.
    local flagsArray=(
        -j$buildCores
        $ninjaFlags "${ninjaFlagsArray[@]}"
    )

    echoCmd 'build flags' "${flagsArray[@]}"
    TERM=dumb ninja "${flagsArray[@]}"

    runHook postBuild
}
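A standalone demonstration (not part of the hook) of the `${var-default}` expansion used above: an unset variable falls back to the default `"1"` (truthy), while a variable explicitly set to the empty string stays empty (falsy), which is how setting `enableParallelBuilding` to an empty value disables parallelism.

```shell
# "${var-default}" substitutes only when the variable is UNSET.
unset enableParallelBuilding
if [ "${enableParallelBuilding-1}" ]; then
    echo "unset: parallel building"      # this branch is taken
fi

enableParallelBuilding=""
if [ "${enableParallelBuilding-1}" ]; then
    echo "parallel building"
else
    echo "empty: serial building"        # this branch is taken
fi
```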

ninjaCheckPhase() {
    runHook preCheck

    if [ -z "${checkTarget:-}" ]; then
        if ninja -t query test >/dev/null 2>&1; then
            checkTarget=test
        fi
    fi

    if [ -z "${checkTarget:-}" ]; then
        echo "no test target found in ninja, doing nothing"
    else
        local buildCores=1

        # Parallel checking is enabled by default.
        if [ "${enableParallelChecking-1}" ]; then
            buildCores="$NIX_BUILD_CORES"
        fi

        local flagsArray=(
            -j$buildCores
            $ninjaFlags "${ninjaFlagsArray[@]}"
            $checkTarget
        )

        echoCmd 'check flags' "${flagsArray[@]}"
        TERM=dumb ninja "${flagsArray[@]}"
    fi

    runHook postCheck
}
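A minimal sketch of the target autodetection above, with a stubbed `ninja` so it runs without a real build directory (the stub is hypothetical): the phase only assumes a `test` target when `ninja -t query test` succeeds.

```shell
# Stub: pretend the build graph defines only a "test" target.
ninja() {
    [ "$1" = -t ] && [ "$2" = query ] && [ "$3" = test ]
}

checkTarget=""
if [ -z "${checkTarget:-}" ]; then
    if ninja -t query test >/dev/null 2>&1; then
        checkTarget=test
    fi
fi
echo "checkTarget=${checkTarget:-<none>}"   # prints "checkTarget=test"
```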

ninjaInstallPhase() {
    runHook preInstall

    local buildCores=1

    # Parallel installing is enabled by default.
    if [ "${enableParallelInstalling-1}" ]; then
        buildCores="$NIX_BUILD_CORES"
    fi

    # shellcheck disable=SC2086
    local flagsArray=(
        -j$buildCores
        $ninjaFlags "${ninjaFlagsArray[@]}"
        ${installTargets:-install}
    )

    echoCmd 'install flags' "${flagsArray[@]}"
    TERM=dumb ninja "${flagsArray[@]}"

    runHook postInstall
}
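A short demonstration of `:-` versus `-` substitution, since the hook uses both: `${installTargets:-install}` substitutes for unset *and* empty values, while the plain `-` form used for the `enableParallel*` flags substitutes only when the variable is unset.

```shell
installTargets=""
echo "${installTargets:-install}"    # prints "install": empty is substituted
echo "x${installTargets-install}x"   # prints "xx": empty survives plain "-"
unset installTargets
echo "${installTargets-install}"     # prints "install": unset is substituted
```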

if [ -z "${dontUseNinjaBuild-}" ] && [ -z "${buildPhase-}" ]; then
    buildPhase=ninjaBuildPhase
fi

if [ -z "${dontUseNinjaCheck-}" ] && [ -z "${checkPhase-}" ]; then
    checkPhase=ninjaCheckPhase
fi

if [ -z "${dontUseNinjaInstall-}" ] && [ -z "${installPhase-}" ]; then
    installPhase=ninjaInstallPhase
fi
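A sketch of the guard logic above (the variable values are hypothetical): the hook claims a phase only when the derivation has neither set that phase itself nor opted out via the corresponding `dontUseNinja*` flag.

```shell
buildPhase="my custom build"   # derivation provides its own build phase
dontUseNinjaCheck=1            # derivation opts out of the ninja check phase
unset checkPhase dontUseNinjaBuild

if [ -z "${dontUseNinjaBuild-}" ] && [ -z "${buildPhase-}" ]; then
    buildPhase=ninjaBuildPhase
fi
if [ -z "${dontUseNinjaCheck-}" ] && [ -z "${checkPhase-}" ]; then
    checkPhase=ninjaCheckPhase
fi

echo "buildPhase: $buildPhase"          # unchanged: the derivation's own phase wins
echo "checkPhase: ${checkPhase-unset}"  # "unset": dontUseNinjaCheck blocked it
```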