# Nixpkgs CI evaluation
The code in this directory is used by the [eval.yml](../../.github/workflows/eval.yml) GitHub Actions workflow to evaluate the majority of Nixpkgs for all PRs, making sure that no evaluation failures are encountered when the development branches are later processed by Hydra.
Furthermore, it allows local evaluation using:
```
nix-build ci -A eval.full \
  --max-jobs 4 \
  --cores 2 \
  --arg chunkSize 10000
```
- `--max-jobs`: The maximum number of derivations to run at the same time. Each [supported system](../supportedSystems.nix) gets its own derivation, so it doesn't make sense to set this higher than the number of supported systems.
- `--cores`: The number of cores to use for each job. It is recommended to set this to the number of cores on your system divided by `--max-jobs` (see the example after this list).
- `chunkSize`: The number of attributes evaluated simultaneously on a single core. Lowering this decreases memory usage at the cost of increased evaluation time. If it is set too high, there won't be enough chunks to process in parallel, which also increases evaluation time.
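
For instance, on a hypothetical machine with 16 cores and 64GB of memory (the machine size here is only an illustrative assumption), `--cores` could be set to the total core count divided by `--max-jobs`:

```
# Hypothetical 16-core / 64GB machine: with --max-jobs 4,
# each job gets 16 / 4 = 4 cores.
nix-build ci -A eval.full \
  --max-jobs 4 \
  --cores 4 \
  --arg chunkSize 10000
```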
A good default is to set `chunkSize` to 10000, which leads to about 3.6GB of peak memory usage per core. This makes it suitable for fully utilising machines with 4 cores and 16GB of memory (4 × 3.6GB ≈ 14.4GB), 8 cores and 32GB of memory, or 16 cores and 64GB of memory.
Note that 16GB of memory is the recommended minimum; with less than 8GB of memory, evaluation time suffers greatly.
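
As a rough sketch for a memory-constrained machine (assuming, as described above, that lowering `chunkSize` reduces per-core memory usage), one could trade evaluation time for memory; the numbers below are illustrative, not measured:

```
# Illustrative only: a machine with 4 cores and 8GB of memory.
# Lowering chunkSize decreases per-core memory usage (see above),
# at the cost of a longer evaluation.
nix-build ci -A eval.full \
  --max-jobs 2 \
  --cores 2 \
  --arg chunkSize 5000
```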