Previously, we would assert that the lockfiles are consistent during the
unpackPhase, but if the package has a patch for the lockfile itself then we must
wait until the patchPhase is complete to check.
This also removes an implicit dependency on the src attribute coming from
`fetchzip` / `fetchFromGitHub`, which happens to name the source directory
"source". Now we glob for it, so different fetchers will work consistently.
This is useful when buildLayeredImage is called in a generic way
that should allow simple (base) images to be built, which may not
reference any store paths.
If we just want to write a non-compiled script (e.g. writeDash), it’s
usually a lot faster just doing it locally. That’s what
`runCommandLocal` was introduced for, so let’s use it in `writers`.
When the `paths` argument is too big, `symlinkJoin` will fail with:
```
while setting up the build environment: executing '/nix/store/rm1hz1lybxangc8sdl7xvzs5dcvigvf7-bash-4.4-p23/bin/bash': Argument list too long
```
This is fixed by passing `paths` as a file instead of as an
environment variable.
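A minimal sketch of the mechanism, not the exact `symlinkJoin` implementation
(it assumes `runCommand` and `xorg.lndir` are in scope): `passAsFile` makes Nix
write the `paths` attribute to a file and expose its location as `$pathsPath`,
so the list never enters the environment.
```
runCommand "join"
  {
    inherit paths;
    # write `paths` to a file; the builder sees its path as $pathsPath
    passAsFile = [ "paths" ];
  }
  ''
    mkdir -p $out
    for p in $(cat $pathsPath); do
      ${xorg.lndir}/bin/lndir -silent $p $out
    done
  ''
```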
`git repack` and `git gc` sometimes print “Nothing new to pack.”
to stdout, which breaks redirecting the output to a JSON file.
Let’s move the stdout of all git calls where it is not used to stderr
so that we still receive the info but it does not pollute our output.
Fixes #78744
My previous change broke when there are more packages than the maximum
number of layers. I had assumed that the `store-path-to-layer.sh` script was
only ever passed a single store path, but that is not the case if
there are multiple packages going into the final layer. To fix this, we
loop through the paths going into the final layer, appending them to the
tar file and making sure they end up at the right path.
Changes the default fetcher in the Rust Platform to be the newer
`fetchCargoTarball`, and changes every application using the current default to
instead opt out.
This commit does not change any hashes or cause any rebuilds. Once integrated,
we will start deleting the opt-outs and recomputing hashes.
See #79975 for details.
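For readers unfamiliar with the mechanism, the per-package opt-out looks roughly
like the sketch below; the `legacyCargoFetcher` flag name is my understanding of
how the opt-out is spelled, and the other attribute values are placeholders.
```
rustPlatform.buildRustPackage {
  pname = "example";
  version = "1.0.0";
  src = exampleSrc;              # placeholder source
  # assumed opt-out flag: keep using the old fetchcargo and its existing hash
  legacyCargoFetcher = true;
  cargoSha256 = lib.fakeSha256;  # placeholder hash
}
```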
The readme was nice to discuss in the implementation PR, but now that this is
merged it's better to have an issue that can be linked against in PRs and
doesn't require further merges to update status.
Ported with a status update in #79975
By overriding each dependency on every level of the dependency tree we
are creating a lot of unnecessary instances of the same derivation.
Looking at the output size of `nix-instantiate --trace-function-calls
-vvvv …` and the execution time I got about a 10x improvement after
applying this change.
It was probably good intentions that led to these overrides, but in
practice no tooling (that I know of) really needs this. `carnix` and
`crate2nix` are fine without those overrides. Furthermore I believe that
it is the job of the tooling around `buildRustCrate` to provide a
coherent set of overrides. By not enforcing all of the overrides, debug
flags, verbosity, … to be the same throughout the closure we also allow
consumers to override specific aspects of the crates. Some (older?)
crates might need different `crateOverrides` than newer crates with the
same name. Currently such situations cannot (easily) be implemented
with the override in place.
This has several advantages:
1. It takes up less space on disk in-between builds in the nix store.
2. It uses less space in the binary cache for vendor derivation packages.
3. It uses less network traffic downloading from the binary cache.
4. It plays nicely with hashed mirrors like tarballs.nixos.org, which only
substitute --flat hashes on single files (not recursive directory hashes).
5. It's consistent with how simple `fetchurl` src derivations work.
6. It provides a stronger abstraction between input src-package and output
package, e.g., it's harder to accidentally depend on the src derivation at
runtime by referencing something like `${src}/etc/index.html`. Likewise, in
the store it's harder to get confused with something that is just there as a
build-time dependency vs. a runtime dependency, since the build-time
src dependencies are tarred up.
Disadvantages are:
1. It takes slightly longer to untar at the start of a build.
As currently implemented, this attaches the compacted vendor.tar.gz feature as a
rider on `verifyCargoDeps`, since both of them are relatively newly implemented
behaviors that change the `cargoSha256`.
If this PR is accepted, I will push forward the remaining rust packages with a
series of treewide PRs to update the `cargoSha256`s.
Since a layer is reserved for "customization", the image cannot
contain fewer than 2 layers.
The user gets the following message at evaluation time:
    nix-instantiate nixos/tests/docker-tools.nix
    trace: the maxLayers argument of dockerTools.buildLayeredImage function must be greather than 1 (current value: 1)
Building a docker image with darwin binaries just yields a confusing
error when run:
    standard_init_linux.go:211: exec user process caused "exec format error"
This change prevents people from building such images in the first place.
Previously I used `runCommand` to do the same. Using
`releaseTools.aggregate` seems a lot saner, and we might get nicer Hydra
output for the tests that are failing.
It used to be the case (ref missing) that cargo treated
`src/$libName.rs` as an alternative to `src/lib.rs` when the latter
wasn't present. Recently I failed to reproduce that with vanilla cargo,
and it started to cause pain with some crates of the form:
    some_crate/
    `- src
       |- main.rs
       `- some_crate.rs
We would build `src/some_crate.rs` and think it is a library while that
might not be the actual case. This crate is a valid `bin` crate, not a
`lib` crate, as far as I can tell from the samples I took.
I removed support for the previously required heuristic and commented
out the test cases in case we will need them again. We could crawl the
Git history, but chances are that the next person looking into this
doesn't know about the history.
Naive concatenation of $LD_LIBRARY_PATH can result in an empty
colon-delimited segment; this tells glibc to load libraries from the
current directory, which is definitely wrong, and may be a security
vulnerability if the current directory is untrusted. (See #67234, for
example.) Fix this throughout the tree.
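A hedged illustration of the safe pattern, as it might appear inside a Nix
string that generates a wrapper script (the `$out/lib` path is a placeholder):
only append the old value when it is set and non-empty, so no empty
colon-delimited segment can appear.
```
''
  # append the pre-existing value only if it is set and non-empty
  export LD_LIBRARY_PATH=$out/lib''${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
''
```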
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
While looking at the graph of all the outputs in my personal binary
cache it became obvious that we have a lot of self references within the
package set. That isn't an issue by itself. However, it increases the
size of the binary cache for every (reproducible) build of a package
that carries references to itself. You can no longer deduplicate the
outputs since they are all unique. One of the ways to get rid of (a few)
references is to rewrite all the symlinks that are currently used to be
relative symlinks. Two builds of something that didn't really change but
carries a self-reference can then be stored as the same NAR file again.
I quickly hacked together this change to see if that would yield any
success. My bash scripting skills are probably not great but so far it
seems to somewhat work.
When this fails, the user may want to copy-paste the path to the "bad"
Cargo.lock file to inspect. The trailing `.` on `$cargoDeps.` gets caught in
most terminal copy-pastes. Since half the lines already don't have it, this
removes it from all of them for consistent output.
This helps us instruct rustc to build tests instead of binaries. The
actual build will then ONLY produce test executables. This is a first
step towards having rust crate tests within nixpkgs.
We default back to only a single output in test cases since that is the
only reasonable thing to do here.
Producing libraries or binaries in addition to tests would theoretically
be feasible but usually generates different dependency trees. It is very
common to have some libraries in `[dev-dependencies]` within Cargo.toml
just for your tests. To not start mixing things up, going with a
dedicated derivation for the test build sounds like the best choice for
now.
To use this you must provide a proper test dependency chain to
`buildRustCrate` (as you would usually do with your non-test inputs)
and then set the `buildTests` attribute to `true`. The derivation will
then contain all tests that were built in `$out/tests`. All common test
patterns and directories should be supported and tested by this change.
Below is an example of how you would run a single test from the derivation.
This commit contains some more examples in the `buildRustCrateTests`
attribute set that might be helpful.
```
let
  drv = buildRustCrate {
    …
    buildTests = true;
  };
in runCommand "test-my-crate" {} ''
  touch $out
  exec ${drv}/tests/my-test
''
```
While unifying most of the lib function calls I accidentally changed
the filterSource functions as well. Since there were no tests I ended
up forgetting about this case (even though I ran into it…).
When tar'ing store paths into layered archives while building layered
images, don't use the absolute nix store path, so that tar won't complain
if something new is added to the nix store.
When building the final docker image, ignore any file changes tar
detects in the layers; they are all immutable, and the only thing that
might change is the number of hard links due to store optimization.
Most stdenv wrappers already work like this -- it allows greater
customisation. We just have to be careful to remove arguments we're
using that shouldn't be passed to stdenv. I've been conservative
here, because fetchcargo checksums shouldn't change lightly.
Before, every docker image had three extra layers:
1. A `closure` layer which is an internal implementation detail of
calculating the closure of the container
2. a `name-config.json` layer which is the image's run-time
configuration, and has no business being *in* the image as a layer.
3. a "bulk-layers" layer which is again an implementation detail
around collecting the image's closure.
None of these layers need to be in the final product.
This allows things like hooks other than postInstall to be passed
through to mkDerivation, which is very useful when customising or
debugging a package.
This reduces the size of Hello World [1] from 3.06 MiB to 678 KiB.
[1] As measured by nix-shell -p 'writers.writeHaskellBin "hello" {} "main = putStrLn \"hello\""' --run 'ls -l `which hello`'
The previous lines differed only in the kind of dependencies and were
otherwise exactly the same. Moving this into a function that takes care
of it makes the entire thing a bit more readable.
We can get rid of a bunch of workarounds that were in the build script
before by just passing on the `crateBin` attribute.
Before, we converted the list of attributes to a string only to convert
it back in bash during the build phase. We can do the entire looping
through builds in Nix and thus need no repeated conversion and parsing
of attributes.
The big part that still remains bash is the heuristic that cargo
introduced and that we can't do at eval time.
That code had been in the derivation for a while but no explanation was
given why it is needed. It might be helpful to our future selves to
document why things are done the way they are.
The expression is already long and confusing enough without the color
stuff sprinkled in. Moving it to a dedicated file makes sense.
I switched a bit of the color support code to pure Nix since there
wasn't much point in doing it in bash when we can just do it in Nix.
We can just use `lib` instead of `builtins` in all cases but the
`hashString` case. Also changed a few lines to make use of some optional
helpers from lib.
Adding empty variables can lead to this problem:
```diff
wrapProgram \
./pye_menu_shell \
--prefix PATH : /nix/store/4c3z5r6yxsf2cxwwyazhdn92xixn4j5b-python3-3.7.5/bin:/nix/store/b3l3niilvqcxcsbxmd6sgqk1dy1rk81c-pye-menu-1.0/bin:/nix/store/y8j1cfj8d9r5rbbxc22w7hnfjw5f4fd3-cairo-1.16.0-dev/bin:/nix/store/6mg7lfbdh9pgx7pbxr3544qqbrigdl1q-freetype-2.10.1-dev/bin:/nix/store/gpszqcy0xi0lavbbjdq82zkkjp3jbp2a-bzip2-1.0.6.0.1-bin/bin:/nix/store/031c5pk5lzabgmpqpyd46hzi625as6bp-libpng-apng-1.6.37-dev/bin:/nix/store/f8kl7kmpv130aw9zm542p74a3hg0yc13-fontconfig-2.12.6-bin/bin:/nix/store/bqp30vkncmm222mjvwggz0s7p318sflj-expat-2.2.7-dev/bin:/nix/store/w57xa8g4s4aviwmqwgra7m5hwj2b005m-glib-2.60.7-dev/bin:/nix/store/v5d4966ahvfir2hwpv003022f3pb7vik-gettext-0.19.8.1/bin:/nix/store/qpvxhl1jr0fxnrx9idnpdagqs00m5m2z-glib-2.60.7/bin \
--set PYTHONNOUSERSITE true \
--set GDK_PIXBUF_MODULE_FILE /nix/store/7ddlakx6xjczqbfs80xjd14f30fzadws-gdk-pixbuf-2.38.1/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache \
--prefix XDG_DATA_DIRS : /nix/store/0snjc1qg89zqn3v35l9d55xrykh9nj5c-gtk+3-3.24.10/share/gsettings-schemas/gtk+3-3.24.10:/nix/store/b41z51vdv11n6df8ki5vj8dynxw98f9l-gsettings-desktop-schemas-3.32.0/share/gsettings-schemas/gsettings-desktop-schemas-3.32.0:/nix/store/0snjc1qg89zqn3v35l9d55xrykh9nj5c-gtk+3-3.24.10/share/gsettings-schemas/gtk+3-3.24.10 \
- --prefix GST_PLUGIN_SYSTEM_PATH_1_0 : \
+ --prefix GST_PLUGIN_SYSTEM_PATH_1_0 : "" \
--prefix GI_TYPELIB_PATH : /nix/store/0snjc1qg89zqn3v35l9d55xrykh9nj5c-gtk+3-3.24.10/lib/girepository-1.0:/nix/store/z29l5xaaxh1s0697mcldj71ab0zshry1-librsvg-2.44.15/lib/girepository-1.0:/nix/store/pija1xzm7izxfb5m2hvhvlwp1l38ffxa-gobject-introspection-1.60.2/lib/girepository-1.0 \
- --prefix GRL_PLUGIN_PATH :
+ --prefix GRL_PLUGIN_PATH : ""
```
Where the diff is to highlight the problem: we don't have a valid value
for GST_PLUGIN_SYSTEM_PATH_1_0 or GRL_PLUGIN_PATH, and instead of
passing the empty string, the empty string gets unquoted somewhere, so we
end up passing no arguments. Thus the parser in wrapProgram takes
--prefix as the argument of GST_PLUGIN_SYSTEM_PATH_1_0, and then
GI_TYPELIB_PATH is missing its --prefix, so wrapProgram complains/dies.
The easiest change is to not add empty arguments to the wrapper.
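A hedged sketch of how the generating Nix code can avoid this (`gstPluginPath`
is a hypothetical variable): only emit the `--prefix` flag when there is a
non-empty value to go with it.
```
# Only add the flag when the value is non-empty; otherwise wrapProgram
# mis-parses the following flag as the value.
lib.optionalString (gstPluginPath != "")
  ''--prefix GST_PLUGIN_SYSTEM_PATH_1_0 : "${gstPluginPath}"''
```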
When updating to cpio-2.13 in fe758f5fa3,
a patch from SUSE was dropped. This patch was intended to resolve
CVE-2015-1197, and introduced the '--extract-over-symlink' option to
disable its own effects.
CVE-2015-1197 was fixed in the cpio-2.13 release [1] by other means,
making this patch useless.
Given that this patch is no longer used, we do not need to disable its
effects anymore with the `--extract-over-symlink` argument switch.
This commit fixes #74984.
[1] https://lists.gnu.org/archive/html/info-gnu/2019-11/msg00002.html
depmod looks for the files modules.order and modules.builtin, which are
generated at kernel build time but were previously not passed to
the modules-shrunk derivation.
dockerTools.buildImageWithNixDb: export USER
Changes to Nix user detection (./src/nix-channel/nix-channel.cc#L-166)
cause this function to error. Exporting USER fixes this.
I haven't been doing any maintenance for a long time now and not only
do I get notified, it also creates a fake impression that all these
packages had at least one maintainer when in practice they had none.
We shouldn’t force the user to have a C compiler in scope, just
because the derivation is forced to build locally. That can’t be
counted as “lightweight” anymore.
Co-Authored-By: Silvan Mosberger <contact@infinisil.com>
A definition I’ve been copy-pasting everywhere so far, so it’s finally
time to add it to nixpkgs.
I’m using a remote builder for my regular nix builds, so trivial
`runCommand`s which first try a substitution and then copy the inputs
to the builder to run for 0.2s are quite noticeable.
If we just always build these, we gain some build time, so let’s make
it easy to switch from remote to local.
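For reference, a minimal sketch of such a definition on top of the existing
`runCommand` (the actual implementation in nixpkgs may structure its
arguments differently):
```
runCommandLocal = name: env: buildCommand:
  runCommand name (env // {
    # build on the local machine and skip substitution attempts
    preferLocalBuild = true;
    allowSubstitutes = false;
  }) buildCommand;
```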
This has the same motivation as fetchFromGitHub/fetchFromGitLab --
it's cheaper to download a tarball of a single revision than it is to
download a whole history.
I could have gone with domain/group/repo, like fetchFromGitLab, but it
would have made implementation more difficult, and this syntax means
it's a drop-in replacement for fetchgit, so I decided it wasn't worth
it.
tensorflow assumes $USER is set to something; otherwise it complains
like this:
```
FATAL: $USER is not set, and unable to look up name of current user: (error: 0): Success
Traceback (most recent call last):
File "./configure.py", line 1602, in <module>
main()
File "./configure.py", line 1399, in main
_TF_MAX_BAZEL_VERSION)
File "./configure.py", line 478, in check_bazel_version
['bazel', '--batch', '--bazelrc=/dev/null', 'version'])
File "./configure.py", line 156, in run_shell
output = subprocess.check_output(cmd)
File "/nix/store/drr8qcgiccfc5by09r5zc30flgwh1mbx-python3-3.7.5/lib/python3.7/subprocess.py", line 411, in check_output
**kwargs).stdout
File "/nix/store/drr8qcgiccfc5by09r5zc30flgwh1mbx-python3-3.7.5/lib/python3.7/subprocess.py", line 512, in run
output=stdout, stderr=stderr)
```
Spotted while changing the hash of its fixed-output derivation on
purpose.
We could also set this in the tensorflow-specific part, but very likely,
other programs will fail as well.
This cuts down the dependency tree on some rust builds where a crate
exposes not just a binary but also a library. `$out/lib` contained a bunch
of extra support files that, among other information, carry linker flags
(including the full path to link-time dependencies). In the worst case this led
to some binary outputs depending on the full build closure of rust
crates.
Moving all the `$out/lib` files to `$lib/lib` solves this nicely.
`lib` might be a bit weird here as they are most of the time just rlib
files (rust libraries). Those are essentially only required during
compilation, but they can also be shared objects (like with traditional
C-style packages), which is why I went with `lib` for the new output.
One of the caveats we are running into here is that we do not (always)
know ahead of time whether a crate produces just a library or just a binary.
Cargo allows for some ambiguity regarding whether or not a crate
provides one, two, … binaries and libraries as its outputs. Ideally we
would be able to rely on the `crateType` entirely, but so far that isn't
the case. More work in that area might show how difficult that actually
is.
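A hedged illustration of the consumer-visible effect (`myCrate` is a placeholder
for a `buildRustCrate` result): things that only need the binaries reference
`$out`, while the rlibs and their linker metadata now live in the separate
`lib` output.
```
{
  binPath = "${myCrate}/bin";       # binaries only, small runtime closure
  rlibPath = "${myCrate.lib}/lib";  # rlibs / shared objects, compile-time only
}
```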
If an empty string is passed to `-idirafter`, it breaks gcc. This commit makes
the stdenv less fragile by expanding out the shell glob and ensuring no empty
arguments get passed.
Before, we'd always use `cc = null`, and check for that. The problem is
this breaks for cross compilation to platforms that don't support a C
compiler.
It's a very subtle issue. One might think there is no problem because we
have `stdenvNoCC`, and presumably one would only build derivations that
use that. The problem is that one still wants to use tools at build-time
that are themselves built with a C compiler, and those are gotten via
"splicing". The runtime version of those deps will explode, but the
build time / `buildPackages` versions of those deps will be fine, and
splicing attempts to work around this by using `builtins.tryEval` to filter out
any broken "higher priority" packages (runtime is the default and
highest priority) so that both `foo` and `foo.nativeDrv` work.
However, `tryEval` only catches certain evaluation failures (e.g.
exceptions), and not arbitrary failures (such as `cc.attr` when `cc` is
null). This means `tryEval` fails to let us use our build time deps, and
everything comes apart.
The right solution is, as usual, to get rid of splicing. Or, barring
that, to make it so `foo` never works and one has to explicitly do
`foo.*`. But that is a much larger change, and certainly one unsuitable
to be backported to stable.
Given that, we instead make an exception-throwing `cc` attribute, and
create a `hasCC` attribute for those derivations which wish to
conditionally use a C compiler: instead of doing `stdenv.cc or null ==
null` or something similar, one does `stdenv.hasCC`. This allows querying
without "tripping" the exception, while also allowing `tryEval` to work.
No platform without a C compiler is yet wired up by default. That will
be done in a following commit.
Go beyond the obvious setup hooks now, with a bit of sed, with one skipped case:
- cc-wrapper's `dontlink`, because it is already handled.
Also, escaping was added manually in Nix files.
Using wrapProgram adds a call to `bash` around every call
of `execline`, which clearly misses the basic idea behind
`execline` in the first place …
This reverts commit b64d25c447.
One issue with cargoSha256 is that it's hard to detect when it needs to
be updated. It's possible to upgrade a package, forget to
update cargoSha256, and end up running with old versions of the program or
libraries.
This commit introduces `verifyCargoDeps` which, when enabled, will check
that the Cargo.lock is not out of date in the cargoDeps by comparing it
with the package source.
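A minimal usage sketch (package values are placeholders; `lib.fakeSha256`
stands in for the real hash):
```
rustPlatform.buildRustPackage {
  pname = "example";
  version = "0.1.0";
  src = exampleSrc;             # placeholder source
  cargoSha256 = lib.fakeSha256; # placeholder hash
  # fail the build if the vendored Cargo.lock no longer matches the source
  verifyCargoDeps = true;
}
```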
Quoting from the splitString docstring:
NOTE: this function is not performant and should never be used.
This replaces trivial uses of splitString for splitting version
strings with the (potentially builtin) splitVersion.
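For illustration, both attributes below evaluate to [ "1" "2" "3" ]; the second
form uses the builtin:
```
{
  # slow generic string splitting
  old = lib.splitString "." "1.2.3";
  # builtin, intended for version strings
  new = builtins.splitVersion "1.2.3";
}
```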
There is a bug in this feature: It allows extra arguments to leak in
from the environment. For example:
$ export extraFlagsArray=date
$ man ls
Note that you get the man page for date rather than for ls. This happens
because 'man' happens to use a wrapper (to add groff to its PATH).
An attempt to fix this was made in 5ae18574fc in PR #19328 for
issue #2537, but 1. That change didn't actually fix the problem because
it addressed makeWrapper's environment during the build process, not the
constructed wrapper script's environment after installation, and 2. That
change was apparently accidentally lost when merged with 7ff6eec5fd.
Rather than trying to fix the bug again, we remove the extraFlagsArray
feature, since it has never been used in the public repo in the ten
years it has been available.
wrapAclocal continues to use its own, separate flavor of extraFlagsArray
in a more limited context. The analogous bug there was fixed in
4d7d10da6b in 2011.
This removes the unnecessary compiler build dependency. We also set
    preferLocalBuild = true;
    allowSubstitutes = false;
to not farm out the build to a remote builder or bother trying to
find a binary substitution.
The architecture of an image should default to the architecture for
which that image is being composed or pulled. buildPackages.go.GOARCH is
an easy way to compute that architecture with the correct terminology.
These should not cause issues in practice, but it is a good idea to handle them.
* prefix and targetOffset are mandatory, as they are always set by the generic builder.
* wrapPrefixVariables and dontWrapGApps are now defaulting to empty value, as they are not mandatory.
Before this change, buildRustCrate always called rustc with
--extern libName=[...]libName[...]
However, Cargo permits using a different name under which a dependency
is known to a crate. For example, rand 0.7.0 uses:
[dependencies]
getrandom_package = { version = "0.1.1", package = "getrandom", optional = true }
Which introduces the getrandom dependency such that it is known as
getrandom_package to the rand crate. In this case, the correct extern
flag is of the form
--extern getrandom_package=[...]getrandom[...]
which is currently not supported. In order to support such cases, this
change introduces a crateRenames argument to buildRustCrate. This
argument is an attribute set of dependencies that should be renamed. In
this case, crateRenames would be:
{
"getrandom" = "getrandom_package";
}
The extern options are then built such that if the libName occurs as
an attribute in this set, its value will be used as the local
name. Otherwise libName will be used as before.
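For illustration, a `buildRustCrate` call for the rand example above could pass
the rename like this (the other attribute values are placeholders):
```
buildRustCrate {
  crateName = "rand";
  version = "0.7.0";
  dependencies = [ getrandom ];  # placeholder for the getrandom crate derivation
  crateRenames = {
    "getrandom" = "getrandom_package";
  };
}
```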
By default, luarocks defines the following mirrors:
83093e7da7/src/luarocks/core/cfg.lua (L205)
Let's add them to nixpkgs. I have modified luarocks-nix to generate the
proper nixpkgs urls.
I bump luarocks-nix in the following commits.
This is a new package that provides a shell hook to make it easy to
declare manpages and shell completions in a manner that doesn't require
remembering where to actually install them. Basic usage looks like
{ stdenv, installShellFiles, ... }:
stdenv.mkDerivation {
  # ...
  nativeBuildInputs = [ installShellFiles ];
  postInstall = ''
    installManPage doc/foobar.1
    installShellCompletion --bash share/completions/foobar.bash
    installShellCompletion --fish share/completions/foobar.fish
    installShellCompletion --zsh share/completions/_foobar
  '';
  # ...
}
See source comments for more details on the functions.
applyPatches applies a list of patches to a source directory.
For example to patch nixpkgs you can use:
applyPatches {
  src = pkgs.path;
  patches = [
    (pkgs.fetchpatch {
      url = "1f770d2055.patch";
      sha256 = "1nlzx171y3r3jbk0qhvnl711kmdk57jlq4na8f8bs8wz2pbffymr";
    })
  ];
}
There were very many conflicts, basically all due to
name -> pname+version. Fortunately, almost everything was auto-resolved
by kdiff3, and for now I just fixed up a couple of evaluation problems,
as verified by the tarball job. There might be some fallout from these
conflicts, but I believe it should be minimal.
Hydra nixpkgs: ?compare=1538299
They can always be regenerated during the actual build, and they are sometimes
random, e.g. in Tensorflow:
platforms -> NIX_BUILD_TOP/tmp/install/35282f5123611afa742331368e9ae529/_embedded_binaries/platforms
This setup hook modifies a Perl script so that any "-I" flags in its shebang
line are rewritten into a "use lib ..." statement on the next line. This gets
around a limitation in Darwin, which will not properly handle a script whose
shebang line exceeds 511 characters.
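A hedged usage sketch; `shortenPerlShebang` is my assumption for the name of
this hook, and the package values are placeholders:
```
stdenv.mkDerivation {
  pname = "example-perl-tool";
  version = "1.0";
  # the setup hook provides a shell function that rewrites long shebangs
  nativeBuildInputs = [ shortenPerlShebang ];
  postInstall = ''
    shortenPerlShebang $out/bin/example
  '';
}
```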