Previously, we stored the tarballs from the hosted Git providers directly in the cache. However, as we've seen with `fetchFromGitHub` etc., these files may change subtly.
This commit therefore repacks the dependencies before storing them in the cache.
A `package-lock.json` file can contain multiple instances of the same dependency, which caused unnecessary downloads and duplicate index entries in the generated cache.
The `sparseCheckout` argument allows the user to specify directories or
patterns of files, which Git uses to filter the files it should check out.
Git expects a multi-line string on stdin (a "newline-delimited list", see
`git-sparse-checkout(1)`), but within nixpkgs it is more consistent to
use a list of strings instead. The list elements are joined into a
multi-line string only before being passed to the builder script.
A deprecation warning is emitted if a (multi-line) string is passed to
`sparseCheckout`, but for the time being it is still accepted.
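A minimal sketch of the list form, assuming a `fetchgit` call (URL, rev, and hash are placeholders):

```nix
fetchgit {
  url = "https://example.org/some/repo.git";
  rev = "refs/tags/v1.2.3";
  sha256 = "...";
  # directories or patterns to check out; joined into a newline-delimited
  # string before being handed to the builder
  sparseCheckout = [
    "directory/to/include"
    "another/directory"
  ];
}
```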
To the user running the docker image: if a Nix binary is available in
the resulting derivation, this then behaves like a single-user Nix
installation, except that already-written /nix/store paths can't be
changed. Most notably, it lets Nix work without having to rely on a
chroot store in the image.
PostScript Printer Description (ppd) files
describe printer features and capabilities.
They are usually evaluated by CUPS to convert
print jobs into a format suitable for a printer.
The conversion is often accomplished by commands
or even short shell scripts inside the ppd files.
ppd files are included in many printer driver packages.
Their scripts sometimes refer to other executables;
some of them are more common (like `perl`),
others are more exotic (like `rastertohp`).
If an executable is called with its name alone,
the effects of the ppd file depend on whether
the executable is in the PATH of CUPS,
and on the executable's version.
If an executable is called with an absolute path
(like `/usr/bin/perl`), it won't work at all in NixOS.
The commit at hand adds a setup hook that uses
the `fixupPhase` to substitute certain executables'
invocations in ppd files with absolute paths.
To use it, add the hook to `nativeBuildInputs` and
provide a list of executable names in `ppdFileCommands`.
Each executable must be available in the
package that is being built, or in `buildInputs`.
The setup hook's script then looks for ppd files in
`share/cups/model` and `share/ppds` in each output,
and replaces executable names with their absolute paths.
If ppd files need to be patched in unorthodox locations or
the setup hook needs to be invoked manually for other reasons,
one may leave the list `ppdFileCommands` empty to
avoid automatic processing of ppd files, then call
the shell function `patchPpdFileCommands` directly.
Details are described in the file `patch-ppd-hook.sh`.
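A rough usage sketch; the hook attribute name (`patchPpdFilesHook` here) and the package, inputs, and command names are illustrative:

```nix
stdenv.mkDerivation {
  pname = "example-printer-driver";
  version = "1.0";
  src = ./.;
  # the hook runs during fixupPhase
  nativeBuildInputs = [ patchPpdFilesHook ];
  # executables referenced by the ppd files must be found here
  # (or in the package being built)
  buildInputs = [ ghostscript perl ];
  # invocations of these commands in ppd files are replaced
  # with absolute paths
  ppdFileCommands = [ "gs" "perl" "rastertohp" ];
}
```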
Notes on the motivation for this setup hook:
Most packages in nixpkgs that provide
ppd files do not patch those ppd files at all.
This is not fatal when the executables are just called
with their names since the user can add packages
with the executables to `services.printing.drivers`.
E.g. if the user adds `pkgs.perl`, then all ppd
files that invoke `perl` will work as expected.
Nevertheless, to make these ppd files independent of
their execution environment, command invocations should
be substituted with absolute paths into the nix store.
This is similar to patching shebang lines so scripts can be
called independently of having the interpreter in the PATH.
The hook script in this commit is meant to support new packages
`foomatic-db*` which will generate several thousands of
ppd files referencing a plethora of different executables.
During development of these packages, I realized that
it's quite hard to patch ppd files in a robust way.
While binary names like `rastertokpsl` seem to be sufficiently
unique to be patched with `sed`, names like `date` or `gs`
are hard to patch without producing "false positives",
i.e., coincidental occurrences of the executable's name that do
*not* refer to the executable and should not be patched at all.
As this problem also affects other packages,
it seems reasonable to put a robust implementation
in its own setup hook so that other
packages can use it without much effort.
Notes on the implementation:
The ppd file format is far from trivial.
The basic structure consists of key-value pairs;
keys may occur multiple times.
Only a small subset of keys may contain
executable names or shell scripts in their values.
Some values may span multiple lines;
a line break might even occur in the middle of a token.
Some executable names also occur in other keys by accident
where they must not be substituted (e.g. `gs` or `date`).
It is necessary to provide the list of command
names that will be patched for two reasons:
ppd files often contain "tokens" that might look
like commands (e.g. "file" or "host") but aren't;
these would erroneously get patched.
Also, looking for everything that might be a command
would slow down the patching process considerably.
The implementation uses `awk` to detect
keys that might contain executable names;
only their values are treated for substitution.
This avoids most cases of "overzealous" substitutions.
Since values may span multiple lines,
`sed` alone (while faster than `awk`) cannot focus
its substitution capabilities on relevant keys.
An elaborate set of regular expressions further helps
to minimize the probability of "false positives".
Several tricks are employed to speed up `awk`.
Notably, relevant files are identified with
`grep` before `awk` is applied to those files only.
Note that the script probably cannot handle fancy command
names (like spaces or backslashes as part of the name).
Also, there are still edge cases that the script would
mistakenly skip, e.g. if a shell script contains a
line break in the middle of an executable's name;
although ppd files permit such constructs,
I have yet to see one.
ppd files may be gzipped.
The setup hook accepts gzipped ppd files:
It decompresses them, substitutes paths, then recompresses them.
However, Nix cannot detect substituted paths as
runtime dependencies in compressed ppd files.
To ensure substituted paths are propagated as
runtime dependencies, the script adds each substituted
path to the variable `propagatedBuildInputs`.
Since this might not be enough for multi-output packages,
those paths are also written directly to
`nix-support/propagated-build-inputs`.
See the comment in `patch-ppd-hook.sh` for details.
Finally, the setup hook comes with a small test that
probes some edge cases with an artificial ppd file.
References:
* https://www.cups.org/doc/spec-ppd.html
* general ppd file specification
* lists some keys that may contain
executable names or shell scripts
* https://refspecs.linuxfoundation.org/LSB_4.0.0/LSB-Printing/LSB-Printing/ppdext.html
* lists some keys that may contain
executable names or shell scripts
* https://en.wikipedia.org/wiki/PostScript_Printer_Description#CUPS
* lists the usual locations of ppd files
There are two problems: first that we end up splitting on spaces in the
loop. Even when that is fixed, we still would split on spaces in the
`export` inside the loop. We need to guard against both.
Fixes #199298
Confirmed that it fixes the case mentioned in the ticket:
```console
[nix-develop]$ $(nix-build -I nixpkgs=/home/shana/programming/nixpkgs Cargo.nix -A rootCrate.build --no-out-link)/bin/nix-rustc-env-escape-repro
Expecting three words, got: first second third
```
I think this is going to cause a rebuild of every Rust package even if
they were unaffected, not much we can do here.
If `fetchSubmodules = false` is passed to `fetchFromGitLab`, then there's the
following error:
error: anonymous function at /nix/store/9m8drnpifyl5qsx93g6ll2xw6wkps03z-source/pkgs/build-support/fetchurl/default.nix:41:1 called with unexpected argument 'fetchSubmodules'
at /nix/store/9m8drnpifyl5qsx93g6ll2xw6wkps03z-source/pkgs/build-support/fetchzip/default.nix:36:1:
35|
36| fetchurl ((
| ^
37| if (pname != "" && version != "") then
When building a docker image using `dockertools.buildLayeredImage`, the
resulting image layers are passed to `jq` through the command line. When
building an image with too many layers this would exceed the maximum
command line argument length.
Hence, we store the list of layers in the Nix store and pass them to
`jq` as a file argument using `--slurpfile`.
Fixes #140908.
*Flags implies a list
slightly relevant:
> stdenv: start deprecating non-list configureFlags https://github.com/NixOS/nixpkgs/pull/173172
The makeInstalledTests function in `nixos/tests/installed-tests/default.nix` isn't available outside of nixpkgs, so
this isn't a breaking change.
Tests from the bazelTestTargets argument will be run before the build. The new bazelTestFlags argument can be used to pass additional flags to this phase.
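For example (only the two new arguments come from this change; the remaining attributes are placeholders and other required arguments are omitted):

```nix
buildBazelPackage {
  pname = "example";
  version = "1.0";
  src = ./.;
  # test targets run before the build
  bazelTestTargets = [ "//pkg:unit_tests" ];
  # extra flags for the test invocation
  bazelTestFlags = [ "--test_output=errors" ];
}
```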
The source used to download a particular package still isn't
deterministic in nuget. Even worse, the hash of the package can vary
between sources. This makes nuget use the first enabled source
containing the package.
The order of the dependencies may be slightly different because it now
uses glob order of the lower-case package names and versions, instead of
sorting the output.
If the package was actually downloaded from the first source that contains
it, then it will be hashed from disk to avoid downloading it
again.
On darwin the clang driver always sets -D_FORTIFY_SOURCE=0 under asan.
This causes -Werror to trip over a macro redefinition:
<command line>:1:9: error: '_FORTIFY_SOURCE' macro redefined [-Werror,-Wmacro-redefined]
#define _FORTIFY_SOURCE 2
^
To avoid this, let's always explicitly undefine it before redefining it.
Fixes the problem introduced by 12b3066aae
which caused nixos/release.nix to return the wrong attributes, while
intending to only affect nixos/lib's runTest.
This also removes callTest from the test options, because callTest is
only ever invoked by all-tests.nix.
There are the following issues with the current implementation:
* `fetchurl` with a tarball from GitHub appears to break occasionally
because the tarballs are not necessarily reproducible. For that reason
`fetchFromGitHub` already unpacks the tarball, because the contents are
actually reproducible in contrast to the tarball itself. To get the same
behavior here, we use `fetchzip` now (and `applyPatches` on top to apply
additional patches if needed); see the sketch after this list.
* Fixes the way patches are applied. Previously, when having patches
for a git checkout of the app, these wouldn't apply because the
`appname-version` prefix is missing.
* Because all old hashes are broken with this, I added an evaluation
check that breaks evaluation when using the old API (i.e. with
`name`/`version` which are not needed anymore).
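A rough sketch of the new fetch shape (URL, hash, and patch file are placeholders):

```nix
applyPatches {
  src = fetchzip {
    url = "https://github.com/owner/app/archive/v1.2.3.tar.gz";
    hash = "...";
  };
  patches = [ ./some-additional-fix.patch ];
}
```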
This breaks the builder when a nix-shell or keepBuildTree is used. The
issue occurs because paths to cargo lockfiles are read with NIX_BUILD_TOP,
which is not reliable.
This breaks a nix-shell because NIX_BUILD_TOP simply is not set, causing
an invalid path to be used. This can be worked around using
NIX_BUILD_TOP=$PWD, but that obviously is not great.
This breaks keepBuildTree because it changes the working directory to a
different path than NIX_BUILD_TOP. Since the lockfiles are copied based
on the working directory, but read based on NIX_BUILD_TOP, this causes
the hook to not be able to find them.
This was solved by both reading these files based on the working directory,
using absolute paths to avoid having to traverse back in the directory tree.
Fixes: #138554
See https://systemd.io/PORTABLE_SERVICES/ for the definition of
portable services. This tooling is analogous to the `pkgs.dockerTools.buildImage`
tooling and is called `pkgs.portableService`.
systemd (since version 239) supports a concept of “Portable Services”.
“Portable Services” are a delivery method for system services that uses
two specific features of container management:
* Applications are bundled. I.e. multiple services, their binaries and all
their dependencies are packaged in an image, and are run directly from it.
* Stricter default security policies, i.e. sandboxing of applications.
The primary tool for interacting with Portable Services is portablectl,
and they are managed by the systemd-portabled service.
This function will create a squashfs raw image in `result/$pname_$version.raw`
that has the files required by the portable services spec, and all the
dependencies of the running program in the nix store.
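A hypothetical usage sketch; only `pname` and `version` appear in this description, so the `units` attribute below is an assumption:

```nix
pkgs.portableService {
  pname = "example-service";
  version = "1.0";
  # systemd unit files to ship inside the squashfs image (assumed attribute)
  units = [ ./example-service.service ];
}
```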
There is context here that I needed when resolving an issue in which
libc was added to NIX_CFLAGS_COMPILE before the C++ stdlib, and which took
me a while to understand.
It was suggested to me that this context be included as a comment,
since it is not obvious and could help others in the future.
Previously we had an assert that would complain when nugetDeps wasn't set,
which also didn't allow any passthru attributes (like fetch-deps) to be
built. That caused a cycle where you need nugetDeps to fetch the nuget
deps, but aren't able to build the script to do so.
For example, this script doesn't work for `xivlauncher` because its
proper `pname` is `XIVLauncher`, and `mktemp` complains about "too few
X's":
$ mktemp -td "XXXXXX-XIVLauncher-home"
mktemp: too few X's in template ‘XXXXXX-XIVLauncher-home’
vs
$ mktemp -td "XIVLauncher-home-XXXXXX"
/tmp/XIVLauncher-home-EdGMei
This makes buildDotnetModule restore nuget dependencies for the
platforms set in meta.platforms. This should help with generating
lockfiles for platforms other than the host machine.
Co-authored-by: mdarocha <git@mdarocha.pl>
Rust binaries are unconditionally linked to libiconv on Darwin (see https://github.com/rust-lang/libc/issues/2870). We already add it as a dependency in `buildRustPackage`, so let's go a step further and propagate it.
`overrideCoqDerivation` allows end-users to easily override
arguments to the underlying call to `mkCoqDerivation` for a given Coq
library.
This is similar to `haskell.lib.overrideCabal` for Haskell packages and
`.overridePythonAttrs` for Python packages.
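A hedged sketch of the intended usage, assuming the override attribute set comes first and the Coq package second (the package name and overridden attribute are illustrative):

```nix
coqPackages.lib.overrideCoqDerivation
  {
    # arguments forwarded to the underlying mkCoqDerivation call
    defaultVersion = "1.2.3";
  }
  coqPackages.somePackage
```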
The check script needs to run at build time. Add a new argument to
makePythonWriter for the appropriate buildPackages version of pythonPackages,
and use this to run the check script.
Nix counts any occurrence of a store path's *hash* as a reference, even
without a store directory prefix. The current version only kills
references of the form `/nix/store/<hash>-`, which can fail e.g. for
compressed files.
`toTargetArch` in `pkgs/build-support/rust/lib/default.nix` is used to
set `CARGO_CFG_TARGET_ARCH`. This environment variable is supposed to
be the `<arch>` portion of an LLVM-style platform name:
```
<arch><sub>-<kernel>-<libc><abi>
```
Note that the pointer-width (the "64" in "x86_64" and "mips64") is
part of `<arch>`, but the endianness (the `_be` in `aarch64_be`) is
*not*.
Unfortunately at the moment nixpkgs' parsed `cpuType` has no way to
query for the three subparts (name, pointer-width, and
subarch/endianness), nor any way to ask for just the first two parts.
For now, this commit simply fixes the problem in the two cases that
matter: `mips64el` and `powerpc64le`, which I believe are the only two
platforms supported by both rust and nixpkgs which have a
"subarchitecture".
cp on macOS doesn't support the -T flag, which causes the fetch-deps
script to fail. Use Nix's coreutils to ensure the script works
consistently across all platforms.
cp on macOS doesn't support the -T flag, which causes the fetch-deps
script to fail. Appending `/.` to the source argument replicates the
same functionality.
Fixes #186752. This adds buildVMMemorySize (defaults to 512 MiB) to
buildImage, which is passed to vm.runInLinuxVM. This is needed for
larger base images, which may otherwise cause container build failures
due to OOM in the VM.
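For example (the image name and remaining contents are placeholders):

```nix
dockerTools.buildImage {
  name = "example-image";
  # ...
  # raise the VM memory for large base images; default is 512 MiB
  buildVMMemorySize = 2048;
}
```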
Tell rust if we want our binaries linked statically or dynamically.
Otherwise the compiler will always produce statically linked binaries for musl
targets, as this is the default.
One significant use case is adding `passthru.tests` to setup-hooks,
and help increase test coverage for mission-critical setup-hooks.
Like `meta`, `passthru` doesn't go into the build script directly.
However, passing an empty set to `passthru` breaks nixpkgs-review
and OfBorg tests, so pass it only when specified.
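A sketch of the use case, assuming the hook is created with `makeSetupHook` (the hook name and test are placeholders):

```nix
makeSetupHook {
  name = "my-setup-hook";
  # forwarded to the resulting derivation, not to the build script
  passthru = {
    tests.simple = callPackage ./test-my-setup-hook.nix { };
  };
} ./my-setup-hook.sh
```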
Some packages are defined by the build process, and change every time
the dotnet-sdk package changes. To avoid having to regenerate every
dependent package's dependencies on every dotnet update, this moves these
packages into the `dotnet-sdk` `passthru` attribute, and includes them
every time `buildDotnetModule` is used.
Before the change separate-debug-info.sh did the stripping itself.
This scheme has a few problems:
1. Stripping happens only on ELF files. *.a and *.o files are skipped.
Derivations have to do it manually. Usually incorrectly
as they don't run $RANLIB (true for `glibc` and `musl`).
2. Stripping happens on all paths. Ideally only `stripDebugList` paths
should be considered.
3. Host strip is called on Target files.
This change offloads stripping logic to strip hook. This strips more
files for `glibc` and `musl`. Now we can remove most $STRIP calls
from individual derivations.
Co-authored-by: Sandro <sandro.jaeckel@gmail.com>
The initial intent was to strip .a and .o files, not .a.o files.
While at it expanded stripping for $lib output as well.
Without the change `libgcc.a` was not stripped and `.debug*` sections
made it into final binaries. It's not a problem on its own, but it's an
unintended side-effect. Noticed on a `crystal_1_0` test failure where
`crystal` was not able to handle `dwarf-5`.
While at it allowed absolute file names to be passed to stripDebugList
and friends.
The --self-contained and --no-self-contained switches were
added to the dotnet build command starting with .NET 6.
The switch is equivalent to setting the SelfContained
property, so we use the property for backwards compatibility.
Now the tool will only strip binaries if a strip executable is passed
via the STRIP environment variable. This is exposed via the strip
option for makeInitrdNG and the NixOS option boot.initrd.systemd.strip.
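For example, in a NixOS configuration (assuming the option is a boolean whose default enables stripping):

```nix
# keep unstripped binaries in the initrd
boot.initrd.systemd.strip = false;
```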
We are replicating one mechanism behind `-Z build-std`.
There isn't yet crate2nix support for this, but one can (and I do) add
the missing stdlib deps (for this feature to pick up) with overrides.
POSIX sh (and `bash`) impose a restriction on environment variable name
format and disallow hyphens in the names. Normally it's not a problem
as nothing usually tries to refer to hyphenated names.
One exception is `nix develop` (https://github.com/NixOS/nix/issues/6848):
$ nix develop -f. gcc -L
gcc-wrapper> ...-get-env.sh: line 70: expand-response-params: bad substitution
Note that bash usually uses explicitly created `expandResponseParams`
variant of the same variable.
To work around the problem, let's avoid the environment variable export and move
it to `passthru` for `cc` (used in a few places) and remove it completely for
`binutils` (does not seem to be used at all).
A full check would be more complicated to write -
and more importantly - probably also more expensive.
Motivation: eval-time catch for errors like in commit 8198636be0.
'strip' does not normally preserve archive index in .a files.
This usually causes linking failures against static libs like:
$ nix build --no-link -f. pkgsCross.mingw32.re2c
> ...-i686-w64-mingw32-binutils-2.38/bin/i686-w64-mingw32-ld:
/nix/store/...-i686-w64-mingw32-stage-final-gcc-13.0.0-lib/i686-w64-mingw32/lib/libstdc++.dll.a:
error adding symbols: archive has no index; run ranlib to add one
We restore the index by running ranlib explicitly.
This change mimics existing strip{All,Debug}List variables to
allow special stripping directories just for Target.
The primary use case in mind is gcc where package has to install
both host and target ELFs. They have to be stripped by their own
strip tools accordingly.
Co-authored-by: Rick van Schijndel <Mindavi@users.noreply.github.com>
Co-authored-by: Sandro <sandro.jaeckel@gmail.com>
In some cases `$pkgs_src` can be a path, for example with `FSharp.Core` when it comes with the dotnet SDK.
In these cases we need to fall back on the default URL, otherwise curl fails.
Without this change cross-built gcc fails to detect stack protector style:
$ nix log -f pkgs/stdenv/linux/make-bootstrap-tools-cross.nix powerpc64le.bootGCC | fgrep __stack_chk_fail
checking __stack_chk_fail in target C library... no
checking __stack_chk_fail in target C library... no
It happens because gcc treats search paths differently:
https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=gcc/configure.ac;h=446747311a6aec3c810ad6aa4190f7bd383b94f7;hb=HEAD#l2458
if test x$host != x$target || test "x$TARGET_SYSTEM_ROOT" != x ||
test x$build != x$host || test "x$with_build_sysroot" != x; then
...
if test "x$with_build_sysroot" != "x"; then
target_header_dir="${with_build_sysroot}${native_system_header_dir}"
elif test "x$with_sysroot" = x; then
target_header_dir="${test_exec_prefix}/${target_noncanonical}/sys-include"
elif test "x$with_sysroot" = xyes; then
target_header_dir="${test_exec_prefix}/${target_noncanonical}/sys-root${native_system_header_dir}"
else
target_header_dir="${with_sysroot}${native_system_header_dir}"
fi
else
target_header_dir=${native_system_header_dir}
fi
By passing --with-build-sysroot=/ we trick cross-case to use
`target_header_dir="${with_sysroot}${native_system_header_dir}"`
which makes it equivalent to non-cross
`target_header_dir="${with_build_sysroot}${native_system_header_dir}"`
Tested the following setups:
- cross-compiler without libc headers (powerpc64le-static)
- cross-compiler with libc headers (powerpc64le-debug)
- cross-build compiler with libc headers (powerpc64le bootstrapTools)
Before the change only 2 of 3 compilers detected libc headers.
After the change all 3 compilers detected libc headers.
For darwin we silently ignore '-syslibroot //' argument as it does not
introduce impurities.
While at it dropped the mingw special case for the no-libc build. Before the change
we passed both '--without-headers --with-native-system-headers-dir' for
no-libc gcc-static builds. This tricked darwin builds into finding sys/sdt.h
and failing inhibit_libc builds. Now all targets avoid passing native headers
for gcc-static builds.
While at it fixed the headers passed to
--with-native-system-headers-dir= in the host != target case: we were passing
the host's headers where the intention was to pass the target's headers.
Noticed the mismatch as a build failure on pkgsCross.powernv.stdenv.cc
on darwin where `sys/sdt.h` is present in host's headers (libSystem)
but not target's headers (`glibc`).
Co-authored-by: Adam Joseph <54836058+amjoseph-nixpkgs@users.noreply.github.com>
Since 1ac53985 "*-wrapper; Switch from `infixSalt` to `suffixSalt`"
(2020) 'TARGET_' prefix (and infix) is no more. '_FOR_TARGET' suffix
is the only used suffix for target-specific tools and flags.
Use that in strip instead of the always-empty variable.
This shouldn't change any binary available in the default build environment,
because bintools-unwrapped is already in path (not sure where it comes from,
but objcopy is in path while not being in the wrapper).
This just makes all the binaries available under 'bintools' instead of
having to use 'bintools-unwrapped',
which reduces confusion because now 'objcopy' and others will be in 'bintools'.
A function to generate pkg-config files for Nix packages that need to create them ad hoc,
like blas and lapack.
Inspiration taken from `makeDesktopItem`.
Currently, when cross compiling, `buildPackages.libredirect` has the wrong dynamic library extension.
To reproduce the issue run something like:
```
file $(nix-build -A pkgsCross.mingwW64.buildPackages.libredirect)/lib/libredirect.dll
/nix/store/80llmqa9lkabg3qnmglngzz22fwf739q-libredirect-0/lib/libredirect.dll: Mach-O 64-bit dynamically linked shared library x86_64
```
or
```
nix-diff $(nix-instantiate -A libredirect) $(nix-instantiate -A pkgsCross.mingwW64.buildPackages.libredirect)
```
The comment suggested that "{foo,bar}" is a supported pattern, which
is not true. "{foo,bar}" is only understood by brace expansion but the
code performs only globbing. We replace the comment with "[abc]",
which is a correct example of globbing.
By default, Cargo will only enable line tables. -g enables full debug
info. The RUSTFLAGS environment variable is examined by Cargo,
similar to how the NIX_*FLAGS* variables are examined by our compiler
wrappers.
Before this change `srcOnly git` gives:
duplicate derivation output 'debug', at pkgs/stdenv/generic/make-derivation.nix:270:7
This was because separateDebugInfo = true was passed on to the srcOnly
mkDerivation as well as the outputs list _including_ the debug output.
Luckily we don't need to untangle this mess since srcOnly is only
supposed to have a single output anyways.
Transform exit handlers of the form
trap cleanup EXIT [INT] [TERM] [QUIT] [HUP] [ERR]
(where cleanup is idempotent)
to
trap cleanup EXIT
This fixes a common bash antipattern.
Each of the above signals causes the script to exit. For each signal,
bash first handles the signal by running `cleanup` and then runs
`cleanup` again when handling EXIT.
(Exception: `vscode/*` prevents the second run of `cleanup` by removing
the trap in `cleanup`.)
Simplify the cleanup logic by just trapping exit, which is always run
when the script exits due to any of the above signals.
Note: In case of borgbackup, the exit handler is not idempotent, but just
trapping EXIT guarantees that it's only run once.
Some Haskell code starts silently hanging when not built with a
threaded runtime, so let’s assume people using `writeHaskell` don’t
care about micro-optimizations like this and do the expected thing.
Some architectures don’t support a threaded runtime; for these we
provide the `threadedRuntime` option to turn it off (it should fail at
build time in that case, which is easy to detect).
If somebody already passed `"-threaded"` before via ghcArgs, this
will not add the flag a second time. Thus it’s backward-compatible in
this regard.
I tested out both branches (with `-threaded` set and not set before),
on an example I had where the runtime would hang when not compiled
with `-threaded`.
Sometimes I want to pass a different implementation of `mkNugetDeps`.
For example in private repos, it can be handy to use `__noChroot = true`
and bypass the deps.nix generation altogether. Or some Nuget packages
ship with ELF binaries that need to be patched, and that's best done as
soon as possible.
If the package was not restored from nuget.org (determined by checking
the "source" field of ".nupkg.metadata"), query the custom source for
the package endpoint (the way the nuget API is built, we can't determine it
without an API query) and build a custom package URL to save in the
generated deps file.