The Everything module is not part of a library and should therefore
not be copied to the nix store.
This is particularly bad if the Everything module is defined in a
directory that an Agda library includes; e.g. consider an agda-lib with
`include: .`
and Everything.agda in the project root (.), in which case the
Everything module would become part of the library.
If multiple such projects are in the dependency tree, the Everything
module becomes ambiguous and the build would fail.
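For illustration, a hedged sketch of such a project's .agda-lib file (the
library name is made up); with `include: .` the Everything.agda sitting in
the project root next to this file becomes part of the library:
```
name: foo
include: .
```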
The `platform` field is pointless nesting: it's just stuff that happens
to be defined together, and that should be an implementation detail.
This instead makes `linux-kernel` and `gcc` top-level fields in platform
configs. They join `rustc` there (all are optional), which was put at the
top level, and not in `platform`, in anticipation of a change like this.
`linux-kernel.arch` in particular also becomes `linuxArch`, to match the
other `*Arch`es.
The next step after this is to combine the *specific* machines from
`lib.systems.platforms` with `lib.systems.examples`, keeping just the
"multiplatform" ones for defaulting.
continuation of #109595
pkgconfig was aliased in 2018; however, it remained in
all-packages.nix due to its wide usage. This cleans
up the remaining references to pkgs.pkgconfig and
moves the entry to aliases.nix.
python3Packages.pkgconfig remained unchanged because
it's the canonical name of the upstream package
on PyPI.
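A hedged sketch of the corresponding aliases.nix entry (the date comment is
illustrative, not copied from the real file):
```
# pkgs/top-level/aliases.nix
pkgconfig = pkg-config; # added 2018 (date illustrative)
```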
Since fdf32154fc, we no longer allow
missing modules in the initrd. Unfortunately since before this commit,
the modules-closure script would also fail on missing firmware, which
is a very common case (e.g. xhci-pci.ko.xz lists renesas_usb_fw.mem as
dependent firmware). Fix this by only issuing a warning instead.
This was already fixed on non-Darwin, but the fix missed that it was
also reintroduced for the Darwin code path at the same time.
Fixes: dd5d2482c9 ("emacs: Fix accidental double wrapping")
Warning about future breaking changes is wrong.
- It suggests that the maintainers don't value backwards compatibility.
They do.
- It implies that other parts of Nixpkgs won't ever break. They will.
- It implies that a well-defined "public" interface exists. It doesn't.
- If the reasons above didn't apply, it should have been in the manual
instead.
Breaking changes will come, especially to the interface. Sometimes that is
the only way we can make progress without breaking the image _contents_.
I don't think dockerTools is any different from most of Nixpkgs in
these regards.
`buildRustPackage` currently accepts `cargoSha256` as a hash for
vendored dependencies. This change adds `cargoHash`, which accepts SRI
hashes, setting `outputHashAlgo` to `null`.
The hash mismatch message still uses `cargoSha256` as an example,
which it probably should until we completely switch to SRI hashes.
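A hedged usage sketch (the hash is a placeholder):
```
rustPlatform.buildRustPackage {
  pname = "example";
  version = "0.1.0";
  src = fetchFromGitHub { /* ... */ };
  # SRI hash of the vendored dependencies; used instead of cargoSha256
  cargoHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
}
```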
By default, Perl versions since 5.8.1 use randomization to make hashes
resistant to complexity attacks.
That randomization makes building VM images such as ubuntu1804x86_64
non-deterministic because the (imported) derivations built by
deb/deb-closure.pl are not stable.
This can easily be observed by repeating the following sequence of
commands and noting the path of the image's .drv:
nix-instantiate -E '(import <nixpkgs> {}).vmTools.diskImageFuns.ubuntu1804x86_64 {}'
nix-store --delete /nix/store/*ubuntu-18.04-bionic-amd64.nix
One source of non-determinism is the handling of Provides/Replaces,
which depends on the order of iteration over %packages. Here is a
diff showing the corresponding change in output:
>>> awk
-virtual awk: using original-awk
- original-awk: libc6 (>= 2.14)
+virtual awk: using mawk
+ mawk: libc6 (>= 2.14)
- mawk: libc6 (>= 2.14)
->>> libc6
This patch sorts packages by name for Provides/Replaces processing,
which seems to result in stable output.
(If the above turns out not to be sufficient, one could also set the
PERL_HASH_SEED and PERL_PERTURB_KEYS environment variables, documented
in 'perlrun', to disable Perl's built-in randomization. Complexity
attacks are not an issue as we control and trust all inputs.)
Using the full store hash as the random seed occasionally caused
reference cycles when the invocation was stored in output artifacts.
For example, cross-compiled gcc was failing due to this:
https://hydra.nixos.org/eval/1631713#tabs-now-fail
Simply truncating the hash is sufficient to avoid this: a truncated seed
no longer matches the full 32-character hash of any store path, so the
reference scanner cannot pick it up.
When invoking a simple Ada program with `gcc` from `gnats10`, the
following warnings are shown:
```
$ gcc -c conftest.adb
gnat1: warning: command-line option ‘-Wformat=1’ is valid for C/C++/ObjC/ObjC++ but not for Ada
gnat1: warning: command-line option ‘-Wformat-security’ is valid for C/C++/ObjC/ObjC++ but not for Ada
gnat1: warning: ‘-Werror=’ argument ‘-Werror=format-security’ is not valid for Ada
$ echo $?
0
```
This is merely spammy when compiling Ada programs inside a Nix derivation,
but certain configure scripts (such as the ./configure script of the gcc
that's built by coreboot's `make crossgcc` command) fail entirely when they
see that warning output.
https://nixos.wiki/wiki/Coreboot currently suggests manually running
> NIX_HARDENING_ENABLE="${NIX_HARDENING_ENABLE/ format/}" make crossgcc
… but actually teaching the nixpkgs-provided cc wrapper that `format`
isn't supported as a hardening flag seems to be the more canonical way
to do this in nixpkgs.
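A hedged sketch of the idea (the real gnat expression may attach this
differently); the cc wrapper consults this attribute and drops the
corresponding default hardening flags:
```
# in the gnat/gcc expression (sketch):
passthru = {
  # `format` is not understood by the Ada frontend, so don't pass
  # -Wformat / -Wformat-security / -Werror=format-security by default
  hardeningUnsupportedFlags = [ "format" ];
};
```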
After this, Ada programs still compile:
```
$ gcc -c conftest.adb
$ echo $?
0
```
And the compiler output is empty.
As @lopsided98 points out in #105305, since the hashes are now
target-sensitive, and until we find a reason to actually care what they
are exactly, we are best off just normalizing them away in the tests.
- Generate a link to the initramfs file with an appropriate file
extension, guessed based on the compressor by default
- Use correct metadata in u-boot images, if generated; up to now this
was hardcoded to gzip and would silently generate an erroneous image
if another compressor was specified
- Document all the parameters
- Improve cross-building compatibility by allowing the "compressor" argument
to be either a string, as before, or a function taking a package set and
returning the path to a compressor (see the sketch below)
- Support more compression algorithms
- Place compressor executable function and arguments in passthru, for
reuse when appending initramfses
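A hedged usage sketch of the "compressor" argument (assuming the makeInitrd
interface; the package choice is only an example):
```
makeInitrd {
  contents = [ /* ... */ ];
  # either a plain string, as before:
  # compressor = "xz";
  # or, for cross builds, a function from a package set to the executable path:
  compressor = pkgs: "${pkgs.zstd}/bin/zstd";
}
```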
Co-Authored-By: Dominik Xaver Hörl <hoe.dom@gmx.de>
Originally this was meant to support other Windows versions than just
Windows XP, but before I actually got a chance to implement this I left
the project that I implemented this for.
The code has been broken for years now and I highly doubt anyone is
interested in resurrecting this (including me), so in order to make this
less of a maintenance burden for everybody, let's remove it.
Signed-off-by: aszlig <aszlig@nix.build>
Docker (via containerd) and the OCI Image Configuration imply and
suggest, respectively, that the architecture set in images matches the
GOARCH values from the Go language documentation.
This changeset updates the implementation of getArch in dockerTools to
return GOARCH values, to satisfy Docker.
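A hedged sketch of the mapping (not necessarily the exact dockerTools code):
```
getArch = nixSystem: {
  aarch64-linux     = "arm64";
  armv7l-linux      = "arm";
  x86_64-linux      = "amd64";
  powerpc64le-linux = "ppc64le";
  i686-linux        = "386";
}.${nixSystem} or "Can't determine the docker architecture of '${nixSystem}'";
```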
Fixes: #106695
If I'm running an Emacs executable from emacsWithPackages as my main
programming environment, and I'm hacking on Emacs, or the Emacs
packaging in Nixpkgs, or whatever, I don't want the Emacs packages
from the wrapper to show up in the load path of that child Emacs. It
results in differing behaviour depending on whether the child Emacs is
run from Emacs or from, for example, an external terminal emulator,
which is very surprising.
To avoid this, pass another environment variable containing the
wrapper site-lisp path, and use that value to remove the corresponding
entry in EMACSLOADPATH, so it won't be propagated to child Emacsen.
An empty entry in EMACSLOADPATH gets filled with the default value.
This is presumably why the wrapper inserted a colon after the entry it
added for the dependencies. But this naive approach wasn't always
correct.
For example, if the user ran emacs with EMACSLOADPATH=foo, the wrapper
would insert the default value (by adding the trailing `:') even
though the user was trying to expressly opt out of it.
To do this correctly, here I've replaced makeWrapper with a bespoke
script that will actually parse the EMACSLOADPATH provided in the
environment (if given), and insert the wrapper's load path just before
the default value. If EMACSLOADPATH is given but contains no default
value, we respect that and don't add the wrapped dependencies at all.
If no EMACSLOADPATH is given, we insert the wrapped dependencies
before the default value, just like before. In this way, the wrapped
Emacs should now behave as if the wrapped dependencies were part of
Emacs's default load-path value.
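A hedged sketch of that logic (not the wrapper's exact code; the site-lisp
path is illustrative). The key point is that an empty entry stands for the
default load-path, and we only ever splice our dependencies in front of such
an entry:
```
siteLisp='/nix/store/...-emacs-packages-deps/share/emacs/site-lisp'  # illustrative

if [ -z "${EMACSLOADPATH+set}" ]; then
  # no user value: our dependencies, then the default (the trailing ':' keeps
  # an empty entry, which Emacs expands to its default load-path)
  export EMACSLOADPATH="$siteLisp:"
else
  # wrap in sentinel colons so empty entries at the start/end are detectable too
  wrapped=":$EMACSLOADPATH:"
  case "$wrapped" in
    *::*)
      # splice our site-lisp directly before the first default-value entry
      wrapped="${wrapped/::/:$siteLisp::}"
      wrapped="${wrapped#:}"; wrapped="${wrapped%:}"
      export EMACSLOADPATH="$wrapped"
      ;;
    *)
      : # the user removed the default entry: respect that, add nothing
      ;;
  esac
fi
```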
Previously, meta wasn't being passed through at all, because it's
removed from args without being used anywhere. This made it so that
rcirc-menu wasn't being marked as broken even though it was supposed
to be.
This patch copies the meta handling from melpaBuild, including the
default home page (adapted for ELPA).
Use "find -exec" to strip rather than "find … | xargs …". The former
ensures that stripping is attempted for each file, whereas the latter
will stop stripping at the first failure. Unstripped files can fool
runtime dependency detection and bloat closure sizes.
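A hedged sketch of the difference (flags and paths are illustrative, not the
exact setup-hook code):
```
# before: strip gets whole batches of files; one bad file can cut a batch short
find "$prefix" -type f -print0 | xargs -0 strip --strip-debug 2>/dev/null || true

# after: one strip invocation per file, so every file gets its attempt
find "$prefix" -type f -exec strip --strip-debug '{}' \; 2>/dev/null || true
```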
There are a few operations in this library that naively run on every
iteration even though they could be cached.
For a simple test repository with a small number of files and ~1000
gitignore patterns this brings memory usage down from ~233M to ~157M
and wall time from 2.6s down to 0.78s.
This should scale similarly with the number of files in a repository.
For example graphviz has chained symlinked manpages: dot2gxl.1 is
a symlink to gv2gxl.1 which is a symlink to gxl2gv.1
The second loop replaces each non-compressed symlink with a compressed
symlink. The target is determined with 'readlink -f', which follows
links recursively until the first name that is not a link (so either
the 'target name' or the first 'dangling' symlink).
This means that if the loop converted dot2gxl.1 before converting
gv2gxl.1, it would add a symlink `dot2gxl.1.gz->gxl2gv.1.gz`. If it
converted gv2gxl.1 first, it would instead add a
`dot2gxl.1.gz->gv2gxl.1.gz` symlink.
Both are 'correct', but it's odd that the result depends on the order
in which 'find' returns the files. This PR makes the behaviour
deterministic.
fixes #104708
This is a workaround for NixOS/nix#4295, which caused single-user Linux
Nix installations using sandboxed builds to start failing to build
fetchzip derivations after 4a5c49363a.
In short: removing write permissions for the entire directory is great,
except we then can't rename(2) it to the final Nix store path out of the
sandbox, because we don't have write permission on the directory and
thus cannot update the ".." directory entry.
Before this change, NUM_JOBS was set to 1. Some crates for building
C/C++ code (e.g. the cc and cmake crates) rely on this variable to
set the number of jobs. As a consequence, we were compiling embedded
libraries serially. Change this to NIX_BUILD_CORES to permit parallel
builds.
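A hedged sketch of the change (not the exact buildRustPackage code):
```
# let the cc/cmake build-script crates see the real core count
export NUM_JOBS="$NIX_BUILD_CORES"
```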
Prior discussion:
https://github.com/NixOS/nixpkgs/pull/50452#issuecomment-439407547
This provides an /etc/passwd and /etc/group that contain root and nobody.
Useful when packaging binaries that insist on using NSS to look up
usernames/groups (like nginx).
The current nginx example used the `runAsRoot` parameter (which doesn't
exist in buildLayeredImage) to set up /etc/group and /etc/passwd, so we
can now just use fakeNss there and switch to buildLayeredImage.
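A hedged usage sketch (mirroring the nginx example mentioned above; attribute
values are illustrative):
```
dockerTools.buildLayeredImage {
  name = "nginx";
  contents = [ nginx dockerTools.fakeNss ];
  config.Cmd = [ "nginx" "-g" "daemon off;" ];
}
```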
Informational messages belong on stderr, not on stdout and intermixed
with structured output for programmatic use.
Change-Id: I34d094d04460494e9ec8953db7490f4e2292d959
The dynamic loader on powerpc64 is called ld64.so.2 rather than
ld-linux.so.*, and was not matched by the existing pattern.
We reuse the dynamicLinker name from binutils to match a wider set
of platforms and to avoid specifying this information in two places.
The chroot environment under mnt had /dev and /sys via bind mounts,
but nothing setting up /proc. The `--mount-proc` argument to unshare
defaults to /proc, which is outside of the chroot environment.
This adds -frandom-seed to each compiler invocation in stdenv. The
objective here is to make the compiler invocations produce the same output
every time they are called (for the same derivation). When the
-frandom-seed option is not set, the compiler uses a combination of
random numbers (in GCC's case from /dev/urandom) and the current time to
produce a "random" input per file. This can (among other things) lead to
different ordering of symbols in the produced object files.
For reasons of reproducibility we prefer having the same derivation
produce the exact same outputs. This is not a silver bullet, but it is one
way to tame the compiler.
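A hedged sketch of the idea (not the exact stdenv/cc-wrapper code):
```
# derive a stable, truncated seed from the derivation's output path so the
# same derivation always compiles with the same seed
randomSeed="${out##*/}"           # hash-name part of the store path
randomSeed="${randomSeed:0:10}"   # truncated so the full hash never ends up in outputs
export NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -frandom-seed=$randomSeed"
```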
I made a mistaken merge. Reverting it in c778945806 undid the state
on master, but now I realize it crippled the git merge mechanism.
As the merge contained a mix of commits from `master..staging-next`
and other commits from `staging-next..staging`, it got the
`staging-next` branch into a state that was difficult to recover.
I reconstructed the "desired" state of staging-next tree by:
- checking out the last commit of the problematic range: 4effe769e2
- `git rebase -i --preserve-merges a8a018ddc0` - dropping the mistaken
merge commit and its revert from that range (while keeping
reapplication from 4effe769e2)
- merging the last unaffected staging-next commit (803ca85c20)
- fortunately no other commits have been pushed to staging-next yet
- applying a diff on staging-next to get it into that state
Teach installShellCompletion how to install completions from a named
pipe. Also add a convenience flag `--cmd NAME` that synthesizes the name
for each completion instead of requiring repeated `--name` flags.
Usage looks something like
installShellCompletion --cmd foobar \
--bash <($out/bin/foobar --bash-completion) \
--fish <($out/bin/foobar --fish-completion) \
--zsh <($out/bin/foobar --zsh-completion)
Fixes #83284
This hook moves systemd user service files from `lib/systemd/user` to
`share/systemd/user`. This is to allow systemd to find the user
services when installed into a user profile. The `lib/systemd/user`
path does not work since `lib` is not in `XDG_DATA_DIRS`.
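A hedged sketch of what such a hook does (not the exact hook code):
```
moveSystemdUserUnits() {
  if [ -d "$prefix/lib/systemd/user" ]; then
    mkdir -p "$prefix/share/systemd"
    mv "$prefix/lib/systemd/user" "$prefix/share/systemd/user"
  fi
}
```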
Revert the hack and the original faulty commit.
This reverts commit 48264ee506.
Revert "Purity checking should accept $TMP and not just /tmp"
This reverts commit fb777be7d2.
This adds an option to skip adding --copt and --linkopt to Bazel
flags. In some cases, Bazel doesn't like these flags, such as when a
custom toolchain is being used (as opposed to the builtin one). The
compiler can still get the dependencies since it is invoked through the
gcc wrapper and picks up NIX_CFLAGS_COMPILE / NIX_LDFLAGS.
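A hedged usage sketch; the attribute name here is hypothetical, chosen only
to illustrate how such an option would be set on a buildBazelPackage call:
```
buildBazelPackage {
  # ...
  # don't inject --copt/--linkopt; a custom toolchain provides its own flags
  dontAddBazelOpts = true;
}
```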
This enables short argument attrsets similar to fetchPypi:
src = fetchCrate {
inherit pname version;
sha256 = "02h8pikmk19ziqw9jgxxf7kjhnb3792vz9is446p1xfvlh4mzmyx";
};
Before this commit, the firmware information would be loaded from the
currently running kernel, not from the kernel to be loaded.
This commit ensures the correct kernel version and modules are read.
After a recent upgrade of modinfo, its output is now incorrect for
builtin modules. This commit filters out that output until a fix is made
available upstream.
symlinkJoin can break (silently) when the passed paths contain symlinks
to directories. This should work now.
Downside: when lib/tmpfiles.d doesn't exist for some passed package,
the error message is a little less explicit, because we never get
to the postBuild phase (and symlinkJoin doesn't provide a better way):
/nix/store/HASH-NAME/lib/tmpfiles.d: No such file or directory
Also, it seemed pointless to create symlinks for whole package trees
and use only a part of the result (usually a very small part).
* writers.makeScriptWriter: fix on Darwin/macOS
On Darwin a script cannot be used as an interpreter in a shebang line, which
causes scripts produced with makeScriptWriter (and its derivatives) to fail at
run time if the used interpreter was wrapped with makeWrapper (as in the case
of python3.withPackages).
This commit fixes the problem by detecting if the interpreter is a script
and prepending its shebang to the final interpreter line.
For example, if the used interpreter is:
```
/nix/store/ynwv137n2650qy39swcflxbcygk5jwv1-python3-3.8.3-env/bin/python
```
which is a script with the following shebang:
```
#! /nix/store/knd85yc7iwli8344ghav3zli8d9gril0-bash-4.4-p23/bin/bash -e
```
then the shebang line in the produced script will be
```
#! /nix/store/knd85yc7iwli8344ghav3zli8d9gril0-bash-4.4-p23/bin/bash -e /nix/store/ynwv137n2650qy39swcflxbcygk5jwv1-python3-3.8.3-env/bin/python
```
This works on Darwin since there does not seem to be a limit to the length
of the shebang line, and the shebang line supports multiple arguments to
the interpreter (as opposed to Linux, where the kernel imposes a strict limit
on shebang length and everything following the interpreter is passed to it
as a single string).
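A hedged sketch of the check (variable names are illustrative, not the exact
makeScriptWriter code):
```
if [ "$(head -c 2 "$interpreter")" = '#!' ]; then
  # the interpreter is itself a script: reuse its shebang and pass the
  # interpreter as a trailing argument, as in the example above
  interpreterLine="$(head -n 1 "$interpreter" | sed 's/^#! *//') $interpreter"
else
  interpreterLine="$interpreter"
fi
printf '#! %s\n' "$interpreterLine" > "$script"
cat "$contentPath" >> "$script"
chmod +x "$script"
```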
fixes: #93609
related to: #65351, #11133 (and probably a bunch of others)
NOTE: scripts produced on platforms other than Darwin will remain unmodified
by this PR. However, it might be worth considering extending this fix to BSD
systems in general. I didn't do it since I have no way of testing it on
systems other than macOS and Linux.
* writers.makeScriptWriter: fix typo in comment
* writers.makeScriptWriter: fail build if interpreter of interpreter is a script
"bazel fetch" will, by default, fetch everything that _might_ be used,
including things that will later be discarded due to the way the build
is configured.
Concretely, this means that for some builds of Java packages, this will
avoid failures where the builder tries to retrieve the JDK from /usr/share/java
(or equivalent).
This also means that for most packages we can fetch _fewer_ dependencies,
since the standard tree pruning for artifacts to fetch will take effect.
fetchConfigured is disabled by default since it changes the fetch hashes
of tensorflow/tensorflow2 (since it ends up fetching less).
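A hedged usage sketch (surrounding attributes elided); note that enabling it
changes the hash of the fetched dependencies:
```
buildBazelPackage {
  # ...
  fetchConfigured = true;
}
```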
The previous code discarded entire dependency trees if the first entry in the dependency list compiled by `modprobe --show-depends` was a builtin, and otherwise handled its output in a rather hackish way.
While the artifacts from `buildPhase` should be used for testing as
well, they should not be modified during `checkPhase`.
This can happen if a package is built e.g. with special
`cargoBuildFlags` that don't apply to the `checkPhase`. In that case, a
binary would be installed into `$out` without those flags, since
`checkPhase` overwrites the binary in the `target` directory.
This patch copies the state of `target/release` into a temporary
location at the end of the `buildPhase` and installs the results from
that temporary directory into `$out` while `checkPhase` can continue
using the configured build-dir.
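A hedged sketch of the idea (paths are illustrative, not the exact
buildRustPackage code):
```
# at the end of buildPhase: snapshot the release artifacts
cp -r target/release "$NIX_BUILD_TOP/release-snapshot"

# checkPhase may now rebuild target/release with different flags ...

# in installPhase: install from the snapshot instead of target/release
install -Dm755 "$NIX_BUILD_TOP/release-snapshot/$pname" "$out/bin/$pname"
```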
cc #91689, Closes #93119, Closes #91191
The image tag can be specified or generated from the output hash.
Previously, a generated tag could be recovered from the evaluated
image with some string operations.
However, with the introduction of streamLayeredImage, it's not
feasible to compute the generated tag yourself.
With this change, the imageTag attribute is set unconditionally
for the buildImage, buildLayeredImage, and streamLayeredImage functions.
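A hedged usage sketch: the tag can now be read directly off the image
derivation, whether it was specified or generated from the output hash:
```
let
  image = dockerTools.streamLayeredImage {
    name = "hello";
    contents = [ hello ];
  };
in image.imageTag   # generated from the output hash unless a tag was given
```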