This isn't really desirable in general, but given that Nix itself
currently relies on this behaviour and that we don't want to break
backwards compatibility we should support it for now, maybe
deprecating it in the future.
This change is prompted by the following, admittedly cursed tarball:
```
> curl https://registry.npmjs.org/char-regex/-/char-regex-1.0.2.tgz 2>/dev/null \
| tar -ztv
drw-rw-rw- 0/0 0 2020-02-18 10:50 package
-rw-rw-rw- 0/0 297 2020-02-18 10:50 package/index.d.ts
-rw-rw-rw- 0/0 1920 2020-02-18 10:50 package/index.js
-rw-rw-rw- 0/0 1092 2020-01-31 11:31 package/LICENSE
-rw-rw-rw- 0/0 937 2020-02-18 10:51 package/package.json
-rw-rw-rw- 0/0 713 2020-02-18 10:50 package/README.md
```
The minimal reproducer for the issue is the following derivation trying
to work around the uid 0 issue with `dontMakeSourcesWritable = true`:
```nix
{ stdenv, fetchurl }:
stdenv.mkDerivation {
name = "test";
src = fetchurl {
sha1 = "d744358226217f981ed58f479b1d6bcc29545dcf";
url = "https://registry.npmjs.org/char-regex/-/char-regex-1.0.2.tgz";
};
dontMakeSourcesWritable = true;
installPhase = ''
cp -R . $out
'';
}
```
This currently fails in the following way:
```
these derivations will be built:
/nix/store/pc3jbydl0xcc8nrndf5xkf7hdhpgpb41-test.drv
building '/nix/store/pc3jbydl0xcc8nrndf5xkf7hdhpgpb41-test.drv'...
unpacking sources
unpacking source archive /nix/store/v9p98kqplf4kflmy91p0687xlvr6klb1-char-regex-1.0.2.tgz
source root is package
find: 'package/index.d.ts': Permission denied
find: 'package/index.js': Permission denied
find: 'package/LICENSE': Permission denied
find: 'package/package.json': Permission denied
find: 'package/README.md': Permission denied
/nix/store/6c47azxacncswc1pllzj28zfzqw40d7c-stdenv-linux/setup: line 1311: cd: package: Permission denied
builder for '/nix/store/pc3jbydl0xcc8nrndf5xkf7hdhpgpb41-test.drv' failed with exit code 1
error: build of '/nix/store/pc3jbydl0xcc8nrndf5xkf7hdhpgpb41-test.drv' failed
```
As you can see, the issue is that `$sourceRoot` isn't executable,
which prevents the `cd` into it. This can be fixed in `unpackPhase` by
running `chmod +x "${sourceRoot}"` before the `cd`, regardless of
`dontMakeSourcesWritable`: if that `chmod` fails, the `cd` would fail
as well, so we lose nothing by trying.
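A minimal sketch of the resulting `unpackPhase` logic (the surrounding
setup.sh context is simplified here):
```
# make the sources writable unless the derivation opted out
if [ -z "${dontMakeSourcesWritable:-}" ]; then
    chmod -R u+w -- "$sourceRoot"
fi
# always make the source root itself searchable: if this chmod fails,
# the cd below would fail anyway, so nothing is lost by trying
chmod +x -- "$sourceRoot"
cd "$sourceRoot"
```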
Verified that the workaround works locally.
Another thing to investigate is whether we should pass `--no-same-owner`
to `tar` and whether it helps in this case as well.
See also <https://github.com/Profpatsch/yarn2nix/issues/56>.
Flake users that use a command like `nix build nixpkgs#hello` on a
broken/insecure package will not be able to use an environment variable
to override that behavior, unless they pass `--impure` to the command.
Co-authored-by: pkharvey <kayharvey@protonmail.com>
Add `shellDryRun` to the generic stdenv and substitute it for uses of
`${stdenv.shell} -n`. The point of this layer of abstraction is to add
the flag `-O extglob`, which resolves #126344 in a more direct way.
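A hedged sketch of what the new attribute amounts to (its exact placement
in the generic stdenv may differ):
```nix
shellDryRun = "${stdenv.shell} -n -O extglob";
```
Callers then syntax-check generated scripts with `${stdenv.shellDryRun} file.sh`
instead of spelling out the flags themselves.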
Packages that use libtool run it as a wrapper around the linker.
Before calling the linker, libtool will determine what libraries would
be linked, and check if there's a corresponding libtool
archive (libfoo.la) file in the same directory. This file
contains extra information about the library. This is especially
important for static linking, because static archives don't contain
dependency information, so we need libtool to use the .la files to
figure out which libraries actually need to be linked against.
But in Nixpkgs, this has never worked. libtool isn't able to find any
libraries, because only the compiler wrapper knows how to find them,
and the compiler wrapper is opaque to libtool. This is why
pkgsStatic.util-linuxMinimal doesn't build prior to this patch — it
depends on libpam, which depends on libaudit, and if libtool can't
find the .la file, nothing will tell the linker to also link against
libaudit when linking libpam. (It was previously possible to build a
static util-linux, because linux-pam only recently had the audit
dependency added.)
There are a couple of ways we could fix this, so that libtool knows
where to look for .la files.
* Set LD_LIBRARY_PATH/DYLD_LIBRARY_PATH/whatever, which libtool will
examine. This would have major side effects though, because the
dynamic linker looks at it too.
* Inject libtool scripts with the appropriate information. That's
what I've done here. It was the obvious choice because we're
already finding and modifying the libtool scripts, to remove paths
outside the Nix store that libtool might check in unsandboxed
builds. Instead of emptying out the system paths, we can
repopulate them with our own library paths.
(We can't use a wrapper like we do for other tools in Nixpkgs, because
libtool scripts are often distributed in source tarballs, so we can't
just add a wrapped version of libtool as a dependency. That's why
there's already the fixLibtool function in stdenv.)
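A hedged sketch of the idea (not the exact stdenv code; `$depLibDirs` and
`$libtoolScript` are illustrative stand-ins for the dependencies' `lib/`
directories and the script being fixed):
```
# instead of blanking libtool's system search path, point it at our inputs
sed -i "s|^sys_lib_search_path_spec=.*|sys_lib_search_path_spec=\"$depLibDirs\"|" \
    "$libtoolScript"
```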
With this change, libtool is able to discover .la files, and
pkgsStatic.util-linuxMinimal can build again, linking correctly
against libpam and libaudit.
This reverts commit 488395c0f8.
Currently, `nix print-dev-env` fails to execute if this function is present, because of its use of hex literals.
Until this issue (https://github.com/NixOS/nix/issues/5262) is solved, we should revert this to prevent breakage.
Somehow the behavior of `read -N 0` changed in Bash 5; `read -d ''` behaves identically.
The purpose of the function is to read stdin and exit 1 on a null byte (i.e. if stdin is the content of a binary).
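A hedged sketch of those semantics (not the exact reverted function):
```
# read all of stdin into the named variable, failing if a NUL byte is seen
consumeStdin() {
    if IFS='' read -r -d '' "$1"; then
        # read stopped at a NUL delimiter: the input looks binary, refuse it
        return 1
    fi
}
```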
(cherry picked from commit 5d0acf20f8)
The old stdenv adapters were subtly wrong in two ways:
- `overrideAttrs` leaked the original, unoverridden `mkDerivation`.
- `stdenv.override` would throw away any manually-set `mkDerivation`
from a stdenv reverting to the original.
Now, `mkDerivation` is controlled (nearly directly) via an argument, and
always correctly closes over the final ("self") stdenv. This means the
adapters can work entirely via `.override` without any manual `stdenv //
...`, and both those issues are fixed.
Note that hashes change, because stdenvs not previously overridden, like
`stdenvNoCC` and `crossLibcStdenv`, now are. I had to add some
`dontDisableStatic = true` accordingly. The flip side however is that
since the overrides compose, we no longer need to override anything but
the default `stdenv` from which all the others are created.
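A hedged illustration of the difference (attribute names illustrative):
```nix
# old style: a later `stdenv.override` silently discards the update
#   stdenvNoCC = stdenv // { cc = null; };
# new style: composes, and mkDerivation closes over the final stdenv
stdenvNoCC = stdenv.override { cc = null; hasCC = false; };
```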
When we "fix" libtool, we empty out its system library path to avoid
it discovering libraries in e.g. /usr when the sandbox is disabled.
But this also means that the checks libtool does to make sure it can
find the libraries it's supposed to be linking to won't work. On Linux
and Darwin, this isn't a problem, because libtool doesn't actually
perform any checks, but it is on at least NetBSD and Cygwin[1].
So, we force libtool not to do these checks on any platform, bringing
the more exotic platforms into line with the existing behaviour on
Linux and Darwin.
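One way to do this from a setup hook is to pre-seed the corresponding
configure cache variable (a hedged sketch; the variable and value come from
libtool.m4):
```
# tell libtool's configure checks to accept libraries without verification
export lt_cv_deplibs_check_method=pass_all
```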
Without this change, lots of library packages produce warnings like
this in their build output on the platforms with checks by default:
*** Warning: linker path does not have real file for library -lz.
*** I have the capability to make that library automatically link in when
*** you link to this library. But I can only do this if you have a
*** shared version of the library, which you do not appear to have
*** because I did check the linker path looking for a file starting
*** with libz but no candidates were found. (...for regex pattern test)
*** The inter-library dependencies that have been dropped here will be
*** automatically added whenever a program is linked with this library
*** or is declared to -dlopen it.
And dependent packages break because libtool doesn't link their
transitive dependencies. So making this change fixes _lots_ of
packages on those platforms.
[1]: https://git.savannah.gnu.org/cgit/libtool.git/tree/m4/libtool.m4?id=544fc0e2c2a03129a540aebef41ad32bfb5c06b8#n3445
This confused the hell out of me, as I didn't spot the
> The following flags are disabled by default ...
when reading about `pie`, because that sentence was hidden in the
previous hardening flag's section.
Also explain that `pie` hardening is on by default on musl.
Only bash 4+ works in setup.sh. To make sure this is obvious, we can
check BASH_VERSINFO to get the major version number of Bash.
While Bash 3 is pretty rare, it still comes stock in macOS.
We *could* provide a warning here for non-Bash shells, but it’s not
always clear whether they will work or not: Zsh should have no trouble,
while busybox sh, fish, or others might. There’s no great way to detect
what feature set the shell supports.
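A hedged sketch of such a guard near the top of setup.sh:
```
# refuse to run under Bash older than 4 (BASH_VERSINFO[0] is the major version)
if [ -n "${BASH_VERSINFO-}" ] && [ "${BASH_VERSINFO-0}" -lt 4 ]; then
    echo "setup.sh requires Bash >= 4; this is ${BASH_VERSION:-not Bash}" >&2
    exit 1
fi
```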
Fixes #71625
Patch every `derivation` call in the bootstrap process to add a
conditional `__contentAddressed` parameter.
That way, passing `contentAddressedByDefault` means that the entire
build closure of a system can be content-addressed.
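A hedged sketch of the pattern (the extra output-hash attributes are my
assumption, not necessarily the exact patch):
```nix
derivation (args // lib.optionalAttrs config.contentAddressedByDefault {
  __contentAddressed = true;
  outputHashMode = "recursive";
  outputHashAlgo = "sha256";
})
```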
Adding the hostSuffix to the end of the derivation's name is problematic
since some tooling, including user-facing programs like nix-env, relies on
the behavior of parseDrvName instead of pname and version.
builtins.parseDrvName currently thinks that the cross compilation target
added via hostSuffix is part of the version. This has the practical
consequence for example that nix-env would think a cross compiled
derivation would be an updated version of a native derivation of the
same package and version — breaking users' profiles.
We can easily prevent this by moving the hostSuffix in between pname and
version. In case name is passed to mkDerivation this is of course not
possible and we are forced to fall back to the old behavior.
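A hedged sketch of the new ordering (assuming `hostSuffix` carries its own
leading dash; the real construction in make-derivation handles more cases):
```nix
name = "${attrs.pname}${hostSuffix}-${attrs.version}";
```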
This change could serve as a replacement for the mitigation we
introduced with the `-static` appendix to pname in order to avoid
confusion between nix and nixStatic as outlined in the comment added
with this commit.
Support `mainProgram` as an attribute of `meta` for packages.
This is an attribute used by [`nix
run`](https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-run.html#description)
to customize the main program of a package.
For example, `pkgs.neovim` provides a `/bin/nvim` executable which users
would (almost certainly) prefer `nix run` to execute instead of a
non-existing `/bin/neovim`.
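For neovim this looks roughly like:
```nix
meta = {
  mainProgram = "nvim";
};
```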
Signed-off-by: Ana Hobden <operator@hoverbear.org>
`hasCC` was getting overridden in the cross bootstrapping (for GHCJS),
which prevented the default logic from re-triggering for `stdenvNoCC`.
Also remove `stdenv.noCC`, which is obsoleted by `stdenv.hasCC`.
Add a config field `contentAddressedByDefault` and an associated
environment variable `NIXPKGS_CA_BY_DEFAULT` to make every nixpkgs
derivation content-addressed by default.
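For example (hedged; mirroring how the other `NIXPKGS_ALLOW_*` variables are
used with classic commands):
```
NIXPKGS_CA_BY_DEFAULT=1 nix-build '<nixpkgs>' -A hello
```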
This was removed in e29b0da9c7, because
it was felt it was ambiguous whether isBSD should remove Darwin.
I think it should be reintroduced. Packages sometimes have their own
concepts of "is BSD" e.g. Lua, and these almost never include Darwin,
so let's keep Darwin excluded.
Without a way to say "is this BSD", one has to list all flavours of
BSD separately, even though fundamentally they're still extremely
similar. I don't want to have to write the following!
stdenv.isFreeBSD || stdenv.isNetBSD || stdenv.isOpenBSD || stdenv.isDragonFlyBSD
Additionally, we've had stdenv.hostPlatform.isBSD this whole time, and
it hasn't hurt anything.
Unify the logic for constructing the name from pname and version and
modifying the name in case a host suffix needs to be appended. This allows
us to modify the construction of name from pname and version without
having to duplicate it in two places.
4e9dc46dea re-enabled hardening for Musl,
which is good.
Though static builds for ARM fail in various ways:
- cross armv7l static does not build
- cross aarch64 static produces segfaulting dynamically linked binaries
- native aarch64 static also produces segfaulting dynamically linked binaries
It seems that for native x86_64-linux, static builds are fine though.
This works around the issue by removing PIE from the hardening flags,
keeping all other hardening flags. This is an improvement (I think) from
before 4e9dc46d.
Fixes #114953
* stdenv/check-meta: change to allowlist and blocklist
* Update pkgs/stdenv/generic/check-meta.nix
Co-authored-by: Graham Christensen <graham@grahamc.com>
Since the deprecation is fairly recent, we should warn by default.
Also fix the wording of the comment: stdenv.lib will be removed for the 21.11
release, not just deprecated (as it already is deprecated).
The `platform` field is pointless nesting: it's just stuff that happens
to be defined together, and that should be an implementation detail.
This instead makes `linux-kernel` and `gcc` top level fields in platform
configs. They join `rustc` there [all are optional], which was put there
and not in `platform` in anticipation of a change like this.
`linux-kernel.arch` in particular also becomes `linuxArch`, to match the
other `*Arch`es.
The next step after this is to combine the *specific* machines from
`lib.systems.platforms` with `lib.systems.examples`, keeping just the
"multiplatform" ones for defaulting.
By exporting it, we always make the new directories available
to subprocesses, regardless of whether the environment
variable existed before `nix-shell` was invoked.
This avoids the scenario where strictDeps is off and cross-compiled
XDG_DATA_DIRS content is brought into the environment.
While probably harmless for data like manpages and completion scripts,
this would cause issues when XDG_DATA_DIRS is used to find executables
or plugins. The Qt framework is known to behave like this and might
have run into incompatibilities.
XDG_DATA_DIRS is to /share as PATH is to /bin.
It was defined as part of the XDG basedir specification.
https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
While it originated from the X Desktop Group, it is not limited to
the X11 ecosystem, as evidenced by its use in bash-completion.
The removal of ` && -d "$pkg/bin"` is ok, because this optimization is
already performed by `addToSearchPath`.
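A hedged sketch of the hook behaviour being described (`$pkg` stands in for
the dependency being processed; the `-d` check happens inside
`addToSearchPath` itself):
```
addToSearchPath XDG_DATA_DIRS "$pkg/share"
export XDG_DATA_DIRS
```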
This adds -frandom-seed to each compiler invocation in stdenv. The
objective here is to make the compiler invocations produce the same output
every time they are called (for the same derivation). When the
-frandom-seed option is not set, the compiler will use a combination of
random numbers (in GCC's case from /dev/urandom) and the current time to
produce a "random" input per file. This can (among other things) lead to
different ordering of symbols in the produced object files.
For reasons of reproducibility we prefer having the same derivation
produce the exact same outputs. This is not a silver bullet but one way
to tame the compiler.
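An illustration of the flag itself (the seed only needs to be stable per
derivation; what exact value the wrapper derives is an implementation detail):
```
gcc -frandom-seed="$(basename "$out")" -c foo.c -o foo.o
```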
defaultHardeningFlags is set to enable pie for Musl, but is not
actually used because the default is never put into
NIX_HARDENING_ENABLE. That still works for cases other than Musl
only because NIX_HARDENING_ENABLE is defaulted in the binutils and
cc-wrapper setup-hook.sh scripts.
This hook moves systemd user service files from `lib/systemd/user` to
`share/systemd/user`. This is to allow systemd to find the user
services when installed into a user profile. The `lib/systemd/user`
path does not work since `lib` is not in `XDG_DATA_DIRS`.
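A hedged sketch of what such a hook does (the real hook may handle more
edge cases):
```
if [ -d "$prefix/lib/systemd/user" ]; then
    mkdir -p "$prefix/share/systemd"
    mv "$prefix/lib/systemd/user" "$prefix/share/systemd/user"
fi
```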
This introduces the .inputDerivation attribute on all derivations
created with mkDerivation. This is another derivation that can always
build successfully and whose runtime dependencies are the build time
dependencies of the original derivation.
This allows easy building and distributing of all derivations needed to
enter a nix-shell with
nix-build shell.nix -A inputDerivation
I hate the thing too even though I made it, and would rather just get rid of
it. But we can't do that yet. In the meantime, this brings us more
in line with autoconf and will make it slightly easier for me to write a
pkg-config wrapper, which we need.
The cross file is added in the `mkDerivation`. It isn't nice putting
build tool-specific stuff here, but our current architecture gives us
little alternative.
Currently it's not possible to determine the reason why a package is
unavailable without evaluating nixpkgs multiple times with different
settings, e.g.:
nix-repl> :p android-studio.meta
{ available = false; broken = false; unfree = true; unsupported = true; ... }
The following snippet is an example that uses this information to query
the availability information of all packages in nixpkgs, giving an
overview of all the packages currently marked as broken, etc.
{ pkgs }:
with import <nixpkgs/lib>;
let
  mapPkgs =
    let mapPkgs' = path: f: mapAttrs (n: v:
      let result = builtins.tryEval (v ? meta); in
      if !result.success then {} else
      if isDerivation v then f (path ++ [n]) v else
      if isAttrs v && v.recurseForDerivations or false then mapPkgs' (path ++ [n]) f v else
      {}
    );
    in mapPkgs' [];
  getMeta = path: drv:
    if drv.meta ? available then
      let meta = {
        pkg = concatStringsSep "." path;
        inherit (drv.meta) broken unfree unsupported insecure;
      };
      in builtins.trace meta.pkg meta
    else {};
  metaToList = attrs: flatten (map (v: if v ? pkg then v else metaToList v) (attrValues attrs));
in metaToList (mapPkgs getMeta pkgs)
While looking at the graph of all the outputs in my personal binary
cache, it became obvious that we have a lot of self references within the
package set. That isn't an issue by itself. However, it increases the
size of the binary cache for every (reproducible) build of a package
that carries references to itself. You can no longer deduplicate the
outputs since they are all unique. One of the ways to get rid of (a few)
references is to rewrite all the symlinks that are currently used to be
relative symlinks. Two builds of something that didn't really change but
carries a self-reference can then be stored as the same NAR file again.
I quickly hacked together this change to see if that would yield any
success. My bash scripting skills are probably not great, but so far it
seems to somewhat work.
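A hedged sketch of the kind of rewrite the change performs (not the actual
script; `realpath --relative-to` is from coreutils):
```
# turn absolute symlinks that point back into $out into relative ones
find "$out" -type l | while read -r link; do
    target=$(readlink "$link")
    if [[ "$target" == "$out"/* ]]; then
        ln -sfn "$(realpath -m --relative-to="$(dirname "$link")" "$target")" "$link"
    fi
done
```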
Before, we'd always use `cc = null`, and check for that. The problem is
this breaks for cross compilation to platforms that don't support a C
compiler.
It's a very subtle issue. One might think there is no problem because we
have `stdenvNoCC`, and presumably one would only build derivations that
use that. The problem is that one still wants to use tools at build-time
that are themselves built with a C compiler, and those are gotten via
"splicing". The runtime version of those deps will explode, but the
build time / `buildPackages` versions of those deps will be fine, and
splicing attempts to work around this by using `builtins.tryEval` to filter out
any broken "higher priority" packages (runtime is the default and
highest priority) so that both `foo` and `foo.nativeDrv` works.
However, `tryEval` only catches certain evaluation failures (e.g.
exceptions), and not arbitrary failures (such as `cc.attr` when `cc` is
null). This means `tryEval` fails to let us use our build time deps, and
everything comes apart.
The right solution is, as usual, to get rid of splicing. Or, barring
that, to make it so `foo` never works and one has to explicitly do
`foo.*`. But that is a much larger change, and certainly one unsuitable
to be backported to stable.
Given that, we instead make an exception-throwing `cc` attribute, and
create a `hasCC` attribute for those derivations which wish to
conditionally use a C compiler: instead of doing `stdenv.cc or null ==
null` or something similar, one does `stdenv.hasCC`. This allows querying
without "tripping" the exception, while also allowing `tryEval` to work.
No platform without a C compiler is yet wired up by default. That will
be done in a following commit.
Rewrite the `stripHash` helper function with 2 differences:
* Paths starting with `--` will no longer produce an error.
* Use Bash string manipulation instead of shelling out to `grep` and
`cut`. This should be faster.
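A hedged sketch of the pure-Bash approach:
```
stripHash() {
    local strippedName
    strippedName="$(basename -- "$1")"        # `--` keeps paths starting with "-" safe
    if [[ "$strippedName" =~ ^[a-z0-9]{32}- ]]; then
        echo "${strippedName:33}"             # drop the 32-character hash and the dash
    else
        echo "$strippedName"
    fi
}
```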
A bunch of stdenv-internal variables were deleted in
1601a7fcce, but these are needed in the
fixup phase, whereas the rest are just needed for the initial work
(findInputs, etc) before the user phases.
CC @matthewbauer
There were two issues:
* builtins.getEnv was called deep into the nixpkgs tree making it hard
to discover. This is solved by moving the call into
pkgs/top-level/impure.nix
* when the config was explicitly set by the user to false, it would
still try and load the environment variable. This meant that it was
not possible to guarantee the same outcome on two different systems.
Before, we very carefully unapplied and reapplied `set -u` so the rest
of Nixpkgs could continue to not fail on undefined variables. Let's rip
off the band-aid.
A package's meta.license can either be a single license or a list. The
code to check config.whitelistedLicenses and config.blackListedLicenses
wasn't handling this, nor was the showLicense function.
These can be used to determine whether an ELF file is an
executable or a shared library.
We can't implement it in pure bash, as bash has problems with null
bytes.
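A hedged sketch of one way to read the relevant field with an external tool
(little-endian files only; `e_type` sits at byte offset 16 of the ELF header,
with 2 = ET_EXEC and 3 = ET_DYN):
```
case "$(od -An -t u1 -j 16 -N 1 "$file" | tr -d ' ')" in
    2) echo executable ;;
    3) echo "shared library (or PIE executable)" ;;
esac
```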
That's very much consistent with the spirit of nix-shell --pure.
BTW, Nix 1.x shells will always be treated as pure;
detection isn't possible in that version.
https://github.com/NixOS/nix/commit/1bffd83e1a9c
Some SSL libs don't react to $SSL_CERT_FILE.
That actually makes sense to me, as we add this behavior
as nixpkgs-specific, so it seems "safer" to use $NIX_*.
We want initialPath to have lowest precedence.
In addition, unset _PATH and _HOST_PATH as they shouldn’t be needed
after final PATH and HOST_PATH are set.
Adds pkgsCross.wasm32 and pkgsCross.wasm64. Use them to build Nixpkgs
with a WebAssembly toolchain.
stdenv/cross: use static overlay on isWasm
isWasm doesn’t make sense dynamically linked.
This puts patches in all derivations even if it is unspecified by the
derivation. By default it will be an empty list. This simplifies
overrides, as we can now assume that patches is a valid name so that
this works:
self: super: {
  mypkg = super.mypkg.overrideAttrs (o: {
    patches = o.patches ++ [ ./my-very-own.patch ];
  });
}
That is, you don’t need to provide a default "or []"; make-derivation
provides one for you.
Unfortunately, this is a mass rebuild.
You can build (partially) with LLVM toolchain using the useLLVM flag.
This works like so:
nix-build -A hello --arg crossSystem \
  '{ system = "aarch64-unknown-linux-musl"; useLLVM = true; }'
also don’t separate debug info in lldClang
It doesn’t work currently with that setup hook. Missing build-id?
Comments on conflicts:
- llvm: d6f401e1 vs. 469ecc70 - docs for 6 and 7 say the default is
to build all targets, so we should be fine
- some pypi hashes: they were equivalent, just base16 vs. base32
For a long time now, tracing has been broken in Nixpkgs. So when you
have an eval error you would get something like this:
error: while evaluating the attribute 'buildInputs' of the derivation 'hello-2.10' at /home/mbauer/nixpkgs/pkgs/stdenv/generic/make-derivation.nix:185:11:
while evaluating 'chooseDevOutputs' at /home/mbauer/nixpkgs/lib/attrsets.nix:474:22, called from undefined position:
while evaluating 'optionals' at /home/mbauer/nixpkgs/lib/lists.nix:257:5, called from /home/mbauer/nixpkgs/pkgs/stdenv/generic/make-derivation.nix:132:17:
This is coming from how Nix handles string context and how
make-derivation messes with the "name" attribute. This commit should
restore the old behavior so you get a nice line number like:
error: while evaluating the attribute 'buildInputs' of the derivation 'hello-2.10' at /home/mbauer/nixpkgs/pkgs/applications/misc/hello/default.nix:4:3:
while evaluating 'chooseDevOutputs' at /home/mbauer/nixpkgs/lib/attrsets.nix:474:22, called from undefined position:
while evaluating 'optionals' at /home/mbauer/nixpkgs/lib/lists.nix:257:5, called from /home/mbauer/nixpkgs/pkgs/stdenv/generic/make-derivation.nix:132:17:
NOTE: This will still be broken for cross compilation due to the
prefixes we are adding to name.
This behavior ended up breaking the handleEvalIssue functionality by hiding those packages. So something like this:
$ nix-env -iA nixpkgs.zoom-us
would silently fail, without telling the user how to fix it! Regardless, this "bug" should be handled in Nix - not Nixpkgs.
Fixes #38952.
We can't run the checkPhase when build != host, so we may as well make
the checkInputs native.
This significantly improves the situation of Python packages when enabling
strictDeps.
* add generic x86_32 support
- Add support for i386-i586.
- Add `isx86_32` predicate that can replace most uses of `isi686`.
- `isi686` is reinterpreted to mean "exactly i686 arch, and not say i586 or i386".
- This branch was used to build working i586 kernel running on i586 hardware.
* revert `isi[345]86`, remove dead code
- Remove changes to dead code in `doubles.nix` and `for-meta.nix`.
- Remove `isi[345]86` predicates since other cpu families don't have specific model predicates.
* remove i386-linux since linux not supported on that cpu
Hydra's page showing evaluation errors is about a mile long, showing
buckets of user-friendly errors, like this:
in job ‘seyren.aarch64-linux’:
Package ‘oraclejre-8u191’ in /nix/store/fa9zzkbljkvdavwzirkrr5irg25ymbjl-source/pkgs/development/compilers/oraclejdk/jdk-linux-base.nix:71 has an unfree license (‘unfree’), refusing to evaluate.
a) For `nixos-rebuild` you can set
{ nixpkgs.config.allowUnfree = true; }
in configuration.nix to override this.
b) For `nix-env`, `nix-build`, `nix-shell` or any other Nix command you can add
{ allowUnfree = true; }
to ~/.config/nixpkgs/config.nix.
in job ‘jetbrains.webstorm.x86_64-linux’:
Package ‘webstorm-2018.3.1’ in /nix/store/fa9zzkbljkvdavwzirkrr5irg25ymbjl-source/pkgs/applications/editors/jetbrains/default.nix:230 has an unfree license (‘unfree’), refusing to evaluate.
a) For `nixos-rebuild` you can set
{ nixpkgs.config.allowUnfree = true; }
in configuration.nix to override this.
b) For `nix-env`, `nix-build`, `nix-shell` or any other Nix command you can add
{ allowUnfree = true; }
to ~/.config/nixpkgs/config.nix.
This makes it extremely hard to find actual issues in the output. This
patch set makes the output much more condensed in Hydra:
Failed to evaluate nifticlib-2.0.0: «unsupported»: is not supported on ‘x86_64-apple-darwin’
Failed to evaluate dmd-2.081.2: «unsupported»: is not supported on ‘aarch64-unknown-linux-gnu’
Failed to evaluate dmdBuild-2.081.2: «unsupported»: is not supported on ‘aarch64-unknown-linux-gnu’
Failed to evaluate ldc-1.11.0: «unsupported»: is not supported on ‘aarch64-unknown-linux-gnu’
Failed to evaluate ldcBuild-1.11.0: «unsupported»: is not supported on ‘aarch64-unknown-linux-gnu’
Failed to evaluate ldc-0.17.5: «unsupported»: is not supported on ‘aarch64-unknown-linux-gnu’
Failed to evaluate ldcBuild-0.17.5: «unsupported»: is not supported on ‘aarch64-unknown-linux-gnu’
Some trivial builders use the name attr to choose the executable name
produced. For example nixos-install:
nixos-install = makeProg {
  name = "nixos-install";
  src = ./nixos-install.sh;
  nix = config.nix.package.out;
  path = makeBinPath [ nixos-enter ];
};
When cross compiling, this puts the program in
/bin/nixos-install-powerpc64le-unknown-linux-gnu
When separateDebugInfo = true & !stdenv.hostPlatform.isLinux, we
always gave an error. There is no reason to do this. Instead, just
don’t add separate debug info when we aren’t on Linux.
propagatedNativeBuildInputs will end up going into the output derivation.
This case is allowed to end up in the references because of that. Sorry
for the disruption!
Fixes #50865
Completely breaks darwin. Every package in the stdenv that has shebangs
in the output will end up with references to bootstrap-tools.
This reverts commit eb7c50a993.
This will make the list much easier to re-use, e.g. for `nixosTests`.
The drawback is that this approach makes the
```
nix-build release.nix -A tests.opensmtpd.x86_64-linux
```
command about twice as slow (3s to 6s): it now has to evaluate `nixpkgs`
once for each architecture, instead of just having the hardcoded list of
tests that allowed us to say “ok just evaluate for x86_64-linux”.
On the other hand, complete evaluation of `release.nix` should be much
faster because we no longer import `nixpkgs` for each test: testing with
the following command went from 30s to 18s, and that's just for a few
tests.
```
time nix-instantiate --eval --strict nixos/release.nix -A tests.nat
```
I initially wanted to test on the whole `release.nix`, but there are too
many broken tests and it takes too long to eval them all, especially
compared to the fact that the current implementation breaks some setup.
Given developers can just `nix-build nixos/tests/my-test.nix`, it sounds
like an overall win.
Fixes #49071
On ld.gold, we produce broken executables when linking with the Musl
libc. This appears to be a known bug when using ld.gold and Musl. This
thread describes the workaround as enabling PIE when using ld.gold and
Musl:
https://www.openwall.com/lists/musl/2015/05/01/5
By default we don’t enable PIE to avoid breaking things. But in the
Musl case we are breaking things by not enabling PIE. So this adds a
special case for defaultHardeningFlags which keeps the pie hardening
for everything. Any packages that break with PIE can add the pie flag
to the `hardeningDisable` list (a no-op for now on anything but Musl).
When strictDeps = true, we don’t want native build inputs to end up in
the output. For instance gcc is a builtin native build input and
should only show up in an output if it is also listed in buildInputs.
/cc @ericson2314
In strictDeps=false, the automatic patchShebangs fixup should use
--build (corresponding to PATH) to look up commands. This restores the
previous behavior of patchShebangs so that we don’t break stuff that
isn’t careful about the buildInputs vs. nativeBuildInputs distinction.
Unfortunately this won’t work under cross compilation.
Rationale
---------
Currently, tests are hard to discover. For instance, someone updating
`dovecot` might not notice that the interaction of `dovecot` with
`opensmtpd` is handled in the `opensmtpd.nix` test.
And even for someone updating `opensmtpd`, it requires manual work to go
check in `nixos/tests` whether there is actually a test, especially
given that not so many packages in `nixpkgs` have tests, so this is most
of the time useless.
Finally, for the reviewer, it is much easier to check that the “Tested
via one or more NixOS test(s)” box has been checked if the modified file
already includes the list of relevant tests.
Implementation
--------------
Currently, this commit only adds the metadata in the package. Each
element of the `meta.tests` attribute is a derivation that, when it
builds successfully, means the test has passed (i.e. following the same
convention as NixOS tests).
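A hedged example of what a package might declare (test name illustrative):
```nix
meta.tests = {
  basic = nixosTests.opensmtpd;
};
```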
Future Work
-----------
In the future, the tools could be made aware of this `meta.tests`
attribute, and for instance a `--with-tests` could be added to
`nix-build` so that it also builds all the tests. Or a `--without-tests`
to build without all the tests. @Profpatsch described such systems in his
NixCon talk.
Another thing that would help in the future would be the possibility to
reasonably easily have cross-derivation nix tests without the whole
NixOS VM stack. @7c6f434c already proposed such a system.
This RFC currently handles none of these concerns. Only the addition of
`meta.tests` as metadata to be used by maintainers to remember to run
relevant tests.
Uses uname data to determine what to set these variables to:
- CMAKE_SYSTEM_NAME
- CMAKE_SYSTEM_PROCESSOR
- CMAKE_SYSTEM_VERSION
- CMAKE_HOST_SYSTEM_NAME
- CMAKE_HOST_SYSTEM_PROCESSOR
- CMAKE_HOST_SYSTEM_VERSION
The isELF function only checks whether ELF is contained within the first
4 bytes of the file, which is a bit fuzzy and will also return
success if it's a text file starting with ELF, for example:
ELF headers
-----------
Some text here about ELF headers...
So instead, we're now doing a precise match on \x7fELF.
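A hedged sketch of the stricter check:
```
isELF() {
    local fn="$1" magic
    read -r -n 4 magic < "$fn"
    # \177 is octal for 0x7f, so this matches exactly the ELF magic bytes
    [ "$magic" = $'\177ELF' ]
}
```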
Signed-off-by: aszlig <aszlig@nix.build>
Acked-by: @Ericson2314
Closes: https://github.com/NixOS/nixpkgs/pull/47244
The `unfree` and `unfreeRedistributable` licenses both have `free = false`,
which will trigger the first portion of logic. This removes dead code to
simplify the logic.
As a follow-up, I plan to add an attribute `redistributable = [true|false]`,
which can be used by Hydra to determine whether a given package with a given
license can be included in the channel.
If meta.outputsToInstall is set to include absent outputs, various
tools break including channel updates and nix-env.
grahamc@Morbo> nix-env -i -f . -A elf-header-real
installing 'elf-header'
error: this derivation has bad 'meta.outputsToInstall'
This patch verifies each value in meta.outputsToInstall is a valid
output. It validates this condition only if checkMeta is true.
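A hedged sketch of the check (names illustrative, not the exact check-meta
code):
```nix
checkOutputsToInstall = attrs:
  let
    expectedOutputs = attrs.meta.outputsToInstall or [];
    actualOutputs = attrs.outputs or [ "out" ];
    missingOutputs = builtins.filter (o: ! builtins.elem o actualOutputs) expectedOutputs;
  in missingOutputs != [];   # non-empty means meta.outputsToInstall is invalid
```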
grahamc@Morbo> nix-build . -A elf-header-real
error: Package ‘elf-header’ in /home/grahamc/projects/nixpkgs/pkgs/development/libraries/elf-header/default.nix:36 has invalid meta.outputsToInstall, refusing to evaluate.
The package elf-header has set meta.outputsToInstall to: bin
however elf-header only has the outputs: out
and is missing the following ouputs:
- bin
(use '--show-trace' to show detailed location information)
Note, now the nix-env experience is decidedly worse for users who have
checkMeta set to true:
grahamc@Morbo> nix-env -i -f . -A elf-header-real; echo $?
0
though since this is already an issue for unfree, broken, unsupported,
and insecure validity problems I'm not sure we should do something
different here.
a4630c65ca was incorrect in assuming $SHELL would be a path to the
bash derivation. In fact $SHELL will be a path to the bash executable.
Unfortunately this did not fix the original issue. So instead, we just
have to reuse initialPath, which can be added like PATH is.
Sorry for the inconvenience! I hadn’t thought through the effects of
the last commit.
/cc @copumpkin @ericson2314
To avoid breaking things, we need to make sure SHELL goes into
HOST_PATH. This reflects my changes to patch-shebangs to make it cross
compilation ready. When a script is patched from the Nix store it now
looks to HOST_PATH to get the targeted machine’s executables.
Unfortunately, this only works in native builds.
Intuitively, one cares mainly about the host platform: Platforms differ
in meaningful ways but compilation is morally a pure process and
probably doesn't care, or those differences are already abstracted away.
@Dezgeg also empirically confirmed that > 95% of checks are indeed of
the host platform.
Yet these attributes in the old cross infrastructure were defined to be
the build platform, for expediency. And this was never before changed.
(For native builds build and host coincide, so it isn't clear what the
intention was.)
Fixing this doesn't affect native builds, since again they coincide. It
also doesn't affect cross builds of anything in Nixpkgs, as these are no
longer used. It could affect external cross builds, but I deem that
unlikely as anyone thinking about cross would use more explicit
attributes for clarity, all the more so because of the rarity of inspecting
the build platform.
Works similarly to `enableParallelBuilding`, but is set by default when
`enableParallelBuilding` is set. In my experience most packages that build
fine in parallel also check fine in parallel.
Derivations were drawing their `system` attribute from `hostPlatform`
instead of `buildPlatform`. Fix that, and add an explanatory comment.
Fixes#45993
This has been not touched in 6 years. Let's remove it to cause less
problems when adding new cross-compiling infrastructure.
This also simplifies gcc significantly.
I *want* cross-specific overrides to be verbose, so I'd rather not have
this shorthand. This makes the syntactic overhead more proportional to
the maintenance cost. Hopefully this pushes people towards fewer
conditionals and more abstractions.
* substitute(): --subst-var was silently coercing to "" if the variable does not exist.
* libffi: simplify using `checkInputs`
* pythonPackages.hypothesis, pythonPackages.pytest: simplify dependency cycle fix
* utillinux: 2.32 -> 2.32.1
https://lkml.org/lkml/2018/7/16/532
* busybox: 1.29.0 -> 1.29.1
* bind: 9.12.1-P2 -> 9.12.2
https://ftp.isc.org/isc/bind9/9.12.2/RELEASE-NOTES-bind-9.12.2.html
* curl: 7.60.0 -> 7.61.0
* gvfs: make tests run, but disable
* ilmbase: disable tests on i686. Spooky!
* mdds: fix tests
* git: disable checks as tests are run in installcheck
* ruby: disable tests
* libcommuni: disable checks as tests are run in installcheck
* librdf: make tests run, but disable
* neon, neon_0_29: make tests run, but disable
* pciutils: 3.6.0 -> 3.6.1
Semi-automatic update generated by https://github.com/ryantm/nixpkgs-update tools. This update was made based on information from https://repology.org/metapackage/pciutils/versions.
* mesa: more include fixes
mostly from void-linux (thanks!)
* npth: 1.5 -> 1.6
minor bump
* boost167: Add lockfree next_prior patch
* stdenv: cleanup darwin bootstrapping
Also gets rid of the full python and some of its dependencies in the
stdenv build closure.
* Revert "pciutils: use standardized equivalent for canonicalize_file_name"
This reverts commit f8db20fb3a.
Patching should no longer be needed with 3.6.1.
* binutils-wrapper: Try to avoid adding unnecessary -L flags
(cherry picked from commit f3758258b8895508475caf83e92bfb236a27ceb9)
Signed-off-by: Domen Kožar <domen@dev.si>
* libffi: don't check on darwin
libffi usage in stdenv broke darwin. We need to disable doCheck for that case.
* "rm $out/share/icons/hicolor/icon-theme.cache" -> hicolor-icon-theme setup-hook
* python.pkgs.pytest: setupHook to prevent creation of .pytest-cache folder, fixes#40273
When `py.test` was run with a folder as argument, it would not only
search for tests in that folder, but also create a .pytest-cache folder.
Not only is this state we don't want, but it was also causing
collisions.
* parity-ui: fix after merge
* python.pkgs.pytest-flake8: disable test, fix build
* Revert "meson: 0.46.1 -> 0.47.0"
With meson 0.47.0 (or 0.47.1, or git)
things are very wrong re:rpath handling
resulting in at best missing libs but
even corrupt binaries :(.
When we run patchelf it masks the problem
by removing obviously busted paths.
Which is probably why this wasn't noticed immediately.
Unfortunately the binary already
has a long series of paths scribbled
in a space intended for a much smaller string;
in my testing it was something like
lengths were 67 with 300+ written to it.
I think we've reported the relevant issues upstream,
but unfortunately it appears our patches
are what introduces the overwrite/corruption
(by no longer being correct in what they assume)
This doesn't look so bad to fix but it's
not something I can spend more time on
at the moment.
--
Interestingly the overwritten string data
(because it is scribbled past the bounds)
remains in the binary and is why we're suddenly
seeing unexpected references in various builds
-- notably this is the reason we're
seeing the "extra-utils" breakage
that entirely crippled NixOS on master
(and probably on staging before?).
Fixes #43650.
This reverts commit 305ac4dade.
(cherry picked from commit 273d68eff8)
Signed-off-by: Domen Kožar <domen@dev.si>
`depsHostBuild` is not a thing, would never be a thing per the rules,
and isn't used anywhere. This is just my typo, hitherto unnoticed
because "host -> host" dependencies are by far the most obscure form.
HOST_PATH contains the path of the host package. This will include the
packages listed in buildInputs & depsHostHost. Use this to find
runtime commands that the host needs.
For instance to find the runtime version of perl,
$ PATH="$HOST_PATH" command -v perl
/nix/store/...-perl-5.28.0-aarch64-unknown-linux-android/bin/perl
This path should not be executed directly (it will break for cross
compilation). Only use it to find the location of executables that
will be run by your host system. Your build tools will, as always, be
available on the default PATH.
The line was essentially checking whether /bin/sh exists and is
executable and if that's the case, the isScript function returns
successfully.
When asking the author of this line on IRC it seems that even they can't
remember or imagine what this was supposed to be.
In summary: Whenever /bin/sh doesn't exist during a build, *any* file
given to isScript is reported as being a script even if it isn't.
This is kinda counter-intuitive and not something what somebody would
expect from a function called "isScript".
Signed-off-by: aszlig <aszlig@nix.build>
Cc: @edolstra
Not only does the suffix unnecessarily reduce sharing, but it also breaks
unpacker setup hooks (e.g. that of `unzip`) which identify interesting tarballs
using the file extension.
This also means we can get rid of the splicing hacks for fetchers.
We want `buildPackages` to be almost the same as
`buildPackages.buildPackages`, but that is only true if most packages
don't care about the target platform. The commented code however made
them all care about whether the target platform was Darwin.
The hack of using `crossConfig` to enforce stricter handling of
dependencies is replaced with a dedicated `strictDeps` for that purpose.
(Experience has shown that my punning was a terrible idea that made it more
difficult and embarrassing to teach.)
Now that this is clear, a few packages now use `strictDeps`, to fix
various bugs:
- bintools-wrapper and cc-wrapper
Note that a bunch of non-python packages use this attribute already.
Some of those are clearly unaware of the fact that this attribute does
not exist in stdenv, because they define it but don't add it to
their `buildInputs` :)
Also note that I use `buildInputs` here and only handle regular
builds because python and haskell builders do it this way and I'm not
sure how to properly handle the cross-compilation case.