... for x86_64-darwin (into staging-next)
It wouldn't bootstrap otherwise.
Unfortunately we still haven't managed to get the tarballs
on the proper URLs, but GitHub should be reliable enough
and surely almost no one will bootstrap themselves anyway.
This is not the "correct" way to check if a variable is non-null in
bash. There is already an instance of the "right" way to do it in
setup.sh. Bash is "generous" enough to accept the original input though.
I couldn't find the relevant shellcheck.
This confused the hell out of me, as I didn't spot the
> The following flags are disabled by default ...
when reading about `pie`, because that sentence was hidden in the
previous hardening flag's section.
Also explain that `pie` hardening is on by default on musl.
Only bash 4+ works in setup.sh. To make sure this is obvious, we can
check BASH_VERSINFO to get the major version number of Bash.
While Bash 3 is pretty rare, it still comes stock in macOS.
We *could* provide a warning here for non-Bash shells, but it’s not
always clear whether they will work or not. Zsh should have no trouble,
while busybox sh, fish, or others may not. There's no great way to detect
what feature set the shell supports.
Fixes #71625
This enables the bootstrap stdenv test to specify the actual llvm
of the newly generated build instead of assuming it's the same version
as the current stdenv.
With removeUnknownConfigureFlags, it's impossible to express a package
that needs --enable-static, but will not accept --disable-shared,
without overriding the result of removeUnknownConfigureFlags _again_
in pkgs/top-level/static.nix.
It would be much better (and more in line with the rest of Nixpkgs) if
we encoded changes needed for static builds in package definitions
themselves, rather than in an ever-expanding list in static.nix. This
is especially true when doing it in static.nix is going to require
multiple overrides to express what could be expressed with stdenv
options.
So as a step in that direction, and to fix the problem described
above, here I replace removeUnknownConfigureFlags with a new stdenv
option, dontAddStaticConfigureFlags. With this mechanism, a package
that needs one but not both of the flags just needs to set
dontAddStaticConfigureFlags and then set up configureFlags manually
based on stdenv.hostPlatform.isStatic.
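A rough sketch of such a package definition (the package name, URL, and
hash are purely illustrative, not a real package):

{ lib, stdenv, fetchurl }:

stdenv.mkDerivation rec {
  pname = "example";
  version = "1.0";
  src = fetchurl {
    url = "https://example.org/example-${version}.tar.gz";
    sha256 = lib.fakeSha256;
  };

  # opt out of the automatic --enable-static / --disable-shared pair ...
  dontAddStaticConfigureFlags = true;
  # ... and add back only the flag this package actually tolerates
  configureFlags = lib.optional stdenv.hostPlatform.isStatic "--enable-static";
}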
Changes to llvmPackages have caused the `libclang-cpp*.dylib` files to
be included in the `clang-unwrapped.lib` output. So we no longer need to
copy them from libclang.
`TargetConditionals.h` was missing several definitions, like
`TARGET_OS_TV`, that are part of SDK 10.12 at least, and one that doesn't
seem to occur in any SDK afaict, `TARGET_OS_EMBEDDED_OTHER`.
I added the definitions from SDK 10.12 verbatim and defined
`TARGET_OS_EMBEDDED_OTHER` to be equal to `0`.
This is a modified version of a patch to avoid a stdenv rebuild.
I was having a hard time testing new bootstrapFiles because
`make-bootstrap-tools.nix` imports `pkgspath` but does not pass anything
but the current system.
This is merely for convenience and I'm not entirely certain it's a
sensible thing to do; maybe generating new bootstrapFiles while
overriding the current bootstrapFiles isn't something you're supposed to
do?
The rpath structure for the bootstrap tools was reworked to minimize
the amount of rewriting required on unpack, but the test was not
updated to match the different structure.
Additionally [1] builds that use the bootstrap version of libc++
cannot find libc++abi if the reference includes the "lib"
component (i.e., libc++ refers to libc++abi with
@rpath/lib/libc++abi.dylib).
[1] https://logs.nix.samueldr.com/nix-darwin/2021-05-18#4993282
Test failure observed on Hydra: https://hydra.nixos.org/build/143130126
This will begin the process of breaking up the `useLLVM` monolith. That
is good in general, but I hope will be good for NetBSD and Darwin in
particular.
Co-authored-by: sterni <sternenseemann@systemli.org>
If things build fine with `stdenvNoCC`, let them use that. If tools
might be prefixed, prepare for that, either by directly splicing or just
using the env vars provided by the wrapper setup-hooks.
Co-authored-by: Dmitry Kalinkin <dmitry.kalinkin@gmail.com>
I am taking the non-invasive parts of #110914 to hopefully help out with #111988.
In particular:
- Use `lib.makeScopeWithSplicing` to make the `darwin` package set have
a proper `callPackage`.
- Adjust Darwin `stdenv`'s overlays so that things kept from the previous
stage don't stick around too long.
- Expose `binutilsNoLibc` / `darwin.binutilsNoLibc` to hopefully get us
closer to a unified LLVM and GCC bootstrap.
Also begin to start work on cross compilation, though that will have to
be finished later.
The patches are based on the first version of
https://reviews.llvm.org/D99484. It's very annoying to do the
back-porting but the review has uncovered nothing super major so I'm
fine sticking with what I've got.
Beyond making the outputs work, I also strove to re-sync the packages,
as they have been drifting pointlessly apart for some time.
----
Other misc notes, highly incomplete
- llvm-config-native and llvm-config are put in `dev` because they are
tools just for build time.
- Clang no longer has an lld dep. That was introduced in
db29857eb3, but if clang needs help
finding lld when it is used, we should just pass it flags / put it in the
resource dir. Providing it at build time increases critical path
length for no good reason.
----
A note on `nativeCC`:
`stdenv` takes tools from the previous stage, so:
1. `pkgsBuildBuild`: `(?1, x, x)`
2. `pkgsBuildBuild.stdenv.cc`: `(?0, ?1, x)`
while:
1. `pkgsBuildBuild`: `(?1, x, x)`
2. `pkgsBuildBuild.targetPackages`: `(x, x, ?2)`
3. `pkgsBuildBuild.targetPackages.stdenv.cc`: `(?1, x, x)`
Patch every `derivation` call in the bootstrap process to add a
conditional `__contentAddressed` parameter.
That way, passing `contentAddressedByDefault` means that the entire
build closure of a system can be content-addressed.
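The mechanism is roughly the following (a sketch, not the exact code used
in the bootstrap files):

{ lib, config }:

# wrap a raw `derivation` call so that CA is switched on when requested
args: derivation (args // lib.optionalAttrs config.contentAddressedByDefault {
  __contentAddressed = true;
  outputHashMode = "recursive";
  outputHashAlgo = "sha256";
})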
Adding the hostSuffix to the end of the derivation's name is problematic
since some things, including user-facing programs like nix-env, rely on
the behavior of parseDrvName instead of pname and version.
builtins.parseDrvName currently thinks that the cross compilation target
added via hostSuffix is part of the version. This has the practical
consequence, for example, that nix-env would think a cross-compiled
derivation is an updated version of a native derivation of the
same package and version, breaking users' profiles.
We can easily prevent this by moving the hostSuffix in between pname and
version. In case name is passed to mkDerivation this is of course not
possible and we are forced to fall back to the old behavior.
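For illustration, a hypothetical `hello` cross-compiled for
aarch64-unknown-linux-gnu parses differently under the old ordering
(suffix last) and the new one (suffix between pname and version):

nix-repl> builtins.parseDrvName "hello-2.10-aarch64-unknown-linux-gnu"
{ name = "hello"; version = "2.10-aarch64-unknown-linux-gnu"; }

nix-repl> builtins.parseDrvName "hello-aarch64-unknown-linux-gnu-2.10"
{ name = "hello-aarch64-unknown-linux-gnu"; version = "2.10"; }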
This change could serve as a replacement for the mitigation we
introduced with the -static appendix to pname in order to avoid
confusion between nix and nixStatic as outlined in the comment added
with this commit.
Support `mainProgram` as an attribute of `meta` for packages.
This is an attribute used by [`nix
run`](https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-run.html#description)
to customize the main program of a package.
For example, `pkgs.neovim` provides a `/bin/nvim` executable which users
would (almost certainly) prefer `nix run` to execute instead of a
non-existing `/bin/neovim`.
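For the neovim example this boils down to (a sketch of the relevant
fragment of the derivation):

meta = {
  # ...
  mainProgram = "nvim";  # tells `nix run` to execute $out/bin/nvim
};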
Signed-off-by: Ana Hobden <operator@hoverbear.org>
`hasCC` was getting overridden in the cross bootstrapping (for GHCJS),
which prevented the default logic from re-triggering for `stdenvNoCC`.
Also remove `stdenv.noCC`, which is obsoleted by `stdenv.hasCC`.
Add a config field `contentAddressedByDefault` and an associated
environment variable `NIXPKGS_CA_BY_DEFAULT` to make every nixpkgs
derivation content-addressed by default.
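A sketch of how to turn it on for a whole evaluation (the exact value
handling of the environment variable is up to the implementation):

# via the nixpkgs config argument:
import <nixpkgs> { config.contentAddressedByDefault = true; }

# or via the environment variable when evaluating impurely:
#   NIXPKGS_CA_BY_DEFAULT=1 nix-build ...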
This was removed in e29b0da9c7, because
it was felt it was ambiguous whether isBSD should include Darwin.
I think it should be reintroduced. Packages sometimes have their own
concepts of "is BSD", e.g. Lua, and these almost never include Darwin,
so let's keep Darwin excluded.
Without a way to say "is this BSD", one has to list all flavours of
BSD separately, even though fundamentally they're still extremely
similar. I don't want to have to write the following!
stdenv.isFreeBSD || stdenv.isNetBSD || stdenv.isOpenBSD || stdenv.isDragonFlyBSD
Additionally, we've had stdenv.hostPlatform.isBSD this whole time, and
it hasn't hurt anything.
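With the predicate back, a BSD-only tweak can be guarded in one place,
e.g. (sketch, with lib and stdenv in scope):

postPatch = lib.optionalString stdenv.isBSD ''
  # BSD-specific fixups go here
'';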
Unify the logic for constructing the name from pname and version and
modifying the name in case a host suffix needs to be appended. This allows
us to modify the construction of name from pname and version without
having to duplicate it in two places.
4e9dc46dea re-enabled hardening for Musl,
which is good.
However, static builds for ARM fail in various ways:
- cross armv7l static does not build
- cross aarch64 static produces segfaulting dynamically linked binaries
- native aarch64 static also produces segfaulting dynamically linked binaries
It seems that for native x86_64-linux, static builds are fine though.
This works around the issue by removing PIE from the hardening flags,
keeping all other hardening flags. This is an improvement (I think) over
the state before 4e9dc46d.
Fixes #114953
* stdenv/check-meta: change to allowlist and blocklist
* Update pkgs/stdenv/generic/check-meta.nix
Co-authored-by: Graham Christensen <graham@grahamc.com>
Since the deprecation is fairly recent, we should warn by default.
Also fix the wording of the comment: stdenv.lib will be removed for the 21.11
release, not just deprecated (as it already is deprecated).
The `platform` field is pointless nesting: it's just stuff that happens
to be defined together, and that should be an implementation detail.
This instead makes `linux-kernel` and `gcc` top level fields in platform
configs. They join `rustc` there [all are optional], which was put there
and not in `platform` in anticipation of a change like this.
`linux-kernel.arch` in particular also becomes `linuxArch`, to match the
other `*Arch`es.
The next step after this is to combine the *specific* machines from
`lib.systems.platforms` with `lib.systems.examples`, keeping just the
"multiplatform" ones for defaulting.
Increase schedulingPriority of the bootstrap tools to unblock the
nixpkgs-unstable channel.
The channel is repeatedly blocked by the makeBootstrapTools job for
aarch64. The cause is lack of resources.
By increasing the priority, it should become the first job Hydra would
build, allowing the channel to advance quicker. Of course, it does mean
that while the channel advances, nothing else has been built.
This should be a temporary solution until we have more capacity for
aarch64.
By exporting it, we always make the new directories available
to subprocesses, regardless of whether the environment
variable existed before `nix-shell` was invoked.
This avoids the scenario where strictDeps is off and cross-compiled
XDG_DATA_DIRS content is brought into the environment.
While probably harmless for data like manpages and completion scripts,
this would cause issues when XDG_DATA_DIRS is used to find executables
or plugins. The Qt framework is known to behave like this and might
have run into incompatibilities.
XDG_DATA_DIRS is to /share as PATH is to /bin.
It was defined as part of the XDG basedir specification.
https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
While it originated from the X Desktop Group, it is not limited to
the X11 ecosystem, as evidenced by its use in bash-completion.
The removal of ` && -d "$pkg/bin"` is ok, because this optimization is
already performed by `addToSearchPath`.
The dynamic loader on powerpc64 is called ld64.so.2 rather than
ld-linux.so.*, and was not matched by the existing pattern.
We reuse the dynamicLinker name from binutils to match a wider set
of platforms and to avoid specifying this information in two places.
Build the llvm support libraries (libcxx, libcxxabi) from scratch
without using the existing llvm libraries. This is the same spirit and
similar implementation as the "useLLVM" bootstrap in llvm package
sets. Critically it avoids having libcxxabi provided by the cc-wrapper
when building libcxx, which otherwise results in two libcxxabi
instances.
$ otool -L /nix/store/vd4vvgs9xngqbjzpg3qc41wl6jh42s9i-libc++-7.1.0/lib/libc++.dylib
/nix/store/vd4vvgs9xngqbjzpg3qc41wl6jh42s9i-libc++-7.1.0/lib/libc++.dylib:
/nix/store/vd4vvgs9xngqbjzpg3qc41wl6jh42s9i-libc++-7.1.0/lib/libc++.1.0.dylib (compatibility version 1.0.0, current version 1.0.0)
/nix/store/gmpwk5fyp3iasppqrrdpswxvid6kcp8r-libc++abi-7.1.0/lib/libc++abi.dylib (compatibility version 1.0.0, current version 1.0.0)
/nix/store/3hn7azynqgp2pm5gpdg45gpq0ia72skg-libc++abi-7.1.0/lib/libc++abi.dylib (compatibility version 1.0.0, current version 1.0.0)
/nix/store/1nq94scbxs6bk7pimqhvz76q6cfmbv97-Libsystem-osx-10.12.6/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1226.10.1)
Additionally move some utilities (clang, binutils, coreutils, gnugrep)
to the stage layers so they can be replaced before the final
stdenv. This should cause most of stage4 to be built from the
toolchain assembled as of stage3 instead of the bootstrap toolchain.
This new version has tapi support, which is needed to build the new
stubs-based libSystem, etc., as well as Big Sur support.
You can verify the provenance of these yourself by checking Hydra here:
https://hydra.nixos.org/build/128192471
This adds -frandom-seed to each compiler invocation in stdenv. The
objective here is to make the compiler invocations produce the same output
every time they are called (for the same derivation). When the
-frandom-seed option is not set, the compiler will use a combination of
random numbers (in GCC's case from /dev/urandom) and the current time to
produce a "random" input per file. This can (among other things) lead to
different ordering of symbols in the produced object files.
For reasons of reproducibility we prefer having the same derivation
produce the exact same outputs. This is not a silver bullet but one way
to tame the compiler.
I made a mistake merge. Reverting it in c778945806 undid the state
on master, but now I realize it crippled the git merge mechanism.
As the merge contained a mix of commits from `master..staging-next`
and other commits from `staging-next..staging`, it got the
`staging-next` branch into a state that was difficult to recover from.
I reconstructed the "desired" state of staging-next tree by:
- checking out the last commit of the problematic range: 4effe769e2
- `git rebase -i --preserve-merges a8a018ddc0` - dropping the mistaken
merge commit and its revert from that range (while keeping
reapplication from 4effe769e2)
- merging the last unaffected staging-next commit (803ca85c20)
- fortunately no other commits have been pushed to staging-next yet
- applying a diff on staging-next to get it into that state
defaultHardeningFlags is set to enable pie for Musl, but is not
actually used because the default is never put into
NIX_HARDENING_ENABLE. That still works for cases other than Musl
only because NIX_HARDENING_ENABLE is defaulted in the binutils and
cc-wrapper setup-hook.sh scripts.
This hook moves systemd user service files from `lib/systemd/user` to
`share/systemd/user`. This is to allow systemd to find the user
services when installed into a user profile. The `lib/systemd/user`
path does not work since `lib` is not in `XDG_DATA_DIRS`.
The Rust `cc` crate started running `xcrun` when SDKROOT is defined:
a970b0ab0b
Consequently, building crates that use newer versions of the `cc`
crate fail, because xcrun is not available in pure build environments.
This provides consistency with the pure stdenv, which provides
patchelf this way. Users of the native stdenv can always just manually install
patchelf on their system, but like xz, it’s unlikely to be provided in
/usr/bin/. In addition, it’s not even in the RHEL7 repos.
This introduces the .inputDerivation attribute on all derivations
created with mkDerivation. This is another derivation that can always
build successfully and whose runtime dependencies are the build time
dependencies of the original derivation.
This allows easy building and distributing of all derivations needed to
enter a nix-shell with
nix-build shell.nix -A inputDerivation
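For example, given a typical shell.nix like (sketch):

# shell.nix
with import <nixpkgs> {};
mkShell {
  buildInputs = [ hello ];
}

the command above gives a store path whose closure is enough to enter
that shell later, e.g. after copying it to an offline machine.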
Update gnu-config (config.sub, config.guess) to support the Genode
platform and apply the updateAutotoolsGnuConfigScriptsHook to Genode
cross-compilation.
I hate the thing too even though I made it, and would rather just get rid of
it. But we can't do that yet. In the meantime, this brings us more
in line with autoconf and will make it slightly easier for me to write a
pkg-config wrapper, which we need.
The cross file is added in the `mkDerivation`. It isn't nice putting
build tool-specific stuff here, but our current architecture gives us
little alternative.
Currently it's not possible to determine the reason why a package is
unavailable without evaluating nixpkgs multiple times with different
settings, e.g.:
nix-repl> :p android-studio.meta
{ available = false; broken = false; unfree = true; unsupported = true; ... }
The following snippet is an example that uses this information to query
the availability information of all packages in nixpkgs, giving an
overview of all the packages currently marked as broken, etc.
{ pkgs }:

with import <nixpkgs/lib>;

let
  mapPkgs =
    let mapPkgs' = path: f: mapAttrs (n: v:
      let result = builtins.tryEval (v ? meta); in
      if !result.success then {} else
      if isDerivation v then f (path ++ [n]) v else
      if isAttrs v && v.recurseForDerivations or false then mapPkgs' (path ++ [n]) f v else
      {}
    );
    in mapPkgs' [];

  getMeta = path: drv:
    if drv.meta ? available then
      let meta = {
        pkg = concatStringsSep "." path;
        inherit (drv.meta) broken unfree unsupported insecure;
      };
      in builtins.trace meta.pkg meta
    else {};

  metaToList = attrs: flatten (map (v: if v ? pkg then v else metaToList v) (attrValues attrs));
in metaToList (mapPkgs getMeta pkgs)
These files never existed, so best to not leave the reference. If
someone wants to step up to maintain this, that would be fine. I don't
have the hardware to test these out. In addition, someone tried to use
the bootstrap-tools currently built by Hydra and found that they were
broken in some unclear way.
The linker scripts no longer contain store paths, so this does nothing. More
importantly, libpthread.so is no longer a linker script on ARM, so the patching
would corrupt it.
There's a generated header that got a comment about the source header
from glibc.dev, which added an unwanted runtime dependency. Tested:
nix build -f pkgs/top-level/release.nix stdenvBootstrapTools.{aarch64,i686,x86_64}-linux.test
Fixes #21629
Passing these extra linker flags both removes the semi-random UUID
included in most binaries and makes the SDK version consistent
instead of based on the current OS version.
Load command 8
  cmd LC_UUID
  cmdsize 24
  uuid 70FAF921-5DC8-371C-B814-4F121FADFDF4
Load command 9
  cmd LC_VERSION_MIN_MACOSX
  cmdsize 16
  version 10.12
  sdk 10.13
The -macosx_version_min flag isn't strictly necessary since that's
already handled by MACOSX_DEPLOYMENT_TARGET.
While looking at the graph of all the outputs in my personal binary
cache it became obvious that we have a lot of self references within the
package set. That isn't an issue by itself. However it increases the
size of the binary cache for every (reproducible) build of a package
that carries references to itself. You can no longer deduplicate the
outputs since they are all unique. One of the ways to get rid of (a few)
references is to rewrite all the symlinks that are currently used to be
relative symlinks. Two builds of something that didn't really change but
carries a self-reference can then be stored as the same NAR file again.
I quickly hacked together this change to see if that would yield any
success. My bash scripting skills are probably not great but so far it
seems to somewhat work.
- Replaced the python override from the final stdenv; instead we
propagate our bootstrap python to stage4 and override both
CF and xnu to use it.
- Removed the CF argument from python interpreters; this is redundant
since it's not overridden anymore.
- Inherit CF from stage4, making it the same as the stdenv.
Before, we'd always use `cc = null`, and check for that. The problem is
this breaks for cross compilation to platforms that don't support a C
compiler.
It's a very subtle issue. One might think there is no problem because we
have `stdenvNoCC`, and presumably one would only build derivations that
use that. The problem is that one still wants to use tools at build-time
that are themselves built with a C compiler, and those are gotten via
"splicing". The runtime version of those deps will explode, but the
build time / `buildPackages` versions of those deps will be fine, and
splicing attempts to work around this by using `builtins.tryEval` to filter out
any broken "higher priority" packages (runtime is the default and
highest priority) so that both `foo` and `foo.nativeDrv` work.
However, `tryEval` only catches certain evaluation failures (e.g.
exceptions), and not arbitrary failures (such as `cc.attr` when `cc` is
null). This means `tryEval` fails to let us use our build time deps, and
everything comes apart.
The right solution is, as usual, to get rid of splicing. Or, barring
that, to make it so `foo` never works and one has to explicitly do
`foo.*`. But that is a much larger change, and certainly one unsuitable
to be backported to stable.
Given that, we instead make an exception-throwing `cc` attribute, and
create a `hasCC` attribute for those derivations which wish to
conditionally use a C compiler: instead of doing `stdenv.cc or null ==
null` or something similar, one does `stdenv.hasCC`. This allows querying
without "tripping" the exception, while also allowing `tryEval` to work.
No platform without a C compiler is yet wired up by default. That will
be done in a following commit.
Rewrite the `stripHash` helper function with 2 differences:
* Paths starting with `--` will no longer produce an error.
* Use Bash string manipulation instead of shelling out to `grep` and
`cut`. This should be faster.
A bunch of stdenv-internal variables were deleted in
1601a7fcce, but these are needed in the
fixup phase, whereas the rest are just needed for the initial work
(findInputs, etc) before the user phases.
CC @matthewbauer
There were two issues:
* builtins.getEnv was called deep in the nixpkgs tree, making it hard
to discover. This is solved by moving the call into
pkgs/top-level/impure.nix
* when the config was explicitly set by the user to false, it would
still try to load the environment variable. This meant that it was
not possible to guarantee the same outcome on two different systems.
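The resulting pattern looks roughly like this (option and variable names
are placeholders, not the real ones):

# sketch of the pattern now living in pkgs/top-level/impure.nix:
{ config ? {} }:
let
  someOption =
    if config ? someOption
    then config.someOption                           # an explicit `false` wins
    else builtins.getEnv "SOME_OPTION_VAR" == "1";   # env var is only a fallback
in someOption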