If a CMake target has a non-default LINKER_LANGUAGE set, CMake will
manually add the libraries it has detected that language's compiler
links implicitly. When it does this, it'll pass -Bstatic and
-Bdynamic options based on whether it considers each such detected
library static or shared. This in itself isn't a problem, because the
compiler toolchain, or our wrapper, or something, seems to be smart
enough to ignore -Bdynamic for those libraries. But it does create a problem if
the compiler adds extra libraries to the linker command line after
that final -Bdynamic, because those will be linked dynamically. Since
our compiler is static by default, CMake should reset to -Bstatic
after it's done manually specifying libraries, but CMake didn't
actually know that our compiler is static by default. The fix for
that is to tell it, like so.
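For illustration, this is roughly what the same idea looks like for a single
derivation, as a hedged sketch (the variable names come from CMake's own
documentation; the actual fix lives in nixpkgs' CMake support, not in each
package):

```nix
# Hedged sketch, not the literal nixpkgs diff: tell CMake that the link line
# both starts and ends in static mode, so no trailing -Bdynamic is left behind.
stdenv.mkDerivation {
  pname = "example";
  version = "0.1";
  src = ./.;
  nativeBuildInputs = [ cmake ];
  cmakeFlags = [
    "-DCMAKE_LINK_SEARCH_START_STATIC=ON"
    "-DCMAKE_LINK_SEARCH_END_STATIC=ON"
  ];
}
```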
Until recently, this problem was difficult to notice, because it would
result in binaries that worked, but that were dynamically linked. Since
e08ce498f0 ("cc-wrapper: Account for NIX_LDFLAGS and NIX_CFLAGS_LINK
in linkType"), though, -Wl,-dynamic-linker is no longer mistakenly
passed for executables that are supposed to be static, so they end up
created with a /lib interpreter path, and so don't run at all on
NixOS.
This fixes pkgsStatic.graphite2.
As reported in #241692, since the `llvmPackages` bump the
bootstrap-tools started failing to build due to a mismatch in LLVM
versions used to build certain tools.
By overlaying the imported package set to specify `llvmPackages`, we get
everything built with the expected LLVM version.
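A minimal sketch of that kind of overlay (the relative path and the pinned
LLVM version here are illustrative, not the actual values):

```nix
# Hypothetical sketch: re-import the package set with an overlay that pins
# llvmPackages, so every tool in the bootstrap-tools closure uses one LLVM.
import ../../.. {
  inherit system;
  overlays = [
    (final: prev: { llvmPackages = prev.llvmPackages_11; })
  ];
}
```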
Provide a `runPhase` function which wraps the phase running action of
genericBuild. The new function can be used as an interface by `nix
develop`, i.e. `nix develop some#flake --build` may just call `runPhase
build`, which makes its behavior more consistent with `nix build`.
In preparation of fixing https://github.com/NixOS/nix/issues/6202
The 10.12 Libsystem is not located as a sub-attribute of
`darwin.apple_sdk_10_12`. This will be fixed as part of the SDK changes
planned for post-23.11. In the meantime, special case it so the adapter
can be used to change the deployment target.
This was taken from #264091 to use in the interim before that PR lands
(sometime after the release of 23.11). It allows different versions of
clang to link the same libc++, so dependencies can be linked even when
they are built with a different version of clang than the stdenv.
This patch switches the CoreFoundation on x86_64-darwin from the open
source swift-corelibs-foundation (CF) to the system CoreFoundation.
This change was motivated by failures building packages for the current
staging-next cycle (#263535) due to an apparent incompatibility between
macOS 14 and the rpath-based approach to choosing CF or CoreFoundation.
This error often manifests as a crash with an Illegal Instruction.
For example, building aws-sdk-cpp for building Nix will fail this way.
https://hydra.nixos.org/build/239459417/nixlog/1
Application Specific Information:
CF objects must have a non-zero isa
Error Formulating Crash Report:
PC register does not match crashing frame (0x0 vs 0x7FF8094DD640)
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 CoreFoundation 0x7ff8094dd640 CF_IS_OBJC.cold.1 + 14
1 CoreFoundation 0x7ff8094501d0 CF_IS_OBJC + 60
2 CoreFoundation 0x7ff8093155e8 CFRelease + 40
3 ??? 0x10c7a2c61 s_aws_secure_transport_ctx_destroy + 65
4 ??? 0x10c87ba32 aws_ref_count_release + 34
5 ??? 0x10c7b7adb aws_tls_connection_options_clean_up + 27
6 ??? 0x10c596db4 Aws::Crt::Io::TlsConnectionOptions::~TlsConnectionOptions() + 20
7 ??? 0x10c2d249c Aws::CleanupCrt() + 92
8 ??? 0x10c2d1ff0 Aws::ShutdownAPI(Aws::SDKOptions const&) + 64
9 ??? 0x102d9bc6f main + 335
10 dyld 0x202f333a6 start + 1942
According to a [post][1] on the Apple developer forums, hardening was
added to CoreFoundation, and this particular message occurs when you
attempt to release an object it does not recognize as a valid CF object.
(Thank you to @lilyinstarlight for finding this post).
When I switched aws-sdk-cpp to link against CoreFoundation instead of
CF, the error went away. Somehow both libraries were being used. To
prevent dependent packages from linking the wrong CoreFoundation, it
would need to be added as a propagated build input.
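A sketch of that manual workaround (illustrative only; this is exactly the
per-package busywork the rest of this message argues against):

```nix
# Hypothetical override: propagate the system CoreFoundation so dependents
# link against it rather than the open-source CF.
aws-sdk-cpp.overrideAttrs (old: {
  propagatedBuildInputs = (old.propagatedBuildInputs or [ ])
    ++ [ darwin.apple_sdk.frameworks.CoreFoundation ];
})
```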
Note that there are other issues related to mixing CF and CoreFoundation
frameworks. #264503 fixes an issue with abseil-cpp where it propagates
CF, causing issues when using a different SDK version. Mixing versions
can also cause crashes with Python when a shared object is loaded that
is linked to the “wrong” CoreFoundation.
`NIX_COREFOUNDATION_RPATH` is supposed to make sure the right
CoreFoundation is being used, but it does not appear to be enough on
macOS 14 (presumably due to the hardening). While it is possible to
propagate CoreFoundation manually, the cleaner solution is to make it
the default. CF remains available as `darwin.swift-corelibs-foundation`.
[1]: https://developer.apple.com/forums/thread/739355
- These new-cli commands can be used with `-f`, in which case they're
evaluated with pure evaluation disabled.
- Nix 2.4+ is not part of the condition; "flakes" is fully descriptive
and more relatable.
- Don't suggest that it only enables this variable.
- Just don't say too much.
This is a replacement for using `darwin.apple_sdk_<ver>.callPackage`.
Instead of injecting the required packages, it provides a stdenv adapter
that modifies the derivation’s build inputs to use the requested SDK
versions. This modification extends to any build inputs propagated to it
as well. The `callPackage` approach is not deprecated yet, but it is
expected that it will be eventually.
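A hypothetical usage sketch (the adapter name `overrideSDK` and its calling
convention are assumptions here, not quoted from the change):

```nix
# Hypothetical: build a package against the 11.0 SDK via a stdenv adapter
# instead of darwin.apple_sdk_11_0.callPackage.
{ stdenv, stdenvAdapters, callPackage }:

callPackage ./package.nix {
  stdenv = stdenvAdapters.overrideSDK stdenv "11.0";
}
```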
Note that this is an MVP. It should work with most packages, but it only
handles build inputs and only frameworks. Once more SDKs
are added (after #229210 is merged) and the SDK structure is normalized,
it can be extended to handle any package in the SDK namespace.
Cross-compilation may or may not work. Any cross-related issues can be
addressed after #256590 is merged.
While there is no fetcher or builder (in nixpkgs) that takes an `md5` parameter,
for some inscrutable reason the nix interpreter accepts the following:
```nix
fetchurl {
url = "https://www.perdu.com";
hash = "md5-rrdBU2a35b2PM2ZO+n/zGw==";
}
```
Note that neither MD5 nor SHA1 is allowed by the syntax of SRI hashes.
curl needs to link against several frameworks, but building the
frameworks (directly or indirectly) depends on curl via fetchurl and
fetchFromGitHub. Break the infinite recursion by building the SDKs’
dependencies in the last stage of the stdenv bootstrap using the prior
stage’s fetchurl and fetchFromGitHub.
Make both ready for cross with prefixes
Currently
`pkgsCross.aarch64-multiplatform.mold.passthru.tests.{wrapped,adapter}`
fail with
```
Testing running the 'hello' binary which should be linked with 'mold'
Hello, world!
Checking for mold in the '.comment' section
No mention of 'mold' detected in the '.comment' section
The command was:
aarch64-unknown-linux-gnu-readelf -p .comment ...bin/hello
The output was:
String dump of section '.comment':
[ 0] GCC: (GNU) 12.3.0
```
This will allow building bootstrap tools for platforms with
non-default libcs, like *-unknown-linux-musl.
This gets rid of limitedSupportedSystems/systemsWithAnySupport. There
was no need to use systemsWithAnySupport for supportDarwin, because it
was always equivalent to supportedSystems for that purpose, and the
only other way it was used was for determining which platforms to
build the bootstrap tools for, so we might as well use a more explicit
parameter for that, and then we can change how it works without
affecting the rest of the Hydra jobs.
Not affecting the rest of the Hydra jobs is important, because if we
changed all jobs to use config triples, we'd end up renaming every
Hydra job. That might still be worth thinking about at some point,
but it's unnecessary at this point (and would be a lot of work).
I've checked by running
nix-eval-jobs --force-recurse pkgs/top-level/release.nix
that the actual bootstrap tools derivations are unaffected by this
change, and that the only other jobs that change are ones that depend
on the hash of all of Nixpkgs. Of the other jobset entrypoints that
end up importing pkgs/top-level/release.nix, none used the
limitedSupportedSystems parameter, so they should all be unaffected as
well.
When specifying the `builder` attribute in `stdenv.mkDerivation`, this
will be effectively transformed into
builtins.derivation {
builder = stdenv.shell;
args = [ "-e" builder ];
}
This also means that `default-builder.sh` is never sourced and as a
result it's not guaranteed that `$NIX_ATTRS_SH_FILE` is set to a correct
location[1].
Also, we need to source `.attrs.sh` to source `$stdenv`. So, the
following is done now:
* If `$NIX_ATTRS_SH_FILE` points to a correct location, then use it.
Directly using `.attrs.sh` is problematic for `nix-shell(1)` usage
(see previous commit for more context), so prefer the environment
variable if possible.
* Otherwise, if `.attrs.sh` exists, then use it. See [1] for when this
can happen.
* If neither applies, it can be assumed that `__structuredAttrs` is
turned off and thus nothing needs to be done.
[1] It's possible that it doesn't exist at all (in the case of Nix 2.3),
or that it points to a wrong location on older Nix versions with a bug in
`__structuredAttrs`.
Relying on `.attrs.sh` to exist in `$NIX_BUILD_TOP` is problematic
because that's not compatible with how `nix-shell(1)` behaves. It places
`.attrs.{json,sh}` into a temporary directory and makes them accessible via
`$NIX_ATTRS_{SH,JSON}_FILE` in the environment[1]. The sole reason that
`nix-shell(1)` still works with structured-attrs enabled derivations
is that the contents of `.attrs.sh` are sourced into the
shell before sourcing `$stdenv/setup` (if `$stdenv` exists) by `nix-shell`.
However, the assumption that two files called `.attrs.sh` and
`.attrs.json` exist in `$NIX_BUILD_TOP` is wrong in an interactive shell
session, and is thus an inconsistency between shell debug sessions and
actual builds which can lead to unexpected problems.
To be precise, we currently have the following problem: an expression
like
with import ./. {};
runCommand "foo" { __structuredAttrs = true; foo.bar = [ 1 2 3 ]; }
''
echo "''${__structuredAttrs@Q}"
touch $out
''
prints `1` in its build log. However, when building interactively in a
`nix-shell`, it doesn't.
Because of that, I'm considering proposing a full deprecation of
`$NIX_BUILD_TOP/.attrs.{json,sh}`. A first step is to only mention the
environment variables, but not the actual paths anymore in Nix's
manual[2]. The second step - this patch - is to fix nixpkgs' stdenv
accordingly.
Please note that we cannot check for `-e "$NIX_ATTRS_JSON_FILE"` because
certain outdated Nix minors (that are still in the range of supported
Nix versions in `nixpkgs`) have a bug where `NIX_ATTRS_JSON_FILE` points
to the wrong file while building[3].
Also, for compatibility with Nix 2.3, which doesn't provide these
environment variables at all, we still need to check for the existence of
`.attrs.json`/`.attrs.sh` here. As soon as we bump nixpkgs' minver to 2.4,
this can be dropped.
Finally, the check for ATTRS_SH_FILE was dropped because it was never
relevant. In nix#4770 the ATTRS_SH_FILE variable was introduced[4] and
in a review iteration prefixed with NIX_[5]. In other words, these
variables were never part of a release, and you'd only have this problem
if you were using a Nix built from a git revision of my branch from back
then. In short, that's dead code.
[1] https://github.com/nixos/nix/pull/4770#issuecomment-834718851
[2] https://github.com/NixOS/nix/pull/9032
[3] https://github.com/NixOS/nix/issues/6736
[4] 3944a120ec
[5] 27ce722638
Without this, you get error messages during the install phase along the
lines of: "file RPATH_CHANGE could not write new RPATH:".
This is unsurprising because the static binaries do not have any dynamic
linker and thus no runpath to rewrite either.
Tell CMake it doesn't need to do RPATH manipulation by passing
cmakeFlags.
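As a hedged sketch, assuming `CMAKE_SKIP_BUILD_RPATH` is the relevant knob
(the variable is documented by CMake; that exactly this flag is passed here is
an assumption):

```nix
{
  # Hypothetical: skip build-tree RPATHs entirely, so there is nothing for
  # CMake to rewrite at install time for fully static binaries.
  cmakeFlags = [ "-DCMAKE_SKIP_BUILD_RPATH=ON" ];
}
```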
While we're here, I also renamed `finalAttrs` to `args` and fixed the
indentation; this improves consistency with the surrounding code and
eliminates a point of confusion: because it was named `finalAttrs`, I
presumed I should be able to influence it with an overrideAttrs that sets
dontAddStaticConfigureFlags, but this turns out not to be possible.
Taking prevAttrs as well doesn't work because of a limitation of
overrideAttrs whereby it gives an infinite recursion if the set of
attribute keys being returned depends on finalAttrs.
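For illustration, a hypothetical override of the shape that triggers that
infinite recursion (the attribute names are made up):

```nix
# Hypothetical: whether `configureFlags` is returned at all depends on
# finalAttrs, so the overrideAttrs fixed point cannot converge.
pkg.overrideAttrs (finalAttrs: prevAttrs:
  lib.optionalAttrs (!(finalAttrs.dontAddStaticConfigureFlags or false)) {
    configureFlags = (prevAttrs.configureFlags or [ ]) ++ [ "--enable-static" ];
  })
```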
Signed-off-by: Peter Waller <p@pwaller.net>
Fixes `pkgsCross.musl64.llvmPackages_16.clang.cc` on `x86_64-linux`,
which used to fail with `/bin/sh: clang-tblgen: not found`.
The same hack is used in other projects:
https://github.com/search?q=%2FCMAKE_CROSSCOMPILING_EMULATOR.%2B%5C%2Fusr%5C%2Fbin%5C%2Fenv%2F+NOT+is%3Afork&type=code
Comment from 30435a9d0f/build/cmake/HostLinuxToolchain.cmake (L64)
> Required to run host Linux executables during the build itself.
> An example would be https://gitub.com/KhronosGroup/Vulkan-Loader and
> its "asm_offset" program.
>
> NOTE: Alternatives have been tried unsuccessfully, i.e.:
>
> With $(set CMAKE_CROSSCOMPILING_EMULATOR), the build fails because
> the CMake ninja/Make script tries to find the executable in the current
> path, as in:
>
> [3/16] Generating gen_defines.asm
> FAILED: loader/gen_defines.asm
> cd /tmp/cc/build-Vulkan-Loader/loader && asm_offset GAS
> /bin/sh: asm_offset: command not found
> ninja: build stopped: subcommand failed.
>
> With $(set CMAKE_CROSSCOMPILING_EMULATOR ""), the build fails because
> the shell cannot find the "" program as in:
>
> [3/16] Generating gen_defines.asm
> FAILED: loader/gen_defines.asm
> cd /tmp/cc/build-Vulkan-Loader/loader && "" /tmp/cc/build-Vulkan-Loader/loader/asm_offset GAS
> /bin/sh: : command not found
> ninja: build stopped: subcommand failed.
>
> It seems that the root of the problem comes from how the CMake function
> cmCustomCommandGenerator::GetArgc0Location() computes the target
> executable's location. At this point it's unclear whether this is a CMake
> bug or a feature.
Risicle discovered this hack.
Co-authored-by: Robert Scott <code@humanleg.org.uk>
In the default `fixupPhase`, the output of `substituteAllStream` is
streamed into the setup hook. With `NIX_DEBUG` enabled, e.g. via
`stdenv.cc.bintools.overrideAttrs { NIX_DEBUG = 6; }`,
the build log contains:
```
@expandResponseParams@ -> /nix/store/yl01rd58vp4m8bbhkihpk132cprfmx6f-expand-response-params/bin/expand-response-params
...
```
To work around intermittent build failures with clang 16, the stdenv
attempted to pass arguments on the command-line on newer versions of
macOS. Unfortunately, the larger `ARG_MAX` is still not large enough to
build qtwebengine. This commit reverts the `NIX_CC_NO_RESPONSE_FILE`
logic in the stdenv. The changes to cc-wrapper in #245282 are needed for
clang 16 to prevent the above-mentioned build failures.
This fixes pyicu (and any other package that uses `icu-config` instead
of the CMake module or some other mechanism to get the build flags).
What happened here is that the bootstrap disables `patchShebangs` to avoid
propagating the bootstrap tools to the final stdenv (due to `sh` and
`bash` being on the `PATH` from the bootstrap tools). Because of that,
the `#!/bin/sh` line in `icu-config` was not updated, causing it to
invoke the system bash on Darwin. While that is undesirable in its own
right, when the system bash is invoked as `sh`, `echo -n` will print
`-n`, resulting in the breakage seen in https://github.com/NixOS/nixpkgs/pull/241951#issuecomment-1627604354.
The fix is to build bash earlier in the bootstrap while making sure it
is picked up over the one in the bootstrap tools. That allows
`patchShebangs` to be enabled during the bootstrap. Any package with
scripts that is included in the final stdenv should now have its
scripts’ shebang lines properly patched.
When sandboxing is enabled, the hook tries to run `install_name_tool`
and fails because the system one is inaccessible. Having it use
`targetPrefix` allows it to find and use the cross-install_name_tool.
```
nix-repl> (pkgs.htop.overrideAttrs { pname = "hello-overriden"; }).pname
error:
… while evaluating a branch condition
at /nix/store/phn5cahwacv9wjgalygw62x8l4xbl6x3-source/lib/customisation.nix:86:7:
85| in
86| if builtins.isAttrs result then
| ^
87| result // {
… while calling the 'isAttrs' builtin
at /nix/store/phn5cahwacv9wjgalygw62x8l4xbl6x3-source/lib/customisation.nix:86:10:
85| in
86| if builtins.isAttrs result then
| ^
87| result // {
(stack trace truncated; use '--show-trace' to show the full trace)
error: attempt to call something which is not a function but a set
at /nix/store/phn5cahwacv9wjgalygw62x8l4xbl6x3-source/pkgs/stdenv/generic/make-derivation.nix:58:21:
57| f = self: super:
58| let x = f0 super;
| ^
59| in
```
swift-corelibs uses libcurl to implement `NSURLSession` in Foundation
via the symbols exported by CF. Foundation is not built on Darwin, and
these symbols are not exported by the system CoreFoundation.
By not linking against libcurl, this breaks a cycle between CF and
libcurl. That should allow libcurl to drop the patch that disables
linking against SystemConfiguration and restore NAT64 support.
Unfortunately, the Darwin stdenv bootstrap still needs to build
dependencies that use `fetchFromGitHub`. While it can drop curl from the
final stdenv, it still needs to use it during the stdenv bootstrap.
In preparation for bumping the LLVM used by Darwin, this change
refactors and reworks the stdenv build process. When it made sense,
existing behaviors were kept to avoid causing any unwanted breakage.
However, there are some differences. The reasoning and differences are
discussed below.
- Improved cycle times - Working on the Darwin stdenv was a tedious
process because `allowedRequisites` determined what was allowed
between stages. If you made a mistake, you might have to wait a
considerable amount of time for the build to fail. Using assertions
makes many errors fail at evaluation time and makes moving things
around safer and easier to do.
- Decoupling from bootstrap tools - The stdenv build process builds as
much as it can in the early stages to remove the requirement that the
bootstrap tools need to be bumped in order to bump the stdenv itself. This
should lower the barrier to updates and make it easier to bump in the
future. It also allows changes to be made without requiring additional
tools be added to the bootstrap tools.
- Patterned after the Linux stdenv - I tried to follow the patterns
established in the Linux stdenv with adaptations made to Darwin’s
needs. My hope is this makes the Darwin stdenv more approachable for
non-Darwin developers who may need to interact with it. It also
allowed some of the hacks to be removed.
- Documentation - Comments were added explaining what was happening and
why things were being done. This is particularly important for some
stages that might not be obvious (such as the sysctl stage).
- Cleanup - Converting the intermediate `allowedRequisites` to
assertions revealed that many packages were being referenced that no
longer exist or have been renamed. Removing them reduces clutter and
should help make the stdenv bootstrap process more understandable.
Makes overrideAttrs usable in the same way that `override` can be used.
It allows the first argument of `overrideAttrs` to be either a function
or an attrset, instead of only a function:
hello.overrideAttrs (old: { postBuild = "echo hello"; })
hello.overrideAttrs { postBuild = "echo hello"; }
Previously only the first example was possible.
Co-authored-by: adisbladis <adisbladis@gmail.com>
Co-authored-by: matthewcroughan <matt@croughan.sh>
Unlike autoreconfHook, updateAutotoolsGnuConfigScriptsHook adds
almost no compilations. Therefore, in the interest of building the
same source code on every platform wherever possible, let's
eliminate the conditional guards around
updateAutotoolsGnuConfigScriptsHook in stdenv.
cctools-llvm is a replacement for cctools that replaces as much of cctools as it reasonably can with equivalents from LLVM. This was motivated by wanting to reduce dependencies on cctools, which is updated infrequently by upstream.
To provide a motivating example, the version of `strip` included in cctools cannot properly strip the archives in compiler-rt in LLVM 15. Paths to the bootstrap tools are left behind, resulting in failed requisites checks in the final stdenv build. Since `strip` needs to be replaced anyway, the opportunity was taken to replace other tools, provided they are functional replacements.
Note: This has to be done in cctools (or some equivalent) because some derivations (notably LLVM) use the bintools of the stdenv directly instead of going through the wrapper.
The following tools from LLVM are not used in this derivation:
* LLD - not fully compatible with ld64 yet and potentially too big of a change;
* libtool - not a drop-in replacement yet because it does not support linker passthrough, which is needed by xcbuild;
* lipo - crashes when running the LLVM test suite;
* install_name_tool - fails when trying to build swift-corefoundation; and
* ranlib - not completely a drop-in replacement, so leaving it out for now.
If other incompatibilities are found, the tools can be reverted or made conditional. For example, cctools `strip` is preferred on older versions of LLVM (which lack the compiler-rt issue) or when cctools itself is a new enough version, because `llvm-strip` on LLVM 11 produces files that older versions of `codesign_allocate` cannot process correctly.
One final caveat/note: Some tools are not duplicated or linked from cctools-port. The names of the tools and which ones were linked were determined based on what is provided upstream in Xcode and what is installed on a macOS system.
passAsFile passes the values of Nix bindings to the builder as
files, so if those values contained references, they wouldn't end up
in the inputDerivation output. To fix that, append the contents of
every such passed file to the output.
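To make the failure mode concrete, a hypothetical derivation where this
matters:

```nix
# Hypothetical: `script` is passed to the builder as a file, so the store
# path of `hello` it references would previously be missing from the
# inputDerivation output.
stdenv.mkDerivation {
  name = "pass-as-file-example";
  passAsFile = [ "script" ];
  script = ''
    ${hello}/bin/hello
  '';
  buildCommand = "touch $out";
}
```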
We only have shell builtins in this derivation, so we can't use cat.
The only way I know of appending the contents of one file to another
using only shell builtins is as I've done here, but it requires
putting the contents of the file on echo's argv. This might end up
causing problems with large files. Regardless, I think we should try
this, as a failure is better than silently producing an incorrect
result like the previous behavior.
`nix-2.4+` automatically filters `__contentAddressed` out of the
environment, but `nix-2.3` does not. This makes `.drv` files differ
between derivations with `__contentAddressed` unset and those with
`__contentAddressed = false`.
This change makes them equal by filtering out `__contentAddressed`
unless it's set to `true`.
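A minimal sketch of the idea (not the exact mkDerivation code):

```nix
# Keep __contentAddressed in the derivation attributes only when it is
# actually true; otherwise drop it so unset and `false` produce the same .drv.
removeAttrs attrs [ "__contentAddressed" ]
// lib.optionalAttrs (attrs.__contentAddressed or false) {
  __contentAddressed = true;
}
```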
musl now supports RISC-V. Let's centralise musl availability checks
in musl.meta.platforms, so we don't have to keep cleaning up ad-hoc
checks like this all over the tree.
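For example, call sites can consult the package's own metadata instead of
hard-coding platform names (illustrative; not necessarily the exact
expression used by the change):

```nix
# True exactly when musl's meta.platforms/badPlatforms allow hostPlatform.
lib.meta.availableOn stdenv.hostPlatform musl
```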
The stdenv wouldn't build with it, as
compiler-rt-libc-11.1.0/lib/darwin/libclang_rt.*_osx.a
retained references to SDKs (which we forbid for the final stdenv).
Assigned authorship to Trofi; I just bisected and added the condition.
https://github.com/NixOS/nixpkgs/pull/224669#issuecomment-1518225496
We have managed to migrate NIX_CFLAGS_COMPILE to the env attrset well
enough that we don't need to support having it at the top level.
mkDerivation will throw if there's an attr in both env and toplevel, so
no need to worry about that.
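For reference, the migration in question looks roughly like this in a
package expression (the flag value is illustrative):

```nix
stdenv.mkDerivation {
  pname = "example";
  version = "0.1";
  # previously written as a top-level attribute:
  #   NIX_CFLAGS_COMPILE = "-Wno-error";
  env.NIX_CFLAGS_COMPILE = "-Wno-error";
}
```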
I broke `pkgsMusl` with #209870.
Something odd is happening with `xgcc` (the temporary compiler that
should be used only to compile `gcc`, although we are using it to
compile a temporary `patchelf` too) and `libstdc++`.
The temporary fix in this commit is to use `-static-libstdc++` for
the ephemeral `patchelf` built by `xgcc`. It isn't pretty, but it
appears to work.
Incorporates:
- https://github.com/NixOS/nixpkgs/pull/224945
The stage before `xgcc` creates the first compiled patchelf
(i.e. not from bootstrapFiles).
The `xgcc` stage was inadvertently switching *back* to using the
patchelf *from* the bootstrapFiles.
The first commit in this PR adds self-checking comments (assertions)
to make it clear where each stage's patchelf comes from.
The second commit fixes the bug, and updates the self-checking
comments.
Without the change, when I attempt to build `nixpkgs` with weekly
`gcc-13` (it pulls in `flex` as a build input) I get a build
failure caused by mixing glibc versions at library load time:
...-binutils-patchelfed-ld-2.40/bin/ld: ...-xgcc-13.0.0/libexec/gcc/x86_64-unknown-linux-gnu/13.0.1/liblto_plugin.so:
error loading plugin: ...-bootstrap-tools/lib/libpthread.so.0: undefined symbol: __libc_vfork, version GLIBC_PRIVATE
The change disables the LTO plugin entirely to avoid loading a mix of `glibc` versions.
This commit adds `gcc/common/checksum.nix`, which contains code
common to both gcc11 and gcc12, implementing the `enableChecksum`
feature.
When gcc's built-in bootstrap (`--enable-bootstrap`) is used, gcc
compiles itself three times and compares a hash of the unlinked `.o`
files from the second and third compilation. The
`enableChecksum=true` parameter performs the same comparison as part
of the `postInstall` phase.
Notably, `enableChecksum=true` can be used with `enableBootstrap=false`.
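A hypothetical usage sketch, using the parameter names from the text above
(the exact attribute path of the unwrapped compiler may differ):

```nix
# Hypothetical: verify reproducibility of the compiler without paying for
# gcc's own three-stage --enable-bootstrap.
gcc12.cc.override {
  enableChecksum = true;
  enableBootstrap = false;
}
```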
Co-authored-by: Sandro <sandro.jaeckel@gmail.com>
Our bootstrap-files unpacker has always relied on a lot of unstated
assumptions, one of them being that every library has a DT_NEEDED
for librt.so, so patchelf'ing something into the RUNPATH of
librt.so means that it will be searched for every library load in
all of the bootstrap-files.
Unfortunately that assumption is not true for libgcc.
This causes problems, because patchelf links against libgcc (and
against libstdc++, which links against libgcc). So we can't use
patchelf on libgcc, because it needs libgcc, so patchelf doesn't
work until libgcc is patchelfed.
The robust solution here is to use static linking for the copy of
patchelf that is shipped with the bootstrap-files. We don't have to
go all the way to a statically linked libc; just -static-libgcc and
-static-libstdc++ are enough to break the circular dependency.
Right now our bootstrapFiles-selecting algorithm uses the
`loongson2f.nix` bootstrapFiles (which were not built by Hydra).
These bootstrapFiles don't work anymore. They were added in 2010 by
40405d03ac.
This commit causes mipsel-linux native builds to use the Hydra-built
bootstrap files from this PR instead:
https://github.com/NixOS/nixpkgs/pull/183487
The NIX_LIB64|32_IN_SELF_RPATH environment variables control whether
to add lib64 and lib32 to rpaths. However, they're set depending
on the build platform, not the target platform, and thus their values
are incorrect for cross-builds.
On the other hand, setting them according to the build platform introduces
pointless differences in build outputs; see #221350 for details.
This change fixes the issues by boldly removing the NIX_LIB*_IN_SELF_RPATH
facility altogether, in the hope that it is no longer necessary. They
were introduced in 2009, long before nixpkgs had good support for
cross-builds.
Fixes #221350
See https://github.com/NixOS/nixpkgs/pull/222792#pullrequestreview-1356114111
You can't just `lib.filter _ lib.systems.all` -- that throws away
important information, leading to nixpkgs disagreeing with itself
like this:
```
$ NIXPKGS_ALLOW_BROKEN=1 nix-instantiate . -A pkgsStatic.systemd
error: Package ‘systemd-252.5’ in ... is only supported on ... x86_64-linux but not on requested x86_64-linux, refusing to evaluate.
```
After:
```
$ NIXPKGS_ALLOW_BROKEN=1 nix-instantiate . -A pkgsStatic.systemd
error: Package ‘systemd-252.5’ in ... is not available on the requested hostPlatform:
hostPlatform.config = "x86_64-unknown-linux-musl"
package.meta.platforms = [
"aarch64-linux"
"armv5tel-linux"
"armv6l-linux"
"armv7a-linux"
"armv7l-linux"
"i686-linux"
"m68k-linux"
"microblaze-linux"
"microblazeel-linux"
"mipsel-linux"
"mips64el-linux"
"powerpc64-linux"
"powerpc64le-linux"
"riscv32-linux"
"riscv64-linux"
"s390-linux"
"s390x-linux"
"x86_64-linux"
]
package.meta.badPlatforms = [
{
isStatic = true;
parsed = { };
}
]
, refusing to evaluate.
```
The primary motivating example is openssl:
Before the change, a full package build took 1m54s.
After the change, a full package build takes 59s.
About a 2x speedup.
The difference is visible because openssl builds hundreds of manpages,
spawning a perl process per manual in the `install` phase. Such a workload
is very easy to parallelize.
Another example would be `autotools`+`libtool`-based build systems where
the install step requires relinking. The more binaries there are to
relink, the more there is to gain from doing it in parallel.
The change enables parallel installs by default only for builds that
already have parallel builds enabled. There is a high chance those build
systems already handle parallelism well, but some packages will fail
(see the sketch after the list below).
Consistently propagated enableParallelBuilding to:
- cmake (enabled by default, similar to builds)
- ninja (set parallelism explicitly, don't rely on default)
- bmake (enable when requested)
- scons (enable when requested)
- meson (set parallelism explicitly, don't rely on default)
- waf (set parallelism explicitly, don't rely on default)
- qmake-4/5/6 (enable by default, similar to builds)
- xorg (always enable, similar to builds)
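A rough sketch of the resulting default, as seen from mkDerivation
(simplified; not the literal code):

```nix
{
  enableParallelBuilding = attrs.enableParallelBuilding or false;
  # New: parallel installs follow parallel builds unless set explicitly.
  enableParallelInstalling =
    attrs.enableParallelInstalling or (attrs.enableParallelBuilding or false);
}
```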
Hydra job building them: https://hydra.nixos.org/build/208909151
The bootstrap files can be reproduced on the commit 21ec906463, e.g. by:
cat $(nix-build pkgs/top-level/release.nix -QA stdenvBootstrapTools.aarch64-linux.dist)/nix-support/hydra-build-products
file tarball /nix/store/kdpbw0plmjqlafjnpbz31ja51m4bd2dk-stdenv-bootstrap-tools/on-server/bootstrap-tools.tar.xz
file busybox /nix/store/kdpbw0plmjqlafjnpbz31ja51m4bd2dk-stdenv-bootstrap-tools/on-server/busybox
and the hashes as well:
nix hash file /nix/store/kdpbw0plmjqlafjnpbz31ja51m4bd2dk-stdenv-bootstrap-tools/on-server/bootstrap-tools.tar.xz
sha256-aJvtsWeuQHbb14BGZ2EiOKzjQn46h3x3duuPEawG0eE=
nix hash path /nix/store/kdpbw0plmjqlafjnpbz31ja51m4bd2dk-stdenv-bootstrap-tools/on-server/busybox
sha256-0MuIeQlBUaeisqoFSu8y+8oB6K4ZG5Lhq8RcS9JqkFQ=
You can check this on any machine, as the builds are on cache.nixos.org
but also you can reproduce the hashes when rebuilt on aarch64-linux HW.
This was disabled here: b86e62d30d (diff-282a02cc3871874f16401347d8fadc90d59d7ab11f6a99eaa5173c3867e1a160)
h/t to @teh: b86e62d30d (commitcomment-77916294)
for pointing out that the failure that @matthewbauer was
seeing was caused by the `separate-debug-info.sh` `build-id` length
requirement that #146275 will relax
`lld` has had `--build-id` support dating back to LLVM4: https://reviews.llvm.org/D18091
This predates every `llvmPackages_` version currently in nixpkgs (and
certainly every version actually still used in `useLLVM` stdenvs) so
with the previous commit (asking `ld` for sufficiently long SHA1 hashes)
I think we can safely enable `separateDebugInfo` when using LLVM
bintools.
With structuredAttrs, lists will be bash arrays, which cannot be exported;
that will be an issue with some patches and some wrappers like cc-wrapper.
This makes it clearer that NIX_CFLAGS_COMPILE must be a string, as lists
in env cause an eval failure.
PR #208478 added a lot of documentation about which packages were
rebuilt in each stage of the stdenv bootstrap. However nothing
checks that these comments agree with reality; they can bitrot over
time. This PR rewrites those comments as assertions, so they cannot
bitrot.
This conversion did expose some ambiguity in our scheme for naming
the stages. Suppose that `pkgs.stdenv.name == "stdenv-stage4"`; then
which of these is "the stage4 coreutils"?
```
pkgs.coreutils
pkgs.stdenv.__bootPackages.coreutils
```
The choice is arbitrary, and both choices have confusing corner
cases. We should revisit this at some point.