The file is part of the final stdenv, and llvm_19 requires it for tests.
Add the file to the path of the early-stage stdenvs for the upcoming
switch to llvm_19.
This is basically harmless for the same reason as it is for Clang, and
lets us avoid doing wrapper hacks to fix things like the .NET build.
This reverts commit 4340a5addb.
The bootstrap tools linker sometimes crashes when trying to link the
sqlite3 tests, which causes the bootstrap Python not to have the sqlite3
module. This causes the freezegun module to fail to build later in the
bootstrap. Using the 11.0 SDK fixes the problem.
Upstream Python supports building with a newer SDK and back-deploying,
so this change should not negatively affect users on pre-11.0 releases.
While it would be nice if this could be split, there are too many
changes as part of the cleanup and improvements, including:
- Refactoring all propagated packages into functions that can be used to
ensure that packages are propagated only at the expected stages;
- Using a sanity-checking merge function to ensure that packages are
only propagated by one of the above functions (see the sketch after
this list);
- Reducing the number of Python builds during the bootstrap to one;
- Removing the extra sysctl stage;
- Using the LLVM bootstrap to build LLVM, clang, libc++, etc;
- Propagating llvmPackages_<version> in the final stdenv, so that
packages needing that version specifically don’t have to rebuild it;
- Bootstrapping with the new Darwin SDK; and
- Reducing the overall number of paths built during a bootstrap by ~33%.
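A minimal sketch of what such a sanity-checking merge could look like
(plain Nix; the names and shape are illustrative, not the actual
implementation):

  let
    # Merge two per-stage propagation sets, refusing any package that
    # would be propagated by more than one function.
    mergeDisjoint = a: b:
      let shared = builtins.attrNames (builtins.intersectAttrs a b);
      in if shared == [ ]
         then a // b
         else throw "propagated by more than one function: ${toString shared}";
  in
    mergeDisjoint { llvmPackages = 1; } { curl = 2; }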
For a long time, we've had `crossLibcStdenv`, `*Cross` libc attributes,
and `*bsdCross` pre-libc package sets. This was always bad because
having "cross" things is "not declarative": the naming doesn't reflect
what packages *need* but rather how we *provide* something. This is
ugly, and creates needless friction between cross and native building.
Now, almost all of these `*Cross` attributes are gone; only these are
kept:
- Glibc's and Musl's are kept, because those packages are widely used
and I didn't want to risk changing the native builds of those at this
time.
- Generic `libcCross`, `threadsCross`, and friends, because these relate
to the convoluted GCC bootstrap, which still needs to be redone.
The BSD and obscure Linux or freestanding libcs have conversely all
been made to use a new `stdenvNoLibc`, which is like the old
`crossLibcStdenv` except:
1. It is usable for native and cross alike.
2. It is named according to what it *is* ("a standard environment
without libc but with a C compiler"), rather than some non-compositional
jargon ("the stdenv used for building libc when cross compiling",
yuck).
I should have done this change long ago, but I was stymied by
"infinite recursions". The problem was that in too many cases we
override `stdenv` to *remove* things we don't need, and this risks
cycles, since those more minimal stdenvs are used to build things in the
more maximal stdenvs.
The solution is to pass `stdenvNoCC` to `stage.nix`, so we can override
to *build up* rather than *tear down*. For now, the full `stdenv` is also
passed, so I don't need to change the native bootstraps, but I can see
this changing as we make things more uniform and clean those up.
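A minimal sketch of the "build up" direction, assuming the existing
`overrideCC` adapter and a libc-free cross compiler attribute (the
compiler name here is illustrative):

  # Start from stdenvNoCC and add only a C compiler, rather than
  # starting from the full stdenv and stripping libc back out.
  { overrideCC, stdenvNoCC, buildPackages }:

  overrideCC stdenvNoCC buildPackages.gccWithoutTargetLibc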
(adapted from commit 51f1ecaa59)
(adapted from commit 1743662e55)
This change switches the xar package from an unmaintained fork of the
original project to the Apple Open Source tarball. See also
https://repology.org/project/xar/versions
Since the package is essentially rewritten from scratch, we take the
opportunity to move it to pkgs/by-name/xa/xar (formatted with nixfmt).
We also remove Windows from the supported platforms because, even before
this change, pkgsCross.mingwW64.xar failed with:
xar> configure: error: can not detect the size of your system's uid_t type
This reduces the number of Python builds in the bootstrap to two: a minimal build and a normal build. Both builds have LTO disabled because enabling LTO correctly in Python requires `llvm-ar` from `stdenv.cc.cc.libllvm`, which does not exist in the bootstrap.
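A hypothetical sketch of how that condition could be expressed, assuming
the Python derivation exposes an `enableLTO` argument:

  { stdenv, python3 }:

  python3.override {
    # Only enable LTO when the compiler actually carries an LLVM (and
    # therefore llvm-ar); the bootstrap tools compiler does not.
    enableLTO = stdenv.cc.isClang && stdenv.cc.cc ? libllvm;
  }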
Fetchers can use the `curl` binary from the bootstrap tools. Allowing packages in the Darwin bootstrap to link curl makes any curl update cause a full rebuild on Darwin, which is undesirable.
GNU binutils is not preferred on Darwin, and newer versions have issues building. Make it an evaluation error to use it in the Darwin stdenv bootstrap.
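A minimal sketch of such an evaluation-time guard (overlay form,
attribute name, and message are illustrative):

  self: super: {
    binutils-unwrapped =
      if super.stdenv.hostPlatform.isDarwin
      then throw "GNU binutils must not be used in the Darwin stdenv bootstrap"
      else super.binutils-unwrapped;
  }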
- Only link binaries that exist for stage 0 cctools and LLVM bintools;
- Drop cctools-llvm in favor of the updated darwin.binutils; and
- Update llvm-manpages Python overrides (needed for newer versions of LLVM).
Most Linux distributions are enabling this these days and it does
protect against real world vulnerabilities as demonstrated by
CVE-2018-16864 and CVE-2018-16865.
Fix #53753.
Information on LLVM version support was gleaned from
6609892a2d68e07da3e5092507a730
Information on GCC version support was a lot harder to gather,
but both 32-bit and 64-bit ARM do appear to be supported
based on the test suite.
libiconv-darwin depends on Meson, which (indirectly) depends on
libiconv. When libiconv-darwin is set as libiconv, it will cause an
infinite recursion. Avoid the infinite recursion by using libiconvReal
in stage 1. Every stage after that can use libiconv-darwin.
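A minimal sketch of how the stage-1 package set could express this
(overlay form is illustrative; later stages keep libiconv-darwin):

  self: super: {
    # Stage 1 only: use the non-Darwin implementation so Meson's
    # (indirect) libiconv dependency does not loop back on itself.
    libiconv = self.libiconvReal;
  }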
The cc and bintools wrappers contained ad hoc bootstrapping logic for
expand-response-params (which was callPackage-ed in a let binding). This
led to the strange situation that the bootstrapping logic related to
expand-response-params was split between the wrapper derivations (where
it was duplicated) and the actual stdenv bootstrapping.
To clean this up, the wrappers should simply take expand-response-params
as an ordinary input: they need an adjacent expand-response-params (i.e.
one that runs on their host platform), but don't care how it is provided.
Providing this is only problematic during stdenv bootstrapping where we
have to pull it from the previous stage at times.
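A hypothetical sketch of the resulting shape (argument and attribute
names illustrative):

  { stdenvNoCC, expand-response-params, ... }:

  stdenvNoCC.mkDerivation {
    name = "cc-wrapper-sketch";
    # expand-response-params is just another input that must run on the
    # host platform; during bootstrapping the previous stage's build can
    # be passed in here.
    inherit expand-response-params;
  }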
We don't need to artificially make sure that we can execute the wrapper
scripts on the build platform by using stdenv's shell (which comes from
buildPackages) since our cross infrastructure will get us the wrapper
from buildPackages. The upside of this change is that cross-compiled
wrappers (e.g. pkgsCross.aarch64-multiplatform.gcc) will actually work
when executed!
For bootstrapping this is also not a problem, since we have a long
build->build platform chain so runtimeShell is just as good as
stdenvNoCC.shell. We do fall back to old ways, though, by explicitly
using the bootstrap-tools shell in stage2, so the adjacent bash is only
used from stage4 onwards. This is unnecessary in principle (I'll try
removing this hack in the future), but ensures this change causes zero
rebuilds.
Setting the SDK root by default allows `overrideSDK` to correctly set
the SDK version when using a different SDK. It also allows the correct
SDK version to be set when using an older deployment target. Not setting
the correct SDK version can result in unexpected behavior at runtime.
Examples:
* Automatic dark mode switching requires linking against an SDK version
of 10.14 or newer. With the current behavior, the only way to do this
is by using a 10.14+ deployment target, even when the application
supports older platforms when built with a newer SDK.
* MetalD3D checks that the system version is at least 14.0. The API it
uses returns a compatibility version when the SDK is older than
11.0, which causes it to display an error and terminate the
application even when its requirements are all met.
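A hypothetical usage sketch, assuming an `overrideSDK` helper is in
scope: with the SDK root set by default, pointing a package at a newer
SDK should also give it the matching SDK version at runtime.

  { stdenv, overrideSDK, hello }:

  hello.override {
    # Build against the 11.0 SDK; the wrapper now also sets the matching
    # SDK root and version instead of leaving them unset.
    stdenv = overrideSDK stdenv "11.0";
  }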