fixes e.g.:
pkgsMusl.libfsm
pkgsMusl.libiscsi
pkgsMusl.nsjail
pkgsMusl.pv
match strings have whitespace on either side, which previously
failed to match leading/trailing arguments
fixes:
pkgsMusl.bulletml
pkgsMusl.proot
pkgsMusl.python3
Debian explains this issue well in the dpkg-buildflags manpage:
-fPIE
Can be linked into any program, but not a shared library (recommended).
-fPIC
Can be linked into any program and shared library.
On projects that build both programs and shared libraries you might need to
make sure that when building the shared libraries -fPIC is always passed last
(so that it overrides any previous -fPIE) to compilation flags such as CFLAGS.
(from https://manpages.debian.org/bullseye/dpkg-dev/dpkg-buildflags.1.en.html#hardening)
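As a concrete illustration (file names are made up), appending -fPIC after the
project's usual flags lets it win for the shared-library objects:
```
$ gcc $CFLAGS -fPIE -fPIC -c foo.c -o foo.o   # the later -fPIC overrides the earlier -fPIE
$ gcc -shared -o libfoo.so foo.o
```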
In newer versions of mingw, programs compiled with FORTIFY_SOURCE need
to link to libssp or they will have link-time errors.
gmp has been broken since @pstn updated mingw-w64 in c60a0b0447.
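For illustration (an assumed invocation, not taken from the gmp build), adding
-lssp resolves the otherwise-undefined fortified symbols:
```
$ x86_64-w64-mingw32-gcc -O2 -D_FORTIFY_SOURCE=2 -c main.c
$ x86_64-w64-mingw32-gcc main.o -o main.exe -lssp
```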
GPRbuild is a multi-language build system developed by AdaCore which
is mostly used for building Ada-related projects using GNAT.
Since GPRbuild is used to build itself and its dependency library
XML/Ada, we first build a bootstrap version of it using the provided
bash build script bootstrap.sh as the gprbuild-boot derivation.
gprbuild-boot is then used to build xmlada and the proper gprbuild
derivation.
GPRbuild has its own search path mechanism via GPR_PROJECT_PATH which
we address via a setupHook. It currently works quite similarly to the
pkg-config one: it accumulates all inputs into GPR_PROJECT_PATH,
GPR_PROJECT_PATH_FOR_BUILD etc. This is quite limited at the
moment, as we don't have a gprbuild wrapper yet which understands the
_FOR_BUILD suffix. We'll need to address this in the future,
but it is currently basically impossible to test, since the distinction
only affects cross-compilation and it is not possible to build a GNAT
cross-compiler in nixpkgs at the moment (I'm working on changing that,
however).
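A minimal sketch of such a setup hook, assuming the same addEnvHooks mechanism
the pkg-config wrapper uses; the function name and the share/gpr directory are
illustrative, not the exact hook:
```
addGprProjectPath () {
    # accumulate each input's GPR project directory, if it provides one
    if [ -d "$1/share/gpr" ]; then
        export GPR_PROJECT_PATH="${GPR_PROJECT_PATH-}${GPR_PROJECT_PATH:+:}$1/share/gpr"
    fi
}
addEnvHooks "$targetOffset" addGprProjectPath
```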
Another issue we had to solve was GPRbuild not finding the right GNAT
via its gprconfig tool: GPRbuild has a knowledge base with compiler
definitions which run some checks and collect info about binaries
which are in PATH. In the end the first compiler in PATH that supports
the desired language is selected.
We want GPRbuild to discover our wrapped GNAT, since the unwrapped one
is incapable of producing working binaries: it won't find the
crt*.o objects distributed with libc. However, GPRbuild needs to find
the Ada runtime distributed with GNAT, which is not part of the wrapper
derivation, so it will skip the wrapper and select the unwrapped GNAT.
Symlinking the unwrapped GNAT's lib directory into the wrapper fixes this
problem, but breaks linking in some cases (e.g. when linking against
OMP from gcc, the runtime variant will shadow the proper dynamic lib
from buildInputs). Additionally, GPRbuild uses gnatls, which is not part
of the wrapper, as an indicator that it has found GNAT.
The solution we adopted here is to install a custom compiler
description into gprbuild's knowledge base which properly detects the
nixpkgs GNAT wrapper: it uses gnatmake to detect GNAT instead of
gnatls and discovers the runtime via a symlink we add to
`$out/nix-support`. This additional definition is enough to properly
detect GNAT, since the plain wrapped gcc detection works out of the
box. It may, however, be necessary to add special definitions for
other languages in the future where gprbuild also needs to discover
the runtime.
One future improvement would be to install libgpr into a separate
output or split it into a separate derivation (which would require
always linking gprbuild statically, since otherwise we would end up with a
cyclic dependency).
This will begin the process of breaking up the `useLLVM` monolith. That
is good in general, but I hope it will be good for NetBSD and Darwin in
particular.
Co-authored-by: sterni <sternenseemann@systemli.org>
This PR adds a new aarch64 android toolchain, which leverages the
existing crossSystem infrastructure and LLVM builders to generate a
working toolchain with minimal prebuilt components.
The only thing that is prebuilt is the bionic libc. This is because it
is practically impossible to compile bionic outside of an AOSP tree. I
tried and failed; braver souls may prevail. For now I just grab the
relevant binaries from https://android.googlesource.com/.
I also grab the msm kernel sources from there to generate headers. I've
included a minor patch to the existing kernel-headers derivation in
order to expose an internal function.
Everything else, from binutils up, is using stock code. Many thanks to
@Ericson2314 for his help on this, and for building such a powerful
system in the first place!
One motivation for this is to be able to build a toolchain which will
work on an aarch64 linux machine. To my knowledge, there is no existing
toolchain for an aarch64-linux builder and an aarch64-android target.
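As a usage sketch (the exact attribute name is assumed here, not taken from
this patch), the wrapped cross compiler can then be built with:
```
$ nix-build '<nixpkgs>' -A pkgsCross.aarch64-android.stdenv.cc
```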
Also begin work on cross compilation, though that will have to
be finished later.
The patches are based on the first version of
https://reviews.llvm.org/D99484. It's very annoying to do the
back-porting but the review has uncovered nothing super major so I'm
fine sticking with what I've got.
Beyond making the outputs work, I also strove to re-sync the packages,
as they have been drifting pointlessly apart for some time.
----
Other misc notes, highly incomplete
- llvm-config-native and llvm-config are put in `dev` because they are
tools just for build time.
- Clang no longer has an lld dep. That was introduced in
db29857eb3, but if clang needs help
finding lld when it is used we should just pass it flags / put it in the
resource dir. Providing it at build time increases critical path
length for no good reason.
----
A note on `nativeCC`:
`stdenv` takes tools from the previous stage, so:
1. `pkgsBuildBuild`: `(?1, x, x)`
2. `pkgsBuildBuild.stdenv.cc`: `(?0, ?1, x)`
while:
1. `pkgsBuildBuild`: `(?1, x, x)`
2. `pkgsBuildBuild.targetPackages`: `(x, x, ?2)`
3. `pkgsBuildBuild.targetPackages.stdenv.cc`: `(?1, x, x)`
In a typical build environment the toolchain will use the value of the
MACOSX_DEPLOYMENT_TARGET environment variable to determine the version
of macOS to support. When cross compiling there are two distinct
toolchains, but they will look at this single environment variable. To
avoid contamination, we always set the equivalent command line flag
which effectively disables the toolchain's internal handling.
Prior to this change, the MACOSX_DEPLOYMENT_TARGET variable was
ignored, and the toolchains always used the Nix platform
definition (`darwinMinVersion`) unless overridden with command line
arguments.
This change restores support for MACOSX_DEPLOYMENT_TARGET, and adds
nix-specific MACOSX_DEPLOYMENT_TARGET_FOR_BUILD and
MACOSX_DEPLOYMENT_TARGET_FOR_TARGET for cross compilation.
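For example, a cross build can now pin the toolchains independently, following
the usual _FOR_BUILD/_FOR_TARGET role-suffix convention (the version numbers
here are made up):
```
$ export MACOSX_DEPLOYMENT_TARGET_FOR_BUILD=10.12   # build toolchain
$ export MACOSX_DEPLOYMENT_TARGET=11.0              # host toolchain
$ export MACOSX_DEPLOYMENT_TARGET_FOR_TARGET=11.0   # target toolchain
```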
The check for including the C++ standard library headers was nested inside the
check for linking with the C++ standard library. As a result, the `-nostdlib`
flag incorrectly implied `-nostdinc++`, which made it virtually impossible to
partially link C++ objects.
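A sketch of the intended, un-nested checks (variable names are illustrative,
not the exact cc-wrapper code):
```
if [ "$isCxx" = 1 ]; then
    # headers are only skipped by -nostdinc++ ...
    if [ "$cxxInclude" = 1 ]; then
        NIX_CFLAGS_COMPILE+=" $NIX_CXXSTDLIB_COMPILE"
    fi
    # ... while the library is only skipped by -nostdlib
    if [ "$cxxLibrary" = 1 ]; then
        NIX_CFLAGS_LINK+=" $NIX_CXXSTDLIB_LINK"
    fi
fi
```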
Fixes build failures with clang:
clang-7: error: unknown argument: '-fPIC -target'
clang-7: error: no such file or directory: '@<(printf %qn -O2'
clang-7: error: no such file or directory: 'x86_64-apple-darwin'
Introduced by 60c5cf9cea in #112449
The `platform` field is pointless nesting: it's just stuff that happens
to be defined together, and that should be an implementation detail.
This instead makes `linux-kernel` and `gcc` top level fields in platform
configs. They join `rustc` there [all are optional], which was put there
and not in `platform` in anticipation of a change like this.
`linux-kernel.arch` in particular also becomes `linuxArch`, to match the
other `*Arch`es.
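For instance, a platform's gcc settings can now be read directly at the top
level (the attribute path is chosen purely for illustration):
```
$ nix-instantiate --eval '<nixpkgs>' -A lib.systems.examples.raspberryPi.gcc.arch
```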
The next step after this is to combine the *specific* machines from
`lib.systems.platforms` with `lib.systems.examples`, keeping just the
"multiplatform" ones for defaulting.
When compiling a simple Ada program with `gcc` from `gnat10`, the
following warnings are shown:
```
$ gcc -c conftest.adb
gnat1: warning: command-line option ‘-Wformat=1’ is valid for C/C++/ObjC/ObjC++ but not for Ada
gnat1: warning: command-line option ‘-Wformat-security’ is valid for C/C++/ObjC/ObjC++ but not for Ada
gnat1: warning: ‘-Werror=’ argument ‘-Werror=format-security’ is not valid for Ada
$ echo $?
0
```
Usually this is merely spammy when compiling Ada programs inside a Nix
derivation, but certain configure scripts (such as the ./configure script
from the gcc that's built by coreboot's `make crossgcc` command) fail
entirely when they get that warning output.
https://nixos.wiki/wiki/Coreboot currently suggests manually running
> NIX_HARDENING_ENABLE="${NIX_HARDENING_ENABLE/ format/}" make crossgcc
… but actually teaching the nixpkgs-provided cc wrapper that `format`
isn't supported as a hardening flag seems to be the more canonical way
to do this in nixpkgs.
After this, Ada programs still compile:
```
$ gcc -c conftest.adb
$ echo $?
0
```
And the compiler output is empty.
We need to set FC so that CMake and other tools can find the fortran
compiler. Also we need to limit the hardening flags since fortify and
format don’t work with fortran.
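Roughly, the hook boils down to something like this (an illustrative sketch,
not the exact hook):
```
export FC=gfortran
# drop the hardening flags that do not work with gfortran
export NIX_HARDENING_ENABLE="${NIX_HARDENING_ENABLE/fortify/}"
export NIX_HARDENING_ENABLE="${NIX_HARDENING_ENABLE/format/}"
```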
Fixes #88449
I hate the thing too even though I made it, and would rather just get rid of
it. But we can't do that yet. In the meantime, this brings us more
in line with autoconf and will make it slightly easier for me to write a
pkg-config wrapper, which we need.
Regression introduced in PR #81191 (80729b6787). The file does not exist
at some point during the bootstrap of pkgsStatic.busybox, which is used by nix
(by default).
I tested the builds.
If an empty string is passed to `-idirafter`, it breaks gcc. This commit makes
the stdenv less fragile by expanding out the shell glob and ensuring no empty
arguments get passed.
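A sketch of the safer expansion (path and variable names are illustrative):
non-matching globs are skipped entirely, so no empty -idirafter argument can
be produced:
```
extraAfter=()
for dir in "$ccBin"/../lib/gcc/*/*/include-fixed; do
    if [ -d "$dir" ]; then
        extraAfter+=(-idirafter "$dir")
    fi
done
```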
Before, we'd always use `cc = null`, and check for that. The problem is
this breaks for cross compilation to platforms that don't support a C
compiler.
It's a very subtle issue. One might think there is no problem because we
have `stdenvNoCC`, and presumably one would only build derivations that
use that. The problem is that one still wants to use tools at build-time
that are themselves built with a C compiler, and those are gotten via
"splicing". The runtime version of those deps will explode, but the
build time / `buildPackages` versions of those deps will be fine, and
splicing attempts to work around this by using `builtins.tryEval` to filter out
any broken "higher priority" packages (runtime is the default and
highest priority) so that both `foo` and `foo.nativeDrv` works.
However, `tryEval` only catches certain evaluation failures (e.g.
exceptions), and not arbitrary failures (such as `cc.attr` when `cc` is
null). This means `tryEval` fails to let us use our build time deps, and
everything comes apart.
The right solution is, as usual, to get rid of splicing. Or, barring
that, to make it so `foo` never works and one has to explicitly do
`foo.*`. But that is a much larger change, and certainly one unsuitable
to be backported to stable.
Given that, we instead make an exception-throwing `cc` attribute, and
create a `hasCC` attribute for those derivations which wish to
conditionally use a C compiler: instead of doing `stdenv.cc or null ==
null` or something similar, one does `stdenv.hasCC`. This allows querying
without "tripping" the exception, while also allowing `tryEval` to work.
No platform without a C compiler is yet wired up by default. That will
be done in a following commit.
This avoids dumping -Wall warnings when they appear in framework
headers. As a result, we are closer to how regular headers are
included (via -isystem).
Also remove the ccIncludeFlag lookup; it was unused and not very useful.
We want to make sure this value is explicitly set. Inferring it for
every arch leads to annoying failures like:
https://hydra.nixos.org/build/92583832/
Perhaps we can enable it in the future with some smarter handling of
cc-wrapper.sh.
Adds pkgsCross.wasm32 and pkgsCross.wasm64. Use them to build Nixpkgs
with a WebAssembly toolchain.
stdenv/cross: use static overlay on isWasm
Wasm doesn’t make sense dynamically linked.
It is useful to make these dynamic and not bake them into gcc. This
means we don’t have to rebuild gcc to change these values. Instead, we
will pass cflags to gcc based on platform values. This was already
done hackily for android gcc (which is multi-target), but not for our
own gccs which are single target.
To accomplish this, we need to add a few things:
- add ‘arch’ to cpu
- add NIX_CFLAGS_COMPILE_BEFORE flag (goes before args)
- set -march everywhere
- set mcpu, mfpu, mmode, and mtune based on targetPlatform.gcc flags
cc-wrapper: only set -march when it is in the cpu type
Some architectures don’t have a good mapping for -march. For instance, the
POWER architecture doesn’t support the -march flag at all!
https://gcc.gnu.org/onlinedocs/gcc/RS_002f6000-and-PowerPC-Options.html#RS_002f6000-and-PowerPC-Options
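To illustrate, for an ARM target the wrapper now ends up injecting flags
roughly like these, derived from targetPlatform.gcc (the values are made up
for the example):
```
$ armv7l-unknown-linux-gnueabihf-gcc \
    -march=armv7-a -mfpu=vfpv3-d16 -mtune=cortex-a8 \
    -c foo.c -o foo.o
```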
This adds libc++ to the LLVM cross, giving us access to the full
Nixpkgs set. This requires 4 stages of wrapped compilers:
- Clang with no libraries
- Clang with just compiler-rt
- Clang with Libc, and compiler-rt
- Clang with Libc++, Libc, and compiler-rt
Clang needs to find headers + libraries for compiling with libc++. We
need to add a libcxx argument to cc-wrapper. This means you do not
have to pass in C++ headers directly.
This resolves the last remaining case of #30670. Darwin clang++ now
works properly.
Fixes #30670
With the previous commit `propagateDoc` is now always given the correct value
(i.e. it is never set to `true` when there are no `man` and `info` outputs).
Hence, we can simply symlink the original outputs to the wrapper outputs.
Pros:
- simpler, less indirection compared to `propagated-user-env-packages`,
- uses fewer inodes (1 symlink, which nix then simply automatically resolves
and removes, vs. two directories and a file),
- makes direct references like "export MANPATH=${stdenv.cc.man}/share/man"
simply work.
Cons:
- I'm not aware of any.
This and the previous commit together almost completely revert commits
fde7296a47,
fa41297209, and
c981787db9.
- respect libc’s incdir and libdir
- make non-unix systems single threaded
- set LIMITS_H_TEST to false for avr
- misc updates to support new libcs
- use multilib with avr
For threads we want to use:
- posix on unix systems
- win32 on windows
- single on everything else
For avr:
- add library directories for avrlibc
- disable relro and bindnow
- avr5 should have precedence over avr3 - otherwise gcc uses the wrong one
02c09e0171 (NixOS/nixpkgs#44558) was reverted in
c981787db9 but, as it turns out, it fixed an issue
I didn't know about at the time: the values of `propagateDoc` options were
(and now again are) inconsistent with the underlying things those wrappers wrap
(see NixOS/nixpkgs#46119), which was (and now is) likely to produce more instances
of NixOS/nixpkgs#43547, if not now, then eventually as stdenv changes.
This patch (which is a simplified version of the original reverted patch) is the
simplest solution to this whole thing: it forces wrappers to directly inspect the
outputs of the things they are wrapping instead of making stdenv guess the correct
values.
In particular, this contains Firefox-related and libgcrypt updates.
Other larger rebuilds would apparently need lots of time to catch up
on Hydra, due to nontrivial rebuilds in other branches than staging.
The hack of using `crossConfig` to enforce stricter handling of
dependencies is replaced with a dedicated `strictDeps` for that purpose.
(Experience has shown that my punning was a terrible idea that made it more
difficult and embarrassing to teach.)
Now that this is clear, a few packages now use `strictDeps`, to fix
various bugs:
- bintools-wrapper and cc-wrapper
... binutils and gcc add it already anyway.
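As a usage sketch (purely illustrative), an individual package can be opted in
like this:
```
$ nix-build -E 'with import <nixpkgs> {}; hello.overrideAttrs (old: { strictDeps = true; })'
```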
Without this it's easy to get cross-toolchain paths longer than 256
chars and nix-daemon will then fail to commit them to /nix/store on XFS.
Per @Ericson2314's suggestion [1], make it clearer that the active
hardening flags are decided via the whitelist; the blacklist is merely for
the debug messages.
1: 36d5ce41d4 (r133279731)
Before, the code would fail silently for zero values and with some output for
empty ones. We now handle both by defaulting the value to zero and making
`let` return a success exit code when there is no syntax error.
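A sketch of the pattern (the variable name is illustrative, not the exact
wrapper code):
```
: "${NIX_DEBUG:=0}"               # default empty/unset to zero
let "level = NIX_DEBUG + 0" "1"   # trailing "1" keeps the exit code 0 unless there is a syntax error
```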
- All deps go on the PATH
- CC and Bintools wrappers with their host != depender's host still get their
setup hooks run.
- Environment hooks get applied to all packages
This isn't so elegant, but it eases the transition on a very significant
PR.
We now have the information to properly determine the role the
cc-wrapper dependency has, by taking advantage of `offset`. No longer
use the soon-to-be-deprecated crossConfig environment variable, the
temp hack used before this change.
Factor a bintools (i.e. binutils / cctools) wrapper out of cc-wrapper. While
only LD is wrapped, the setup hook defines environment variables on behalf of
other utilities.
On non-GNU (non-gcc) compilers, there is no "/lib/gcc/...",
so when this is eventually expanded it is empty,
resulting in an incomplete "-idirafter " that
eats the next argument:
-idirafter -B/nix/store/wamjwwdvkmhbf4f2902nhw8jxxzv0hy3-clang-wrapper-4.0.1/bin/
Certain tools, e.g. compilers, are customarily prefixed with the name of
their target platform so that multiple builds can be used at once
without clobbering each other on the PATH. I was using identifiers named
`prefix` for this purpose, but that conflicts with the standard use of
`prefix` to mean the directory where something is installed. To avoid
conflict and confusion, I renamed those to `targetPrefix`.
If a dynamic linker for the target is not found, the generated script fails
with an unbound variable error (due to "set -u"). Correct this by specifying a
default value with ${dynamicLinker:-} and by not generating ldflagsBefore if
no linker is found.
This problem was found when cross compiling to mingw32 targets.
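A sketch of the guard, using the variable names from the description above:
```
if [ -n "${dynamicLinker:-}" ]; then
    ldflagsBefore=(-dynamic-linker "$dynamicLinker")
else
    ldflagsBefore=()
fi
```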
This requires some small changes in the stdenv, then working around the
weird choice LLVM made to hardcode @rpath in its install name, and then
lets us remove a ton of annoying workaround hacks in many of our Go
packages. With any luck this will mean less hackery going forward.
cc-wrapper may wrap a cc-compiler, but it doesn't need one to build
itself. (c.f. expand-response-params is a separate derivation.) This
helps avoid cycles on the cross stuff, in addition to removing a
useless dependency edge.
I could have been super careful with overrides in the stdenv to avoid
the mass rebuild, but I don't think it's worth it.
1. `crossDrv` is now the default so we don't need to worry about that in
build != host builds.
2. shell is the build time shell, so `wrapCCCross` doesn't need to
worry, as build == host.
3. `shell.shellPath` will always be appended where useful.
4. Complicated `shell == ""` logic served no purpose.
This reverts commit 0a944b345e, reversing
changes made to 61733ed6cc.
I dislike these massive stdenv changes with unclear motivation,
especially when they involve gratuitous mass renames like NIX_CC ->
NIX_BINUTILS. The previous such rename (NIX_GCC -> NIX_CC) caused
months of pain, so let's not do that again.
This becomes necessary if more wrappers besides cc-wrapper start
supporting hardening flags. Also good to make the warning into an
error.
Also ensure the interface is being used right: not as a string, and not just
in bash.
ccPath is only defined below, so this condition would never be true.
Worse, that's not quite true: what if somebody happened to have `/clang`
and no sandboxing? Boy, wouldn't that be annoying to debug!