GitHub user flexibeast has been porting the HTML documentation from
skarnet.org to mdoc, making it available as man pages. While the
documentation is not authoritative, it is certainly useful and is also
linked from skarnet.org.
buildManPages implements the mkDerivation machinery common to all the
ported man page packages/repositories.
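As a rough sketch (the attribute names and install logic here are
illustrative assumptions, not the actual implementation), such a shared
builder could look like:

```nix
# Illustrative sketch only: a shared mkDerivation wrapper for the ported
# man page repositories; the real buildManPages may differ in detail.
{ lib, stdenvNoCC }:

{ pname, version, src, description ? "" }:

stdenvNoCC.mkDerivation {
  inherit pname version src;

  installPhase = ''
    runHook preInstall
    for section in 1 2 3 5 7 8; do
      if ls *.$section > /dev/null 2>&1; then
        install -Dm644 -t $out/share/man/man$section *.$section
      fi
    done
    runHook postInstall
  '';

  meta.description = description;
}
```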
`installCheckPhase` is mainly intended for checks that are part of the
upstream package; for our 'own' checks we prefer `passthru.tests`.
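For example, a simple version check can live in `passthru.tests` instead
(a sketch; the derivation name and exact command are illustrative):

```nix
# Sketch: run the built binary in a separate test derivation rather than
# in installCheckPhase; `buf` here stands for the package being tested.
{ runCommand, buf }:

{
  passthru.tests.version =
    runCommand "buf-version-check" { nativeBuildInputs = [ buf ]; } ''
      buf --version > $out
    '';
}
```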
This means we no longer run `buf --help`, but I'm not sure how much that
adds on top of `buf --version`.
skopeo 1.4.x doesn't accept --src-tls-verify as a flag to the *program*,
only as a flag to copy; we must pass it after the "copy" verb, or it
will fail with:
> FATA[0000] unknown flag: --src-tls-verify
Adapted from `pkgs/games/osu-lazer/update.sh`.
Restore the packages to a directory with `--packages`, then run
`./nuget-to-nix.sh [path to packages] > deps.nix`.
In newer versions of mingw, programs compiled with FORTIFY_SOURCE need
to link to libssp or they will have link-time errors.
gmp has been broken since @pstn updated mingw-w64 in c60a0b0447.
fetchzip downloads the file from the specified URL, renames it to the
basename of that URL, and then relies on unzip to do the unpacking.
The first consequence is that the URL must end with a proper
extension, otherwise unpacking will fail. This is not always the
case, and input-fonts works around it by adding an “&.zip” query
parameter (which is obviously a hack and is not guaranteed to work
with every URL).
The second consequence is that the basename of the URL must be a valid
filename. I tried to build a custom configuration of input-fonts
and got an error from mv that the filename is too long:
> trying https://input.djr.com/build/?fontSelection=fourStyleFamily&regular=InputMonoNarrow-Regular&italic=InputMonoNarrow-Italic&bold=InputMonoNarrow-Bold&boldItalic=InputMonoNarrow-BoldItalic&a=0&g=0&i=topserif&l=serifs_round&zero=0&asterisk=height&braces=straight&preset=default&line-height=1.2&accept=I+do&email=&.zip
> % Total % Received % Xferd Average Speed Time Time Time Current
> Dload Upload Total Spent Left Speed
> 100 406k 100 406k 0 0 230k 0 0:00:01 0:00:01 --:--:-- 230k
> mv: failed to access '/build/?fontSelection=fourStyleFamily&regular=InputMonoNarrow-Regular&italic=InputMonoNarrow-Italic&bold=InputMonoNarrow-Bold&boldItalic=InputMonoNarrow-BoldItalic&a=0&g=0&i=topserif&l=serifs_round&zero=0&asterisk=height&braces=straight&preset=default&line-height=1.2&accept=I+do&email=&.zip': File name too long
We could use the “name” parameter as the filename (that's how it is used
in fetchurl). However, a previous attempt to do
so (fc01353703) was
reverted (24b5eb61eb) because of the
regression it introduced: many fetchzip invocations use names without an
extension (and the default name is just “source”).
This commit adds an optional “extension” parameter. If it is set,
fetchzip renames the downloaded file to “download.${extension}”,
effectively solving both problems above without introducing a massive
regression.
This is a no-op for all existing packages.
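With the new parameter, an invocation along these lines becomes possible
(URL shortened and hash elided for illustration):

```nix
fetchzip {
  # The basename of this URL is not a usable filename and has no extension.
  url = "https://input.djr.com/build/?fontSelection=fourStyleFamily&accept=I+do&email=";
  # fetchzip now renames the download to "download.zip" before unpacking.
  extension = "zip";
  sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
}
```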
Tested by updating my NixOS setup + the extra input-fonts
configuration mentioned above + tons of unstable emacs packages after
a nix-collect-garbage (3 GB downloaded) with this patch applied.
GPRbuild is a multi-language build system developed by AdaCore which
is mostly used for building Ada-related projects using GNAT.
Since GPRbuild is used to build itself and its dependency library
XML/Ada, we first build a bootstrap version of it (the gprbuild-boot
derivation) using the provided bash build script bootstrap.sh.
gprbuild-boot is then used to build xmlada and the proper gprbuild
derivation.
GPRbuild has its own search path mechanism via GPR_PROJECT_PATH, which
we address via a setupHook. It currently works quite similarly to the
pkg-config one: it accumulates all inputs into GPR_PROJECT_PATH,
GPR_PROJECT_PATH_FOR_BUILD, etc. This is quite limited at the moment,
as we don't have a gprbuild wrapper yet which understands the
_FOR_BUILD suffix. We'll need to address this in the future, but it is
currently basically impossible to test: the distinction only affects
cross-compilation, and it is not possible to build a GNAT
cross-compiler in nixpkgs at the moment (I'm working on changing that,
however).
Another issue we had to solve was GPRbuild not finding the right GNAT
via its gprconfig tool: GPRbuild has a knowledge base with compiler
definitions which run some checks and collect info about binaries
which are in PATH. In the end the first compiler in PATH that supports
the desired language is selected.
We want GPRbuild to discover our wrapped GNAT, since the unwrapped one
is incapable of producing working binaries: it won't find the
crt*.o objects distributed with libc. GPRbuild, however, needs to find
the Ada runtime distributed with GNAT, which is not part of the wrapper
derivation, so it will skip the wrapper and select the unwrapped GNAT.
Symlinking the unwrapped GNAT's lib directory into the wrapper fixes this
problem, but breaks linking in some cases (e.g. when linking against
OMP from gcc, the runtime variant will shadow the proper dynamic lib
from buildInputs). Additionally, gprconfig uses gnatls as an indicator
that it has found GNAT, and gnatls is not part of the wrapper.
The solution we adopted here is to install a custom compiler
description into gprbuild's knowledge base which properly detects the
nixpkgs GNAT wrapper: it uses gnatmake instead of gnatls to detect
GNAT, and discovers the runtime via a symlink we add to
`$out/nix-support`. This additional definition is enough to properly
detect GNAT, since the plain wrapped gcc detection works out of the
box. It may, however, be necessary to add special definitions for
other languages in the future where gprbuild also needs to discover
the runtime.
One future improvement would be to install libgpr into a separate
output or split it into a separate derivation (which would require
always linking gprbuild statically, since otherwise we end up with a
cyclic dependency).
Near the end of 2019, the default Cargo.lock format was changed to:

  [[package]]
  checksum = ...

This is what importCargoLock assumes. If the crate has not been `cargo
update`'d with a toolchain recent enough to default to the new format,
importCargoLock will fail when trying to access pkg.checksum.
I ran into such a case (shamefully, in my own crate) and it took me a
while to figure out what was going on, so here is an assert with a
more user-friendly message and a hint.
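A minimal sketch of such an assertion (the exact wording and the
surrounding importCargoLock code differ):

```nix
# Sketch only (not the actual importCargoLock code): fail with a readable
# message and a hint instead of an "attribute 'checksum' missing" error.
{ lib, pkg }:

assert lib.assertMsg (pkg ? checksum) ''
  The Cargo.lock entry for "${pkg.name}" has no checksum field.
  Hint: the lockfile may still be in the pre-2019 format; regenerate it
  with a recent toolchain, e.g. via `cargo update`.
'';
pkg.checksum
```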
At least for now. Such changes are risky (we have very many packages),
and apparently this change needs more testing/review without blocking
other changes.
This reverts the whole range 4d0e3984918^..8752c327377,
except for one commit that got reverted in 6f239d7309 already.
(that MR didn't even get its merge commit)
* bintools: disable -pie when -r or -Ur are used
ld’s -r allows you to partially link object files. When -pie is passed with -r, though, we get:
ld: -r and -pie may not be used together
Most build systems are intelligent enough to pass -no-pie before -r, but we might as well support those that
don’t.
Note: -pie is not enabled by default in Nixpkgs, but it is when you are using musl. So this solution is really
only useful for musl toolchains.
* bintools-wrapper: Add incremental -i check for pie
It's hugely inefficient as we can't use shallow cloning (--depth=1).
This has been tested and adapted for quite a few hosts fetchgit is used on in
Nixpkgs. For those where fetching the hash directly doesn't work (most notably
git.savannah.gnu.org), we simply fall back to the old method.
The NixOS pipewire module places its ALSA compatibility configuration in
/etc/alsa/conf.d/ instead of /etc/asound.conf. This commit enables
applications running in a bubblewrap FHS environment to use ALSA on
systems running pipewire.
According to the rustc implementation[1], `-C incremental=no` enables
incremental builds with the directory name `no`. This patch removes the
`-C incremental` argument to disable incremental builds.
[1]: ee86f96ba1/compiler/rustc_session/src/options.rs (L918-L919)
I think this is due for an update. I've chosen to update to the latest
version that has been merged into MELPA.
Unfortunately, we now need to hack around it trying to run VCS
commands.
My Emacs configuration with thirty-something leaf packages seems fine
after the rebuild.
skopeo will disable the progress bar if it detects that stdout isn't a
TTY. In order to make it think that stdout _isn't_ a TTY, and therefore
avoid it printing a lot of "…" on separate lines, we pipe the output
through cat.
This changes the output from:
…
…
…
…
…
…
to the eminently more useful and less spammy:
Getting image source signatures
Copying blob sha256:[snip]
Copying blob sha256:[snip]
Copying blob sha256:[snip]
Copying config sha256:[snip]
Writing manifest to image destination
Storing signatures
appimage-exec.sh parses its arguments with getopts, so we need to
delimit arguments intended for the wrapped executable with ‘--’, in
case some of them begin with ‘-’.
Without this fix, a wrapped application like Zulip Desktop can’t be
opened the normal way using the .desktop file, which includes
‘Exec=zulip --no-sandbox %U’ (as per the electron-builder default):
$ gtk-launch zulip.desktop
/usr/bin/appimage-exec.sh: illegal option -- -
Usage: appimage-run [appimage-run options] <AppImage> [AppImage options]
[…]
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This change allows ELPA packages to have their src attribute updated
by overrideAttrs. Without this change the installPhase references the
original src attribute and overriding is not possible.
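For example, an override along these lines should now take effect (the
package name and source are illustrative placeholders):

```nix
# Illustrative only: replace the src of an ELPA-built package.
emacsPackages.some-elpa-package.overrideAttrs (old: {
  src = fetchurl {
    url = "https://example.org/some-elpa-package-1.0.tar"; # placeholder
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };
})
```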
Before this change, it was not possible to use string paths,
because `types.str.check` would succeed for them. So the only paths that
could be used were ones from the local filesystem via e.g.
`./some/path`.
We can easily fix this by using `types.path.check` instead to determine
whether we are dealing with a path.
This notably also allows using Nix-fetched sources as the content, e.g.
`fetchFromGitHub { ... } + "/some/file"`
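Roughly, the check changes along these lines (a simplified sketch; the
surrounding option code and the exact fallback are not shown here):

```nix
# Sketch only: accept either a path-like value (./local/file, "/string/path",
# or fetchFromGitHub { ... } + "/some/file") or literal file content.
{ lib, pkgs }:

value:
  if lib.types.path.check value
  then value
  else pkgs.writeText "inline-content" value
```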
This fixes #126344, specifically with the goal of enabling overriding the
checkPhase argument. See `design notes` at the end for details.
This allows, among other things, enabling bash extensions for the
`checkPhase`. Previously, using such bash extensions was prohibited by the
`writeShellScript` code, because there was no way to enable the extension
in the checker.
As an example:
```nix
(writeShellScript "foo" ''
shopt -s extglob
echo @(foo|bar)
'').overrideAttrs (old: {
checkPhase = ''
# use subshell to preserve outer environment
(
export BASHOPTS
shopt -s extglob
${old.checkPhase}
)
'';
})
```
This commit also adds tests for this feature to `pkgs/tests/default.nix`,
under `trivial-overriding`. The test code is located at
`pkgs/build-support/trivial-builders/test-overriding.nix`.
Design notes:
-------------
Per discussion with @sternenseemann, the original approach of just wrapping
`writeTextFile` in `makeOverridable` had the issue that, combined with
`callPackage` in the following form, it would shadow the `.override`
attribute of `writeTextFile`:
```nix
with import <nixpkgs> { };
callPackage ({ writeShellScript }: writeShellScript "foo" "echo foo") { }
```
A better approach can be seen in this commit, where `checkPhase` is moved
from an argument of `writeTextFile`, which is substituted into `buildCommand`,
into an `mkDerivation` argument, which is substituted from the environment
and `eval`-ed. (see the source)
This way we can simply use `.overrideAttrs` as usual, and this also makes
`checkPhase` a bit more conformant to `mkDerivation` naming, with respect to
phases generally being overridable attrs.
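A simplified sketch of the difference (not the literal writeTextFile code):

```nix
# Simplified sketch; the real writeTextFile is more involved.
{ runCommand, checkPhase ? "" }:

{
  # Before: checkPhase was spliced into the build script as text, so
  # overrideAttrs could not change it after the fact.
  before = runCommand "example" { } ''
    echo hello > $out
    ${checkPhase}
  '';

  # After: checkPhase is an ordinary derivation attribute, read back from
  # the environment and eval-ed, so overriding it via `.overrideAttrs`
  # behaves like overriding any other phase.
  after = runCommand "example" { inherit checkPhase; } ''
    echo hello > $out
    eval "$checkPhase"
  '';
}
```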
Co-authored-by: sterni <sternenseemann@systemli.org>
Co-authored-by: Naïm Favier <n@monade.li>
If run as root, we were leaking mounts to the parent namespace,
which led to an error when removing the temporary mount root.
To fix this, we remount the whole tree as private as soon as we have
created the new mount namespace.
For https://github.com/NixOS/nixpkgs/pull/125211 I tried to test
the fetcher with
nix-build -A dockerTools.examples.nixFromDockerHub --option substitute false
But it failed. I haven't figured out the cause, but the outputs
match, so it's probably the hashing method (flat/recursive) that
changed at some point. (The names did match.)
This change introduces the cargoLock argument to buildRustPackage,
which can be used in place of cargo{Sha256,Hash} or cargoVendorDir. It
uses the importCargoLock function to build the vendor
directory. Differences compared to cargo{Sha256,Hash}:
- Requires a Cargo.lock file.
- Does not require a Cargo hash.
- Retrieves all dependencies as fixed-output derivations.
This makes buildRustPackage much easier to use as part of a Rust
project, since it does not require updating cargo{Sha256,Hash} for
every change to the lock file.
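A typical invocation then looks roughly like this (pname, version and src
are placeholders):

```nix
rustPlatform.buildRustPackage {
  pname = "my-crate";   # placeholder
  version = "0.1.0";    # placeholder
  src = ./.;            # placeholder

  # No cargoSha256/cargoHash; dependencies are derived from the lockfile.
  cargoLock = {
    lockFile = ./Cargo.lock;
  };
}
```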
This function can be used to create an output path that is a cargo
vendor directory. In contrast to e.g. fetchCargoTarball, all the
dependent crates are fetched using fixed-output derivations. The
hashes for the fixed-output derivations are gathered from the
Cargo.lock file.
Usage is very simple, e.g.:
importCargoLock {
  lockFile = ./Cargo.lock;
}
would use the lockfile from the current directory.
The implementation of this function is based on Eelco Dolstra's
import-cargo:
https://github.com/edolstra/import-cargo/blob/master/flake.nix
Compared to upstream:
- We use fetchgit in place of builtins.fetchGit.
- Sync to current cargo vendoring.
Adds includeStorePaths, allowing the omission of the store paths.
You generally want to leave it on, but tooling may disable this
to insert the store paths more efficiently via other means, such
as bind mounting the host store.
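For instance, assuming this applies to the layered image builder (the
image name and contents are illustrative):

```nix
# Sketch: omit the store paths from the image layers, on the assumption
# that the host store is provided by other means (e.g. a bind mount).
dockerTools.streamLayeredImage {
  name = "hello-without-store";
  contents = [ pkgs.hello ];
  includeStorePaths = false;
}
```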
Add a small utility script which securely replaces secrets in
files. Doing this with `sed`, `replace-literal` or similar utilities
leaks the secrets through the spawned process' `/proc/<pid>/cmdline` file.
> There is an issue in the test added by #123111.
> [it] introduces a dependency on the contents of nixpkgs,
> making every change evaluate with a different hash.
Previously, mangleVarList was used, which concatenates the
variables using a space as a separator. Paths, however, are separated by
`:` in PKG_CONFIG_PATH, leading to broken entries.
This is fixed by introducing mangleVarListGeneric, which allows us to
specify the desired separator.
Reproducer for the issue prior to this change:
$ nix-shell -A pkgsLLVM.wayland
[nix-shell] $ pkg-config --libs expat
Package expat was not found in the pkg-config search path.
Perhaps you should add the directory containing `expat.pc'
to the PKG_CONFIG_PATH environment variable
No package 'expat' found
$ printf 'Host: %s\nBuild: %s' $PKG_CONFIG_PATH $PKG_CONFIG_PATH_FOR_BUILD
Host: /nix/store/5h308a4ab8w7prcp8iflh5pnl78mayi2-expat-2.2.10-x86_64-unknown-linux-gnu-dev/lib/pkgconfig:/nix/store/z3y9ska2h4l1map25m195iq577g7g3gz-libxml2-x86_64-unknown-linux-gnu-2.9.12-dev/lib/pkgconfig:/nix/store/lbz5m1s0r7zn0cxvl21czfspli6ribzb-zlib-1.2.11-x86_64-unknown-linux-gnu-dev/lib/pkgconfig:/nix/store/rfhvp8r8n3ygpzh8j0l34lk8hwwi3z0h-libffi-3.3-x86_64-unknown-linux-gnu-dev/lib/pkgconfig
Build: /nix/store/dw11ywy7qwfz53qisz0dggbgix88jah2-wayland-1.19.0-bin/lib/pkgconfig
strace reveals the issue:
stat("/nix/store/dw11ywy7qwfz53qisz0dggbgix88jah2-wayland-1.19.0-bin/lib/pkgconfig /nix/store/5h308a4ab8w7prcp8iflh5pnl78mayi2-expat-2.2.10-x86_64-unknown-linux-gnu-dev/lib/pkgconfig/expat-uninstalled.pc", 0x7fff49829fa0) = -1 ENOENT (No such file or directory)
In the pkg-config wrapper, $PKG_CONFIG_PATH_FOR_BUILD and
$PKG_CONFIG_PATH are concatenated with a space, which mangles the two
adjacent paths. This issue likely only affects native cross
compilation.
This will begin the process of breaking up the `useLLVM` monolith. That
is good in general, but I hope will be good for NetBSD and Darwin in
particular.
Co-authored-by: sterni <sternenseemann@systemli.org>
The distinction between the inputs doesn't really make sense in the
mkShell context. Technically speaking, we should be using
nativeBuildInputs most of the time.
So, in order to make this function more beginner-friendly, add "packages"
as an attribute that maps to nativeBuildInputs.
This commit also updates all the uses in nixpkgs.
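For example (the chosen packages are arbitrary):

```nix
pkgs.mkShell {
  # Ends up in nativeBuildInputs under the hood.
  packages = [ pkgs.go pkgs.gopls ];
}
```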
This PR adds a new aarch64 android toolchain, which leverages the
existing crossSystem infrastructure and LLVM builders to generate a
working toolchain with minimal prebuilt components.
The only thing that is prebuilt is the bionic libc. This is because it
is practically impossible to compile bionic outside of an AOSP tree. I
tried and failed; braver souls may prevail. For now I just grab the
relevant binaries from https://android.googlesource.com/.
I also grab the msm kernel sources from there to generate headers. I've
included a minor patch to the existing kernel-headers derivation in
order to expose an internal function.
Everything else, from binutils up, is using stock code. Many thanks to
@Ericson2314 for his help on this, and for building such a powerful
system in the first place!
One motivation for this is to be able to build a toolchain which will
work on an aarch64 linux machine. To my knowledge, there is no existing
toolchain for an aarch64-linux builder and an aarch64-android target.
Also, begin work on cross compilation, though that will have to
be finished later.
The patches are based on the first version of
https://reviews.llvm.org/D99484. It's very annoying to do the
back-porting but the review has uncovered nothing super major so I'm
fine sticking with what I've got.
Beyond making the outputs work, I also strove to re-sync the packages,
as they have been drifting pointlessly apart for some time.
----
Other misc notes, highly incomplete
- llvm-config-native and llvm-config are put in `dev` because they are
tools just for build time.
- Clang no longer has an lld dep. That was introduced in
db29857eb3, but if clang needs help
finding lld when it is used, we should just pass it flags / put it in the
resource dir. Providing it at build time increases critical path
length for no good reason.
----
A note on `nativeCC`:
`stdenv` takes tools from the previous stage, so:
1. `pkgsBuildBuild`: `(?1, x, x)`
2. `pkgsBuildBuild.stdenv.cc`: `(?0, ?1, x)`
while:
1. `pkgsBuildBuild`: `(?1, x, x)`
2. `pkgsBuildBuild.targetPackages`: `(x, x, ?2)`
3. `pkgsBuildBuild.targetPackages.stdenv.cc`: `(?1, x, x)`
In a typical build environment the toolchain will use the value of the
MACOSX_DEPLOYMENT_TARGET environment variable to determine the version
of macOS to support. When cross compiling there are two distinct
toolchains, but they will look at this single environment variable. To
avoid contamination, we always set the equivalent command line flag
which effectively disables the toolchain's internal handling.
Prior to this change, the MACOSX_DEPLOYMENT_TARGET variable was
ignored, and the toolchains always used the Nix platform
definition (`darwinMinVersion`) unless overridden with command line
arguments.
This change restores support for MACOSX_DEPLOYMENT_TARGET, and adds
nix-specific MACOSX_DEPLOYMENT_TARGET_FOR_BUILD and
MACOSX_DEPLOYMENT_TARGET_FOR_TARGET for cross compilation.
Instead of always supplying flags, apply the flags as defaults. Use
clang's native flags instead of lifting the linker flags from binutils
with `-Wl,`.
If a project is using clang to drive linking, make clang do the right
thing with MACOSX_DEPLOYMENT_TARGET. This can be overridden by command
line arguments. This will cause modern clang to pass
`-platform_version 10.12 0.0.0`, since it doesn't know about the SDK
settings. Older versions of clang will pass down `-macos_version_min`
flags with no sdk version.
At the linker layer, apply a default value for anything left
ambiguous. If nothing is specified, pass a full
`-platform_version`. If only `-macos_version_min` is specified, then
lock down the SDK version explicitly with `-sdk_version`. If both a min
version and an SDK version are passed, do nothing.