Fixes https://github.com/dhall-lang/dhall-haskell/issues/2267
`pkgs.dhallToNix` currently fails when a Dhall package is
interpolated into the input source code, like this:
```nix
let
  pkgs = import <nixpkgs> { };

  f = { buildDhallPackage }: buildDhallPackage {
    name = "not";
    code = "λ(x : Bool) → x == False";
    source = true;
  };

  not = pkgs.dhallPackages.callPackage f {};

in
  pkgs.dhallToNix "${not}/source.dhall True"
```
This is because `dhallToNix` was using `builtins.toFile`, which
does not permit inputs with interpolated derivations. However,
`pkgs.writeText` does not have this limitation, so we can switch
to using that instead.
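As a minimal illustration of the difference (this is not the actual `dhallToNix` definition): `builtins.toFile` rejects a string whose context references a derivation, while `pkgs.writeText` builds the file in a derivation of its own and accepts it:
```nix
let
  pkgs = import <nixpkgs> { };
  # Any derivation interpolated into the source text creates a string context.
  code = "${pkgs.hello}/share/example.dhall True";

  # bad  = builtins.toFile "expr.dhall" code;  # error: toFile may not reference derivations
  good = pkgs.writeText "expr.dhall" code;     # fine: the reference becomes a build input
in
  good
```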
Apparently, a non-existent nsswitch.conf causes very misleading host-name
resolution, differing from the defaults people are used to.
According to
https://github.com/golang/go/issues/22846#issuecomment-346377144, glibc
says the default is "dns [!UNAVAIL=return] files".
This means `/etc/hosts` isn't really honored, causing all sorts of
unexpected behaviour.
Let's prevent this, and first ask `/etc/hosts` before querying DNS, like
we do on NixOS too.
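A hedged illustration of the lookup order we want; how the file actually ends up at `/etc/nsswitch.conf` depends on the environment this change targets:
```nix
pkgs.writeText "nsswitch.conf" ''
  hosts: files dns
''
```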
The reason for this change is explained in the long comment I added.
Here's a simple example of the problem:
```nix
let
  pkgs = import <nixpkgs> { crossSystem.system = "aarch64-linux"; };
in
pkgs.callPackage ({ stdenv, s6-rc }: stdenv.mkDerivation {
  name = "s6-rc-compiled";
  nativeBuildInputs = [ s6-rc ];
  buildCommand = ''
    mkdir in
    s6-rc-compile $out in
  '';
}) {}
```
We're cross compiling for aarch64 here, so we'd expect the scripts
generated by this derivation to be things we could run on aarch64.
But when I build this on my x86_64 machine, without this change
applied, $out/servicedirs/s6rc-oneshot-runner/run gets generated full
of references to x86_64 non-cross store paths for execline, s6, and
s6-rc.
With this change applied, the scripts generated by the above
expression now refer to the cross-compiled aarch64 store paths for
execline, s6, and s6-rc.
- Reuse the build phase from the `buildDunePackage` function.
- Only install the package that was just built (useful for monorepo support).
- Introduce `opam-name` to override the default package name to build with Dune (see the sketch below).
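A hedged sketch of how these pieces could fit together for one package out of a Dune monorepo (owner, repo, revision and hash are illustrative):
```nix
{ lib, fetchFromGitHub, buildDunePackage }:

buildDunePackage rec {
  pname = "monorepo-sublib";
  version = "1.0";
  # A monorepo that ships several Dune/opam packages; only one is built here.
  src = fetchFromGitHub {
    owner = "example";
    repo = "monorepo";
    rev = "v${version}";
    sha256 = lib.fakeSha256;
  };
  # Override the default package name passed to Dune so that only this
  # package is built and installed.
  opam-name = "sublib";
}
```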
GitHub user flexibeast has been porting the HTML documentation from
skarnet.org to mdoc, making it available as man pages. While the
documentation is non-authoritative, it is certainly useful and is also
linked from skarnet.org.
buildManPages implements the mkDerivation machinery common to all of the
ported man page packages / repositories.
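Purely as a hypothetical sketch of the kind of shared machinery described here (argument names, the fetch location and the install convention are assumptions, not the actual buildManPages interface):
```nix
{ lib, stdenv, fetchgit }:

# One function holds the common build steps; per-package expressions
# only supply metadata.
{ pname, version, sha256, description }:

stdenv.mkDerivation {
  inherit pname version;
  src = fetchgit {
    url = "https://git.sr.ht/~flexibeast/${pname}";  # assumed repository location
    rev = "v${version}";                             # assumed tag scheme
    inherit sha256;
  };
  makeFlags = [ "PREFIX=${placeholder "out"}" ];     # assumed Makefile convention
  meta = {
    inherit description;
    license = lib.licenses.isc;                      # assumed license
  };
}
```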
`installCheckPhase` is mainly intended for checks that are part of the
upstream package, for our 'own' checks we prefer `passthru.tests`.
This loses running `buf --help`, but I'm not sure how much that adds
on top of `buf --version`?
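A hedged sketch of the `passthru.tests` pattern being preferred here, assuming `runCommand`, `buf` and `version` are in scope of the package expression (the test actually added may differ):
```nix
passthru.tests.version = runCommand "buf-version-test"
  { nativeBuildInputs = [ buf ]; } ''
    # The check that used to live in installCheckPhase, as a standalone test.
    buf --version 2>&1 | grep -F "${version}"
    touch $out
  '';
```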
skopeo 1.4.x doesn't accept --src-tls-verify as a flag to the *program*,
only as a flag to copy; we must pass it after the "copy" verb, or it
will fail with:
> FATA[0000] unknown flag: --src-tls-verify
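For illustration, with image references that are only placeholders:
```sh
# Fails on skopeo 1.4.x: --src-tls-verify is not a flag of the program itself
skopeo --src-tls-verify=false copy docker://registry.example/img docker-archive:img.tar

# Works: the flag is passed to the "copy" subcommand
skopeo copy --src-tls-verify=false docker://registry.example/img docker-archive:img.tar
```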
Adapted from `pkgs/games/osu-lazer/update.sh`.
Restore the packages to a directory with `--packages`, then run
`./nuget-to-nix.sh [path to packages] > deps.nix`.
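A hedged example of the workflow (solution name and directory are illustrative):
```sh
# Restore the NuGet dependencies into a local directory...
dotnet restore MyApp.sln --packages ./nuget-packages
# ...then generate the Nix dependency list from it.
./nuget-to-nix.sh ./nuget-packages > deps.nix
```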
In newer versions of mingw, programs compiled with FORTIFY_SOURCE need
to link to libssp or they will have link-time errors.
gmp has been broken since @pstn updated mingw-w64 in c60a0b0447.
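A hedged sketch of the kind of fix this implies for an affected package (whether the flag belongs in the package expression or in the toolchain setup is not shown here):
```nix
# FORTIFY_SOURCE-instrumented code references symbols that newer mingw-w64
# only provides via libssp, so make the linker pull it in.
NIX_LDFLAGS = "-lssp";
```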
fetchzip downloads the file from the specified URL, renames it to the
basename of that URL, and then relies on unzip to do the unpacking.
The first consequence is that the URL has to end with the proper
extension; otherwise unpacking fails. This is not always the case, and
input-fonts works around it by adding a “&.zip” query parameter (which
is obviously a hack and is not guaranteed to work with every URL).
The second consequence is that the basename of the URL must be a valid
filename. I tried to build a custom configuration of input-fonts and
got an error from mv that the filename is too long:
> trying https://input.djr.com/build/?fontSelection=fourStyleFamily&regular=InputMonoNarrow-Regular&italic=InputMonoNarrow-Italic&bold=InputMonoNarrow-Bold&boldItalic=InputMonoNarrow-BoldItalic&a=0&g=0&i=topserif&l=serifs_round&zero=0&asterisk=height&braces=straight&preset=default&line-height=1.2&accept=I+do&email=&.zip
> % Total % Received % Xferd Average Speed Time Time Time Current
> Dload Upload Total Spent Left Speed
> 100 406k 100 406k 0 0 230k 0 0:00:01 0:00:01 --:--:-- 230k
> mv: failed to access '/build/?fontSelection=fourStyleFamily&regular=InputMonoNarrow-Regular&italic=InputMonoNarrow-Italic&bold=InputMonoNarrow-Bold&boldItalic=InputMonoNarrow-BoldItalic&a=0&g=0&i=topserif&l=serifs_round&zero=0&asterisk=height&braces=straight&preset=default&line-height=1.2&accept=I+do&email=&.zip': File name too long
We could use the “name” parameter as the filename (that’s how it is
used in fetchurl). However, the previous attempt to do
so (fc01353703) was
reverted (24b5eb61eb) because of the
regression it introduced: many fetchzip invocations use names without
an extension (and the default name is just “source”).
This commit adds an optional “extension” parameter. If it is set,
fetchzip renames the downloaded file to “download.${extension}”,
effectively solving both problems above without introducing a massive
regression.
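A hedged usage sketch of the new parameter, assuming `fetchzip` and `lib` are in scope (URL shortened, hash a placeholder):
```nix
fetchzip {
  # Query-string URL: its basename neither ends in ".zip" nor is a valid filename.
  url = "https://input.djr.com/build/?fontSelection=fourStyleFamily&regular=InputMonoNarrow-Regular&accept=I+do&email=";
  # The download is renamed to "download.zip" before unpacking.
  extension = "zip";
  sha256 = lib.fakeSha256;
}
```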
This is a no-op for all existing packages.
Tested by updating my NixOS setup + the extra input-fonts
configuration mentioned above + tons of unstable emacs packages after
a nix-collect-garbage (3 GB downloaded) with this patch applied.
GPRbuild is a multi-language build system developed by AdaCore which
is mostly used to build Ada-related projects using GNAT.
Since GPRbuild is used to build itself and its dependency library
XML/Ada, we first build a bootstrap version of it as the gprbuild-boot
derivation, using the provided bash build script bootstrap.sh.
gprbuild-boot is then used to build xmlada and the proper gprbuild
derivation.
GPRbuild has its own search-path mechanism via GPR_PROJECT_PATH, which
we address via a setupHook. It currently works quite similarly to the
pkg-config one: it accumulates all inputs into GPR_PROJECT_PATH,
GPR_PROJECT_PATH_FOR_BUILD, etc. This is still quite limited, however,
as we don't have a gprbuild wrapper yet which understands the
_FOR_BUILD suffix. We'll need to address this in the future, but it is
currently basically impossible to test: the distinction only affects
cross-compilation, and it is not yet possible to build a GNAT
cross-compiler in nixpkgs (I'm working on changing that).
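A rough sketch of the accumulation idea, in the spirit of the pkg-config hook (the project-file directory and the hook wiring here are assumptions, not the hook as committed):
```sh
# Append each dependency's GPR project directory to GPR_PROJECT_PATH.
addGprProjectPath() {
  if [ -d "$1/share/gpr" ]; then
    export GPR_PROJECT_PATH="${GPR_PROJECT_PATH:+$GPR_PROJECT_PATH:}$1/share/gpr"
  fi
}
addEnvHooks "$targetOffset" addGprProjectPath
```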
Another issue we had to solve was GPRbuild not finding the right GNAT
via its gprconfig tool: GPRbuild has a knowledge base of compiler
definitions which run some checks and collect information about the
binaries in PATH. In the end, the first compiler in PATH that supports
the desired language is selected.
We want GPRbuild to discover our wrapped GNAT, since the unwrapped one
is incapable of producing working binaries: it won't find the crt*.o
objects distributed with libc. However, GPRbuild needs to find the Ada
runtime distributed with GNAT, which is not part of the wrapper
derivation, so it skips the wrapper and selects the unwrapped GNAT.
Symlinking the unwrapped compiler's lib directory into the wrapper
fixes this problem, but breaks linking in some cases (e.g. when
linking against OpenMP from gcc, the runtime variant shadows the
proper dynamic library from buildInputs). Additionally, gprconfig uses
gnatls, which is not part of the wrapper, as an indicator that it has
found GNAT.
The solution we opted for here is to install a custom compiler
description into gprbuild's knowledge base which properly detects the
nixpkgs GNAT wrapper: it uses gnatmake instead of gnatls to detect
GNAT, and discovers the runtime via a symlink we add to
`$out/nix-support`. This additional definition is enough to properly
detect GNAT, since the plain wrapped gcc detection works out of the
box. It may, however, be necessary to add special definitions for
other languages in the future where gprbuild also needs to discover
the runtime.
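For illustration only, a hedged sketch of the runtime symlink idea (the symlink name, its location, and the `gnat-unwrapped` reference are assumptions):
```nix
# In the GNAT wrapper derivation: expose the unwrapped compiler's runtime to
# the custom compiler description without symlinking the whole lib directory.
postFixup = ''
  mkdir -p $out/nix-support
  ln -s ${gnat-unwrapped}/lib $out/nix-support/gnat-unwrapped-lib
'';
```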
One future improvement would be to install libgpr into a separate
output or split it into a separate derivation (which would require
always linking gprbuild statically, since otherwise we would end up
with a cyclic dependency).