Functions reference
The nixpkgs repository has several utility functions to manipulate Nix
expressions.
Overriding
Sometimes one wants to override parts of nixpkgs, e.g.
derivation attributes, the results of derivations or even the whole package
set.
<pkg>.override
The function override is usually available for all the
derivations in the nixpkgs expression (pkgs).
It is used to override the arguments passed to a function.
Example usages:
pkgs.foo.override { arg1 = val1; arg2 = val2; ... }

import pkgs.path { overlays = [ (self: super: {
  foo = super.foo.override { barSupport = true; };
})]; }

mypkg = pkgs.callPackage ./mypkg.nix {
  mydep = pkgs.mydep.override { ... };
};
In the first example, pkgs.foo is the result of a
function call with some default arguments, usually a derivation. Using
pkgs.foo.override will call the same function with the
given new arguments.
<pkg>.overrideAttrs
The function overrideAttrs allows overriding the
attribute set passed to a stdenv.mkDerivation call,
producing a new derivation based on the original one. This function is
available on all derivations produced by the
stdenv.mkDerivation function, which is most packages in
the nixpkgs expression pkgs.
Example usage:
helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: rec {
  separateDebugInfo = true;
});
In the above example, the separateDebugInfo attribute is
overridden to be true, thus building debug info for
helloWithDebug, while all other attributes will be
retained from the original hello package.
The argument oldAttrs is conventionally used to refer to
the attribute set originally passed to stdenv.mkDerivation.
Note that separateDebugInfo is processed only by the
stdenv.mkDerivation function, not the generated, raw
Nix derivation. Thus, using overrideDerivation will not
work in this case, as it overrides only the attributes of the final
derivation. It is for this reason that overrideAttrs
should be preferred in (almost) all cases to
overrideDerivation: it allows
stdenv.mkDerivation to process the input arguments, it
is easier to use (you can use the same attribute names you see in your Nix
code, instead of the generated ones, e.g.
buildInputs vs nativeBuildInputs),
and it involves less typing.
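For instance, a minimal sketch of appending an extra build input with
overrideAttrs (pkgs.zlib is chosen purely for illustration), reusing the
same attribute name that appears in the original Nix expression:
helloWithZlib = pkgs.hello.overrideAttrs (oldAttrs: {
  # the original expression may not set buildInputs at all, hence the `or []`
  buildInputs = (oldAttrs.buildInputs or []) ++ [ pkgs.zlib ];
});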
<pkg>.overrideDerivation
You should prefer overrideAttrs in almost all cases,
see its documentation for the reasons why.
overrideDerivation is not deprecated and will continue
to work, but is less nice to use and does not have as many abilities as
overrideAttrs.
Do not use this function in Nixpkgs as it evaluates a Derivation before
modifying it, which breaks package abstraction and removes error-checking
of function arguments. In addition, this evaluation-per-function
application incurs a performance penalty, which can become a problem if
many overrides are used. It is only intended for ad-hoc customisation,
such as in ~/.config/nixpkgs/config.nix.
The function overrideDerivation creates a new derivation
based on an existing one by overriding the original's attributes with the
attribute set produced by the specified function. This function is
available on all derivations defined using the
makeOverridable function. Most standard
derivation-producing functions, such as
stdenv.mkDerivation, are defined using this function,
which means most packages in the nixpkgs expression,
pkgs, have this function.
Example usage:
mySed = pkgs.gnused.overrideDerivation (oldAttrs: {
  name = "sed-4.2.2-pre";
  src = fetchurl {
    url = "ftp://alpha.gnu.org/gnu/sed/sed-4.2.2-pre.tar.bz2";
    sha256 = "11nq06d131y4wmf3drm0yk502d2xc6n5qy82cg88rb9nqd2lj41k";
  };
  patches = [];
});
In the above example, the name, src,
and patches of the derivation will be overridden, while
all other attributes will be retained from the original derivation.
The argument oldAttrs is used to refer to the attribute
set of the original derivation.
A package's attributes are evaluated *before* being modified by the
overrideDerivation function. For example, the
name attribute reference in url =
"mirror://gnu/hello/${name}.tar.gz"; is filled-in *before* the
overrideDerivation function modifies the attribute set.
This means that overriding the name attribute, in this
example, *will not* change the value of the url
attribute. Instead, we need to override both the name
*and* url attributes.
lib.makeOverridable
The function lib.makeOverridable is used to make the
result of a function easily customizable. This utility only makes sense for
functions that accept an argument set and return an attribute set.
Example usage:
f = { a, b }: { result = a+b; };
c = lib.makeOverridable f { a = 1; b = 2; };
The variable c is the value of the f
function applied with some default arguments. Hence the value of
c.result is 3, in this example.
The variable c however also has some additional
functions, like c.override which
can be used to override the default arguments. In this example the value of
(c.override { a = 4; }).result is 6.
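As a small illustration, the overridden result can be bound to a new name:
d = c.override { a = 4; };
# d.result evaluates to 6, while c.result is still 3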
Generators
Generators are functions that create file formats from Nix data structures,
e.g. for configuration files. There are generators available for:
INI, JSON and YAML.
All generators follow a similar call interface: generatorName
configFunctions data, where configFunctions is an
attrset of user-defined functions that format nested parts of the content.
They each have common defaults, so often they do not need to be set
manually. An example is mkSectionName ? (name: libStr.escape [ "[" "]"
] name) from the INI generator. It receives the
name of a section and sanitizes it. The default
mkSectionName escapes [ and
] with a backslash.
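For example, a small sketch that only swaps out
mkSectionName to upper-case section headers (the binding
name iniWithUpperSections is just illustrative):
iniWithUpperSections = lib.generators.toINI {
  # upper-case every section name instead of the default bracket escaping
  mkSectionName = name: lib.strings.toUpper name;
};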
Generators can be fine-tuned to produce exactly the file format required by
your application/service. One example is an INI-file format which uses
: as separator, the strings
"yes"/"no" as boolean values and
requires all string values to be quoted:
with lib;
let
  customToINI = generators.toINI {
    # specifies how to format a key/value pair
    mkKeyValue = generators.mkKeyValueDefault {
      # specifies the generated string for a subset of nix values
      mkValueString = v:
             if v == true then ''"yes"''
        else if v == false then ''"no"''
        else if isString v then ''"${v}"''
        # and delegates all other values to the default generator
        else generators.mkValueStringDefault {} v;
    } ":";
  };
# the INI file can now be given as plain old nix values
in customToINI {
  main = {
    pushinfo = true;
    autopush = false;
    host = "localhost";
    port = 42;
    "str:ange" = "very::strange";
  };
  mergetool = {
    merge = "diff3";
  };
}
This will produce the following INI file as a Nix string:
[main]
autopush:"no"
host:"localhost"
port:42
pushinfo:"yes"
str\:ange:"very::strange"
[mergetool]
merge:"diff3"
Nix store paths can be converted to strings by enclosing a derivation
attribute like so: "${drv}".
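A tiny sketch of this, pointing a generated INI value at a package's binary
(the section, key, and pkgs.hello are purely illustrative):
lib.generators.toINI {} {
  service = { binary = "${pkgs.hello}/bin/hello"; };
}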
Detailed documentation for each generator can be found in
lib/generators.nix.
Debugging Nix Expressions
Nix is a unityped, dynamic language; this means every value can potentially
appear anywhere. Since it is also non-strict, evaluation order and what
ultimately is evaluated might surprise you. Therefore it is important to be
able to debug nix expressions.
In the lib/debug.nix file you will find a number of
functions that help (pretty-)printing values while evaluation is running.
You can even specify how deep these values should be printed recursively,
and transform them on the fly. Please consult the docstrings in
lib/debug.nix for usage information.
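A small sketch (assuming lib is in scope): traceSeqN prints a value on
standard error, truncated below the given depth, and then returns its last
argument unchanged:
lib.debug.traceSeqN 2 { a.b.c = 3; } "result"
# prints the attribute set (to depth 2) during evaluation and evaluates to "result"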
buildFHSUserEnv
buildFHSUserEnv provides a way to build and run
FHS-compatible lightweight sandboxes. It creates an isolated root with a
bind-mounted /nix/store, so its footprint in terms of disk
space needed is quite small. This allows one to run software which is hard or
unfeasible to patch for NixOS -- 3rd-party source trees with FHS
assumptions, games distributed as tarballs, software with integrity checking
and/or external self-updated binaries. It uses the Linux namespaces feature
to create temporary lightweight environments which are destroyed after all
child processes exit, without requiring root privileges. Accepted
arguments are:
name
Environment name.
targetPkgs
Packages to be installed for the main host's architecture (i.e. x86_64 on
x86_64 installations). Along with libraries, binaries are also installed.
multiPkgs
Packages to be installed for all architectures supported by a host (i.e.
i686 and x86_64 on x86_64 installations). Only libraries are installed by
default.
extraBuildCommands
Additional commands to be executed for finalizing the directory
structure.
extraBuildCommandsMulti
Like extraBuildCommands, but executed only on multilib
architectures.
extraOutputsToInstall
Additional derivation outputs to be linked for both target and
multi-architecture packages.
extraInstallCommands
Additional commands to be executed for finalizing the derivation with
runner script.
runScript
A command that would be executed inside the sandbox and passed all the
command line arguments. It defaults to bash.
One can create a simple environment using a shell.nix
like this:
{ pkgs ? import <nixpkgs> {} }:
(pkgs.buildFHSUserEnv {
  name = "simple-x11-env";
  targetPkgs = pkgs: (with pkgs;
    [ udev
      alsaLib
    ]) ++ (with pkgs.xorg;
    [ libX11
      libXcursor
      libXrandr
    ]);
  multiPkgs = pkgs: (with pkgs;
    [ udev
      alsaLib
    ]);
  runScript = "bash";
}).env
Running nix-shell would then drop you into a shell with
these libraries and binaries available. You can use this to run
closed-source applications which expect FHS structure without hassles:
simply change runScript to the application path, e.g.
./bin/start.sh -- relative paths are supported.
pkgs.dockerTools
pkgs.dockerTools is a set of functions for creating and
manipulating Docker images according to the
Docker Image Specification v1.2.0. Docker itself is not used to
perform any of the operations done by these functions.
The dockerTools API is unstable and may be subject to
backwards-incompatible changes in the future.
buildImage
This function is analogous to the docker build command,
in that it can be used to build a Docker-compatible repository tarball
containing a single image with one or multiple layers. As such, the result is
suitable for being loaded in Docker with docker load.
The parameters of buildImage, with example values, are
described below:
Docker build
buildImage {
  name = "redis";
  tag = "latest";

  fromImage = someBaseImage;
  fromImageName = null;
  fromImageTag = "latest";

  contents = pkgs.redis;
  runAsRoot = ''
    #!${stdenv.shell}
    mkdir -p /data
  '';

  config = {
    Cmd = [ "/bin/redis-server" ];
    WorkingDir = "/data";
    Volumes = {
      "/data" = {};
    };
  };
}
The above example will build a Docker image redis/latest
from the given base image. Loading and running this image in Docker results
in redis-server being started automatically.
name specifies the name of the resulting image. This
is the only required argument for buildImage.
tag specifies the tag of the resulting image. By
default it's null, which indicates that the nix output
hash will be used as tag.
fromImage is the repository tarball containing the
base image. It must be a valid Docker image, such as exported by
docker save. By default it's null,
which can be seen as equivalent to FROM scratch of a
Dockerfile.
fromImageName can be used to further specify the base
image within the repository, in case it contains multiple images. By
default it's null, in which case
buildImage will pick the first image available in the
repository.
fromImageTag can be used to further specify the tag of
the base image within the repository, in case an image contains multiple
tags. By default it's null, in which case
buildImage will pick the first tag available for the
base image.
contents is a derivation that will be copied in the
new layer of the resulting image. This can be similarly seen as
ADD contents/ / in a Dockerfile.
By default it's null.
runAsRoot is a bash script that will run as root in an
environment that overlays the existing layers of the base image with the
new resulting layer, including the previously copied
contents derivation. This can be similarly seen as
RUN ... in a Dockerfile.
Using this parameter requires the kvm device to be
available.
config is used to specify the configuration of the
containers that will be started off the built image in Docker. The
available options are listed in the
Docker Image Specification v1.2.0 .
After the new layer has been created, its closure (to which
contents, config and
runAsRoot contribute) will be copied in the layer
itself. Only new dependencies that are not already in the existing layers
will be copied.
At the end of the process, only a single new layer will be produced and
added to the resulting image.
The resulting repository will only list the single image
image/tag. In the case of the example above, it would be
redis/latest.
It is possible to inspect the arguments with which an image was built using
its buildArgs attribute.
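For instance, if the buildImage call above were bound to a name such as
redisImage (the binding name is illustrative), its original arguments could
be read back with:
redisImage.buildArgs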
If you see errors similar to getProtocolByName: does not exist
(no such protocol name: tcp) you may need to add
pkgs.iana-etc to contents.
If you see errors similar to Error_Protocol ("certificate has
unknown CA",True,UnknownCa) you may need to add
pkgs.cacert to contents.
Impurely Defining a Docker Layer's Creation Date
By default buildImage will use a static
date of one second past the UNIX Epoch. This allows
buildImage to produce binary reproducible
images. When listing such images with docker images,
they will therefore all show the same CREATED timestamp from 1970.
You can break binary reproducibility but have a sorted,
meaningful CREATED column by setting
created to now.
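A sketch of such a buildImage call (the name, tag and contents values are
purely illustrative):
buildImage {
  name = "hello";
  tag = "latest";
  created = "now";
  contents = pkgs.hello;
  config.Cmd = [ "/bin/hello" ];
}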
With created set to now, the Docker CLI will
display a reasonable date and sort the images as expected; however, the
produced images will not be binary reproducible.
buildLayeredImage
Create a Docker image with many of the store paths being on their own layer
to improve sharing between images.
name
The name of the resulting image.
tag (optional)
Tag of the generated image.
Default: the output path's hash
contents (optional)
Top level paths in the container. Either a single derivation, or a list
of derivations.
Default: []
config (optional)
Run-time configuration of the container. A full list of the options is
available in the Docker Image Specification v1.2.0.
Default: {}
created (optional)
Date and time the layers were created. Follows the same
now exception supported by
buildImage.
Default: 1970-01-01T00:00:01Z
maxLayers (optional)
Maximum number of layers to create.
Default: 24
Behavior of contents in the final image
Each path directly listed in contents will have a
symlink in the root of the image.
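For example, a minimal sketch of such an image (the image name is
illustrative):
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  contents = [ pkgs.hello ];
}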
The resulting image will contain symlinks for all the paths in the
hello package:
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
Automatic inclusion of config references
The closure of config is automatically included in the
closure of the final image.
This allows you to make very simple Docker images with very little code,
for example a container that starts up and runs hello.
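A minimal sketch of such a container (the image name is illustrative; the
hello closure is pulled in automatically because config references it):
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}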
Adjusting maxLayers
Increasing the maxLayers increases the number of layers
which have a chance to be shared between different images.
Modern Docker installations support up to 128 layers, however older
versions support as few as 42.
If the produced image will not be extended by other Docker builds, it is
safe to set maxLayers to 128; however, it will then be
impossible to extend the image further.
The first (maxLayers-2) most "popular" paths will have
their own individual layers, then layer #maxLayers-1
will contain all the remaining "unpopular" paths, and finally layer
#maxLayers will contain the Image configuration.
Docker's layers are not inherently ordered; they are content-addressable
and are not explicitly layered until they are composed into an image.
pullImage
This function is analogous to the docker pull command,
in that it can be used to pull a Docker image from a Docker registry. By
default Docker Hub is
used to pull images.
Its parameters are described in the example below:
Docker pull
pullImage {
  imageName = "nixos/nix";
  imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
  finalImageTag = "1.11";
  sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
  os = "linux";
  arch = "x86_64";
}
imageName specifies the name of the image to be
downloaded, which can also include the registry namespace (e.g.
nixos). This argument is required.
imageDigest specifies the digest of the image to be
downloaded. Skopeo can be used to get the digest of an image, with its
inspect subcommand. Since a given
imageName may transparently refer to a manifest list
of images which support multiple architectures and/or operating systems,
supply the `--override-os` and `--override-arch` arguments to specify
exactly which image you want. By default it will match the OS and
architecture of the host the command is run on.
$ nix-shell --packages skopeo jq --command "skopeo --override-os linux --override-arch x86_64 inspect docker://docker.io/nixos/nix:1.11 | jq -r '.Digest'"
sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b
This argument is required.
finalImageTag, if specified, is the tag of the image to
be created. Note that it is never used to fetch the image, since we prefer
to rely on the immutable digest ID. By default it's
latest.
sha256 is the checksum of the whole fetched image.
This argument is required.
os, if specified, is the operating system of the
fetched image. By default it's linux.
arch, if specified, is the cpu architecture of the
fetched image. By default it's x86_64.
exportImage
This function is analogous to the docker export command,
in that it can be used to flatten a Docker image that contains multiple layers.
It is in fact the result of the merge of all the layers of the image. As
such, the result is suitable for being imported in Docker with
docker import.
Using this function requires the kvm device to be
available.
The parameters of exportImage are the following:
Docker export
exportImage {
  fromImage = someLayeredImage;
  fromImageName = null;
  fromImageTag = null;

  name = someLayeredImage.name;
}
The parameters relative to the base image have the same synopsis as
described in buildImage, except that
fromImage is the only required argument in this case.
The name argument is the name of the derivation output,
which defaults to fromImage.name.
shadowSetup
This constant string is a helper for setting up the base files for managing
users and groups, only if such files don't exist already. It is suitable
for being used in a runAsRoot script for cases like the
one in the example below:
Shadow base files
buildImage {
  name = "shadow-basic";

  runAsRoot = ''
    #!${stdenv.shell}
    ${shadowSetup}
    groupadd -r redis
    useradd -r -g redis redis
    mkdir /data
    chown redis:redis /data
  '';
}
Creating base files like /etc/passwd or
/etc/login.defs is necessary for shadow-utils to
manipulate users and groups.