Apparently, a non-existent nsswitch.conf causes very misleading host
resolution behaviour, differing from the defaults people are used to.
According to
https://github.com/golang/go/issues/22846#issuecomment-346377144, glibc
says the default is "dns [!UNAVAIL=return] files".
This means `/etc/hosts` isn't really honored, causing all sorts of
unexpected behaviour.
Let's prevent this and ask `/etc/hosts` first, before querying DNS, like
we do on NixOS too.
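A minimal sketch of the fix, assuming the file is added via dockerTools'
`extraCommands` (where exactly it lands in nixpkgs may differ):

  extraCommands = ''
    # consult /etc/hosts before DNS, matching what people expect
    mkdir -p etc
    echo "hosts: files dns" > etc/nsswitch.conf
  '';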
skopeo 1.4.x doesn't accept --src-tls-verify as a flag to the *program*,
only as a flag to copy; we must pass it after the "copy" verb, or it
will fail with:
> FATA[0000] unknown flag: --src-tls-verify
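For illustration only (the image reference and destination are
placeholders), the flag has to come after the verb:

  skopeo copy --src-tls-verify=false docker://example.org/some/image:latest dir:./image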
skopeo will disable the progress bar if it detects that stdout isn't a
TTY. In order to make it think that stdout _isn't_ a TTY, and therefore
avoid it printing a lot of "…" on separate lines, we pipe the output
through cat.
This changes the output from:
…
…
…
…
…
…
to the eminently more useful and less spammy:
Getting image source signatures
Copying blob sha256:[snip]
Copying blob sha256:[snip]
Copying blob sha256:[snip]
Copying config sha256:[snip]
Writing manifest to image destination
Storing signatures
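Roughly, the invocation becomes (image reference and destination are
placeholders):

  skopeo copy docker://example.org/some/image:latest dir:./image | cat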
For https://github.com/NixOS/nixpkgs/pull/125211 I tried to test
the fetcher with
nix-build -A dockerTools.examples.nixFromDockerHub --option substitute false
But it failed. I haven't figured out the cause, but the outputs
match, so it's probably the hashing method (flat/recursive) that
changed at some point. (The names did match.)
Adds includeStorePaths, allowing the omission of the store paths.
You generally want to leave it on, but tooling may disable this
to insert the store paths more efficiently via other means, such
as bind mounting the host store.
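A hedged usage sketch (image name and contents are placeholders):

  pkgs.dockerTools.streamLayeredImage {
    name = "app-without-store-paths";
    contents = [ pkgs.hello ];
    # the store will be provided by other means, e.g. a bind mount of the host store
    includeStorePaths = false;
  }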
The `docker load` command supports loading tarballs that contain
multiple docker images with their respective image names and tags. This
enables distributing these images as a single file which simplifies the
release of software when an application requires multiple services to
run.
However, pkgs.dockerTools only creates tarballs containing a single docker
image, and there is no mechanism in nixpkgs to combine the created
tarballs. This commit implements merging of tarballs in a way that is
compatible with `docker load`.
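Assuming the helper ends up exposed along the lines of
`dockerTools.mergeImages` (the name and the example images are
assumptions), usage could look like:

  pkgs.dockerTools.mergeImages [
    (pkgs.dockerTools.buildLayeredImage { name = "service-a"; contents = [ pkgs.hello ]; })
    (pkgs.dockerTools.buildLayeredImage { name = "service-b"; contents = [ pkgs.redis ]; })
  ]

The resulting tarball can then be handed to `docker load` in one go.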
For images running on Kubernetes, there is no guarantee on how duplicate
environment variables in the image config will be handled. This seems
to be different from Docker, where the last environment variable value
is consistently selected.
The current code for `streamLayeredImage` was exploiting that assumption
to easily propagate environment variables from the base image, leaving
duplicates unchecked. It should rather resolve these duplicates to
ensure consistent behavior on Docker and Kubernetes.
It is now possible to pass a `fromImage` to `buildLayeredImage` and
`streamLayeredImage`, similar to what `buildImage` currently supports.
This will prepend the layers of the given base image to the resulting
image, while ensuring that at most `maxLayers` are used. It will also
ensure that environment variables from the base image are propagated
to the final image.
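A sketch of the new call pattern (names are placeholders):

  pkgs.dockerTools.buildLayeredImage {
    name = "my-app";
    fromImage = someBaseImage;  # e.g. a pullImage or buildLayeredImage result
    contents = [ pkgs.hello ];
    maxLayers = 100;            # base layers plus new layers stay within this bound
  }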
When using `buildLayeredImage`, it is not possible to specify an image
name of the form `<registry>/my/image`, although it is a valid name.
This is due to derivations under `buildLayeredImage` using that image
name as their derivation name, but slashes are not permitted in that
context.
A while ago, #13099 fixed that exact same problem in `buildImage` by
using `baseNameOf name` in derivation names instead of `name`. This
change does the same thing for `buildLayeredImage`.
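For example:

  baseNameOf "registry.example.com/my/image"
  # => "image", which contains no slashes and is a valid derivation name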
`stream_layered_image.py` currently assumes that the store root will be
at `/nix/store`, although the user might have configured this
differently. This makes `buildLayeredImage` unusable with stores having
a different root, as they will fail an assertion in the python script.
This change updates that assertion to use `builtins.storeDir` as the
source of truth about where the store lives, instead of assuming
`/nix/store`.
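A minimal sketch of the idea (how the value actually reaches the python
script is simplified here):

  # follows whatever prefix the store was configured with;
  # "/nix/store" on a default installation
  storeDir = builtins.storeDir;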
Warning about future breaking changes is wrong.
- It suggests that the maintainers don't value backwards compatibility.
They do.
- It implies that other parts of Nixpkgs won't ever break. They will.
- It implies that a well-defined "public" interface exists. It doesn't.
- If the reasons above didn't apply, it should have been in the manual
instead.
Breaking changes will come, especially to the interface. That may be the
only way we can make progress without breaking the image _contents_.
I don't think dockerTools is any different from most of Nixpkgs in
these regards.
Docker (via containerd) and the OCI Image Configuration imply and
suggest, respectively, that the architecture set in images should match
the GOARCH values defined by the Go language documentation.
This changeset updates the implementation of getArch in dockerTools to
return GOARCH values, to satisfy Docker.
Fixes: #106695
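A hedged sketch of the mapping (the exact cases handled in nixpkgs may
differ):

  # map Nix platform names to the GOARCH values Docker expects
  getArch = nixSystem:
    if nixSystem == "x86_64-linux" then "amd64"
    else if nixSystem == "aarch64-linux" then "arm64"
    else if nixSystem == "i686-linux" then "386"
    else throw "unsupported system ${nixSystem}";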
This provides an /etc/passwd and /etc/group that contain root and nobody.
It is useful when packaging binaries that insist on using NSS to look up
usernames/groups (like nginx).
The current nginx example used the `runAsRoot` parameter to set up
/etc/group and /etc/passwd (a parameter that doesn't exist in
buildLayeredImage), so we can now just use fakeNss there and switch to
buildLayeredImage.
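Assuming the helper is exposed as `dockerTools.fakeNss`, the example then
boils down to roughly:

  pkgs.dockerTools.buildLayeredImage {
    name = "nginx";
    # fakeNss supplies the /etc/passwd and /etc/group described above
    contents = [ pkgs.dockerTools.fakeNss pkgs.nginx ];
    config.Cmd = [ "nginx" "-c" "/etc/nginx.conf" ];  # placeholder command/config
  }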
Informational messages belong on stderr, not on stdout and intermixed
with structured output for programmatic use.
Change-Id: I34d094d04460494e9ec8953db7490f4e2292d959
The chroot environment under mnt had /dev and /sys via bind mounts,
but nothing setting up /proc. The `--mount-proc` argument to unshare
defaults to /proc, which is outside of the chroot environment.
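A hypothetical invocation illustrating the idea (flags and paths here are
not the exact ones used):

  unshare --fork --pid --mount-proc=mnt/proc chroot mnt /bin/sh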
The image tag can be specified or generated from the output hash.
Previously, a generated tag could be recovered from the evaluated
image with some string operations.
However, with the introduction of streamLayeredImage, it's not
feasible to compute the generated tag yourself.
With this change, the imageTag attribute is set unconditionally for the
buildImage, buildLayeredImage, and streamLayeredImage functions.
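Callers can therefore do, for example:

  let
    image = pkgs.dockerTools.streamLayeredImage { name = "my-app"; };  # placeholder image
  in
    image.imageTag  # the generated (or explicitly given) tag, without re-deriving it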
In https://github.com/NixOS/nixpkgs/pull/58431 the authors ensured that
the resulting layer.tar would always list
/nix/
/nix/store/
first to fully comply with the tar spec. After various refactorings, only
/nix/ is created but NOT /nix/store anymore; instead, tar's transform
turned them into /nix/nix and /nix/nix/store.
Calculating the tarsum after creating a layer is inefficient, since
we have to read the tarball we've just written from the disk.
This commit simultaneously calculates the tarsum while creating the
tarball.
Appending to an existing tar archive repeatedly seems to be a quadratic
operation, since tar seems to traverse the existing archive even when
using the `-r, --append` flag. This commit avoids that by passing the
list of files to a single tar invocation.
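Illustrative commands only (paths and file names are made up): instead of
repeated appends like

  tar -rf layer.tar /nix/store/aaa-pkg-one
  tar -rf layer.tar /nix/store/bbb-pkg-two

the whole list is written in one go:

  tar -cf layer.tar --files-from=paths.txt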
This is useful when buildLayeredImage is called in a generic way
that should allow simple (base) images to be built, which may not
reference any store paths.
Fixes #78744
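For example, something like this should now build (the name is a
placeholder):

  pkgs.dockerTools.buildLayeredImage {
    name = "scratch-like-base";
    config = { };  # no contents, so no store paths end up in the image
  }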
My previous change broke when there are more packages than the maximum
number of layers. I had assumed that the `store-path-to-layer.sh` was
only ever passed a single store path, but that is not the case if
there are multiple packages going into the final layer. To fix this, we
loop through the paths going into the final layer, appending them to the
tar file and making sure they end up at the right path.
Since a layer is reserved for "customization", the image cannot
contain fewer than 2 layers.
The user gets the following message at evaluation:
nix-instantiate nixos/tests/docker-tools.nix
trace: the maxLayers argument of dockerTools.buildLayeredImage function must be greather than 1 (current value: 1)
Building a docker image with darwin binaries just yields a confusing
error when run:
standard_init_linux.go:211: exec user process caused "exec format error"
This change prevents people from building such images in the first place
when tar'ing store paths into layered archives when building layered
images, don't use the absolute nix store path so that tar won't complain
if something new is added to the nix store
when building the final docker image, ignore any file changes tar
detects in the layers. they are all immutable and the only thing that
might change is the number of hard links due to store optimization
Before, every docker image had three extra layers:
1. A `closure` layer which is an internal implementation detail of
calculating the closure of the container
2. a `name-config.json` layer which is the image's run-time
configuration, and has no business being *in* the image as a layer.
3. a "bulk-layers" layer which is, again, an implementation detail
around collecting the image's closure.
None of these layers need to be in the final product.
dockerTools.buildImageWithNixDb: export USER
Changes to Nix user detection (./src/nix-channel/nix-channel.cc#L-166)
cause this function to error. Exporting USER fixes this.
The architecture of an image should default to the architecture for
which that image is being composed or pulled. buildPackages.go.GOARCH is
an easy way to compute that architecture with the correct terminology.
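For example (the value depends on the platform the image is composed
for):

  pkgs.buildPackages.go.GOARCH   # "amd64" when targeting x86_64, "arm64" for aarch64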
PR #58431 added /nix/store to each layer.tar. However, the timestamp was
not explicitly set while adding /nix and /nix/store to the archive. This
resulted in different SHA256 hashes of layer.tar between image builds.
This change sets time and owner when tar'ing /nix/store.
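Illustrative only (the actual invocation in the script may differ):

  # fixed numeric owner and timestamp keep layer.tar byte-identical across builds
  tar -rf layer.tar --owner=0 --group=0 --mtime='@1' nix/ nix/store/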
The layer order was not correct when a parent image was used: parent
image layers were above the newly created layer.
This commit simplifies the code related to layer ordering. In
particular, layers in `layer-list` are ordered from bottom-most to
top-most. This is also the order of layers in the `rootfs.diff_ids`
attribute of the image configuration.
Whenever we create scripts that are installed to $out, we must use runtimeShell
in order to get the shell that can be executed on the machine we create the
package for. This is relevant for cross-compiling. The only use case for
stdenv.shell is scripts that are executed as part of the build system.
Usages in checkPhase are borderline; however, to decrease the likelihood
of people copying the wrong examples, I decided to use runtimeShell as well.
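A contrasting sketch (the script name and contents are made up):

  # installed into $out, so it must run on the platform the package is built for
  pkgs.writeScriptBin "helper" ''
    #!${pkgs.runtimeShell}
    echo "hello from the target platform"
  ''

Scripts that only ever run during the build can keep using stdenv.shell.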
bcf54ce5bb introduced a treewide change to
use ${stdenv.shell} wherever possible. However, this broke a script
used by dockerTools, store-path-to-layer.sh, as it did not preserve the
+x mode bit. This meant the file got put into the store as mode 0444,
resulting in a build-time error later on that looked like:
xargs: /nix/store/jixivxhh3c8sncp9xlkc4ls3y5f2mmxh-store-path-to-layer.sh: Permission denied
However, in a twist of fate, bcf54ce5bb
not only introduced this regression but, in this particular instance,
didn't even fix the original bug: the store-path-to-layer.sh script
*still* uses /bin/sh as its shebang line, rather than an absolute path
to the stdenv shell. (Fixing this can be done in a separate commit.)
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This patch preserves the ordering of layers of a parent image when the
new image is packed.
This is currently not the case: layers are stacked in the reverse order.
Fixes #55290
Docker images used to be, essentially, a linked list of layers. Each
layer would have a tarball and a json document pointing to its parent,
and the image pointed to the top layer:
imageA ----> layerA
               |
               v
             layerB
               |
               v
             layerC
The current image spec changed this format to one where the image defines
the order and set of layers:
imageA ---> layerA
       |--> layerB
       `--> layerC
For backwards compatibility, docker produces images which follow both
specs: layers point to parents, and images also point to the entire
list:
imageA ---> layerA
       |      |
       |      v
       |--> layerB
       |      |
       |      v
       `--> layerC
This is nice for tooling which supported the older version and never
updated to support the newer format.
Our `buildImage` code only supported the old version, so in order for
`buildImage` to properly generate an image based on another image
with `fromImage`, the parent image's layers must fully support the old
mechanism.
This is not a problem in general, but is a problem with
`buildLayeredImage`.
`buildLayeredImage` creates images with the newer image spec, because
individual store paths don't have a guaranteed parent layer. Including
a specific parent ID in the layer's json makes the output less likely
to get a cache hit when published or pulled.
This means until now, `buildLayeredImage` could not be the input to
`buildImage`.
The changes in this PR change `buildImage` to only use the layer's
manifest when locating parent IDs. This does break buildImage on
extremely old Docker images, though I do wonder how many of these
exist.
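For reference, a sketch of where this information lives in a saved image
(file names and IDs are placeholders):

  manifest.json (newer format, the one buildImage now consults):
    [ { "Config": "imageA.json",
        "RepoTags": [ "imageA:latest" ],
        "Layers": [ "layerA/layer.tar", "layerB/layer.tar", "layerC/layer.tar" ] } ]

  layerA/json (older per-layer format, pointing at its parent):
    { "id": "<layerA id>", "parent": "<layerB id>" }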
This work has been sponsored by Target.