iptables is currently defined in `all-packages.nix` to be
iptables-compat. However, that package does not contain `ethertypes`;
only `iptables-nftables-compat` contains this file, so the symlink
dangles.
Both networking.nat.enable and virtualisation.docker.enable now want to
make sure that the IP forwarding sysctl is enabled, but the module
system complains when both modules define the same option.
Realistically this should be refactored a bit, so that the Docker module
automatically enables the NAT module instead, but this is a more obvious
fix.
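For illustration, a common way to let two modules set the same sysctl without a conflicting-definition error is to lower the definition priority; the sketch below assumes the forwarding sysctl name and lib.mkDefault, not the exact priorities the modules end up using.

```nix
{ lib, ... }:
{
  # Sketch: mkDefault lowers the priority of this definition, so another
  # module or the user can still override the value without a conflict.
  boot.kernel.sysctl."net.ipv4.conf.all.forwarding" = lib.mkDefault true;
}
```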
Changed the startup timeout from 15 seconds to one minute, as 15 seconds is really low.
It's also currently not possible to change it without editing your system configuration.
This is analogous to #70447 and #76487.
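For reference, overriding such a timeout from a system configuration looks roughly like this; the unit name lxd.service is only an assumption here, not something stated in this change.

```nix
{ lib, ... }:
{
  # "lxd" is a hypothetical unit name; mkForce is needed if the module
  # already sets TimeoutStartSec to a different value.
  systemd.services.lxd.serviceConfig.TimeoutStartSec = lib.mkForce "1min";
}
```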
These kernel modules are all needed to attach a container to the default
bridge network; without them, the final line of the following script
fails with the error listed below for the respective module.
```sh
lxc storage create foo dir
lxc launch -s foo ubuntu:trusty bar
lxc network attach lxdbr0 bar
```
veth
----
> Error: Failed to start device 'lxdbr0': Failed to create the veth interfaces vethefbc3cd6 and vetha4abbcbc: Failed to run: ip link add dev vethefbc3cd6 type veth peer name vetha4abbcbc: RTNETLINK answers: Operation not supported
iptable_mangle
--------------
> lvl=eror msg="Failed to bring up network" err="Failed to list ipv4 rules for LXD network lxdbr0 (table mangle)" name=lxdbr0
xt_comment
----------
> lvl=error msg="Failed to bring up network" err="Failed to run: iptables -w -t filter -I INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT -m comment --comment generated for LXD network lxdbr0: iptables v1.8.4 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information." name=lxdbr0
xt_MASQUERADE
-------------
> lvl=eror msg="Failed to bring up network" err="Failed to run: iptables -w -t nat -I POSTROUTING -s 10.0.107.0/24 ! -d 10.0.107.0/24 -j MASQUERADE -m comment --comment generated for LXD network lxdbr0: iptables v1.8.4 (legacy): Couldn't load target `MASQUERADE':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information." name=lxdbr0
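As a hedged sketch, one way to make these modules available on a NixOS host is to load them explicitly (the LXD module may already handle some of them):

```nix
{
  # veth for the interface pair, plus the iptables extensions LXD invokes
  # when bringing up lxdbr0 (see the errors quoted above).
  boot.kernelModules = [ "veth" "iptable_mangle" "xt_comment" "xt_MASQUERADE" ];
}
```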
systemd-nspawn can react to SIGTERM and send a shutdown signal to the container
init process. Use that instead of going through dbus and machined to request that
nspawn send the signal, since during host shutdown machined or dbus may have
gone away by the time a container unit is stopped.
To solve the issue that a container that is still starting cannot be stopped
cleanly, we must also handle this signal in containerInit/stage-2.
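A hedged illustration of the host-side idea, not necessarily the exact unit change made here: let systemd deliver SIGTERM to nspawn itself on stop, rather than asking machined over dbus.

```nix
{
  # Sketch for the container template unit: SIGTERM goes to the main
  # process (systemd-nspawn), which forwards a shutdown signal to the
  # container's init; remaining processes are cleaned up afterwards.
  systemd.services."container@".serviceConfig = {
    KillMode = "mixed";
    KillSignal = "SIGTERM";
  };
}
```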
At the moment, it's not possible to override the libvirtd package used
without supplying a nixpkgs overlay. Adding a package option makes
libvirtd more consistent and allows enabling e.g. ceph and iSCSI support
more easily.
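For example, with a package option one could plug in an overridden libvirt along these lines; the override flag names below are assumptions, not taken from this change.

```nix
{ pkgs, ... }:
{
  virtualisation.libvirtd = {
    enable = true;
    # enableCeph / enableIscsi are assumed flag names; check the libvirt
    # derivation for the actual override arguments.
    package = pkgs.libvirt.override {
      enableCeph = true;
      enableIscsi = true;
    };
  };
}
```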
Xen is now enabled unconditionally on kernels that support it, so the
xen_dom0 feature doesn't do anything. The isXen attribute will now
produce a deprecation warning and unconditionally return true.
Passing in a custom value for isXen is no longer supported.
This dependency was added in 65eae4d, when NixOS switched to
systemd, as a substitute for the previous udevtrigger, and hasn't been
touched since. It's probably unneeded, as the upstream unit[1] doesn't
do it and I haven't found any mention of any problem in NixOS or the
upstream issue trackers.
[1]: https://gitlab.com/libvirt/libvirt/-/blob/master/src/remote/libvirtd.service.in
Related to #85746, which addresses the documentation issue.
Digging deeper, the reason this was disabled was simply
that it wasn't working, which is no longer the case.
- Actually use the zfsSupport option (see the sketch after this list)
- Add documentation URI to lxd.service
- Add lxd.socket to enable socket activation
- Add proper dependencies and remove systemd-udev-settle from lxd.service
- Set up /var/lib/lxc/rootfs using systemd.tmpfiles
- Configure safe start and shutdown of lxd.service
- Configure restart on failures of lxd.service
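A minimal sketch of the resulting module usage, exercising the zfsSupport option mentioned above:

```nix
{
  virtualisation.lxd = {
    enable = true;      # starts lxd.service and lxd.socket
    zfsSupport = true;  # now actually honoured by the module
  };
}
```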
Launching a container with a private network requires creating a
dedicated networking interface for it; the name of that interface is
derived from the container name itself - e.g. a container named `foo`
gets attached to an interface named `ve-foo`.
An interface name can span up to IFNAMSIZ characters, which means that a
container name must contain at most IFNAMSIZ - 3 - 1 = 11 characters;
it's a limit that we validate using a build-time assertion.
This limit has been relaxed with Linux 5.8, which allows an
interface to carry a so-called altname, which can be much longer,
while still being treated as a first-class citizen.
Since altnames have been supported natively by systemd for a while now,
due diligence on our side ends with dropping the name-assertion on newer
kernels.
This commit closes #38509.
References: systemd/systemd#14467, systemd/systemd#17220, https://lwn.net/Articles/794289/
Ensure that the HyperV keyboard driver is available in the early
stages of the boot process. This allows the user to enter a disk
encryption passphrase or repair a boot problem in an interactive
shell.
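A minimal sketch of the idea, assuming the kernel module is named hyperv_keyboard:

```nix
{
  # Make the Hyper-V keyboard driver available in the initrd so the user
  # can type a disk-encryption passphrase or use an interactive shell.
  boot.initrd.availableKernelModules = [ "hyperv_keyboard" ];
}
```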
We now set the hooks directory correctly if the OCI hook is enabled. CRI-O
supports this specific hook starting with v1.20.0.
Signed-off-by: Sascha Grunert <mail@saschagrunert.de>
Reintroduce the `fetch-ssh-keys` service so that GCE images that work
with NixOps can once again be built. Also, reformat the code a bit.
The service was removed in 88570538b3,
likely due to a comment saying it should be removed. It was still
needed for images to work with NixOps, however, and probably needed to
be replaced or rewritten rather than removed.
The socketActivation option was removed, but socket activation was later
added back without an option to disable it. The description now reflects
that socket activation is used unconditionally in the current setup.
Since the introduction of option `containers.<name>.pkgs`, the
`nixpkgs.*` options (including `nixpkgs.pkgs`, `nixpkgs.config`, ...) were always
ignored in container configs, which broke existing containers.
This was due to `containers.<name>.pkgs` having two separate effects:
(1) It sets the source for the modules that are used to evaluate the container.
(2) It sets the `pkgs` arg (`_module.args.pkgs`) that is used inside the container
modules.
This happens even when the default value of `containers.<name>.pkgs` is unchanged, in which
case the container `pkgs` arg is set to the pkgs of the host system.
Previously, the `pkgs` arg was determined by the `containers.<name>.config.nixpkgs.*` options.
This commit reverts the breaking change (2) while adding a backwards-compatible way to achieve (1).
It removes option `pkgs` and adds option `nixpkgs` which implements (1).
Existing users of `pkgs` are informed by an error message to use the new option
`nixpkgs`, or to achieve only (2) by setting `containers.<name>.config.nixpkgs.pkgs`.
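To illustrate, a sketch of the two mechanisms after this change; the container name `foo` and the nixpkgs path are placeholders:

```nix
{
  # (1) Evaluate the container's modules from a specific nixpkgs source.
  containers.foo.nixpkgs = /path/to/nixpkgs;

  # (2) Override the pkgs arg used inside the container via the regular
  #     nixpkgs options of the container's own config.
  containers.foo.config = { ... }: {
    nixpkgs.pkgs = import /path/to/nixpkgs { };
  };
}
```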
It's been 8.5 years since NixOS used mingetty, but the option was
never renamed (despite the file defining the module being renamed in
9f5051b76c ("Rename mingetty module to agetty")).
I've chosen to rename it to services.getty here, rather than
services.agetty, because getty is implementation-neutral and is also the
name of the unit that is generated.
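For example, an option that previously lived under services.mingetty is now addressed as follows (the user name is just a placeholder):

```nix
{
  # Formerly services.mingetty.autologinUser.
  services.getty.autologinUser = "alice";
}
```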
... build-vm-with-bootloader" for EFI systems
This reverts commit 20257280d9, reversing
changes made to 926a1b2094.
It broke nixosTests.installer.simpleUefiSystemdBoot,
and right now the channel is lagging behind by two weeks.
`nixos-rebuild build-vm-with-bootloader` currently fails with the
default NixOS EFI configuration:
$ cat >configuration.nix <<EOF
{
fileSystems."/".device = "/dev/sda1";
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
}
EOF
$ nixos-rebuild build-vm-with-bootloader -I nixos-config=$PWD/configuration.nix -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/nixos-20.09.tar.gz
[...]
insmod: ERROR: could not insert module /nix/store/1ibmgfr13r8b6xyn4f0wj115819f359c-linux-5.4.83/lib/modules/5.4.83/kernel/fs/efivarfs/efivarfs.ko.xz: No such device
mount: /sys/firmware/efi/efivars: mount point does not exist.
[ 1.908328] reboot: Power down
builder for '/nix/store/dx2ycclyknvibrskwmii42sgyalagjxa-nixos-boot-disk.drv' failed with exit code 32
[...]
Fix it by setting virtualisation.useEFIBoot = true in qemu-vm.nix when
EFI is needed, and remove the now-unneeded configuration in
./nixos/tests/systemd-boot.nix, since it's handled globally.
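A hedged sketch of the direction of that fix in qemu-vm.nix; the exact condition the module uses may differ:

```nix
{ config, lib, ... }:
{
  # Boot the VM with EFI firmware whenever the configuration relies on
  # systemd-boot, so /sys/firmware/efi/efivars exists inside the VM.
  virtualisation.useEFIBoot = lib.mkDefault config.boot.loader.systemd-boot.enable;
}
```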
Before:
* release-20.03: successful build, unsuccessful run
* release-20.09 (and master): unsuccessful build
After:
* Successful build and run.
Fixes https://github.com/NixOS/nixpkgs/issues/107255
Fixes an issue where `containers.<name>.extraVeths.<name>` configuration was not
always applied.
When configuring `containers.<name>.extraVeths.<name>` without also
setting one of `containers.<name>.localAddress`, `.localAddress6`,
`.hostAddress`, `.hostAddress6` or `.hostBridge`, the veth was created,
but otherwise no configuration (i.e. no IP) was applied.
nixos-container always configures the primary veth (when `.localAddress`
or `.hostAddress` is set) to be the container's default gateway, so
this fix is required to create a veth in containers that use a different
default gateway.
To test this patch, configure the following container and check whether the
addresses are applied:
```nix
containers.testveth = {
  extraVeths.testveth = {
    hostAddress = "192.168.13.2";
    localAddress = "192.168.13.1";
  };
  config = { ... }: { };
};
```