Introduce an AWS EC2 AMI which supports aarch64 and x86_64 with a ZFS
root.
This uses `make-zfs-image` which implies two EBS volumes are needed
inside EC2, one for boot and one for root. It should not matter which
is identified as `xvda` and which as `xvdb`, though I have always
uploaded `boot` as `xvda`.
iptables is currently defined in `all-packages.nix` to be
iptables-compat. That package, however, does not contain `ethertypes`;
only `iptables-nftables-compat` ships this file, so the symlink
dangles.
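A minimal sketch of one way to repair the dangling link, assuming the file is wired up via `environment.etc` (where exactly the symlink lives in the module tree may differ):

```nix
{ pkgs, ... }:
{
  # Take ethertypes from the package that actually ships the file
  # (iptables-nftables-compat), not from the default iptables-compat.
  environment.etc."ethertypes".source =
    "${pkgs.iptables-nftables-compat}/etc/ethertypes";
}
```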
Both networking.nat.enable and virtualisation.docker.enable now want to
make sure that the IP forwarding sysctl is enabled, but the module
system rejects the resulting duplicate definition of that sysctl.
Realistically this should be refactored a bit, so that the Docker module
automatically enables the NAT module instead, but this is a more obvious
fix.
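One conventional way to avoid such a clash is to lower the priority of one of the definitions so the module system can merge them; a sketch (the actual fix may differ in detail):

```nix
{ lib, ... }:
{
  # A plain `= true` in two modules conflicts; giving one definition a
  # lower priority via mkDefault lets the other win without an error.
  boot.kernel.sysctl."net.ipv4.conf.all.forwarding" = lib.mkDefault true;
}
```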
Changed the startup timeout from 15 seconds to one minute, as 15 seconds
is really low. It is also currently not possible to change the timeout
without editing your system configuration.
This is analogous to #70447 and #76487.
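For reference, a hedged sketch of what raising such a timeout could look like at the unit level (the unit name and setting here are illustrative, not necessarily the mechanism this change uses):

```nix
{
  # Give the container a full minute to come up instead of 15 seconds.
  systemd.services."container@".serviceConfig.TimeoutStartSec = "1min";
}
```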
These are all needed to attach a container to the default bridge
network; without them, the final line of the following script fails with
the error listed below for each respective kernel module.
```sh
lxc storage create foo dir
lxc launch -s foo ubuntu:trusty bar
lxc network attach lxdbr0 bar
```
veth
----
> Error: Failed to start device 'lxdbr0': Failed to create the veth interfaces vethefbc3cd6 and vetha4abbcbc: Failed to run: ip link add dev vethefbc3cd6 type veth peer name vetha4abbcbc: RTNETLINK answers: Operation not supported
iptable_mangle
--------------
> lvl=eror msg="Failed to bring up network" err="Failed to list ipv4 rules for LXD network lxdbr0 (table mangle)" name=lxdbr0
xt_comment
----------
> lvl=error msg="Failed to bring up network" err="Failed to run: iptables -w -t filter -I INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT -m comment --comment generated for LXD network lxdbr0: iptables v1.8.4 (legacy): Couldn't load match `comment':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information." name=lxdbr0
xt_MASQUERADE
-------------
> lvl=eror msg="Failed to bring up network" err="Failed to run: iptables -w -t nat -I POSTROUTING -s 10.0.107.0/24 ! -d 10.0.107.0/24 -j MASQUERADE -m comment --comment generated for LXD network lxdbr0: iptables v1.8.4 (legacy): Couldn't load target `MASQUERADE':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information." name=lxdbr0
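Taken together, the fix amounts to making these modules available, roughly as in the following sketch (where exactly the list lives in the LXD module may differ):

```nix
{
  # Kernel modules LXD needs to attach a container to the default bridge.
  boot.kernelModules = [ "veth" "iptable_mangle" "xt_comment" "xt_MASQUERADE" ];
}
```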
systemd-nspawn can react to SIGTERM and send a shutdown signal to the container
init process. Use that instead of going through dbus and machined to request
that nspawn send the signal, since during host shutdown machined or dbus may
have gone away by the time a container unit is stopped.
To solve the issue that a container that is still starting cannot be stopped
cleanly, we must also handle this signal in containerInit/stage-2.
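A hedged sketch of the unit-level side of this, assuming the container template unit is `container@.service` (the exact Kill* settings used by the real change may differ):

```nix
{
  systemd.services."container@".serviceConfig = {
    # Deliver SIGTERM straight to systemd-nspawn, which forwards a
    # shutdown request to the container's init process itself, so no
    # dbus/machined round-trip is needed during host shutdown.
    KillMode = "mixed";
    KillSignal = "SIGTERM";
  };
}
```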
At the moment, it's not possible to override the libvirtd package used
without supplying a nixpkgs overlay. Adding a package option makes
libvirtd more consistent and allows enabling e.g. ceph and iSCSI support
more easily.
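With such an option, enabling extra drivers becomes a one-liner; a sketch, assuming the libvirt derivation exposes `enableCeph`/`enableIscsi` override flags:

```nix
{ pkgs, ... }:
{
  virtualisation.libvirtd = {
    enable = true;
    # Swap in a libvirt build with Ceph and iSCSI support.
    package = pkgs.libvirt.override {
      enableCeph = true;
      enableIscsi = true;
    };
  };
}
```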
Xen is now enabled unconditionally on kernels that support it, so the
xen_dom0 feature doesn't do anything. The isXen attribute will now
produce a deprecation warning and unconditionally return true.
Passing in a custom value for isXen is no longer supported.
This dependency has been added in 65eae4d, when NixOS switched to
systemd, as a substitute for the previous udevtrigger and hasn't been
touched since. It's probably unneeded as the upstream unit[1] doesn't
do it and I haven't found any mention of any problem in NixOS or the
upstream issue trackers.
[1]: https://gitlab.com/libvirt/libvirt/-/blob/master/src/remote/libvirtd.service.in
Related to #85746, which addresses a documentation issue. Digging deeper,
the reason this was disabled was simply that it wasn't working, which is
no longer the case.
- Actually use the zfsSupport option
- Add documentation URI to lxd.service
- Add lxd.socket to enable socket activation (see the sketch after this list)
- Add proper dependencies and remove systemd-udev-settle from lxd.service
- Set up /var/lib/lxc/rootfs using systemd.tmpfiles
- Configure safe start and shutdown of lxd.service
- Configure restart on failures of lxd.service
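A sketch of a couple of these pieces in module terms (the tmpfiles rule and socket path are illustrative):

```nix
{
  # Pre-create the rootfs mount point declaratively via tmpfiles.
  systemd.tmpfiles.rules = [ "d /var/lib/lxc/rootfs 0755 root root -" ];

  # Socket activation: lxd is only started once a client connects.
  systemd.sockets.lxd = {
    wantedBy = [ "sockets.target" ];
    listenStreams = [ "/var/lib/lxd/unix.socket" ];
  };
}
```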
Launching a container with a private network requires creating a
dedicated networking interface for it; name of that interface is derived
from the container name itself - e.g. a container named `foo` gets
attached to an interface named `ve-foo`.
An interface name can span up to IFNAMSIZ characters, which means that a
container name must contain at most IFNAMSIZ - 3 - 1 = 11 characters
(3 for the `ve-` prefix, 1 for the terminating NUL); it's a limit that
we validate using a build-time assertion.
This limit has been lifted with Linux 5.8, which allows an interface to
carry a so-called altname; altnames can be much longer while still being
treated as first-class citizens.
Since altnames have been supported natively by systemd for a while now,
due diligence on our side ends with dropping the name-assertion on newer
kernels.
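In module terms the result is roughly the following sketch, keeping the assertion only for kernels older than 5.8 (names here are illustrative, not the exact code):

```nix
{ config, lib, ... }:
{
  assertions = lib.optionals
    (lib.versionOlder config.boot.kernelPackages.kernel.version "5.8")
    (lib.mapAttrsToList (name: _: {
      # Without altname support, "ve-" + name must still fit into IFNAMSIZ.
      assertion = builtins.stringLength name <= 11;
      message = "Container name '${name}' is too long for kernels before 5.8.";
    }) config.containers);
}
```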
This commit closes #38509.
See also: systemd/systemd#14467, systemd/systemd#17220, and
https://lwn.net/Articles/794289/.
Ensure that the Hyper-V keyboard driver is available in the early
stages of the boot process. This allows the user to enter a disk
encryption passphrase or repair a boot problem in an interactive
shell.
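A sketch of the idea, assuming the kernel driver in question is the `hyperv-keyboard` module:

```nix
{
  # Load the Hyper-V keyboard driver in the initrd so the user can type
  # a LUKS passphrase or use an emergency shell before the root fs is up.
  boot.initrd.availableKernelModules = [ "hyperv-keyboard" ];
}
```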
We now set the hooks dir correctly if the OCI hook is enabled. CRI-O
supports this specific hook from v1.20.0.
Signed-off-by: Sascha Grunert <mail@saschagrunert.de>
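A hedged usage sketch, assuming the hook in question is the OCI seccomp BPF hook exposed as a NixOS option:

```nix
{
  virtualisation.cri-o.enable = true;
  # When enabled, the CRI-O module now points hooks_dir at this hook.
  virtualisation.containers.ociSeccompBpfHook.enable = true;
}
```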
Reintroduce the `fetch-ssh-keys` service so that GCE images that work
with NixOps can once again be built. Also, reformat the code a bit.
The service was removed in 88570538b3,
likely due to a comment saying it should be removed. It was still
needed for images to work with NixOps, however, and probably needed to
be replaced or rewritten rather than removed.
The socketActivation option was removed, but socket activation was later
added back without an option to disable it. The description now reflects
that socket activation is used unconditionally in the current setup.
Since the introduction of option `containers.<name>.pkgs`, the
`nixpkgs.*` options (including `nixpkgs.pkgs`, `nixpkgs.config`, ...) were always
ignored in container configs, which broke existing containers.
This was due to `containers.<name>.pkgs` having two separate effects:
(1) It sets the source for the modules that are used to evaluate the container.
(2) It sets the `pkgs` arg (`_module.args.pkgs`) that is used inside the container
modules.
This happens even when the default value of `containers.<name>.pkgs` is unchanged, in which
case the container `pkgs` arg is set to the pkgs of the host system.
Previously, the `pkgs` arg was determined by the `containers.<name>.config.nixpkgs.*` options.
This commit reverts the breaking change (2) while adding a backwards-compatible way to achieve (1).
It removes option `pkgs` and adds option `nixpkgs` which implements (1).
Existing users of `pkgs` are informed by an error message to use option
`nixpkgs` or to achieve only (2) by setting option `containers.<name>.config.nixpkgs.pkgs`.
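Put concretely, the two effects are now requested separately; a sketch using the option names described above (paths and values illustrative):

```nix
{
  containers.foo = {
    # (1) Source of the modules used to evaluate the container:
    nixpkgs = ./path/to/nixpkgs;
    config = { ... }: {
      # (2) The `pkgs` arg inside the container, set the classic way:
      nixpkgs.pkgs = import ./path/to/nixpkgs { system = "x86_64-linux"; };
    };
  };
}
```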