NOTES:
@jakeschurch did not realize that it was already updated on master, but not
backported to the 23.05 channel
Signed-off-by: Jake Schurch <jakeschurch@gmail.com>
This change removes the bespoke logic around identifying block devices.
Instead of trying to find the right device by iterating over
`qemu.drives` and guessing the right partition number (e.g.
/dev/vda{1,2}), devices are now identified by persistent names provided
by udev in /dev/disk/by-*.
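For illustration (not something introduced by this change), a filesystem
defined against such a persistent name could look like this; the label
"nixos" is just the common default:

```nix
{
  # Refer to the root filesystem via a persistent udev symlink instead of a
  # kernel device name such as /dev/vda1, which can change between boots.
  fileSystems."/" = {
    device = "/dev/disk/by-label/nixos";
    fsType = "ext4";
  };
}
```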
Before this change, the root device was formatted on demand in the
initrd. However, this made it impossible to use filesystem identifiers
to identify devices. Now, the formatting step is performed before the VM
is started. Some tests, however, rely on the old behaviour, so a
utility function that restores it is added in
/nixos/tests/common/auto-format-root-device.nix.
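A rough sketch of how a test might opt back into the old behaviour (exact
usage may differ):

```nix
{
  nodes.machine = { ... }: {
    # Re-enable on-demand formatting of the root device from the initrd,
    # as it worked before this change.
    imports = [ ./common/auto-format-root-device.nix ];
  };
}
```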
Devices that contain neither a partition table nor a filesystem are
identified by their hardware serial number, which is injected via QEMU
(and is thus persistent and predictable). PCI paths are not a reliable
way to identify devices because their availability and numbering depend
on the QEMU machine type.
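As an illustration of the by-id naming (the drive file, serial, and mount
point below are made up, and the exact QEMU flags the module generates may
differ):

```nix
{
  # Attach a raw disk with a fixed serial number so udev exposes it under a
  # persistent, predictable path, independent of PCI enumeration order.
  virtualisation.qemu.options = [
    "-drive if=none,id=extradisk,format=raw,file=/tmp/extra.img"
    "-device virtio-blk-pci,drive=extradisk,serial=extradata"
  ];

  # udev turns the serial into a stable symlink under /dev/disk/by-id.
  fileSystems."/data" = {
    device = "/dev/disk/by-id/virtio-extradata";
    fsType = "ext4";
  };
}
```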
This change makes the module more robust against changes in QEMU and the
kernel (non-persistent device naming), and by decoupling the abstractions
(i.e. rootDevice, bootPartition, and bootLoaderDevice) it enables further
improvements down the line.
There are scenarios where we neither want to boot from a disk /
bootloader nor boot the kernel directly.
Sometimes we want to boot through an OptionROM of the NIC, e.g. netboot
scenarios, or let the firmware decide, e.g. UEFI PXE (or even a UEFI
OptionROM!).
This is composed of:
- `directBoot.enable`: whether to use direct boot or not
- `directBoot.initrd`: allows overriding the
`config.system.build.initialRamdisk` default, useful for example for a
netbootRamdisk.
These options make such scenarios possible.
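A minimal sketch of both options (the netbootRamdisk path below is
illustrative, not quoted from the module):

```nix
{ config, ... }: {
  # Let the VM boot through its firmware (e.g. UEFI PXE / OptionROM)
  # instead of direct kernel boot.
  virtualisation.directBoot.enable = false;

  # Or keep direct boot but substitute a different initrd, e.g. a netboot one:
  # virtualisation.directBoot.initrd =
  #   "${config.system.build.netbootRamdisk}/initrd";
}
```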
Adds a new option to the virtualisation modules that enables specifying explicitly named network interfaces in QEMU VMs.
The existing `virtualisation.vlans` option is still supported for cases where the name of the network interface is irrelevant.
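A hypothetical usage sketch; the option path and attribute names assumed
here (`virtualisation.interfaces.<name>.vlan`) follow this description
rather than being quoted from the module:

```nix
{
  # Attach an explicitly named interface to VLAN 1 instead of relying on
  # the auto-generated interface names from `virtualisation.vlans`.
  virtualisation.interfaces.enp1s0.vlan = 1;
}
```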
Libvirt supports calling user-defined hooks on certain events.
Documentation can be found at https://libvirt.org/hooks.html.
This commit allows specifying these hooks via the
virtualisation.libvirtd.hooks.<name>.* options.
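A sketch of what such a hook definition might look like, assuming hooks are
grouped per libvirt driver (e.g. `qemu`) and given as executable files; the
hook name and body are made up:

```nix
{ pkgs, ... }: {
  virtualisation.libvirtd.hooks.qemu.log-lifecycle = pkgs.writeShellScript "qemu-hook" ''
    # libvirt passes the guest name and the operation (start, stopped, ...)
    # as arguments, see https://libvirt.org/hooks.html
    echo "$(date -Iseconds) guest=$1 op=$2" >> /var/log/libvirt-qemu-hook.log
  '';
}
```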
Calling `eval-config.nix` without a `system` from a Nix flake fails with
`error: attribute 'currentSystem' missing` since #230523. Setting
`system = null` removes the use of `currentSystem` and instead uses the
value from the `nixpkgs` module.
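A sketch of such a call from a flake, where the platform comes from
`nixpkgs.hostPlatform` instead of `builtins.currentSystem`:

```nix
{
  outputs = { nixpkgs, ... }: {
    nixosConfigurations.example = import "${nixpkgs}/nixos/lib/eval-config.nix" {
      # No currentSystem lookup; the platform is taken from the modules below.
      system = null;
      modules = [
        { nixpkgs.hostPlatform = "x86_64-linux"; }
        ./configuration.nix
      ];
    };
  };
}
```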
Context summary:
'vma create' can't otherwise write to tmpfs such as /dev/shm.
This is important when used from non-NixOS machines, which may
have /build on tmpfs.
VMA is Proxmox's virtual machine image format that wraps QEMU images,
augmenting them with a Proxmox-specific configuration file.
proxmox-image.nix uses the VMA tool to create vma image files.
The VMA tool exists as a patch set on top of QEMU.
VMA writes its output with open() and the O_DIRECT flag.
O_DIRECT does not work on Linux tmpfs [1]. Thus:
$ vma create ~/output.vma ...        # works, assuming home isn't tmpfs
$ vma create /dev/shm/output.vma ... # fails since /dev/shm is tmpfs
The failure surfaces as an assert(*errp == NULL) abort.
O_DIRECT is only a cache/performance hint, but it currently blocks our
usage of nixos-generate -f proxmox from non-NixOS hosts and Docker.
The patch here simply removes O_DIRECT:
vma-writer.c still performs the memalign required by O_DIRECT, but this
is safe to do with or without the flag.
Ideally, this should be fixed in upstream Proxmox, perhaps by falling
back to open() without O_DIRECT.
Another attempt to fix this SIGABRT is [2], which writes the vma file
directly to the $out/ folder; however, that may still be mounted as
tmpfs, which it is in our case.
[1] https://lore.kernel.org/lkml/45A29EC2.8020502@tmr.com/t/
[2] https://github.com/NixOS/nixpkgs/pull/224282