We do not use the generic "nested" function but introduce a separate
subtest log call. This will later allow us to track subtests and attribute
logs to specific subtests.
When passing a path to restartTriggers or reloadTriggers, X-Restart/Reload-Triggers
would get populated with the absolute path of the file on the machine where the
config is evaluated. This patch corrects that behavior.
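For example (hypothetical service and file names; after the fix the trigger presumably references the copy of the file in the store rather than its absolute location on the evaluating machine):
```nix
systemd.services.my-service.restartTriggers = [
  # hypothetical config file; referencing it as a path copies it to the store
  ./my-config.conf
];
```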
This allows us to set things like dependencies in a way that we can
catch typos at eval time.
So instead of
```nix
systemd.services.foo.wants = [ "bar.service" ];
```
we can write
```nix
systemd.services.foo.wants = [ config.systemd.services.bar.name ];
```
which will throw an error if no such service has been defined.
Not all cases can be handled like this (e.g. template services), but in a lot
of cases this will help avoid typos.
There is a matching option on the unit option
(`systemd.units."foo.service".name`) as well.
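For example, to reference a unit that is defined directly via `systemd.units` (hypothetical `bar.service`):
```nix
systemd.services.foo.wants = [ config.systemd.units."bar.service".name ];
```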
As the TODO says, this is already included by the script.
When adding a device, including this again here would result in either
two devices being added or, if they were explicitly named, an error
due to the name being reused.
- use normal VM nodes for target, with some extra trickery
- rename preBootCommands to postBootCommands to match its actual intent
- rename VMs to installer and target, so they're not all called machine
- set platforms on non-UEFI tests properly
- add missing packages for systemd-boot test
- fix initrd secrets leaking into the store and having wrong paths
Right now the worst case chain of events for building an ISO on Hydra is
- copy everything to squashfs builder
- run squashfs builder
- download squashfs from builder
- compress squashfs
- upload squashfs to S3
- copy squashfs to ISO builder
- run ISO builder
- download ISO from builder
- compress ISO
- upload ISO to S3
This inlines the squashfs build into the ISO build, which makes it
- copy everything to ISO builder
- run ISO builder
- download ISO from builder
- compress ISO
- upload ISO to S3
This should reduce queue runner load by $alot per ISO, of which we have four on small channels
(one release, one test per arch) and a lot more than four on large channels (with various desktops).
Closes #193336
Closes #261694
Related to #108984
The goal here was to get the following flake to build and run on
`aarch64-darwin`:
```nix
{
  inputs.nixpkgs.url = <this branch>;
  outputs = { nixpkgs, ... }: {
    checks.aarch64-darwin.default =
      nixpkgs.legacyPackages.aarch64-darwin.nixosTest {
        name = "test";
        nodes.machine = { };
        testScript = "";
      };
  };
}
```
… and after this change it does. There's no longer a need for the
user to set `nodes.*.nixpkgs.pkgs` or
`nodes.*.virtualisation.host.pkgs` as the correct values are inferred
from the host system.
systemd-boot-builder.py calls nix-env --list-generations which creates
$HOME/.nix-defexpr/channels/nixos if it doesn't exist. This would cause a folder
/homeless-shelter to show up in the final image which in turn breaks nix builds
in the target image if sandboxing is turned off (as /homeless-shelter is never
allowed to exist).
Adds a function to wait for a new QMP event with a model filter,
so that you can expect specific types of events with specific payloads,
e.g. a guest-reset-induced shutdown event.
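A rough sketch of how this could look in a test script (the method name `wait_for_qmp_event` and the exact payload shape are assumptions for illustration, not necessarily the final API):
```nix
{
  testScript = ''
    machine.start()
    # Assumed helper: block until a QMP event matching the filter arrives,
    # here a SHUTDOWN event caused by a guest-initiated reset.
    machine.wait_for_qmp_event(
        lambda e: e["event"] == "SHUTDOWN"
        and e["data"].get("reason") == "guest-reset"
    )
  '';
}
```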
Building a Python environment with python3Minimal requires Hydra
to bootstrap pip and build all packages used in the environment
which would otherwise not be built. This reduces cache reuse and duplicates things.
Also, common dependencies normally included in Python itself
are not properly checked and can cause hard-to-debug errors,
because everyone just assumes those modules are there.
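For instance, a pattern like the following (hypothetical environment; this assumes `withPackages` is available on `python3Minimal`) forces pip and every listed package to be rebuilt against the minimal interpreter instead of coming from the binary cache:
```nix
# hypothetical: nothing in this environment can be reused from the cache
pkgs.python3Minimal.withPackages (ps: [ ps.requests ])
```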
Aliases exist for a reason. Sure, it is nice to make sure that
some aliases aren't used within Nixpkgs, but this creates problems
which are far worse than failing to meet your neatness compulsions.
- Users encounter missing attributes (https://github.com/NixOS/nixpkgs/issues/264577),
wasting their time, stalling their progress, and even occupying others'
time that would be better spent on fixing *real* issues.
- Hydra doesn't treat evaluation errors seriously enough, with the
effect that actual relevant test failures are masked by evaluation
failures such as those caused by this no-aliases business.
- We don't even have the infrastructure to get rid of aliases, because
all warnings in package attributes are disallowed by the Nixpkgs CI
tooling, last I checked.
Before re-disabling this, make sure that
- An actually helpful deprecation process is in place.
- Aliases are still allowed when `nixos-lib.runTests` and
`pkgs.testers.runNixOSTest` are invoked by external projects.
For instance, `all-tests.nix` could provide such an
override (e.g. with `newScope`).
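One possible shape for such an escape hatch (a sketch under assumptions: `nixpkgs` and `./my-test.nix` are placeholders, `allowAliases` is the existing Nixpkgs config option, and how `all-tests.nix` would thread an override through is left open):
```nix
let
  # an external project keeping aliases enabled while reusing the test framework
  pkgs = import nixpkgs { config.allowAliases = true; };
in
pkgs.testers.runNixOSTest ./my-test.nix
```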
This is a fixup for c1ae82f448.
Nix's `passAsFile` does not create empty files for variables that are
`null`.
This results in the following error for units that have no overrides or
content, but are, e.g., `wantedBy` another unit:
`mv: cannot stat '': No such file or directory`.
Minimal reproducer:
`systemd.units.empty.wantedBy = [ "multi-user.target" ];`
This is often necessary when a unit is loaded in via `systemd.packages`.
From now on, we will aim to ensure that the test driver
gets tested by OfBorg using all our available tests.
This commit adds the driver timeout test to the driver.
For `testBuildFailure` and similar functions, we need a full-blown derivation and not a lazy one.
This is an internal option for test framework developers.
Since the debut of the test driver, we haven't had a timer racing
against the test execution to ensure that tests don't run beyond
a certain amount of time.
This is particularly important when you are running into hanging tests
which cannot be detected by the current facilities (that requires more pvpanic wiring, QMP
API stuff, etc.).
Two easy examples:
- Some QEMU tests may get stuck in some situation and run for more than 24 hours → we default to 1 hour max.
- Some QEMU tests may panic in the wrong place, e.g. UEFI firmware or worse → end users can set a "reasonable" amount of time
And then, we should let the retry logic retest them until they succeed and adjust
their global timeouts.
Of course, this does not help with the fact that the timeout may need to be
a function of the actual busyness of the machine running the tests.
This is only one step towards increased reliability.
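A sketch of how a test could opt into a different budget, assuming the new option is named `globalTimeout` and takes seconds:
```nix
{
  name = "slow-test";  # hypothetical test
  # assumption: the default is 1 hour; raise it for known-slow tests
  globalTimeout = 2 * 60 * 60;
  nodes.machine = { };
  testScript = "machine.wait_for_unit('multi-user.target')";
}
```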
Now that we have a QMP client, we can wire it up in the test driver.
For now, it is almost completely useless because of the need for a constant "event loop", especially
for event listening.
In the next commits, we will slowly enable more and more use cases.
For some time now the attrset returned by `evalModules` has
`type = "configuration"`.
This is a clean refactor because the name is not exposed
(it never is for a simple lambda).
When listening on unix sockets, it doesn't make sense to specify a port
for nginx's listen directive.
Since nginx defaults to port 80 when the port isn't specified (but the
address is), we can change the default for the option to null as well
without changing any behaviour.
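For example (hypothetical virtual host; with this change the `port` attribute can simply be left out for unix sockets):
```nix
services.nginx.virtualHosts."example.local".listen = [
  {
    # a unix socket has no port; `port` now defaults to null
    addr = "unix:/run/nginx/example.sock";
  }
];
```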