Building a Python environment with python3Minimal requires Hydra
to bootstrap pip and to build every package used in the environment,
which would otherwise not be built. This reduces cache re-use and duplicates work.
Also, common dependencies normally included in Python itself
are not properly checked and can cause hard-to-debug errors,
because everyone just assumes those modules are there.
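A minimal sketch of the difference (the `requests` dependency is only a placeholder, and `withPackages` being available on python3Minimal is an assumption here):

```nix
# Built on python3Minimal: pip and every package in the environment must be
# rebuilt against the minimal interpreter, so Hydra cannot reuse its cache.
pkgs.python3Minimal.withPackages (ps: [ ps.requests ])

# Built on the regular interpreter: packages come straight from the binary
# cache and the stripped-down stdlib modules are not an issue.
pkgs.python3.withPackages (ps: [ ps.requests ])
```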
Aliases exist for a reason. Sure, it is nice to make sure that
some aliases aren't used within Nixpkgs, but this creates problems
which are far worse than failing to satisfy a compulsion for neatness.
- Users encounter missing attributes (https://github.com/NixOS/nixpkgs/issues/264577),
wasting their time, stalling their progress, and even occupying others'
time that would be better spent on fixing *real* issues.
- Hydra doesn't treat evaluation errors seriously enough, with the
effect that actual relevant test failures are masked by evaluation
failures such as those caused by this no aliases business.
- We don't even have the infrastructure to get rid of aliases, because
all warnings in package attributes are disallowed by Nixpkgs CI
tooling, last I checked.
Before re-disabling this, make sure that
- An actually helpful deprecation process is in place.
- Aliases are still allowed when `nixos-lib.runTests` and
`pkgs.testers.runNixOSTest` are invoked by external projects.
For instance, `all-tests.nix` could provide such an
override (e.g. with `newScope`); see the sketch after this list.
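A rough sketch of what such an override could look like (the relative paths, the `login` test, and the exact `newScope` arguments are placeholders, not the actual implementation):

```nix
# all-tests.nix could pin its own pkgs with aliases disabled, so the
# "no aliases" policy only applies to Nixpkgs' own test evaluation,
# while external callers of nixos-lib.runTests keep the default.
let
  pkgs = import ../../.. { config.allowAliases = false; };
  callTest = pkgs.newScope { inherit pkgs; inherit (pkgs) lib; };
in
{
  login = callTest ./login.nix { };
}
```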
This is a fixup for c1ae82f448.
Nix's `passAsFile` does not create empty files for variables that are
`null`.
This results in the following error for units that have no overrides or
content, but are, e.g., `wantedBy` a target:
`mv: cannot stat '': No such file or directory`.
Minimal reproducer:
`systemd.units.empty.wantedBy = [ "multi-user.target" ];`
This is often necessary when a unit is loaded in via `systemd.packages`.
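A hedged sketch of that pattern (package and unit names are placeholders): the unit file itself ships with the package, and the NixOS configuration only adds a `wantedBy`, so the generated override carries neither text nor overrides:

```nix
{ pkgs, ... }:
{
  # The unit file comes from the package; NixOS only has to enable it.
  systemd.packages = [ pkgs.mypackage ];
  systemd.units."my-daemon.service".wantedBy = [ "multi-user.target" ];
}
```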
From now on, we will aim to ensure that the test driver
gets tested by OfBorg using all our available tests.
This commit adds the timeout test to the driver's tests.
For `testBuildFailure` and similar functions, we need a full-blown derivation and not a lazy one.
This is an internal option for test framework developers.
Since the debut of the test driver, we have had no timer racing against
the test execution to ensure that tests don't run beyond
a certain amount of time.
This is particularly important when you are running into hanging tests
which cannot be detected by the current facilities (that would require more pvpanic wiring, QMP
API work, etc.).
Two easy examples:
- Some QEMU tests may get stuck in some situation and run for more than 24 hours → we default to 1 hour max.
- Some QEMU tests may panic in the wrong place, e.g. in the UEFI firmware or worse → end users can set a "reasonable" amount of time.
And then, we should let the retry logic retest them until they succeed and adjust
their global timeouts, as sketched below.
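A hedged sketch of what adjusting such a per-test timeout could look like, assuming the knob introduced here is exposed to tests as `globalTimeout` (in seconds); the test name and script are placeholders:

```nix
import ./make-test-python.nix ({ pkgs, ... }: {
  name = "slow-but-bounded";

  # Allow two hours before the driver aborts the whole run
  # (instead of the one-hour default mentioned above).
  globalTimeout = 2 * 3600;

  nodes.machine = { };

  testScript = ''
    machine.wait_for_unit("multi-user.target")
  '';
})
```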
Of course, this does not help with the fact that the timeout may need to be
a function of the actual busyness of the machine running the tests.
This is only one step towards increased reliability.
Now that we have a QMP client, we can wire it up in the test driver.
For now, it is almost completely useless because it needs a constantly running event loop, especially
for event listening.
In the next commits, we will slowly enable more and more use cases.
For some time now, the attrset returned by `evalModules` has had
`type = "configuration"`.
This is a clean refactor because the name is not exposed
(it never is for a simple lambda).
When listening on Unix sockets, it doesn't make sense to specify a port
for nginx's listen directive.
Since nginx defaults to port 80 when the port isn't specified (but the
address is), we can change the default of the option to null as well
without changing any behaviour.
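A hedged sketch of the Unix-socket case (the vhost name and socket path are placeholders): with the new default, the port attribute can simply be left out:

```nix
{
  services.nginx.virtualHosts."example.local".listen = [
    # No port: this vhost only listens on a Unix socket.
    { addr = "unix:/run/nginx/example.sock"; }
  ];
}
```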
This also makes the configuration available when you just run those tools locally.
Also use ruff instead of pylint because it's faster and more
comprehensive.