For example, the wait_for_unit() call in the Moodle test times out for me
and others[1], so it would be good to be able to raise it to a value that a
test which would otherwise pass is unlikely to hit.
[1]: https://github.com/NixOS/nixpkgs/pull/177052#issue-1266336706
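A minimal sketch of the intended usage inside a NixOS test script (where `machine` is provided by the driver), assuming wait_for_unit() gains an optional timeout keyword; the unit name and value are illustrative:
```python
# Sketch only: assumes wait_for_unit() accepts an optional `timeout` keyword
# (in seconds); the unit name and the 1800 s value are purely illustrative.
machine.wait_for_unit("example.service")                 # current default
machine.wait_for_unit("example.service", timeout=1800)   # give a slow unit more headroom
```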
Within a dual-VM test setup, strange behaviour was observed.
The two VMs are connected via one vde_switch instance
(virtualisation.vlans = [ 1 ]; IMO a bad attribute name for
switch instances, as it has nothing to do with VLANs in the sense of 802.1Q).
A ping on the base interface (eth1) works, but not on VLAN
subinterfaces (vlan1@eth1). A tcpdump on eth1 shows the ARP requests
tagged with the subinterface's VLAN ID, but the responses do not seem to
pass the vde_switch; the same exchange works fine on the base interface.
Putting the vde_switch into hub mode floods traffic to all vde_switch
ports, which restores the expected behaviour: a ping on a VLAN
subinterface works again.
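A minimal sketch of the workaround, not the actual driver code; the socket path is illustrative:
```python
import subprocess

# Start the per-VLAN vde_switch as a hub (--hub) so frames are flooded to
# every port and the tagged ARP replies reach the VLAN subinterfaces.
# The socket path is illustrative, not the driver's real location.
switch = subprocess.Popen(["vde_switch", "--sock", "/tmp/vde1.ctl", "--hub"])
```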
Signed-off-by: Philippe Schaaf <philippe.schaaf@secunet.com>
Without this fix, the handling of shell options in `machine.execute` is
inconsistent. When no timeout is used, the shell options `set -euo pipefail`
are applied to the command as expected. When a timeout is specified, the
shell options are not applied to the command itself (which is run inside an
`sh -c` that does not inherit them) but to the `timeout` command instead,
leading to the following full command:
```bash
(set -euo pipefail; timeout 900 sh -c 'cmd') | (base64 --wrap 0; echo)\n
```
With this fix, this is the command we get:
```bash
(timeout 900 sh -c 'set -euo pipefail; cmd') | (base64 --wrap 0; echo)\n
```
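A sketch of how the command string can be assembled so the shell options end up inside the `sh -c` wrapper; `build_command` is a hypothetical helper, not the driver's actual function:
```python
from shlex import quote
from typing import Optional

def build_command(command: str, timeout: Optional[int]) -> str:
    # The shell options are placed inside the `sh -c` wrapper, so `timeout`
    # wraps the inner shell instead of inheriting the options itself.
    inner = quote(f"set -euo pipefail; {command}")
    prefix = f"timeout {timeout} " if timeout is not None else ""
    return f"({prefix}sh -c {inner}) | (base64 --wrap 0; echo)\n"

print(build_command("cmd", 900))
# (timeout 900 sh -c 'set -euo pipefail; cmd') | (base64 --wrap 0; echo)
```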
Naively deduplicate VLANs in the Python driver for NixOS tests. The
current implementation only works by accident, since the VLan class mutates
the environment. On construction it sets QEMU_VDE_SOCKET_${id}, and this
environment variable gets overwritten once a second VLAN with the same
id is constructed. Because the NIC flags passed to QEMU just use the
QEMU_VDE_SOCKET_${id} environment variable, this implicitly chooses a
single vde_switch process for each VLAN.
However, this leads to unusable vde_switch processes being spawned in
each test run and as a side effect makes it impossible to access the
correct VLan objects in the interactive test driver. It also makes it
remarkably hard to understand why the current implementation ever
worked.
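A minimal sketch of the deduplication, not the actual driver code; the stub class and helper name are made up for illustration:
```python
from typing import Dict

class VLan:
    """Stub standing in for the driver's VLan class (which spawns a vde_switch)."""
    def __init__(self, nr: int) -> None:
        self.nr = nr

vlans: Dict[int, VLan] = {}

def get_vlan(nr: int) -> VLan:
    # Reuse the existing switch for this VLAN number instead of creating a
    # duplicate whose socket would only overwrite QEMU_VDE_SOCKET_<nr>.
    if nr not in vlans:
        vlans[nr] = VLan(nr)
    return vlans[nr]
```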
The aarch64-linux versions of the boot.uefiUsb and boot.uefiCdrom tests
were broken by b0fc9da879.
That commit was a refactor that dropped the qemuBinary option, which was
previously available in the legacy start command. This restores the option
and fixes the aforementioned tests.
When a step's duration is displayed with no other context, it becomes
nearly impossible (especially in longer tests) to tell when specific steps
finished.
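An illustrative sketch of the idea (the function name is made up): prefix the duration message with a wall-clock timestamp so the log also shows when a step finished.
```python
from datetime import datetime

def log_duration(message: str, seconds: float) -> None:
    # Add the finishing time next to the duration for context.
    finished_at = datetime.now().strftime("%H:%M:%S")
    print(f"({finished_at}) {message} took {seconds:.2f} seconds")

log_duration("wait_for_unit(example.service)", 12.34)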
The current implementation just forks off a thread to read
QEMU's stdout and lets it run forever. This, however,
makes interpreter shutdown racy, as the thread could
still be running and writing out buffered stdout when the
main thread exits (and since it uses the low-level API,
the worker thread does not get cleaned up by the atexit hooks
installed by `threading`, either). So, instead of doing that,
let's create a real `threading.Thread` object and
explicitly `join` it along with the rest of the cleanup.
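A sketch of the approach, not the actual driver code; class and method names are placeholders:
```python
import subprocess
import threading

class Machine:
    def start(self, args: list) -> None:
        # Read QEMU's stdout on a real Thread object instead of a bare
        # low-level thread, so it can be joined during cleanup.
        self.process = subprocess.Popen(args, stdout=subprocess.PIPE, text=True)
        self.output_thread = threading.Thread(target=self._read_output)
        self.output_thread.start()

    def _read_output(self) -> None:
        for line in self.process.stdout:
            print(line, end="")  # stand-in for the driver's logging

    def release(self) -> None:
        self.process.terminate()
        self.process.wait()
        self.output_thread.join()  # drain buffered output before the interpreter exits
```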
We have no use case for manually/selectively starting or stopping VLANs
in integration tests.
By starting and stopping the VLANs in the constructor and destructor
of VLAN objects, we remove the obligation (and the complexity) of managing
network lifetimes separately.
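A sketch of tying the switch's lifetime to the object's, under the assumption that each VLan owns one vde_switch process; arguments and paths are illustrative:
```python
import subprocess

class VLan:
    def __init__(self, nr: int, socket_dir: str) -> None:
        # Constructing the object brings the network up.
        self.nr = nr
        self.process = subprocess.Popen(["vde_switch", "--sock", socket_dir])

    def __del__(self) -> None:
        # Dropping the last reference tears the switch down again.
        self.process.terminate()
```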
This commit encapsulates the involved domain into classes and
defines explicit, typed arguments where untyped dicts were used.
It preserves backwards compatibility through legacy wrappers.
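An illustrative sketch of the direction, not the actual classes introduced by the commit; the class and field names are made up:
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StartOptions:
    # An explicit, typed argument object replacing a loosely structured dict
    # of start options.
    qemu_binary: str
    cdrom: Optional[str] = None
    usb: Optional[str] = None
```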