For reasons yet unknown, the vxlan backend doesn't work (at least inside
the qemu networking), so flannel is switched to the udp backend.
Note that changing the backend apparently also changes the interface
name; it's now `flannel0`, not `flannel.1`.
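A minimal sketch of the change, assuming the `services.flannel.backend`
option is an attribute set passed through to flannel's net-config:

    # switch from the (broken) vxlan backend to udp
    services.flannel.backend = { Type = "udp"; };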
Fixes #74941
The test was whitespace-sensitive, kept fighting with my editor and
broke easily. To fix this, let Python split the output into individual
lines and strip whitespace from each before comparing.
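A sketch of the comparison, as it might look in a NixOS test's Python
`testScript` (command and expected output are illustrative):

    testScript = ''
      actual = machine.succeed("some-command")
      expected = "first line\nsecond line\n"
      # compare line by line, ignoring surrounding whitespace
      assert [l.strip() for l in actual.splitlines()] == [
          l.strip() for l in expected.splitlines()
      ]
    '';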
Also removed `pkgs.hydra-flakes` since flake support has been merged
into master[1]. Because of that, `pkgs.hydra-unstable` is now compiled
against `pkgs.nixFlakes` and currently requires a patch since Hydra's
master doesn't compile[2] at the moment.
[1] https://github.com/NixOS/hydra/pull/730
[2] https://github.com/NixOS/hydra/pull/732
Upgrades Hydra to the latest master/flake branch. To perform this
upgrade, a non-trivial database migration is needed, which provides a
massive performance improvement[1].
The basic ideas behind multi-step upgrades of services between NixOS
versions have already been gathered[2]; it's recommended to read that
first for further context.
Basically, the following steps are needed:
* Upgrade to a non-breaking version of Hydra with the db-changes
(columns are still nullable here). If `system.stateVersion` is set to
something older than 20.03, the package will be selected
automatically, otherwise `pkgs.hydra-migration` needs to be used.
* Run `hydra-backfill-ids` on the server.
* Deploy either `pkgs.hydra-unstable` (for Hydra master) or
  `pkgs.hydra-flakes` (for flake support) to activate the optimization
  (see the sketch below).
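A sketch of the package selection across the two deploys, assuming a
`system.stateVersion` of 20.03 or newer where the package must be
chosen explicitly:

    { pkgs, ... }: {
      services.hydra.enable = true;
      # first deploy: non-breaking schema changes, columns still nullable
      services.hydra.package = pkgs.hydra-migration;
      # then run `hydra-backfill-ids` on the server, and redeploy with:
      #services.hydra.package = pkgs.hydra-unstable;
    }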
The steps are also documented in the release-notes and in the module
using `warnings`.
`pkgs.hydra` has been removed as the latest Hydra doesn't compile with
`pkgs.nixStable` and to ensure a graceful migration using the newly
introduced packages.
To verify the approach, a simple vm-test has been added which verifies
the migration steps.
[1] https://github.com/NixOS/hydra/pull/711
[2] https://github.com/NixOS/nixpkgs/pull/82353#issuecomment-598269471
While our ETag patch works fine when it comes to serving data off
store paths, it unfortunately broke something that might be a bit more
common, namely when using regexes to extract path components of
location directives, for example.
Recently, @devhell has reported a bug with a nginx location directive
like this:
    location ~ "^/\~([a-z0-9_]+)(/.*)?$" {
        alias /home/$1/public_html$2;
    }
While this might look harmless at first glance, it does however cause
issues with our ETag patch. The alias directive gets broken up by nginx
like this:
*2 http script copy: "/home/"
*2 http script capture: "foo"
*2 http script copy: "/public_html"
*2 http script capture: "/bar.txt"
In our patch however, we use realpath(3) to get the canonicalised path
from ngx_http_core_loc_conf_s.root, which returns the *configured* value
from the root or alias directive. So in the example above, realpath(3)
boils down to the following syscalls:
lstat("/home", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat("/home/$1", 0x7ffd08da6f60) = -1 ENOENT (No such file or directory)
During my review[1] of the initial patch, I didn't actually notice that
what we're doing here is returning NGX_ERROR if the realpath(3) call
fails, which in turn causes an HTTP 500 error.
Since our patch actually made the canonicalisation (and thus additional
syscalls) necessary, we really shouldn't introduce an additional error,
so let's - at least for now - silently ignore the return value if
realpath(3) has failed.
However since we're using the unaltered root from the config we have
another issue, consider this root:
/nix/store/...-abcde/$1
Calling realpath(3) on this path will fail (except if there's a file
called "$1" of course), so even this fix is not enough because it
results in the ETag not being set to the store path hash.
While this is very ugly and we should fix this very soon, it's not as
serious as getting HTTP 500 errors for serving static files.
I added a small NixOS VM test, which uses the example above as a
regression test.
It seems that my memory is failing these days: apparently I already
*knew* about this issue. While digging for existing issues in nixpkgs,
I found this similar pull request, which I even reviewed:
https://github.com/NixOS/nixpkgs/pull/66532
However, since the comments weren't addressed and the author hasn't
responded to the pull request, I decided to keep this very commit and do
a follow-up pull request.
[1]: https://github.com/NixOS/nixpkgs/pull/48337
Signed-off-by: aszlig <aszlig@nix.build>
Reported-by: @devhell
Acked-by: @7c6f434c
Acked-by: @yorickvP
Merges: https://github.com/NixOS/nixpkgs/pull/80671
Fixes: https://github.com/NixOS/nixpkgs/pull/66532
Dropbear lags significantly behind OpenSSH in support for modern key
formats like `ssh-ed25519`, let alone the recently-introduced
U2F/FIDO2-based `sk-ssh-ed25519@openssh.com` (as I found when I switched
my `authorizedKeys` over to it and promptly locked myself out of my
server's initrd SSH, breaking reboots), as well as in security features
like multiprocess isolation. Using the same SSH daemon for stage-1 and
the main system ensures key formats will always remain compatible, and
more conveniently allows the sharing of configuration and host keys.
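For example, sharing the system host key with stage-1 might look like
this (a sketch; the exact option set is assumed, and the key path is
illustrative):

    { config, ... }: {
      boot.initrd.network.enable = true;
      boot.initrd.network.ssh = {
        enable = true;
        # reuse the main system's host key and authorized keys
        hostKeys = [ "/etc/ssh/ssh_host_ed25519_key" ];
        authorizedKeys = config.users.users.root.openssh.authorizedKeys.keys;
      };
    }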
The main reason to use Dropbear over OpenSSH would be initrd space
concerns, but NixOS initrds are already large (17 MiB currently on my
server), and the size difference between the two isn't huge (the test's
initrd goes from 9.7 MiB to 12 MiB with this change). If the size is
still a problem, then it would be easy to shrink sshd down to a few
hundred kilobytes by using an initrd-specific build that uses musl and
disables things like Kerberos support.
This passes the test and works on my server, but more rigorous testing
and review from people who use initrd SSH would be appreciated!
This mirrors the behaviour of systemd - it's udev that parses `.link`
files, not `systemd-networkd`.
This was originally applied in 36ef112a47,
but was reverted due to 1115959a8d causing
evaluation errors on hydra.
...even when networkd is disabled
This reverts commit ce78f3ac70, reversing
changes made to dc34da0755.
I'm sorry; Hydra has been unable to evaluate, always returning
> error: unexpected EOF reading a line
and I've been unable to reproduce the problem locally. Bisecting
pointed to this merge, but I still can't see what exactly was wrong.
- Fix misspelled option. mkRenamedOptionModule is not used because the
option hasn't really worked before.
- Add missing cfg.telemetryPath arg to ExecStart.
- Fix mkdir invocation in test.
Add a cage module to nixos. This can be used to make kiosk-style
systems that boot directly to a single application. The user (demo by
default) is automatically logged in by this service and the
program (xterm by default) is automatically started.
This is useful for some embedded, single-user systems where we want
automatic booting. To keep the system secure, the user should have
limited privileges.
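A sketch of a kiosk configuration using the module (option values are
illustrative, matching the defaults described above):

    { pkgs, ... }: {
      services.cage = {
        enable = true;
        user = "demo";                        # logged in automatically
        program = "${pkgs.xterm}/bin/xterm";  # the single application shown
      };
    }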
Based on the service provided in the Cage wiki here:
https://github.com/Hjdskes/cage/wiki/Starting-Cage-on-boot-with-systemd
Co-Authored-By: Florian Klink <flokli@flokli.de>
* prometheus-nginx-exporter: 0.5.0 -> 0.6.0
* nixos/prometheus-nginx-exporter: update for 0.6.0
Added the new option `constLabels` and updated the virtualHost name in
the exporter's test.
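A hedged example of the new option, assuming it takes `label=value`
strings like the const-label flags of other exporters:

    services.prometheus.exporters.nginx = {
      enable = true;
      constLabels = [ "host=myhost" ];
    };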
This is useful when buildLayeredImage is called in a generic way
that should allow simple (base) images to be built, which may not
reference any store paths.
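For instance, a minimal image with no store-path references (the name
and tag are illustrative):

    pkgs.dockerTools.buildLayeredImage {
      name = "empty-base";
      tag = "latest";
      # no contents and no config referencing the Nix store
      contents = [ ];
    }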
I am not sure how this ever passed on Hydra, but 30s is barely enough
to get through the configure phase of opensmtpd. It is likely the
package was built as part of another jobset. Whenever it is built as
part of the test execution, the timeout propagates, and 30s is clearly
not enough for that.
The subtest was mainly written to demonstrate the VRF issues with a
5.x kernel. However, this breaks the entire test now that we have 5.4
as the default kernel. Disabling the subtest for now; I still need to
find some time to investigate.
Depending on the network management backend being used, if the interface
configuration in stage 1 is not cleared, there might still be some old
addresses or routes from stage 1 present in stage 2 after network
configuration has finished.
This makes predictable interface names available as soon as possible
with udev by adding the default network link units to the initrd, where
they are read by udev. Also adds some udev rules that are needed but
would normally be loaded from the udev store path, which is not included
in the initrd.
The `invalid` test was introduced in 297d1598ef
and is disabled in the shipped daemon.conf.
I forgot to reflect that in the module, which caused the daemon to
print the following on start-up:
    FuEngine invalid has incorrect built version invalid
and the command to warn:
    WARNING: The daemon has loaded 3rd party code and is no longer supported by the upstream developers!
To reduce the chance of this happening in the future, I moved the list
of plug-ins disabled by default to the package expression. I also set
the value of the NixOS module option in the config section of the
module instead of in the default value used previously, which will
allow users to not have to care about these plug-ins.
In 87a19e9048 I merged staging-next into master using the GitHub gui as intended.
In ac241fb7a5 I merged master into staging-next for the next staging cycle, however, I accidentally pushed it to master.
Thinking this may cause trouble, I reverted it in 0be87c7979. This was however wrong, as it "removed" master.
This reverts commit 0be87c7979.
I merged master into staging-next but accidentally pushed it to master.
This should get us back to 87a19e9048.
This reverts commit ac241fb7a5, reversing
changes made to 76a439239e.
The old Quake3 NixOS test, which served as a nice demo to showcase what
NixOS tests are capable of, was removed in
50ea99cbc1. This commit adds the same
functionality to run real OpenArena clients.
This module allows root autoLogin, so we would break that for users,
but they shouldn't be using it anyway. It gives the impression that
`auto` is some special display manager, when it's just LightDM plus
special PAM rules to allow root autoLogin. It was created for NixOS's
testing, so I believe this is where it belongs.
- the `imageFile` option allows loading an image from a derivation
- the `dependsOn` option can be used to specify dependencies between
  container systemd units (see the sketch below)
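A hedged example, assuming the `docker-containers` module's option
layout of this era (`myAppImage` is a hypothetical derivation, e.g.
built with dockerTools):

    {
      docker-containers.db = {
        image = "postgres";
      };
      docker-containers.app = {
        image = "myapp";
        imageFile = myAppImage;  # loaded via `docker load` instead of pulled
        dependsOn = [ "db" ];    # order this unit after the db container
      };
    }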
Co-authored-by: Christian Höppner <mkaito@users.noreply.github.com>
* nixos/buildkite: drop user option
This reverts 8c6b1c3eaa.
Turns out, buildkite-agent has logic to write .ssh/known_hosts files and
only really works when $HOME and the user homedir are in sync.
On top of that, we provision ssh keys in /var/lib/buildkite-agent, which
doesn't work if that other users' homedir points elsewhere (we can cheat
by setting $HOME, but then getent and $HOME provide conflicting
results).
So after all, it's better to run the system-wide buildkite agent only
as the "buildkite-agent" user - if one wants to run buildkite as
different users, systemd user services might be a better fit.
* nixosTests.buildkite-agent: add node with separate user and no ssh key
nixpkgs currently ships elasticsearch-curator 5.8.1. As of version
5.7.0, elasticsearch-curator supports Elasticsearch 7, thus this change
enables the tests with ES 7.
Updates required:
- Use vpc image format (new default, supported by Amazon)
- Pass full image filename to makeEc2Test
- Increase memory allocation for nixos-rebuild
- Set a networking.hostName for services.httpd
- Add appropriate escaping in literal userdata
While I'm here, try to make it fail fast.
This fixes the patch for nginx to clear the Last-Modified header if a
static file is served from the Nix store.
So far we only used the ETag from the store path, but if the
Last-Modified header is always set to "Thu, 01 Jan 1970 00:00:01 GMT",
Firefox and Chrome/Chromium seem to ignore the ETag and simply use the
cached content instead of revalidating.
Alongside the fix, this also adds a dedicated NixOS VM test, which uses
WebDriver and Firefox to check whether the content is actually served
from the browser's cache and to have a more real-world test case.
This is what I suspected a while ago[1]:
> Heads-up everyone: After testing this in a few production instances,
> it seems that some browsers still get cache hits for new store paths
> (and changed contents) for some reason. I highly suspect that it might
> be due to the last-modified header (as mentioned in [2]).
>
> Going to test this with last-modified disabled for a little while and
> if this is the case I think we should improve that patch by disabling
> last-modified if serving from a store path.
Much earlier[2] when I reviewed the patch, I wrote this:
> Other than that, it looks good to me.
>
> However, I'm not sure what we should do with Last-Modified header.
> From RFC 2616, section 13.3.4:
>
> - If both an entity tag and a Last-Modified value have been
> provided by the origin server, SHOULD use both validators in
> cache-conditional requests. This allows both HTTP/1.0 and
> HTTP/1.1 caches to respond appropriately.
>
> I'm a bit nervous about the SHOULD here, as user agents in the wild
> could possibly just use Last-Modified and use the cached content
> instead.
Unfortunately, I didn't pursue this any further back then because
@pbogdan noted[3] the following:
> Hmm, could they (assuming they are conforming):
>
> * If an entity tag has been provided by the origin server, MUST
> use that entity tag in any cache-conditional request (using If-
> Match or If-None-Match).
Since running with this patch in some deployments, I found that both
Firefox and Chrome/Chromium do NOT re-validate against the ETag if the
Last-Modified header is still the same.
So I wrote a small NixOS VM test with Geckodriver to have a test case
which is closer to the real world and I indeed was able to reproduce
this.
Whether this is actually a bug in Chrome or Firefox is an entirely
different issue and even IF it is the fault of the browsers and it is
fixed at some point, we'd still need to handle this for older browser
versions.
Apart from clearing the header, I also recreated the patch by using a
plain "git diff" with a small description on top. This should make it
easier for future authors to work on that patch.
[1]: https://github.com/NixOS/nixpkgs/pull/48337#issuecomment-495072764
[2]: https://github.com/NixOS/nixpkgs/pull/48337#issuecomment-451644084
[3]: https://github.com/NixOS/nixpkgs/pull/48337#issuecomment-451646135
Signed-off-by: aszlig <aszlig@nix.build>
When using format strings, curly brackets need to be escaped using `{{`
to avoid errors from Python.
And apparently, Perl's `==` can be used to compare substrings[1], which
is why the translation to `assert http_code == "304"` failed, as the
string contains several headers from curl.
[1] Just check `perl <(echo 'die "alarm" if "foo\n304" == 304')`
We've rewritten it to use GDM, and we can now autologin to the X11
session because of the accountsservice preStart script for autologin.
It should work similarly to the Wayland test, just with a few nuanced
differences for Xorg.
The upstream session files that display managers use have no concept of
sessions being composed from a desktop manager and a window manager. To
be able to set upstream session files as the default session, we need a
single option. Having two different ways to set the default session
would be confusing, though, so we decided to deprecate the old method.
We also created a separate script for each session, just like we
already had a separate desktop file for each one, and started using the
displayManager.sessionPackages mechanism to make session handling more
uniform.
There are two ways of providing graphical sessions now:
- `displayManager.session` via `desktopManager.session` and
  `windowManager.session`
- `displayManager.sessionPackages`
`sessionPackages` doesn't make a distinction between desktop and window
managers. This makes selecting a session provided by a package using
`desktopManager.default` nonsensical.
We therefore introduce `displayManager.defaultSession`, which can
select a session from either `displayManager.session` or
`displayManager.sessionPackages`. It will default to
`desktopManager.default + windowManager.default` as before. If the
desktop manager default is "none", it will select the first session
provided by `sessionPackages`.
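For example (the session names are illustrative):

    # select a session provided via sessionPackages:
    services.xserver.displayManager.defaultSession = "gnome";
    # or an old-style composed session ("none" desktop manager plus i3):
    #services.xserver.displayManager.defaultSession = "none+i3";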
This reduces the length of the gitea test by creating a single
`makeGiteaTest` function which creates the configuration for a test
case with a given database driver.
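A sketch of the factoring (apart from `makeGiteaTest` itself, the
helper and option names are illustrative, assuming the usual
make-test-python.nix `makeTest`):

    let
      makeGiteaTest = database: makeTest {
        name = "gitea-${database}";
        nodes.server = { ... }: {
          services.gitea = {
            enable = true;
            database.type = database;
          };
        };
        testScript = ''
          server.wait_for_unit("gitea.service")
        '';
      };
    in {
      mysql = makeGiteaTest "mysql";
      postgres = makeGiteaTest "postgres";
      sqlite3 = makeGiteaTest "sqlite3";
    }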