The Nix-provided `nix-daemon.socket` file has a
> ConditionPathIsReadWrite=/nix/var/nix/daemon-socket/socket
line, so that unit is skipped if /nix/var/nix/daemon-socket/socket is
read-only (which is the case in some nixos-containers where that folder
is bind-ro-mounted from the host).
Systemd 250 (rightfully) started to also skip the unit if the path doesn't exist at all:
> [ 237.187747] systemd[1]: Nix Daemon Socket was skipped because of a failed condition check (ConditionPathIsReadWrite=/nix/var/nix/daemon-socket).
However, systemd < 250 didn't skip if /nix/var/nix/daemon-socket/socket
didn't /exist at all/, and we were relying on this bug on fresh NixOS
systems to have /nix/var/nix/daemon-socket/socket created initially.
Move the creation of that folder to systemd-tmpfiles. Ideally, Nix
itself would ship an appropriate file in
`${nixPackage}/lib/tmpfiles.d/nix-daemon.conf` (NixOS/nix#6285).
In the meantime, set a systemd tmpfiles rule manually in NixOS.
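A minimal sketch of such an interim rule (the exact mode and ownership
used by the module may differ):

```nix
# Hedged sketch: have systemd-tmpfiles create the daemon-socket folder,
# so the socket unit's condition check can succeed on fresh systems.
{
  systemd.tmpfiles.rules = [
    "d /nix/var/nix/daemon-socket 0755 root root - -"
  ];
}
```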
This has been tested to still work with a read-only bind-mounted
/nix/var/nix/daemon-socket/socket in containers; it'll keep them
read-only ;-)
some change in the last 24 hours altered the behaviour of st such that
it now dies with a non-zero exit code when the shell exits, so kill is
now necessary
For now at least. I expect someone will find a working type later.
It's incorrect and was causing bad issues. Example test case:
nix-instantiate nixos/release.nix -A tests.xfce.x86_64-linux --dry-run
This is a partial revert of commit b2d803c from PR #162271.
hostNames being deprecated makes configuring hosts with multiple keys a
pain. Including the attr name of the entry in the host name list is a
nice convenience, though, so we'll retain it and clarify the
documentation on how the actual host name list for an entry is put
together.
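A hedged illustration of the resulting behaviour (the host and option
names below are assumptions based on the description above, not taken
verbatim from the module):

```nix
# The attribute name ("git.example.org") is itself included in the
# entry's host name list; extra names can be listed alongside it.
{
  programs.ssh.knownHosts."git.example.org" = {
    extraHostNames = [ "192.0.2.10" ];
    publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIExampleExampleExampleExample";
  };
}
```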
* _7zz: correct license and remove p7zip dependency
The code under Compress/Rar* is licensed under a specific unRAR license.
Compress/LzfseDecoder.cpp is covered by BSD3.
The unRAR code is removed from the `.tar.xz` since the license requires
you to either agree to it or remove the code from your hard drive.
This adds some complexity to updating 7zz, so there is also an update
script.
Meta has been updated and tweaked.
Source is now downloaded from SourceForge in the `.tar.xz` version to
avoid depending on p7zip.
* _7zz: add notice of the license updates and optional unRAR licensed code
Currently it is only possible to add upstream _system_ units. The option
systemd.additionalUpstreamSystemUnits can be used for this.
However, this was not yet possible for systemd.user. An analogous
option has now been added to systemd-user.nix.
This is intended to let other modules add upstream user units.
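For comparison, a minimal sketch of the existing system-level usage
(the unit name is just an example); the new user-level option works
analogously:

```nix
# Pull an unmodified upstream unit into the generated system units.
{
  systemd.additionalUpstreamSystemUnits = [ "systemd-oomd.service" ];
}
```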
Add an exception to the `paperless-ng-server` service's
`SystemCallFilter` as the `mbind` syscall is needed when consuming a
document while having a classification model present.
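A hedged sketch of the kind of change involved; the module carries the
full filter itself, this only illustrates that extra `SystemCallFilter`
entries merge into the unit's existing allow-list:

```nix
# Permit the mbind syscall needed while classifying a consumed document.
{
  systemd.services.paperless-ng-server.serviceConfig.SystemCallFilter = [ "mbind" ];
}
```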
Since b9cfbcafdf, the lack of hexdump in
the closure led to the generation of empty cookie files. This empty
cookie file makes Pleroma crash at startup now that we correctly
read it.
We introduce a migration forcing these empty cookies to be
re-generated with non-empty content.
It was originally impossible to log in to toot without having an
interactive shell. I opened https://github.com/ihabunek/toot/pull/180
upstream to fix that and fetched that patch for this test.
The author decided to fix the issue using a slightly different
approach in a3eb5dca24.
Because of this upstream fix, our custom patch does not apply anymore,
so we now use that stdin-based upstream login feature instead.
We inject the release cookie path into the pleroma derivation in order
to wrap pleroma_ctl with it. Doing this allows us to remove the
systemd-injected RELEASE_COOKIE path, which was sadly
buggy (RELEASE_COOKIE should contain the *content* of the cookie, not
the path of the file containing it).
We take advantage of this to factor out the cookie path.
Fixes race conditions like this:
> systemd[1]: Started prometheus-kea-exporter.service.
> kea-exporter[927]: Listening on http://0.0.0.0:9547
> kea-exporter[927]: Socket at /run/kea/dhcp4.sock does not exist. Is Kea running?
> systemd[1]: prometheus-kea-exporter.service: Main process exited, code=exited, status=1/FAILURE
It doesn't make sense to have a default value for this that's
incompatible with the default locate implementation. It means that
just doing services.locate.enable = true; generates a warning, even if
you don't care about pruning anything. So only use the default prune
list if the locate implementation supports it (i.e., isn't findutils).
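A hedged sketch of that conditional default (`isFindutils` and the path
list are illustrative names, not the module's actual code):

```nix
# Only ship a default prune list when the chosen locate implementation
# actually supports pruning; with findutils the default stays empty.
prunePaths = lib.mkOption {
  type = lib.types.listOf lib.types.path;
  default = lib.optionals (!isFindutils) [ "/tmp" "/var/cache" "/nix/store" ];
  description = "Directories to exclude when updating the locate database.";
};
```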
If `services.tor.client.enable` is set to false (the default), the `SOCKSPort` option is not added to the torrc file. But since Tor defaults to listening on port 9050 when the option is not specified, the Tor client is not actually disabled. To fix this, simply set `SOCKSPort` to 0, which disables the client.
Use `mkForce` to prevent potentially ending up with two different `SOCKSPort` options in the torrc file with one of them being 0, as this would cause Tor to fail to start. When `services.tor.client.enable` is set to false, the client should always be disabled.
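A hedged sketch of the idea, assuming `SOCKSPort` can be expressed as a
plain list of ports under `services.tor.settings` (the module's real
option type may be richer):

```nix
{ lib, ... }: {
  # When the client is disabled, force SOCKSPort to 0 so the resulting
  # torrc contains "SOCKSPort 0" and no default listener on 9050 appears.
  services.tor.settings.SOCKSPort = lib.mkForce [ 0 ];
}
```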
When `services.resolved.enable` is set to true, the file /etc/resolv.conf becomes a symlink to /etc/static/resolv.conf, which is a symlink to /run/systemd/resolve/stub-resolv.conf. Without this commit, tor does not have access to this file because of systemd confinement. This results in the following warning when tor starts:
```
[warn] Unable to stat resolver configuration in '/etc/resolv.conf': No such file or directory
[warn] Could not read your DNS config from '/etc/resolv.conf' - please investigate your DNS configuration. This is possibly a problem. Meanwhile, falling back to local DNS at 127.0.0.1.
```
To fix this, simply allow read-only access to the file when resolved is in use.
According to https://github.com/NixOS/nixpkgs/pull/161818#discussion_r824820462, the symlink may also point to /run/systemd/resolve/resolv.conf, so allow that as well.
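A hedged sketch of the confinement tweak (the module's actual
serviceConfig layout may differ; `BindReadOnlyPaths` entries merge with
whatever the unit already sets):

```nix
{ config, lib, ... }: {
  # Let the confined tor service read the resolved-managed resolv.conf
  # targets, but only when systemd-resolved is actually in use.
  systemd.services.tor.serviceConfig.BindReadOnlyPaths =
    lib.mkIf config.services.resolved.enable [
      "/run/systemd/resolve/stub-resolv.conf"
      "/run/systemd/resolve/resolv.conf"
    ];
}
```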
Use a quoted heredoc to inject installBootLoader safely into the script,
and restore the previous invocation of `system` with a single argument so
that shell commands keep working.
Without this fix, evaluating a NixOS configuration with Tomcat enabled and the
default settings results in the following evaluation error:
```
Failed assertions:
- users.users.tomcat.group is unset. This used to default to
  nogroup, but this is unsafe. For example you can create a group
  for this user with:
  users.users.tomcat.group = "tomcat";
  users.groups.tomcat = {};
```
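For reference, this is the manual workaround the assertion suggests (a
minimal sketch, which this fix makes unnecessary):

```nix
# Give the tomcat user its own group instead of the unsafe nogroup default.
{
  users.users.tomcat.group = "tomcat";
  users.groups.tomcat = {};
}
```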
As of systemd/systemd@e908434458,
systemd-networkd now automatically configures routes to addresses
specified in AllowedIPs unless explicitly disabled with
"RouteTable=off".
As a novice to using this module, I found the existing description to be
quite misleading. It does not disable pulling from the registry at all;
it just loads some image archive that may or may not be related to the
container you're specifying. I had thought there was extra magic behind
this option, but it's just a `docker load`. You need foreknowledge of
the contents of the archive so that what it contains is actually what
gets used to run the container.
I've reworded the description to hopefully make this behavior clearer.
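A hedged illustration of that behaviour (the "myapp" container and image
are made up): `imageFile` is simply `docker load`-ed, so `image` must
name a tag the archive actually contains.

```nix
{ pkgs, ... }: {
  virtualisation.oci-containers.containers.myapp = {
    # Must match a tag present inside the loaded archive; otherwise the
    # archive is loaded but an unrelated image is pulled from the registry.
    image = "myapp:1.0";
    imageFile = pkgs.dockerTools.buildImage {
      name = "myapp";
      tag = "1.0";
    };
  };
}
```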
it's really easy to accidentally write a wrong systemd Exec* directive, one
that works most of the time but fails when users include systemd metacharacters
in arguments that are interpolated into an Exec* directive. Add a few functions
analogous to escapeShellArg{,s} and some documentation on how and when to use them.
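A hedged usage sketch ("my-service" and the argument values are made up;
`utils` is assumed to be the NixOS module argument carrying the new
helpers):

```nix
{ pkgs, utils, ... }: {
  # Escape each argument so systemd specifiers and metacharacters in the
  # interpolated values are passed through literally.
  systemd.services.my-service.serviceConfig.ExecStart =
    utils.escapeSystemdExecArgs [
      "${pkgs.hello}/bin/hello"
      "--greeting"
      "contains %specifiers and $metacharacters"
    ];
}
```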
This bug is so obscure and unlikely that I was honestly not able to
properly write a test for it. What happens is that we are calling
handleModifiedUnit() with $unitsToStart=\%unitsToRestart. We do this to
make sure that the unit is stopped before it's started again which is
not possible by regular means because the stop phase is already done
when calling the activation script.
recordUnit() still gets $startListFile, however, which is the wrong file.
The bug would be triggered if an activation script requests a service
restart for a service that has `stopIfChanged = true` and
switch-to-configuration is killed before the restart phase was run. If
the script is run again, but the activation script is not requesting
more restarts, the unit would be started instead of restarted.
The current logic assumes that everything that isn't a derivation is a
store path, but it can also be something that's *coercible* to a store
path, like a flake input.
Unnecessary uses of `lib.toDerivation` result in errors in pure evaluation
mode when `builtins.storePath` is disabled.
Also document what a `package` is.
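A hedged sketch of the idea only, not the actual `lib` code: coerce with
`lib.toDerivation` only when the value isn't already derivation-like.

```nix
# A sketch only (illustrative helper, not lib.types.package itself).
{ lib }:
{
  # Values that are derivations or expose an outPath (e.g. flake inputs)
  # are kept as-is; only other values fall back to lib.toDerivation,
  # which needs builtins.storePath and can fail in pure evaluation mode.
  coercePackage = value:
    if lib.isDerivation value || (builtins.isAttrs value && value ? outPath)
    then value
    else lib.toDerivation value;
}
```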
We spent a whole afternoon debugging this, because upstream has very
bad software quality and the error messages were incredibly
misleading.
So let’s document it for the sanity of other people.
Btw, I think the implementation of our module is pretty brittle,
especially the part about diffing tokens to check whether they
changed. We should rather just request a new builder registration
every time (it's not that much overhead) and always set `replace` so
it is idempotent.
Follow-up on https://github.com/NixOS/nixpkgs/pull/161426.
Explain why having legacy iptables rules installed can lead to confusing
firewall behaviour, and provide some guidance on how to fix this.
Some systems should not be rebooted at just any time. If the upgrade process takes too long, for instance because of a
slow internet connection, or if the upgrade service is run during production hours, we want to allow defining a window
outside of which a reboot will not be performed.
The system will then reboot on the next run of the upgrade service that finishes inside the reboot window.
E.g. we can run the update service twice per week, once during the night and once during the day, but reboots are only
allowed during the night. By doing so, a system that is usually shut down during the night will still receive updates
and systems that are turned on 24/7 can be rebooted outside of production hours.
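A hedged sketch of such a configuration (option names are assumed to
follow the description above; check the module for the exact interface):

```nix
# Upgrade automatically, but only allow the accompanying reboot at night.
{
  system.autoUpgrade = {
    enable = true;
    allowReboot = true;
    rebootWindow = { lower = "01:00"; upper = "05:00"; };
  };
}
```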
Co-authored-by: Silvan Mosberger <github@infinisil.com>
zsh-autosuggestions supports having fallback strategies expressed
through the ZSH_AUTOSUGGEST_STRATEGY array. For example,
`ZSH_AUTOSUGGEST_STRATEGY=(history completion)`. We should also support
this.
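A hedged sketch of the resulting NixOS usage (assuming the module option
now accepts a list, as described):

```nix
# Fall back to completion-based suggestions when history has no match.
{
  programs.zsh.autosuggestions.strategy = [ "history" "completion" ];
}
```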
For systems without internet connections, it doesn't make sense to
require the existence of an /etc/resolv.conf file to disable
resolvconf, so let's expose networking.resolvconf.enable as a public
option that can be set to false.
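For example, on a machine without any internet connection:

```nix
# No resolv.conf management is needed when there is nothing to resolve against.
{
  networking.resolvconf.enable = false;
}
```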
When initializing a system (e.g. first boot / livecd) we have no good
reference source for time. systemd-timesyncd, however, would revert back
to its configured fallback time (in our case 01.01.1980). Since we
probably don't want to hardcode a specific date as a fallback, we are now
using the current system time (wherever that might have come from) to
initialize the reference clock file.
The only systems that might be remotely affected by this change are
machines that have highly unreliable RTCs or those where the battery
that backs the RTC is running empty.
Historically these systems always had a tough time with anything time
related and likely required manual intervention.
For stateless systems (those that wipe / between reboots or our
installer CDs) this has the consequence that time will always be reset
to whatever the system comes up with on boot. This is likely the correct
time coming from an RTC. No harm done here; the situation is likely
unchanged for them.
For stateful systems (those that retain the / partition across reboots)
there shouldn't be a change at all. They'll provide an initial clock
value once in their lifetime (during first boot / after installation).
From then onwards systemd-timesyncd will update the file with the newer
fallback time (that will be picked up on the next boot).
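A hedged sketch of the mechanism (the exact hook and state path used
here are assumptions, not the module's actual code):

```nix
# Seed timesyncd's reference clock file with the current system time if
# it does not exist yet, instead of letting it fall back to a fixed date.
{
  systemd.services.systemd-timesyncd.preStart = ''
    if ! [ -f /var/lib/systemd/timesync/clock ]; then
      mkdir -p /var/lib/systemd/timesync
      touch /var/lib/systemd/timesync/clock
    fi
  '';
}
```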
The cntr command sometimes hangs until it hits the 10-hour Hydra limit.
This behaviour appears to be an edge case related to the type of TTY in
which the cntr
command runs during test execution. We can work around this by running
the command as a background job.
I additionally added a wait_for_open_port to fix nondeterministic test
failures I observed after fixing the hanging issue.
In issue #157787 @martined wrote:
Trying to use confinement on packages providing their systemd units
with systemd.packages, for example mpd, fails with the following
error:
system-units> ln: failed to create symbolic link
'/nix/store/...-system-units/mpd.service': File exists
This is because systemd-confinement and mpd both provide an mpd.service
file through systemd.packages (mpd was recently updated to use
upstream's service file that way).
To address this, we now place the unit file containing the bind-mounted
paths of the Nix closure into a drop-in directory instead of using the
name of a unit file directly.
This does come with the implication that the options set in the drop-in
directory won't apply if the main unit file is missing. In practice
however this should not happen for two reasons:
* The systemd-confinement module already sets additional options via
systemd.services and thus we should get a main unit file
* In the unlikely event that we don't get a main unit file regardless
of the previous point, the unit would be a no-op even if the options
of the drop-in directory would apply
Another thing to consider is the order in which those options are
merged, since systemd loads the files from the drop-in directory in
alphabetical order. So given that we have confinement.conf and
overrides.conf, the confinement options are loaded before the NixOS
overrides.
Since we're only setting the BindReadOnlyPaths option, the order isn't
that important: all those paths are merged anyway, and we still don't
lose the ability to reset the option because overrides.conf comes
afterwards.
Fixes: https://github.com/NixOS/nixpkgs/issues/157787
Signed-off-by: aszlig <aszlig@nix.build>
This type correctly merges multiple option types together while also
annotating them with file information. In a future commit this will be
used for `_module.freeformType`
Makes all options rendered in the manual throw an error if they don't
have a type specified.
This is a follow-up to #76184
Co-Authored-By: Silvan Mosberger <contact@infinisil.com>
This fixes the following issues with the database provisioning script
included in the services.keycloak module:
- It lacked permission to access the DB password file specified in the
module option 'services.keycloak.database.passwordFile'.
- It prevented Keycloak from starting after the second time if the user
chose MySQL for the database.
This effectively fixes the majority of all VM tests which were broken
because `/dev/vda` (or any other block device) wasn't mountable:
machine # mounting /dev/vda on /...
machine # mount: mounting /dev/vda on /mnt-root/ failed: No such device[ 2.820976] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100
machine # [ 2.821757] CPU: 0 PID: 1 Comm: init Not tainted 5.10.72 #1-NixOS
machine # [ 2.821757] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
machine # [ 2.821757] Call Trace:
machine # [ 2.821757] dump_stack+0x6b/0x83
machine # [ 2.821757] panic+0x101/0x2c8
machine # [ 2.821757] do_exit.cold+0x14/0xb3
machine # [ 2.821757] do_group_exit+0x33/0xa0
machine # [ 2.821757] __x64_sys_exit_group+0x14/0x20
machine # [ 2.821757] do_syscall_64+0x33/0x40
machine # [ 2.821757] entry_SYSCALL_64_after_hwframe+0x44/0xa9
machine # [ 2.821757] RIP: 0033:0x7f67ec2800f6
machine # [ 2.821757] Code: 00 4c 8b 0d 2c 5d 11 00 eb 19 66 2e 0f 1f 84 00 00 00 00 00 89 d7 89 f0 0f 05 48 3d 00 f0 ff ff 77 22 f4 89 d7 44 89 c0 0f 05 <48> 3d 00 f0 ff ff 76 e2 f7 d8 64 41 89 01 eb da 66 2e 0f 1f 84 00
machine # [ 2.821757] RSP: 002b:00007fff8f5a71d8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e7
machine # [ 2.821757] RAX: ffffffffffffffda RBX: 0000000000699704 RCX: 00007f67ec2800f6
machine # [ 2.821757] RDX: 0000000000000001 RSI: 000000000000003c RDI: 0000000000000001
machine # [ 2.821757] RBP: 0000000000000004 R08: 00000000000000e7 R09: ffffffffffffff80
machine # [ 2.821757] R10: 00007f67ec33f3e0 R11: 0000000000000202 R12: 000000000000000b
machine # [ 2.821757] R13: 00007fff8f5a75a8 R14: 0000000000000000 R15: 00000000004fc198
machine # [ 2.821757] Kernel Offset: 0x31e00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
machine # [ 2.821757] Rebooting in 1 seconds..
This happened because the kernel failed to load modules such as `ext4`
from `boot.initrd.availableKernelModules`[1] on e.g. a `mount(2)` syscall.
The problem is that `kmod` isn't linked against `libpthread.so.0`
anymore because it got merged into `libc.so.6` (however, the .so still
exists), but still needs it:
machine # newfstatat(AT_FDCWD, "/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-glibc-2.34-36/lib/x86_64", 0x7ffd951114c0, 0) = -1 ENOENT (No such file or directory)
machine # openat(AT_FDCWD, "/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-glibc-2.34-36/lib/x86_64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
machine # newfstatat(AT_FDCWD, "/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-glibc-2.34-36/lib/x86_64", 0x7ffd951114c0, 0) = -1 ENOENT (No such file or directory)
machine # openat(AT_FDCWD, "/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-glibc-2.34-36/lib/libpthread.so.0", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
machine # newfstatat(AT_FDCWD, "/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-glibc-2.34-36/lib", 0x7ffd951114c0, 0) = -1 ENOENT (No such file or directory)
machine # openat(AT_FDCWD, "/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-glibc-2.34-36/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
machine # writev(2, [{iov_base="/nix/store/kdc9n48ksdc1a8y8w512w"..., iov_len=69}, {iov_base=": ", iov_len=2}, {iov_base="error while loading shared libra"..., iov_len=36}, {iov_base=": ", iov_len=2}, {iov_base="libpthread.so.0", iov_len=15}, {iov_base=": ", iov_len=2}, {iov_base="cy
machine # ) = 184
machine # exit_group(127) = ?
machine # +++ exited with 127 +++
machine # mount: mounting /dev/vda on /mnt-root/ failed: No such device
machine # [ 19.167180] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100
machine # [ 19.167711] CPU: 0 PID: 1 Comm: init Not tainted 5.10.72 #1-NixOS
This is not a problem
* inside stage-1 because `LD_LIBRARY_PATH` points to `$out/lib` of
  extra-utils where `libpthread.so.0` also exists.
* on a running system because `${pkgs.glibc}/lib` is part of kmod's
rpath.
However, this is a problem when the kernel itself calls `modprobe` (in
our case `kmod`) to load modules, since that invocation doesn't know
about `LD_LIBRARY_PATH`. Also, the rpath reference was nuked.
To work around this, the kernel's `modprobe`
(i.e. `/proc/sys/kernel/modprobe`) now points to a wrapper which
explicitly declares `LD_LIBRARY_PATH`. We can't use `makeWrapper` here
because `modprobe` itself must not be renamed. Otherwise, `kmod` (which
is the link-target of `modprobe`) won't work because it expects
`argv[0] == "modprobe"` to perform modprobe's tasks.
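A heavily hedged sketch of the wrapper idea; the stage-1 hook and the
`/bin` paths are assumptions, not the actual initrd code:

```nix
{
  boot.initrd.preDeviceCommands = ''
    # Wrapper that declares LD_LIBRARY_PATH and then execs the real
    # modprobe under its original name, so kmod still sees a basename of
    # "modprobe" in argv[0].
    cat > /wrapped-modprobe <<'EOF'
    #!/bin/sh
    export LD_LIBRARY_PATH=/lib
    exec /bin/modprobe "$@"
    EOF
    chmod +x /wrapped-modprobe

    # Tell the kernel to use the wrapper for module loading requests.
    echo /wrapped-modprobe > /proc/sys/kernel/modprobe
  '';
}
```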
[1] https://nixos.org/manual/nixos/stable/options.html#opt-boot.initrd.availableKernelModules
Update version to 1.4.231.
Build 231 points to a specific commit from the 1.4.x branch adding many
fixes and improvements. Since this version is an unofficial release, add
an unstable prefix to the version string in Nixpkgs.
Signed-off-by: Felix Singer <felixsinger@posteo.net>
Signed-off-by: Franz Pletz <fpletz@fnordicwalking.de>
Currently the test-watch.service gets started in a loop as long as
/testpath exists, so `rm /testpath /testpath-modified` runs into a race
condition: if the service was just getting activated, it will create
/testpath-modified again and make the test fail.
This is fixed by making the service RemainAfterExit so that it only
starts once, and stopping it manually after we remove /testpath.
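A minimal sketch of the first part of that fix (the unit name comes from
the test):

```nix
# Keep the unit active after its script exits so it only starts once.
{
  systemd.services.test-watch.serviceConfig.RemainAfterExit = true;
}
```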
logrotate.timer is enough for rotating logs. Enabling logrotate.service would
make the service start on every configuration switch, leading to test failures when
logrotate is enabled.
Also update test to make sure the timer is active and runs the service
on date change.
https://github.com/ipfs/fs-repo-migrations/releases/tag/v2.0.2
This is pretty much a complete rewrite of the ipfs-migrator package.
In version 2.0.0 a major change was made to the way the migrator works. Before, there was one binary that contained every migration. Now every migration has its own binary. If fs-repo-migrations can't find a required binary in the PATH, it will download it off the internet. To prevent that, build every migration individually, symlink them all into one package and then wrap fs-repo-migrations so it finds the package with all the migrations.
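A hedged sketch of that wiring (the per-migration attribute names are
illustrative, not the actual nixpkgs attributes):

```nix
{ pkgs, fs-repo-migrations-unwrapped, fs-repo-10-to-11, fs-repo-11-to-12 }:

let
  # Symlink every individually built migration binary into one package.
  allMigrations = pkgs.symlinkJoin {
    name = "fs-repo-migrations-all";
    paths = [ fs-repo-10-to-11 fs-repo-11-to-12 ];
  };
in
# Wrap fs-repo-migrations so it finds the migrations on PATH instead of
# downloading them from the internet.
pkgs.runCommand "fs-repo-migrations"
  { nativeBuildInputs = [ pkgs.makeWrapper ]; } ''
    makeWrapper ${fs-repo-migrations-unwrapped}/bin/fs-repo-migrations \
      $out/bin/fs-repo-migrations \
      --prefix PATH : ${allMigrations}/bin
  ''
```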
The change to the IPFS NixOS module and the IPFS package is needed because without explicitly specifying a repo version to migrate to, fs-repo-migrations will query the internet to find the latest version. This fails in the sandbox, for example when testing the ipfs passthru tests.
While it may seem like the repoVersion and IPFS version are in sync and the code could be simplified, this is not the case. See https://github.com/ipfs/fs-repo-migrations#when-should-i-migrate for a table with the IPFS versions and corresponding repo versions.
Go 1.17 breaks the migrations, so use Go 1.16 instead. This is also the Go version used in their CI, see 3dc218e300/.github/workflows/test.yml (L4). See https://github.com/ipfs/fs-repo-migrations/pull/140#issuecomment-982715907 for a previous mention of this issue. The issue manifests itself when doing anything with a migration, for example `fs-repo-11-to-12 --help`:
```
panic: qtls.ClientHelloInfo doesn't match
goroutine 1 [running]:
github.com/marten-seemann/qtls-go1-15.init.0()
github.com/marten-seemann/qtls-go1-15@v0.1.1/unsafe.go:20 +0x132
```
Also add myself as a maintainer for this package.
This fixes the test failure discovered in https://github.com/NixOS/nixpkgs/pull/160914.
See https://github.com/ipfs/fs-repo-migrations/issues/148 to read some of my struggles with updating this package.