The test failed because gnutls-cli no longer properly reports connection
errors; fixed by increasing the debug level for gnutls-cli.
Fixes: #84507
Closes: #90718
This can be used to explicitly select a particular dtb file, relative to
the dtb base directory.
Update the generic-extlinux-compatible module to make use of this option.
Some bootloaders might not properly detect the model.
If the specific model is known by configuration, provide a way to
explicitly point to a specific dtb in the extlinux.conf.
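A minimal sketch of how this could be used (the option path and the dtb
file name are assumptions for illustration, not taken from this change):

    { ... }:
    {
      boot.loader.generic-extlinux-compatible.enable = true;
      # Path relative to the dtb base directory of the kernel package.
      hardware.deviceTree.name = "broadcom/bcm2711-rpi-4-b.dtb";
    }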
This option exposes the builder command used to populate an image,
honoring all options except the -c <path-to-default-configuration>
argument.
Useful to have for sdImage.populateRootCommands.
Special care needs to be taken w.r.t. cross-compilation: the populate
command runs on the host platform, the activation script on the build
platform (so the builders differ).
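A hedged sketch of the intended use from an sd-image expression, assuming
the command is exposed as boot.loader.generic-extlinux-compatible.populateCmd:

    { config, ... }:
    {
      sdImage.populateRootCommands = ''
        mkdir -p ./files/boot
        ${config.boot.loader.generic-extlinux-compatible.populateCmd} \
          -c ${config.system.build.toplevel} -d ./files/boot
      '';
    }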
Keeping the VM state across several test runs sometimes leads to subtle
and hard-to-spot errors in practice. We now delete the VM state, which
contains (among other things) the qcow volume.
We also introduce a -K (--keep-vm-state) flag that makes the VM state
persist after the test run. With this flag, test-driver.py matches its
previous behaviour.
Turns out, on smaller images (~800MiB uncompressed sdcard image size),
the current fudge factor is way too small to even get the system to the
phase where it can resize itself.
I first tried with 1.05, but it wasn't enough.
Merges https://github.com/NixOS/nixos-hardware into nixpkgs.
This will give better discoverability, and considering the low
turnover (less than 100 commits in the last year and only 350 total)
it won’t make any dent in the size of nixpkgs.
We have a monorepo, let’s use it.
It is going to be merged into nixpkgs, under `nixos/hardware`.
Enhance the heuristics to make sure that a user doesn't accidentally
upgrade across two major versions of Nextcloud (e.g. from v17 to v19).
The original idea/discussion has been documented in the nixpkgs manual[1].
This includes the following changes:
* `nextcloud19` will be selected automatically if the stateVersion is
greater than or equal to 20.09. For existing setups, the package has to
be selected manually to avoid accidental upgrades (see the example below).
* When using `nextcloud18` or older, a warning is issued which recommends
upgrading to `nextcloud19`.
* Added a brief paragraph about `nextcloud19` in the NixOS 19.09 release
notes.
* Restart `phpfpm` if the Nextcloud-package (`cfg.package`) changes[2].
[1] https://nixos.org/nixos/manual/index.html#module-services-nextcloud-maintainer-info
[2] https://github.com/NixOS/nixpkgs/pull/89427#issuecomment-638885727
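For existing setups, pinning the package might look like this sketch
(`services.nextcloud.package` is the option referred to as `cfg.package`
above):

    { pkgs, ... }:
    {
      # Select the major version explicitly to opt into the upgrade.
      services.nextcloud.package = pkgs.nextcloud19;
    }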
This option exposes the preconfigured nextcloud-occ
program. nextcloud-occ can then be used in other systemd services or
added to environment.systemPackages.
The nextcloud test shows how it can be added to
environment.systemPackages.
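A minimal sketch of the second use case, assuming the option is exposed
as `services.nextcloud.occ`:

    { config, ... }:
    {
      # Make the preconfigured nextcloud-occ available in $PATH.
      environment.systemPackages = [ config.services.nextcloud.occ ];
    }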
The nix store more-or-less requires o+rx on all parent directories.
This is primarily because nix runs builders in a uid/gid mapped
user-namespace, and those builders have to be able to operate on the nix
store.
This check is especially helpful because nix does not produce a helpful
error on its own (rather, creating directories and such works, it's not
until 'mount --bind' that it gets an EACCES).
Helps users who run into this opaque error, such as in #67465.
Possibly fixes that issue if bad permissions were the only cause.
Turns out, #75510 was too restrictive.
We also need to allow str here, as some modules set this to
"/run/wrappers" to bring `/run/wrappers/bin` into $PATH of a unit.
- Add serve.enable option, which configures uwsgi and nginx to serve
the mailman-web application (see the sketch after this list);
- Configure services to log to the journal, where possible. Mailman
Core does not provide any options for this, but will now log to
/var/log/mailman;
- Use a unified python environment for all components, with an
extraPackages option to allow use of postgres support and similar;
- Configure mailman's postfix module such that it can generate the
domain and lmtp maps;
- Fix formatting for option examples;
- Provide a mailman-web user to run the uwsgi service by default;
- Refactor Hyperkitty's periodic jobs to reduce repetition in the
expressions;
- Remove service dependencies not related to functionality included in
the module, such as httpd -- these should be configured in user config
when used;
- Move static files root to /var/lib/mailman-web-static by default. This avoids
permission issues when a static file web server attempts to access
/var/lib/mailman which is private to mailman. The location can still
be changed by setting services.mailman.webSettings.STATIC_ROOT;
- Remove the webRoot option, which seems to have been included by
accident, being an unsuitable directory for serving via HTTP.
- Rename mailman-web.service to mailman-web-setup.service, since it
doesn't actually serve mailman-web. There is now a
mailman-uwsgi.service if serve.enable is set to true.
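A hedged sketch of the new serving setup, using the option paths named
above:

    { ... }:
    {
      services.mailman = {
        enable = true;
        # Configure uwsgi and nginx to serve mailman-web.
        serve.enable = true;
        # The static files root defaults to /var/lib/mailman-web-static;
        # override services.mailman.webSettings.STATIC_ROOT to move it.
      };
    }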
Since Buildbot 0.9.0, status targets have been deprecated and ignored.
There's a very small line on startup explaining that, and status simply
isn't reported. Spare others the same headaches, and do it right in the
NixOS module.
As there might have been changes in the way reporters are organized, and
configuration might need to be migrated, remove the old option rather
than just providing an alias.
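Assuming the replacement is a list of reporter expressions (the option
path services.buildbot-master.reporters and the reporter shown are
assumptions), migrated configuration might look roughly like:

    { ... }:
    {
      services.buildbot-master.reporters = [
        ''reporters.MailNotifier(fromaddr="buildbot@example.org")''
      ];
    }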
Previously the NixOS-specific configuration for man-db was in the
package itself and /etc/man.conf was completely ignored.
This change moves it to /etc/man_db.conf, making declarative
configuration practical again.
It's now possible to generate the mandb caches for all packages
installed through NixOS `environment.systemPackages` at build-time.
The standard location for the stateful cache (/var/cache/man) is also
configured to allow users to run `mandb` manually if they wish.
Since generating the cache can be expensive the option is off by
default.
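Opting in is a one-liner; the option name is assumed here to be
documentation.man.generateCaches:

    { ... }:
    {
      # Build the mandb caches for environment.systemPackages at build time.
      documentation.man.generateCaches = true;
    }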
In /etc/sudoers, the last-matched rule will override all
previously-matched rules. Thus, make the default rule show up first (but
still allow some wiggle room for a user to `mkBefore` it), before any
user-defined rules.
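A hedged sketch of the resulting interaction: rules added via
security.sudo.extraRules now end up after the default rule, and
lib.mkBefore can still push them in front of it if that ordering is
really wanted:

    { lib, ... }:
    {
      security.sudo.extraRules = lib.mkBefore [
        {
          users = [ "alice" ];  # hypothetical user
          commands = [ { command = "ALL"; options = [ "NOPASSWD" ]; } ];
        }
      ];
    }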
Done by setting `autopilot.min_quorum = 3`.
Technically, this would have been required to keep the test correct since
Consul's "autopilot" "Dead Server Cleanup" was enabled by default (I believe
that was in Consul 0.8). Practically, the issue only occurred with our NixOS
test with releases >= `1.7.0-beta2` (see #90613). The setting itself is
available since Consul 1.6.2.
However, this setting was not documented clearly enough for anybody to notice,
and it was only brought to light by the upstream issue
https://github.com/hashicorp/consul/issues/8118 I filed.
As explained there, the test could also have been made to pass by applying
the more correct rolling reboot procedure
-m.wait_until_succeeds("[ $(consul members | grep -o alive | wc -l) == 5 ]")
+m.wait_until_succeeds(
+ "[ $(consul operator raft list-peers | grep true | wc -l) == 3 ]"
+)
but we also intend to test that Consul can regain consensus even if
the quorum gets temporarily broken.
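In NixOS terms the setting is a small sketch like this, assuming it is
passed through services.consul.extraConfig (which is serialised into
Consul's configuration):

    { ... }:
    {
      services.consul.extraConfig = {
        autopilot.min_quorum = 3;
      };
    }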
The systemd socket unit files now more precisely track the IPFS
configuration, by including any multiaddr they can make a `ListenStream`
for. (The daemon doesn't currently support anything which would use
`ListenDatagram`, so we don't need to worry about that.)
The tests use some of these features.
Specifying mailboxes as a list isn't a good approach since this makes it
impossible to override values. For backwards-compatibility, it's still
possible to declare a list of mailboxes, but a deprecation warning will
be shown.
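A hedged sketch of the attribute-set form (the attribute names are
assumptions; the list form keeps working, but warns):

    { ... }:
    {
      services.dovecot2.mailboxes = {
        Trash = { specialUse = "Trash"; auto = "create"; };
        Junk = { specialUse = "Junk"; };
      };
    }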
VirtualBox recommends VMSVGA for Linux guests.
It is also currently the only one supporting 3D acceleration,
and it works out of the box with NixOS and automatic screen resizing.
VMSVGA is recommended by VirtualBox for Linux guests.
Compared to VBoxVGA and VBoxSVGA it also supports 3D acceleration.
Adding the driver makes NixOS work with all three supported graphics card
types.
We need to keep passthru.filesInstalledToEtc and
passthru.defaultBlacklistedPlugins in sync with the package contents,
so let's add a test to enforce that.
Since cd1dedac67 systemd-networkd has its
netlink socket created via a systemd.socket unit. One might think that
this doesn't make much sense, since networkd is just going to create its
own socket on startup anyway. The difference here is that we have
configuration-time control over things like socket buffer sizes vs.
compile-time constants.
For larger setups where networkd has to create a lot of (virtual)
devices the default buffer size of currently 128MB is not enough.
A good example is a machine with >100 virtual interfaces (e.g.,
wireguard tunnels, VLANs, …) that all have to be brought up during
startup. The receive buffer size will spike due to all the messages
generated by the new interfaces. Eventually some of the messages will be
dropped since there is not enough (permitted) buffer space available.
By having networkd start through / with a netlink socket created by
systemd we can configure the `ReceiveBufferSize` parameter in the socket
options without recompiling networkd.
Since the actual memory requirements depend on hardware, timing, exact
configurations etc. it isn't currently possible to infer a good default
from within the NixOS module system. Administrators are advised to
monitor the logs of systemd-networkd for `rtnl: kernel receive buffer
overrun` spam and increase the memory as required.
Note: Increasing the ReceiveBufferSize doesn't allocate any memory. It
just increases the upper bound on the kernel side. The memory allocation
depends on the amount of messages that are queued on the kernel side of
the netlink socket.
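A minimal sketch of raising the limit from NixOS configuration, assuming
it is done through the socket unit's [Socket] section (systemd.socket(5)
spells the directive ReceiveBuffer=); the value is a placeholder:

    { ... }:
    {
      # Pick a size based on monitoring; this only raises the upper bound.
      systemd.sockets."systemd-networkd".socketConfig.ReceiveBuffer = "256M";
    }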
Reads a bit more naturally, and now the changes to the
acme-${cert}.service actually reflect what would be needed were you to
do the same in production.
e.g. "for dns-01, your service that needs the cert needs to pull in the
cert"
udev gained native support to handle FIDO security tokens, so we don't
need a module which only added the now obsolete udev rules.
Fixes: https://github.com/NixOS/nixpkgs/issues/76482
Upstream has this alias too, so that dbus activation works.
What I don't fully understand is why this would ever be useful, given
that this unit is already started way in early boot, even before dbus is
up. But let's just keep the behaviour similar to upstream and then ask
these questions upstream.
With this, systemd buffers netlink messages from the kernel in early
boot and passes them on to networkd for processing once it has started.
This makes sure no routing messages are missed.
Also adds an alias so that dbus can activate this unit. Upstream has
this too.
This will make dbus socket activation for it work.
Without this, restarting `systemd-resolved` would lead to unavailability
of DNS lookups. You're supposed to use D-Bus socket activation to buffer
resolved requests, so that restarts happen without downtime.
This makes it possible to only start IPFS when needed. So a user’s
IPFS daemon only starts when they actually use it.
A few important warnings though:
- This probably shouldn’t be mixed with services.ipfs.autoMount,
since the /ipfs and /ipns mounts aren’t activated like this
- ipfs.socket assumes that you are using ports 5001 and 8080 for the
API and gateway respectively. We could do some parsing to figure
out what is in apiAddress and gatewayAddress, but that’s kind of
difficult given the nonstandard address format.
- Apparently this doesn’t work with the --api commands used in the tests.
Of course you can always start it automatically with startWhenNeeded =
false, or just run ‘systemctl start ipfs.service’.
Tested with the following test (modified from tests/ipfs.nix):
import ./make-test-python.nix ({ pkgs, ... }: {
  name = "ipfs";
  nodes.machine = { ... }: {
    services.ipfs = {
      enable = true;
      startWhenNeeded = true;
    };
  };
  testScript = ''
    start_all()
    machine.wait_until_succeeds("ipfs id")
    ipfs_hash = machine.succeed("echo fnord | ipfs add | awk '{ print $2 }'")
    machine.succeed(f"ipfs cat /ipfs/{ipfs_hash.strip()} | grep fnord")
  '';
})
Fixes #90145
Update nixos/modules/services/network-filesystems/ipfs.nix
Co-authored-by: Florian Klink <flokli@flokli.de>
Previously we had three services for different config flavors. This is
confusing because only one instance of IPFS can run on a host / port
combination at once. So move all into ipfs.service, which contains the
configuration specified in services.ipfs.
Also remove the env wrapper and just use systemd env configuration.
This should have been done initially, as otherwise it gets awfully
awkward to boot into new generations by default.
This system-specific image wasn't expected to be long-lived, thus why it
didn't end up being polished much.
Reality shows us we may be stuck with it for a bit longer, so let's make
it easier to use for new users.
Due to the way this ends up being called by the Raspberry Pi 4 image
builder, the `-e` from the shebang does not take effect.
In turn, the build fails during cross-compilation. The wrong coreutils
ends up being used, but this is not made apparent.
The issue I faced is already fixed on master, but this ensures no one
ends up with a failed build "succeeding".
The default `undervolt` package does not accept floating point numbers
for any of its numeric arguments. This also documents the units in which
the values are expressed.
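A hedged sketch with integer offsets (the option names under
services.undervolt and the values are assumptions):

    { ... }:
    {
      services.undervolt = {
        enable = true;
        coreOffset = -50;  # integer offset; the tool rejects floating point values
        gpuOffset = -50;
      };
    }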
The setgid bit is currently required for offline enqueuing, and
unfortunately smtpctl is currently not split from sendmail, so there's
little getting around it.
The OC_PASS environment variable can be used to create a user with
`occ user:add --password-from-env`. It is currently not possible to
use `nextcloud-occ` to "non-interactively" create a user since
this variable is ignored by sudo.
This switches the unit to Restart=on-failure and switches the CPU policy
to fifo (the daemon tries to do that itself, but is denied permission).
Also add the package to $PATH to be able to use fs_cli easily.
It's pbPort, and it's also a connection string, meaning
listen-on-localhost is also possible. Provide an alias for the old
option name, so old configs still work.
This patch was done by curro:
The generated /etc/pam.d/* service files invoke the pam_systemd.so
session module before pam_mount.so, if both are enabled (e.g. via
security.pam.services.foo.startSession and
security.pam.services.foo.pamMount respectively).
This doesn't work in the most common scenario where the user's home
directory is stored in a pam-mounted encrypted volume (because systemd
will fail to access the user's systemd configuration).
This fixes a regression from 993baa587c which requires
networking.hostName to be a valid DNS label [0].
Unfortunately we missed the fact that the hostnames may also be empty,
if the user wants to obtain it from a DHCP server. This is even required
by a few modules/images (e.g. Amazon EC2, Azure, and Google Compute).
[0]: https://github.com/NixOS/nixpkgs/pull/76542#issuecomment-638138666
Fixes this warning at ibus-daemon startup:
(ibus-dconf:15691): dconf-WARNING **: 21:49:24.018: unable to open file '/etc/dconf/db/ibus': Failed to open file “/etc/dconf/db/ibus”: open() failed: No such file or directory; expect degraded performance
xchg is advertised as a bidirectional exchange dir, but file content
transfer from host to VM fails due to caching:
If a file is read in the VM and then modified on the host, subsequent
re-reads in the VM can yield old, cached data.
This is caused by the use of 9p's cache=loose mode that is explicitly
meant for read-only mounts.
9p doesn't provide any suitable cache modes, so fix this by disabling
caching.
Also, remove a now unnecessary sync in the test driver.
This effectively disables nscd's built-in hosts cache, which turns out
to be erratic in some cases.
We only use nscd these days as a more ABI-neutral NSS dispatcher
mechanism.
Local caching should still be possible with local resolvers in
/etc/resolv.conf (via the `dns` NSS module), or without local resolvers
via systemd-resolved (via the `resolve` NSS module).
We don't set enable-cache to no due to
https://github.com/NixOS/nixpkgs/pull/50316#discussion_r241035226.
Refactor the systemd service definition for the haproxy reverse proxy,
using the upstream systemd service definition. This allows the service
to be reloaded on changes, preserving existing server state, and adds
some hardening options.
These syncs have the goal to transfer host filesystem changes to the VM,
but they have no effect because 1) syncing in the VM can't possibly pull
in host data and 2) 9p is accessing the host filesystem on the cached
layer anyways, so even syncing on the host would have no effect in the
VM.