The minimal reproduction of the problem I'm trying to solve is that
the following NixOS test with a trivial NixOS container:
```
{ inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/24.05";
    flake-utils.url = "github:numtide/flake-utils/v1.0.0";
  };

  outputs = { flake-utils, nixpkgs, self, ... }:
    flake-utils.lib.eachDefaultSystem (system: {
      checks.default = nixpkgs.legacyPackages."${system}".nixosTest {
        name = "test";
        nodes.machine.containers.tutorial.config = { };
        testScript = "";
      };
    });
}
```
… fails with the following error message:
```
error: Neither nodes.machine.nixpkgs.hostPlatform nor the legacy option nodes.machine.nixpkgs.system has been set.
You can set nodes.machine.nixpkgs.hostPlatform in hardware-configuration.nix by re-running
a recent version of nixos-generate-config.
The option nodes.machine.nixpkgs.system is still fully supported for NixOS 22.05 interoperability,
but will be deprecated in the future, so we recommend to set nodes.machine.nixpkgs.hostPlatform.
```
The root of the problem appears to be that in
`nixos/modules/virtualisation/nixos-containers.nix` there is support
for deriving the guest's `nixpkgs.hostPlatform` or
`nixpkgs.localSystem` from the corresponding host's values, but this
doesn't work if the host sets `nixpkgs.pkgs` instead of one of those
values. In fact, this is what happens when using `pkgs.nixosTest`
(which sets `nixpkgs.pkgs` in
`pkgs/build-support/testers/default.nix`).
The solution I went with was to forward the `nixpkgs.pkgs` setting from
the host to the guest, but only if it is defined (matching the same
treatment as `nixpkgs.hostPlatform` and `nixpkgs.localSystem`).
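A minimal sketch of that idea (not the exact diff; the `host` module argument naming is an assumption for illustration):
```
{ lib, host, ... }:
{
  # Forward the host's nixpkgs.pkgs into the container, but only when the
  # host actually defined it, mirroring how nixpkgs.hostPlatform and
  # nixpkgs.localSystem are already forwarded.
  config.nixpkgs = lib.optionalAttrs host.options.nixpkgs.pkgs.isDefined {
    inherit (host.config.nixpkgs) pkgs;
  };
}
```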
This splits the 3rdparty drivers into separate
packages as recommended by upstream. This also
allows building an indi-full equivalent with only
the needed drivers. Also add indi-full-nonfree
with all the nonfree drivers, and remove them
from indi-full.
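As a hedged usage sketch (the helper and the per-driver attribute names are assumptions, check the actual package set):
```
# Build an indi-full equivalent that carries only the drivers you need.
# `indi-with-drivers` and the driver attribute below are assumed names.
environment.systemPackages = [
  (pkgs.indi-with-drivers [ pkgs.indi-3rdparty.indi-asi ])
];
```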
https://forgejo.org/docs/latest/user/authentication/#pam-pluggable-authentication-module
PAM support has to be enabled at compile time and upstream considers it
opt-in.
Official upstream binaries have it disabled.
We enabled it by default because we simply inherited most of it from
Gitea when the split in nixpkgs happened.
Reasons why it had been enabled in nixpkgs for Gitea are unknown.
See 9406f240a7.
There is reason to believe not a single Forgejo instance running on
NixOS uses this feature because it literally segfaults due to our
sandboxing.
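Anyone who really wants PAM can presumably still opt back in via an override, assuming the packaging keeps the `pamSupport` flag inherited from Gitea (an assumption):
```
# Opt back into compile-time PAM support (flag name assumed from the
# Gitea-derived packaging):
services.forgejo.package = pkgs.forgejo.override { pamSupport = true; };
```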
Although CDI should be used going forward so that container runtime
wrappers are no longer required, fix the nvidia-container-runtime
integration with Docker for setups where Docker < 25.
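For context, a sketch of the two configuration paths involved (option names as they exist in NixOS; treat the combination as illustrative):
```
# Preferred going forward: CDI, which needs no runtime wrapper.
hardware.nvidia-container-toolkit.enable = true;

# The legacy wrapper path this change fixes, still needed for Docker < 25.
virtualisation.docker.enableNvidia = true;
```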
Although kubectl has built-in JSONPath support, that support is only
partial and varies between versions. While using JSONPath in tests worked
for some versions, it failed for others. This contribution replaces the
problematic JSONPath usages with the jq JSON processor.
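A sketch of the substitution as it would appear in a NixOS test script (pod and field names are illustrative):
```
testScript = ''
  # Before: kubectl's built-in JSONPath, whose behavior varies by version:
  #   kubectl get pods -o jsonpath='{.items[0].metadata.name}'
  # After: stable extraction with jq:
  machine.succeed("kubectl get pods -o json | jq -r '.items[0].metadata.name'")
'';
```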
This reverts commit 93b6400ff5.
Putting chromium in the system closure by enabling the module breaks
previous expectations of module users. Previously, this would create
policy files for chromium, google-chrome and brave as chromium based
browsers.
Use cases involving a browser other than chromium itself, as well
as using only home-manager's module system to configure other aspects
of a chromium package (various use cases require overriding inputs
to the chromium derivation), were not covered by this breaking change.
More design is needed before having policy and package options coexist
properly in this module.
Now it's possible to start multiple mailpit instances - e.g. for
multiple testing environments - on the same machine:
```
{
  services.mailpit.instances = {
    dev = { /* ... */ };
    staging = { /* ... */ };
  };
}
```
The simplest way to start a single instance is by declaring
`services.mailpit.instances.default = { };`
The implementation is now compatible with the option's already-defined `.type`.
This allows us to pass `config.users.users.<user>.hashedPassword` even if this is null (the default).
Before:
  true  => access
  false => no access
  hash  => access via password
  null  => eval error
After:
  true  => access
  false => no access
  hash  => access via password
  null  => no access
Miniflux supports provisioning users via SSO, which renders admin
accounts unnecessary for some use-cases. This change retains the
existing default, but makes it easier to disable admin provisioning.
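A hedged sketch of an SSO-only deployment (`CREATE_ADMIN` and `OAUTH2_USER_CREATION` are upstream Miniflux settings; whether this exact shape disables provisioning is an assumption):
```
services.miniflux = {
  enable = true;
  config = {
    # Assumption for illustration: skip admin provisioning and let SSO
    # create users instead.
    CREATE_ADMIN = "0";
    OAUTH2_USER_CREATION = "1";
  };
};
```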
This adds support for declaring tmpfiles rules exclusively for the
systemd initrd. Configuration is possible through the new option
`boot.initrd.systemd.tmpfiles.settings` that shares the same interface as
`systemd.tmpfiles.settings`.
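For example (an illustrative rule; the interface mirrors `systemd.tmpfiles.settings`):
```
boot.initrd.systemd.tmpfiles.settings."10-example" = {
  # Create a directory early in the initrd, before /sysroot exists.
  "/example".d = {
    mode = "0755";
    user = "root";
    group = "root";
  };
};
```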
I intentionally did not replicate the `rules` interface here, given that
the settings attribute set is more versatile than the list of strings
used for `rules`. This should also make it unnecessary to implement the
workaround from 1a68e21d47 again.
A self-contained `tmpfiles.d` directory is generated from the new initrd
settings and it is added to the initrd as a content path at
`/etc/tmpfiles.d`.
The stage-1 `systemd-tmpfiles-setup.service` is now altered to no longer
operate under the `/sysroot` prefix, because the `/sysroot` hierarchy
cannot be expected to be available when the default upstream service is
started.
To handle files under `/sysroot` a slightly altered version of the
upstream default service is introduced. This new unit
`systemd-tmpfiles-setup-sysroot.service` operates only under the
`/sysroot` prefix and it is ordered between `initrd-fs.target` and the
NixOS activation.
Config related to tmpfiles was moved from initrd.nix to tmpfiles.nix.
For the open driver, the `nvidia-uvm` module does not auto-load after
`nvidia`, which breaks CUDA.
In this case, we need to add it to `boot.kernelModules` for CUDA to work
again.
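In effect, the module now does the equivalent of (sketch; the kernel accepts the underscore spelling):
```
# Load the UVM module at boot so CUDA works with the open driver.
boot.kernelModules = [ "nvidia_uvm" ];
```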
In #327506, we stopped using `/sbin` in the `pathsToLink` of `initrdBinEnv`. This inadvertently stopped including the `sbin` directory of the `initrdBin` packages, which meant that things like `mdadm`'s udev rules, which referred to binaries by their `sbin` paths, stopped working.
The purpose of #327506 was to fix the fact that `mount` was not calling mount helpers like `mount.ext4` unless they happened to be in `/sbin`. But this raised some questions for me, because I thought we set `managerEnvironment.PATH` to help util-linux find helpers for both `mount` and `fsck`. So I decided to look at how this works in stage 2 to figure it out, and it's a little cursed.
---
What I already knew is that we have [this](696a4e3758/nixos/modules/system/boot/systemd.nix (L624-L625))
```
# util-linux is needed for the main fsck utility wrapping the fs-specific ones
PATH = lib.makeBinPath (config.system.fsPackages ++ [cfg.package.util-linux]);
```
And I thought this was how `mount` finds the mount helpers. But if that were true, then `mount` should be finding helpers in stage 1 because of [this](696a4e3758/nixos/modules/system/boot/systemd/initrd.nix (L411))
```
managerEnvironment.PATH = "/bin";
```
Turns out, `mount` _actually_ finds helpers with [this configure flag](696a4e3758/pkgs/os-specific/linux/util-linux/default.nix (L59))
```
"--enable-fs-paths-default=/run/wrappers/bin:/run/current-system/sw/bin:/sbin"
```
Ok... so then why do we need the PATH? Because `fsck` has [this](a75c7a102e/disk-utils/fsck.c (L1659))
```
fsck_path = xstrdup(path && *path ? path : FSCK_DEFAULT_PATH);
```
(`path` is `getenv("PATH")`)
So, tl;dr, `mount` and `fsck` have completely unrelated search paths for their helper programs.
For `mount`, we have to use a configure flag to point to `/run/current-system`, and for `fsck` we can just set `PATH`.
---
So, for systemd stage 1, we *do* want to include packages' `sbin` paths, because of the `mdadm` problem. But for `mount`, we need helpers to be on the search path, and right now that means putting it somewhere in `/run/wrappers/bin:/run/current-system/sw/bin:/sbin`.
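So the direction of the fix, sketched under those constraints (not the exact diff):
```
# Link packages' sbin/ directories into the initrd environment again so
# udev rules that refer to helpers by /sbin paths (e.g. mdadm's) resolve.
pathsToLink = [ "/bin" "/sbin" ];
```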
Since the systemd boot counting PR was merged, dashes in specialisation
names cause issues when installing the boot loader entries, because dashes
are also used as the separator between the different components of the
boot loader entry file names on disk.
The assertion avoids this footgun, which is pretty annoying to recover
from.
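For example, a configuration like this (hypothetical name) now fails evaluation with a clear message instead of producing broken boot entries:
```
# Rejected by the new assertion: the dash would collide with the separator
# used in the boot-loader entry file names.
specialisation.my-spec.configuration = { };
```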
With the systemd-based initrd, systemd-journald is doing the logging.
One of journald's Trusted Journal Fields is `_HOSTNAME` (systemd.journal-fields(7)).
Without explicitly setting the hostname via this file or the kernel cmdline, `localhost` is used and captured in the journal.
As a result, a boot's log references multiple hostnames.
With centralized log collection, this breaks filtering (even more so when logs from multiple systemd-based initrds are streaming in simultaneously).
Fixes #318907.
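What the fix amounts to, sketched (assuming the initrd contents mechanism carries the file):
```
# Ship the configured hostname into the systemd-based initrd so journald
# records a consistent _HOSTNAME from the first boot message onwards.
boot.initrd.systemd.contents."/etc/hostname".text = config.networking.hostName;
```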
> evaluation warning: pkgs.writeText "motd": The second argument should be a string, but it's a null instead, which is deprecated. Use `toString` to convert the value to a string first.
Deprecate singularity-tools.mkLayer and singularity-tools.shellScript,
as they are no longer related to image building.
Use writers.writeBash instead of singularity-tools.shellScript.
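A migration sketch (script name and body illustrative):
```
# Before (deprecated):
#   singularity-tools.shellScript "run.sh" ''echo hello''
# After:
writers.writeBash "run.sh" ''
  echo hello
''
```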
ARM hardware is starting to have serious issues completing everything, due to:
- a seemingly harmless Lomiri crash & restart early on eating up some time (adding more RAM seems to have helped with that?), and
- every OCR step usually taking multiple minutes to complete.
So start splitting the tests up into parts:
- greeter, for testing just the greeter
- desktop, for general app stuff
- desktop-ayatana-indicators, for checking indicators (OCR-heavy & especially slow)
Currently passing on my hardware, but might need to be split up more in the future.
Lomiri now uses a separate systemd user target for all indicators that should start under Lomiri, because some Ayatana-like indicators do not make sense on non-Lomiri desktops.
Probably temporary, as we should instead encode this data from every indicator's service file into some passthru attribute.
The PgBouncer instance running on localhost may not be the one being
monitored via connectionString. Remove checks that forbid valid
configurations from being used, and instead document the requirements for
the PgBouncer configuration when used with the exporter.
This change adds the `services.pgbouncer.settings` option as per [RFC 0042]
and deprecates the other options that were previously used to generate the
configuration file.
In addition to that, we also place the configuration file under
`environment.etc` to allow reloading the configuration without a service
restart.
[RFC 0042]: https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md
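A hedged sketch of the new interface (sections and keys follow PgBouncer's ini format; values illustrative):
```
services.pgbouncer = {
  enable = true;
  settings = {
    # Rendered as the [pgbouncer] ini section.
    pgbouncer = {
      listen_addr = "127.0.0.1";
      listen_port = 6432;
      auth_type = "md5";
    };
    # Rendered as the [databases] ini section.
    databases = {
      mydb = "host=/run/postgresql dbname=mydb";
    };
  };
};
```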
This allows librenms to use socket authentication against a local MySQL
installation out of the box when installed under the same username,
avoiding complex DB password initialization steps.
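A hedged sketch (the option names here are assumptions, check the module):
```
services.librenms = {
  enable = true;
  database = {
    # Assumption for illustration: point LibreNMS at the local MySQL
    # socket; with matching usernames no password is needed.
    socket = "/run/mysqld/mysqld.sock";
  };
};
```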