After upgrading to 3.5.0 we noticed that registering would redirect to
the login page and not work at all. At the same time, the admin user was
unable to access its user settings.
This issue could be traced back to the template cache, which must be
invalidated between release upgrades.
```
provision # [ 8.223448] (kanidmd)[819]: kanidm.service: Failed to set up mount namespacing: /ofborg/checkout/repo/38dca4e3aa6bca43ea96d2fcc04e8229/builder/ofborg-evaluator-1/nixos/tests/common/acme/server: No such file or directory
```
This should fix errors where /etc is reported to be busy and thus cannot
be unmounted.
Another solution we can consider if this doesn't work out as we expect
is to forcefully unmount /etc.
I've observed that sometimes the overlay mount unit does not get started
when using wantedBy. requiredBy makes this relationship stricter and, if
necessary, will restart initrd-fs.target, thus ensuring that when
this target is reached /etc has already been mounted. This is in line
with the description of initrd-fs.target in systemd.special:
> Thus, once this target is reached the /sysroot/ hierarchy is fully set up
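As a hedged sketch (the unit definition is illustrative, not the exact change), the stricter relationship looks like this:
```nix
# Sketch only: tie the /etc overlay mount to initrd-fs.target via requiredBy,
# so the target cannot be considered reached before the mount is up.
boot.initrd.systemd.mounts = [{
  what = "overlay";
  where = "/sysroot/etc";
  type = "overlay";
  requiredBy = [ "initrd-fs.target" ];
  before = [ "initrd-fs.target" ];
}];
```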
This was broken by the Rust 1.80 upgrade, and is an old version that
we’d have to patch to keep working.
We have already done the 0.4 → 0.5 update without keeping around
the old version or adding in any additional `stateVersion` logic
in <https://github.com/NixOS/nixpkgs/pull/280221>. As a result,
migration for 0.3 users is going to be a little awkward. I’ve done
my best to provide comprehensive instructions for anyone who hasn’t
already bumped to 0.4.
It is probably a footgun to add `stateVersion` logic for any
package that makes backwards‐incompatible schema changes and only
supports migration from the immediately previous version. Users
won’t get migrated by default and we have to either package and
maintain an endlessly growing list of old versions or add complicated
instructions like this. It’s not really practical for us to support
a significantly better migration story than upstream does.
- Split desktop into desktop-basics (basic keybinds & app launching) and
desktop-appinteractions (one application triggering something in another) due to timeouts
- Wrap machine.wait_for_text to wait 10 seconds before starting
The 10 second delay improves runtime dramatically on weaker hardware. In desktop-ayatana-indicators
on my aarch64 laptop, runtime was cut down by 818.41 seconds (~14 minutes).
Hopefully this helps a bit with timeout issues on ARM :(
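A rough sketch of the wrapper idea (not the exact driver change) inside a test's `testScript`:
```nix
testScript = ''
  import time

  _orig_wait_for_text = machine.wait_for_text

  def _delayed_wait_for_text(regex):
      # Give the desktop time to render before the expensive OCR polling
      # loop starts; on weak hardware this avoids many futile OCR passes.
      time.sleep(10)
      return _orig_wait_for_text(regex)

  machine.wait_for_text = _delayed_wait_for_text
'';
```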
[UWSM](https://github.com/Vladimir-csp/uwsm) is a session manager that wraps a Wayland
compositor with useful systemd units like `graphical-session-pre.target`,
`graphical-session.target`, and `xdg-desktop-autostart.target`.
This is useful for Wayland compositors that do not start
these units on their own.
Example for Hyprland:
```nix
programs.hyprland.enable = true;
programs.uwsm.enable = true;
programs.uwsm.waylandCompositors = {
  hyprland = {
    compositorPrettyName = "Hyprland";
    compositorComment = "Hyprland compositor managed by UWSM";
    compositorBinPath = "/run/current-system/sw/bin/Hyprland";
  };
};
```
Co-authored-by: Kai Norman Clasen <k.clasen@protonmail.com>
The mkfs.erofs utility has a lot of output by default that slows down
running tests. We don't need to capture any of the output from
mkfs.erofs, so we can suppress it.
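A minimal sketch of the idea (the derivation and paths are illustrative, not the actual test code):
```nix
# Sketch: build an EROFS image while discarding mkfs.erofs's verbose output.
pkgs.runCommand "test-image.erofs"
  { nativeBuildInputs = [ pkgs.erofs-utils ]; } ''
    mkdir root
    mkfs.erofs "$out" root > /dev/null 2>&1
  ''
```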
The cgitrc file allows lists of values to be set for some keys by
giving multiple definition lines.
This allows setting multiple "css" files to include, or multiple
"readme" branch and file lookup paths.
The minimal reproduction of the problem I'm trying to solve is that
the following NixOS test with a trivial NixOS container:
```nix
{ inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/24.05";
    flake-utils.url = "github:numtide/flake-utils/v1.0.0";
  };

  outputs = { flake-utils, nixpkgs, self, ... }:
    flake-utils.lib.eachDefaultSystem (system: {
      checks.default = nixpkgs.legacyPackages."${system}".nixosTest {
        name = "test";
        nodes.machine.containers.tutorial.config = { };
        testScript = "";
      };
    });
}
```
… fails with the following error message:
```
error: Neither nodes.machine.nixpkgs.hostPlatform nor the legacy option nodes.machine.nixpkgs.system has been set.
You can set nodes.machine.nixpkgs.hostPlatform in hardware-configuration.nix by re-running
a recent version of nixos-generate-config.
The option nodes.machine.nixpkgs.system is still fully supported for NixOS 22.05 interoperability,
but will be deprecated in the future, so we recommend to set nodes.machine.nixpkgs.hostPlatform.
```
The root of the problem appears to be that in
`nixos/modules/virtualisation/nixos-containers.nix` there is support
for deriving the guest's `nixpkgs.hostPlatform` or
`nixpkgs.localSystem` from the corresponding host's values, but this
doesn't work if the host sets `nixpkgs.pkgs` instead of one of those
values. In fact, this is what happens when using `pkgs.nixosTest`
(which sets `nixpkgs.pkgs` in
`pkgs/build-support/testers/default.nix`).
The solution I went with was to forward the `nixpkgs.pkgs` setting from
the host to the guest, but only if it is defined (matching the
treatment of `nixpkgs.hostPlatform` and `nixpkgs.localSystem`).
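Roughly, as a sketch rather than the exact patch (here `options` and `config` refer to the host's module arguments):
```nix
# Forward nixpkgs.pkgs to the guest only when the host actually defined it,
# mirroring the existing hostPlatform/localSystem handling.
{
  nixpkgs.pkgs = lib.mkIf options.nixpkgs.pkgs.isDefined config.nixpkgs.pkgs;
}
```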
This splits the 3rdparty drivers into separate packages, as
recommended by upstream. This also allows building an indi-full
equivalent with only the needed drivers. Also adds indi-full-nonfree
with all the nonfree drivers and removes them from indi-full.
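A hedged sketch of what this enables (the helper and driver names are assumptions, not verified against the final package set):
```nix
# Sketch only: an indi-full equivalent containing just the drivers you need.
pkgs.indi-with-drivers (with pkgs.indi-3rdparty; [
  indi-asi
  indi-gphoto
])
```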
https://forgejo.org/docs/latest/user/authentication/#pam-pluggable-authentication-module
PAM support has to be enabled at compile time and upstream considers it
opt-in.
Official upstream binaries have it disabled.
We enabled it by default because we simply inherited most of it from
Gitea when the split in nixpkgs happened.
Reasons why it had been enabled in nixpkgs for Gitea are unknown.
See 9406f240a7.
There is reason to believe not a single Forgejo instance running on
NixOS uses this feature because it literally segfaults due to our
sandboxing.
Although CDI should be used so that container runtime wrappers are no
longer required, fix the nvidia-container-runtime integration with
Docker for cases where Docker < 25 is used.
Although kubectl has built-in JSONPath support, that support is only
partial and varies between versions. While using
JSONPath in tests worked for some versions, it failed for others. This
contribution replaces the problematic JSONPath usages with the jq JSON
processor.
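For example (test content illustrative), a check can parse the full JSON output with jq instead:
```nix
testScript = ''
  # jq sees the complete JSON document, so the query behaves the same
  # across kubectl versions; -e sets the exit status from the result.
  machine.wait_until_succeeds(
      "kubectl get pod test -o json | jq -e '.status.phase == \"Running\"'"
  )
'';
```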
This reverts commit 93b6400ff5.
Putting chromium in the system closure by enabling the module breaks
previous expectations of module users. Previously, this would create
policy files for chromium, google-chrome and brave as chromium based
browsers.
Use cases involving a browser other than chromium directly, as well
as using only home-manager's module system to configure other aspects
of a chromium package (various use cases require overriding inputs
to the chromium derivation), were not covered by this breaking change.
More design is needed before having policy and package options coexist
properly in this module.
Now it's possible to start multiple mailpit instances, e.g. for
multiple testing environments, on the same machine:
```nix
{
  services.mailpit.instances = {
    dev = { /* ... */ };
    staging = { /* ... */ };
  };
}
```
The simplest way to start a single instance is by declaring
`services.mailpit.instances.default = { };`.
The implementation is now compatible with the option's already-defined .type.
This allows us to pass `config.users.users.<user>.hashedPassword` even if it is null (the default).
Before:
true => access
false => no access
hash => access via password
null => eval error
After:
true => access
false => no access
hash => access via password
null => no access
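For example (the consumer module and option name are hypothetical), an assignment like this no longer needs a null guard:
```nix
# Works now even for users without a hashed password (hashedPassword = null).
services.example.passwordHash = config.users.users.alice.hashedPassword;
```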
Miniflux supports provisioning users via SSO, which renders admin
accounts unnecessary for some use-cases. This change retains the
existing default, but makes it easier to disable admin provisioning.
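A hedged sketch (whether the knob is exactly a nullable `adminCredentialsFile` is an assumption here):
```nix
# Sketch: rely on SSO for user provisioning and skip the local admin account.
services.miniflux = {
  enable = true;
  adminCredentialsFile = null;  # assumed: null disables admin provisioning
};
```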
This adds support for declaring tmpfiles rules exclusively for the
systemd initrd. Configuration is possible through the new option
`boot.initrd.systemd.tmpfiles.settings`, which shares the same interface as
`systemd.tmpfiles.settings`.
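For example (rule content illustrative), following the same shape as `systemd.tmpfiles.settings`:
```nix
boot.initrd.systemd.tmpfiles.settings."10-example" = {
  # Create a directory early in stage 1, before the root switch.
  "/run/example".d = {
    mode = "0755";
    user = "root";
    group = "root";
  };
};
```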
I intentionally did not replicate the `rules` interface here, given that
the settings attribute set is more versatile than the list of strings
used for `rules`. This should also make it unnecessary to implement the
workaround from 1a68e21d47 again.
A self-contained `tmpfiles.d` directory is generated from the new initrd
settings and it is added to the initrd as a content path at
`/etc/tmpfiles.d`.
The stage-1 `systemd-tmpfiles-setup.service` is now altered to no longer
operate under the `/sysroot` prefix, because the `/sysroot` hierarchy
cannot be expected to be available when the default upstream service is
started.
To handle files under `/sysroot` a slightly altered version of the
upstream default service is introduced. This new unit
`systemd-tmpfiles-setup-sysroot.service` operates only under the
`/sysroot` prefix and is ordered between `initrd-fs.target` and the
NixOS activation.
Config related to tmpfiles was moved from initrd.nix to tmpfiles.nix.
For the open driver, the `nvidia-uvm` module does not auto-load after
`nvidia`, which breaks CUDA.
In this case, we need to add it to `boot.kernelModules` for CUDA to work
again.
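Concretely, something like:
```nix
# Load nvidia-uvm explicitly so CUDA works with the open driver.
boot.kernelModules = [ "nvidia-uvm" ];
```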
In #327506, we stopped using `/sbin` in the `pathsToLink` of `initrdBinEnv`. This inadvertently stopped including the `sbin` directory of the `initrdBin` packages, which meant that things like `mdadm`'s udev rules, which referred to binaries by their `sbin` paths, stopped working.
The purpose of #327506 was to fix the fact that `mount` was not calling mount helpers like `mount.ext4` unless they happened to be in `/sbin`. But this raised some questions for me, because I thought we set `managerEnvironment.PATH` to help util-linux find helpers for both `mount` and `fsck`. So I decided to look at how this works in stage 2 to figure it out, and it's a little cursed.
---
What I already knew is that we have [this](696a4e3758/nixos/modules/system/boot/systemd.nix (L624-L625))
```
# util-linux is needed for the main fsck utility wrapping the fs-specific ones
PATH = lib.makeBinPath (config.system.fsPackages ++ [cfg.package.util-linux]);
```
And I thought this was how `mount` finds the mount helpers. But if that were true, then `mount` should be finding helpers in stage 1 because of [this](696a4e3758/nixos/modules/system/boot/systemd/initrd.nix (L411))
```
managerEnvironment.PATH = "/bin";
```
Turns out, `mount` _actually_ finds helpers with [this configure flag](696a4e3758/pkgs/os-specific/linux/util-linux/default.nix (L59))
```
"--enable-fs-paths-default=/run/wrappers/bin:/run/current-system/sw/bin:/sbin"
```
Ok... so then why do we need the PATH? Because `fsck` has [this](a75c7a102e/disk-utils/fsck.c (L1659))
```
fsck_path = xstrdup(path && *path ? path : FSCK_DEFAULT_PATH);
```
(`path` is `getenv("PATH")`)
So, tl;dr, `mount` and `fsck` have completely unrelated search paths for their helper programs.
For `mount`, we have to use a configure flag to point to `/run/current-system`, and for `fsck` we can just set `PATH`.
---
So, for systemd stage 1, we *do* want to include packages' `sbin` paths, because of the `mdadm` problem. But for `mount`, we need helpers to be on the search path, and right now that means putting them somewhere in `/run/wrappers/bin:/run/current-system/sw/bin:/sbin`.