Merge branch 'master' into elm-ghc-update

This commit is contained in:
Marek Fajkus 2024-01-06 13:43:21 +01:00
commit d9ea88c557
No known key found for this signature in database
GPG Key ID: 95585219BA6FE2CC
1766 changed files with 72445 additions and 32272 deletions

.github/CODEOWNERS vendored

@ -167,6 +167,8 @@
# Browsers
/pkgs/applications/networking/browsers/firefox @mweinelt
/pkgs/applications/networking/browsers/chromium @emilylange
/nixos/tests/chromium.nix @emilylange
# Certificate Authorities
pkgs/data/misc/cacert/ @ajs124 @lukegb @mweinelt
@ -336,3 +338,8 @@ nixos/tests/zfs.nix @raitobezarius
# Linux Kernel
pkgs/os-specific/linux/kernel/manual-config.nix @amjoseph-nixpkgs
# Buildbot
nixos/modules/services/continuous-integration/buildbot @Mic92 @zowoq
nixos/tests/buildbot.nix @Mic92 @zowoq
pkgs/development/tools/continuous-integration/buildbot @Mic92 @zowoq


@ -106,6 +106,19 @@ The following are supported:
- [`note`](https://tdg.docbook.org/tdg/5.0/note.html)
- [`tip`](https://tdg.docbook.org/tdg/5.0/tip.html)
- [`warning`](https://tdg.docbook.org/tdg/5.0/warning.html)
- [`example`](https://tdg.docbook.org/tdg/5.0/example.html)
Example admonitions require a title to work.
If you don't provide one, the manual won't be built.
```markdown
::: {.example #ex-showing-an-example}
# Title for this example
Text for the example.
:::
```
#### [Definition lists](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/definition_lists.md)
@ -139,3 +152,54 @@ watermelon
Closes #216321.
- If the commit contains more than just documentation changes, follow the commit message format relevant for the rest of the changes.
## Documentation conventions
In an effort to keep the Nixpkgs manual in a consistent style, please follow the conventions below, unless they prevent you from properly documenting something.
In that case, please open an issue about the particular documentation convention and tag it with a "needs: documentation" label.
- Put each sentence in its own line.
This makes reviewing documentation much easier, since GitHub's review system is based on lines.
- Use the admonitions syntax for any callouts and examples (see [section above](#admonitions)).
- If you provide an example involving Nix code, make your example into a fully-working package (something that can be passed to `pkgs.callPackage`).
This will help others quickly test that the example works, and will also make it easier if we start automatically testing all example code to make sure it works.
For example, instead of providing something like:
```
pkgs.dockerTools.buildLayeredImage {
name = "hello";
contents = [ pkgs.hello ];
}
```
Provide something like:
```
{ dockerTools, hello }:
dockerTools.buildLayeredImage {
name = "hello";
contents = [ hello ];
}
```
- Use [definition lists](#definition-lists) to document function arguments, and the attributes of such arguments. For example:
```markdown
# pkgs.coolFunction
Description of what `coolFunction` does.
`coolFunction` expects a single argument which should be an attribute set, with the following possible attributes:
`name`
: The name of the resulting image.
`tag` _optional_
: Tag of the generated image.
_Default value:_ the output path's hash.
```


@ -1,49 +1,58 @@
# pkgs.mkBinaryCache {#sec-pkgs-binary-cache}
`pkgs.mkBinaryCache` is a function for creating Nix flat-file binary caches. Such a cache exists as a directory on disk, and can be used as a Nix substituter by passing `--substituter file:///path/to/cache` to Nix commands.
`pkgs.mkBinaryCache` is a function for creating Nix flat-file binary caches.
Such a cache exists as a directory on disk, and can be used as a Nix substituter by passing `--substituter file:///path/to/cache` to Nix commands.
Nix packages are most commonly shared between machines using [HTTP, SSH, or S3](https://nixos.org/manual/nix/stable/package-management/sharing-packages.html), but a flat-file binary cache can still be useful in some situations. For example, you can copy it directly to another machine, or make it available on a network file system. It can also be a convenient way to make some Nix packages available inside a container via bind-mounting.
Nix packages are most commonly shared between machines using [HTTP, SSH, or S3](https://nixos.org/manual/nix/stable/package-management/sharing-packages.html), but a flat-file binary cache can still be useful in some situations.
For example, you can copy it directly to another machine, or make it available on a network file system.
It can also be a convenient way to make some Nix packages available inside a container via bind-mounting.
Note that this function is meant for advanced use-cases. The more idiomatic way to work with flat-file binary caches is via the [nix-copy-closure](https://nixos.org/manual/nix/stable/command-ref/nix-copy-closure.html) command. You may also want to consider [dockerTools](#sec-pkgs-dockerTools) for your containerization needs.
`mkBinaryCache` expects an argument with the `rootPaths` attribute.
`rootPaths` must be a list of derivations.
The transitive closure of these derivations' outputs will be copied into the cache.
## Example {#sec-pkgs-binary-cache-example}
::: {.note}
This function is meant for advanced use cases.
The more idiomatic way to work with flat-file binary caches is via the [nix-copy-closure](https://nixos.org/manual/nix/stable/command-ref/nix-copy-closure.html) command.
You may also want to consider [dockerTools](#sec-pkgs-dockerTools) for your containerization needs.
:::
[]{#sec-pkgs-binary-cache-example}
:::{.example #ex-mkbinarycache-copying-package-closure}
# Copying a package and its closure to another machine with `mkBinaryCache`
The following derivation will construct a flat-file binary cache containing the closure of `hello`.
```nix
{ mkBinaryCache, hello }:
mkBinaryCache {
rootPaths = [hello];
}
```
- `rootPaths` specifies a list of root derivations. The transitive closure of these derivations' outputs will be copied into the cache.
Here's an example of building and using the cache.
Build the cache on one machine, `host1`:
Build the cache on a machine.
Note that the command still builds the exact nix package above, but adds some boilerplate to build it directly from an expression.
```shellSession
nix-build -E 'with import <nixpkgs> {}; mkBinaryCache { rootPaths = [hello]; }'
$ nix-build -E 'let pkgs = import <nixpkgs> {}; in pkgs.callPackage ({ mkBinaryCache, hello }: mkBinaryCache { rootPaths = [hello]; }) {}'
/nix/store/azf7xay5xxdnia4h9fyjiv59wsjdxl0g-binary-cache
```
Copy the resulting directory to another machine, which we'll call `host2`:
```shellSession
/nix/store/cc0562q828rnjqjyfj23d5q162gb424g-binary-cache
$ scp result host2:/tmp/hello-cache
```
Copy the resulting directory to the other machine, `host2`:
At this point, the cache can be used as a substituter when building derivations on `host2`:
```shellSession
scp result host2:/tmp/hello-cache
```
Substitute the derivation using the flat-file binary cache on the other machine, `host2`:
```shellSession
nix-build -A hello '<nixpkgs>' \
$ nix-build -A hello '<nixpkgs>' \
--option require-sigs false \
--option trusted-substituters file:///tmp/hello-cache \
--option substituters file:///tmp/hello-cache
/nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1
```
```shellSession
/nix/store/gl5a41azbpsadfkfmbilh9yk40dh5dl0-hello-2.12.1
```
:::
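On NixOS, the same substituter settings can also be set declaratively instead of being passed on the command line. A minimal sketch, assuming the cache was copied to `/tmp/hello-cache` as above; the `nix.settings` names mirror the `--option` flags from the shell example:
```nix
{
  # Sketch only: trust the flat-file cache as an additional substituter on host2.
  nix.settings = {
    substituters = [ "file:///tmp/hello-cache" ];
    require-sigs = false; # mirrors --option require-sigs false above
  };
}
```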


@ -4,22 +4,21 @@
The function `buildDartApplication` builds Dart applications managed with pub.
It fetches its Dart dependencies automatically through `fetchDartDeps`, and (through a series of hooks) builds and installs the executables specified in the pubspec file. The hooks can be used in other derivations, if needed. The phases can also be overridden to do something different from installing binaries.
It fetches its Dart dependencies automatically through `pub2nix`, and (through a series of hooks) builds and installs the executables specified in the pubspec file. The hooks can be used in other derivations, if needed. The phases can also be overridden to do something different from installing binaries.
If you are packaging a Flutter desktop application, use [`buildFlutterApplication`](#ssec-dart-flutter) instead.
`vendorHash`: is the hash of the output of the dependency fetcher derivation. To obtain it, set it to `lib.fakeHash` (or omit it) and run the build ([more details here](#sec-source-hashes)).
`pubspecLock` is the parsed pubspec.lock file. pub2nix uses this to download required packages.
This can be converted to JSON from YAML with something like `yq . pubspec.lock`, and then read by Nix.
If the upstream source is missing a `pubspec.lock` file, you'll have to vendor one and specify it using `pubspecLockFile`. If it is needed, one will be generated for you and printed when attempting to build the derivation.
The `depsListFile` must always be provided when packaging in Nixpkgs. It will be generated and printed if the derivation is attempted to be built without one. Alternatively, `autoDepsList` may be set to `true` only when outside of Nixpkgs, as it relies on import-from-derivation.
If the package has Git package dependencies, the hashes must be provided in the `gitHashes` set. If a hash is missing, an error message prompting you to add it will be shown.
The `dart` commands run can be overridden through `pubGetScript` and `dartCompileCommand`, you can also add flags using `dartCompileFlags` or `dartJitFlags`.
Dart supports multiple [outputs types](https://dart.dev/tools/dart-compile#types-of-output), you can choose between them using `dartOutputType` (defaults to `exe`). If you want to override the binaries path or the source path they come from, you can use `dartEntryPoints`. Outputs that require a runtime will automatically be wrapped with the relevant runtime (`dartaotruntime` for `aot-snapshot`, `dart run` for `jit-snapshot` and `kernel`, `node` for `js`), this can be overridden through `dartRuntimeCommand`.
```nix
{ buildDartApplication, fetchFromGitHub }:
{ lib, buildDartApplication, fetchFromGitHub }:
buildDartApplication rec {
pname = "dart-sass";
@ -32,12 +31,53 @@ buildDartApplication rec {
hash = "sha256-U6enz8yJcc4Wf8m54eYIAnVg/jsGi247Wy8lp1r1wg4=";
};
pubspecLockFile = ./pubspec.lock;
depsListFile = ./deps.json;
vendorHash = "sha256-Atm7zfnDambN/BmmUf4BG0yUz/y6xWzf0reDw3Ad41s=";
pubspecLock = lib.importJSON ./pubspec.lock.json;
}
```
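As a hedged illustration of the `gitHashes` attribute mentioned above, a sketch with a placeholder dependency name and hash (keys are assumed to match the package names used in `pubspec.yaml`):
```nix
buildDartApplication {
  # ... pname, version, src and pubspecLock as in the example above ...

  # Hypothetical Git dependency; the name and hash are placeholders.
  gitHashes = {
    some_git_package = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
  };
}
```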
### Patching dependencies {#ssec-dart-applications-patching-dependencies}
Some Dart packages require patches or build environment changes. Package derivations can be customised with the `customSourceBuilders` argument.
A collection of such customisations can be found in Nixpkgs, in the `development/compilers/dart/package-source-builders` directory.
This allows fixes for packages to be shared between all applications that use them. It is strongly recommended to add to this collection instead of including fixes in your application derivation itself.
### Running executables from dev_dependencies {#ssec-dart-applications-build-tools}
Many Dart applications require executables from the `dev_dependencies` section in `pubspec.yaml` to be run before building them.
This can be done in `preBuild`, in one of two ways:
1. Packaging the tool with `buildDartApplication`, adding it to Nixpkgs, and running it like any other application
2. Running the tool from the package cache
Of these methods, the first is recommended when using a tool that does not need to be of a specific version.
For the second method, the `packageRun` function from the `dartConfigHook` can be used.
This is an alternative to `dart run` that does not rely on Pub.
e.g., for `build_runner`:
```bash
packageRun build_runner build
```
Do _not_ use `dart run <package_name>`, as this will attempt to download dependencies with Pub.
### Usage with nix-shell {#ssec-dart-applications-nix-shell}
As `buildDartApplication` provides dependencies instead of `pub get`, Dart needs to be explicitly told where to find them.
Run the following commands in the source directory to configure Dart appropriately.
Do not use `pub` after doing so; it will download the dependencies itself and overwrite these changes.
```bash
cp --no-preserve=all "$pubspecLockFilePath" pubspec.lock
mkdir -p .dart_tool && cp --no-preserve=all "$packageConfig" .dart_tool/package_config.json
```
## Flutter applications {#ssec-dart-flutter}
The function `buildFlutterApplication` builds Flutter applications.
@ -59,8 +99,10 @@ flutter.buildFlutterApplication {
fetchSubmodules = true;
};
pubspecLockFile = ./pubspec.lock;
depsListFile = ./deps.json;
vendorHash = "sha256-cdMO+tr6kYiN5xKXa+uTMAcFf2C75F3wVPrn21G4QPQ=";
pubspecLock = lib.importJSON ./pubspec.lock.json;
}
### Usage with nix-shell {#ssec-dart-flutter-nix-shell}
See the [Dart documentation](#ssec-dart-applications-nix-shell) nix-shell instructions.
```


@ -299,14 +299,13 @@ python3Packages.buildPythonApplication rec {
hash = "sha256-Pe229rT0aHwA98s+nTHQMEFKZPo/yw6sot8MivFDvAw=";
};
nativeBuildInputs = [
python3Packages.setuptools
python3Packages.wheel
nativeBuildInputs = with python3Packages; [
setuptools
];
propagatedBuildInputs = [
python3Packages.tornado
python3Packages.python-daemon
propagatedBuildInputs = with python3Packages; [
tornado
python-daemon
];
meta = with lib; {
@ -2061,7 +2060,7 @@ and create update commits, and supports the `fetchPypi`, `fetchurl` and
hosted on GitHub, exporting a `GITHUB_API_TOKEN` is highly recommended.
Updating packages in bulk leads to lots of breakages, which is why a
stabilization period on the `python-unstable` branch is required.
stabilization period on the `python-updates` branch is required.
If a package is fragile and often breaks during these bulk updates, it
may be reasonable to set `passthru.skipBulkUpdate = true` in the


@ -2,13 +2,13 @@
## Using Ruby {#using-ruby}
Several versions of Ruby interpreters are available on Nix, as well as over 250 gems and many applications written in Ruby. The attribute `ruby` refers to the default Ruby interpreter, which is currently MRI 2.6. It's also possible to refer to specific versions, e.g. `ruby_2_y`, `jruby`, or `mruby`.
Several versions of Ruby interpreters are available on Nix, as well as over 250 gems and many applications written in Ruby. The attribute `ruby` refers to the default Ruby interpreter, which is currently MRI 3.1. It's also possible to refer to specific versions, e.g. `ruby_3_y`, `jruby`, or `mruby`.
In the Nixpkgs tree, Ruby packages can be found throughout, depending on what they do, and are called from the main package set. Ruby gems, however, are separate sets, and there's one default set for each interpreter (currently MRI only).
There are two main approaches for using Ruby with gems. One is to use a specifically locked `Gemfile` for an application that has very strict dependencies. The other is to depend on the common gems, which we'll explain further down, and rely on them being updated regularly.
The interpreters have common attributes, namely `gems`, and `withPackages`. So you can refer to `ruby.gems.nokogiri`, or `ruby_2_7.gems.nokogiri` to get the Nokogiri gem already compiled and ready to use.
The interpreters have common attributes, namely `gems`, and `withPackages`. So you can refer to `ruby.gems.nokogiri`, or `ruby_3_2.gems.nokogiri` to get the Nokogiri gem already compiled and ready to use.
Since not all gems have executables like `nokogiri`, it's usually more convenient to use the `withPackages` function like this: `ruby.withPackages (p: with p; [ nokogiri ])`. This will also make sure that the Ruby in your environment will be able to find the gem and it can be used in your Ruby code (for example via `ruby` or `irb` executables) via `require "nokogiri"` as usual.
@ -33,7 +33,7 @@ Again, it's possible to launch the interpreter from the shell. The Ruby interpre
#### Load Ruby environment from `.nix` expression {#load-ruby-environment-from-.nix-expression}
As explained [in the `nix-shell` section](https://nixos.org/manual/nix/stable/command-ref/nix-shell) of the Nix manual, `nix-shell` can also load an expression from a `.nix` file.
Say we want to have Ruby 2.6, `nokogori`, and `pry`. Consider a `shell.nix` file with:
Say we want to have Ruby, `nokogiri`, and `pry`. Consider a `shell.nix` file with:
```nix
with import <nixpkgs> {};
@ -114,7 +114,7 @@ With this file in your directory, you can run `nix-shell` to build and use the g
The `bundlerEnv` is a wrapper over all the gems in your gemset. This means that all the `/lib` and `/bin` directories will be available, and the executables of all gems (even of indirect dependencies) will end up in your `$PATH`. The `wrappedRuby` provides you with all executables that come with Ruby itself, but wrapped so they can easily find the gems in your gemset.
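For illustration, a minimal `shell.nix` sketch that puts both the gemset executables and `wrappedRuby` on `$PATH` (assuming a `gemset.nix` generated with `bundix` sits next to the `Gemfile`; the name is a placeholder):
```nix
with import <nixpkgs> {};
let
  gems = bundlerEnv {
    name = "my-app-gems"; # placeholder
    gemdir = ./.;         # directory containing Gemfile, Gemfile.lock and gemset.nix
  };
in mkShell {
  packages = [ gems gems.wrappedRuby ];
}
```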
One common issue that you might have is that you have Ruby 2.6, but also `bundler` in your gemset. That leads to a conflict for `/bin/bundle` and `/bin/bundler`. You can resolve this by wrapping either your Ruby or your gems in a `lowPrio` call. So in order to give the `bundler` from your gemset priority, it would be used like this:
One common issue that you might have is that you have Ruby, but also `bundler` in your gemset. That leads to a conflict for `/bin/bundle` and `/bin/bundler`. You can resolve this by wrapping either your Ruby or your gems in a `lowPrio` call. So in order to give the `bundler` from your gemset priority, it would be used like this:
```nix
# ...


@ -21,16 +21,38 @@
nixosSystem = args:
import ./nixos/lib/eval-config.nix (
args // { inherit (self) lib; } // lib.optionalAttrs (! args?system) {
{
lib = final;
# Allow system to be set modularly in nixpkgs.system.
# We set it to null, to remove the "legacy" entrypoint's
# non-hermetic default.
system = null;
}
} // args
);
});
checks.x86_64-linux.tarball = jobs.tarball;
checks.x86_64-linux = {
tarball = jobs.tarball;
# Test that ensures that the nixosSystem function can accept a lib argument
# Note: prefer not to extend or modify `lib`, especially if you want to share reusable modules
# alternatives include: `import` a file, or put a custom library in an option or in `_module.args.<libname>`
nixosSystemAcceptsLib = (self.lib.nixosSystem {
lib = self.lib.extend (final: prev: {
ifThisFunctionIsMissingTheTestFails = final.id;
});
modules = [
./nixos/modules/profiles/minimal.nix
({ lib, ... }: lib.ifThisFunctionIsMissingTheTestFails {
# Define a minimal config without eval warnings
nixpkgs.hostPlatform = "x86_64-linux";
boot.loader.grub.enable = false;
fileSystems."/".device = "nodev";
# See https://search.nixos.org/options?show=system.stateVersion&query=stateversion
system.stateVersion = lib.versions.majorMinor lib.version; # DON'T do this in real configs!
})
];
}).config.system.build.toplevel;
};
htmlDocs = {
nixpkgsManual = jobs.manual;


@ -1959,6 +1959,18 @@ runTests {
expr = (with types; int).description;
expected = "signed integer";
};
testTypeDescriptionIntsPositive = {
expr = (with types; ints.positive).description;
expected = "positive integer, meaning >0";
};
testTypeDescriptionIntsPositiveOrEnumAuto = {
expr = (with types; either ints.positive (enum ["auto"])).description;
expected = ''positive integer, meaning >0, or value "auto" (singular enum)'';
};
testTypeDescriptionListOfPositive = {
expr = (with types; listOf ints.positive).description;
expected = "list of (positive integer, meaning >0)";
};
testTypeDescriptionListOfInt = {
expr = (with types; listOf int).description;
expected = "list of signed integer";


@ -113,9 +113,14 @@ rec {
, # Description of the type, defined recursively by embedding the wrapped type if any.
description ? null
# A hint for whether or not this description needs parentheses. Possible values:
# - "noun": a simple noun phrase such as "positive integer"
# - "conjunction": a phrase with a potentially ambiguous "or" connective.
# - "noun": a noun phrase
# Example description: "positive integer",
# - "conjunction": a phrase with a potentially ambiguous "or" connective
# Example description: "int or string"
# - "composite": a phrase with an "of" connective
# Example description: "list of string"
# - "nonRestrictiveClause": a noun followed by a comma and a clause
# Example description: "positive integer, meaning >0"
# See the `optionDescriptionPhrase` function.
, descriptionClass ? null
, # DO NOT USE WITHOUT KNOWING WHAT YOU ARE DOING!
@ -338,10 +343,12 @@ rec {
unsigned = addCheck types.int (x: x >= 0) // {
name = "unsignedInt";
description = "unsigned integer, meaning >=0";
descriptionClass = "nonRestrictiveClause";
};
positive = addCheck types.int (x: x > 0) // {
name = "positiveInt";
description = "positive integer, meaning >0";
descriptionClass = "nonRestrictiveClause";
};
u8 = unsign 8 256;
u16 = unsign 16 65536;
@ -383,10 +390,12 @@ rec {
nonnegative = addCheck number (x: x >= 0) // {
name = "numberNonnegative";
description = "nonnegative integer or floating point number, meaning >=0";
descriptionClass = "nonRestrictiveClause";
};
positive = addCheck number (x: x > 0) // {
name = "numberPositive";
description = "positive integer or floating point number, meaning >0";
descriptionClass = "nonRestrictiveClause";
};
};
@ -463,6 +472,7 @@ rec {
passwdEntry = entryType: addCheck entryType (str: !(hasInfix ":" str || hasInfix "\n" str)) // {
name = "passwdEntry ${entryType.name}";
description = "${optionDescriptionPhrase (class: class == "noun") entryType}, not containing newlines or colons";
descriptionClass = "nonRestrictiveClause";
};
attrs = mkOptionType {
@ -870,7 +880,13 @@ rec {
# Either value of type `t1` or `t2`.
either = t1: t2: mkOptionType rec {
name = "either";
description = "${optionDescriptionPhrase (class: class == "noun" || class == "conjunction") t1} or ${optionDescriptionPhrase (class: class == "noun" || class == "conjunction" || class == "composite") t2}";
description =
if t1.descriptionClass or null == "nonRestrictiveClause"
then
# Plain, but add comma
"${t1.description}, or ${optionDescriptionPhrase (class: class == "noun" || class == "conjunction") t2}"
else
"${optionDescriptionPhrase (class: class == "noun" || class == "conjunction") t1} or ${optionDescriptionPhrase (class: class == "noun" || class == "conjunction" || class == "composite") t2}";
descriptionClass = "conjunction";
check = x: t1.check x || t2.check x;
merge = loc: defs:


@ -534,7 +534,7 @@
name = "James Alexander Feldman-Crough";
};
afontain = {
email = "antoine.fontaine@epfl.ch";
email = "afontain@posteo.net";
github = "necessarily-equal";
githubId = 59283660;
name = "Antoine Fontaine";
@ -2063,6 +2063,12 @@
githubId = 80325;
name = "Benjamin Andresen";
};
barab-i = {
email = "barab_i@outlook.com";
github = "barab-i";
githubId = 92919899;
name = "Barab I";
};
baracoder = {
email = "baracoder@googlemail.com";
github = "baracoder";
@ -3968,6 +3974,12 @@
githubId = 217899;
name = "Cyryl Płotnicki";
};
d3vil0p3r = {
name = "Antonio Voza";
email = "vozaanthony@gmail.com";
github = "D3vil0p3r";
githubId = 83867734;
};
dadada = {
name = "dadada";
email = "dadada@dadada.li";
@ -4676,6 +4688,15 @@
fingerprint = "8FD2 153F 4889 541A 54F1 E09E 71B6 C31C 8A5A 9D21";
}];
};
dixslyf = {
name = "Dixon Sean Low Yan Feng";
email = "dixonseanlow@protonmail.com";
github = "dixslyf";
githubId = 56017218;
keys = [{
fingerprint = "E6F4 BFB4 8DE3 893F 68FC A15F FF5F 4B30 A41B BAC8";
}];
};
djacu = {
email = "daniel.n.baker@gmail.com";
github = "djacu";
@ -6685,7 +6706,7 @@
};
getpsyched = {
name = "Priyanshu Tripathi";
email = "priyanshutr@proton.me";
email = "priyanshu@getpsyched.dev";
matrix = "@getpsyched:matrix.org";
github = "getpsyched";
githubId = 43472218;
@ -7504,6 +7525,16 @@
githubId = 362833;
name = "Hongchang Wu";
};
honnip = {
name = "Jung seungwoo";
email = "me@honnip.page";
matrix = "@honnip:matrix.org";
github = "honnip";
githubId = 108175486;
keys = [{
fingerprint = "E4DD 51F7 FA3F DCF1 BAF6 A72C 576E 43EF 8482 E415";
}];
};
hoppla20 = {
email = "privat@vincentcui.de";
github = "hoppla20";
@ -7608,6 +7639,12 @@
githubId = 51334444;
name = "Akshat Agarwal";
};
hummeltech = {
email = "hummeltech2024@gmail.com";
github = "hummeltech";
githubId = 6109326;
name = "David Hummel";
};
huyngo = {
email = "huyngo@disroot.org";
github = "Huy-Ngo";
@ -8242,6 +8279,12 @@
github = "Janik-Haag";
githubId = 80165193;
};
jankaifer = {
name = "Jan Kaifer";
email = "jan@kaifer.cz";
github = "jankaifer";
githubId = 12820484;
};
jansol = {
email = "jan.solanti@paivola.fi";
github = "jansol";
@ -9265,6 +9308,12 @@
githubId = 5124422;
name = "Julien Urraca";
};
justanotherariel = {
email = "ariel@ebersberger.io";
github = "justanotherariel";
githubId = 31776703;
name = "Ariel Ebersberger";
};
justinas = {
email = "justinas@justinas.org";
github = "justinas";
@ -10956,6 +11005,12 @@
githubId = 2486026;
name = "Luca Fulchir";
};
luleyleo = {
email = "git@leopoldluley.de";
github = "luleyleo";
githubId = 10746692;
name = "Leopold Luley";
};
lumi = {
email = "lumi@pew.im";
github = "lumi-me-not";
@ -12975,6 +13030,12 @@
githubId = 77314501;
name = "Maurice Zhou";
};
Nebucatnetzer = {
email = "andreas+nixpkgs@zweili.ch";
github = "Nebucatnetzer";
githubId = 2287221;
name = "Andreas Zweili";
};
Necior = {
email = "adrian@sadlocha.eu";
github = "Necior";
@ -13246,6 +13307,13 @@
githubId = 6391776;
name = "Nikita Voloboev";
};
niklaskorz = {
name = "Niklas Korz";
email = "niklas@niklaskorz.de";
matrix = "@niklaskorz:korz.dev";
github = "niklaskorz";
githubId = 590517;
};
NikolaMandic = {
email = "nikola@mandic.email";
github = "NikolaMandic";
@ -13551,6 +13619,12 @@
githubId = 1839979;
name = "Niklas Thörne";
};
nudelsalat = {
email = "nudelsalat@clouz.de";
name = "Fabian Dreßler";
github = "Noodlesalat";
githubId = 12748782;
};
nukaduka = {
email = "ksgokte@gmail.com";
github = "NukaDuka";
@ -13842,10 +13916,10 @@
name = "Sandro Stikić";
};
OPNA2608 = {
email = "christoph.neidahl@gmail.com";
email = "opna2608@protonmail.com";
github = "OPNA2608";
githubId = 23431373;
name = "Christoph Neidahl";
name = "Cosima Neidahl";
};
orbekk = {
email = "kjetil.orbekk@gmail.com";
@ -14583,15 +14657,6 @@
fingerprint = "B00F E582 FD3F 0732 EA48 3937 F558 14E4 D687 4375";
}];
};
PlayerNameHere = {
name = "Dixon Sean Low Yan Feng";
email = "dixonseanlow@protonmail.com";
github = "dixslyf";
githubId = 56017218;
keys = [{
fingerprint = "E6F4 BFB4 8DE3 893F 68FC A15F FF5F 4B30 A41B BAC8";
}];
};
plchldr = {
email = "mail@oddco.de";
github = "plchldr";
@ -16749,6 +16814,12 @@
}];
name = "Shane Sveller";
};
shard7 = {
email = "sh7user@gmail.com";
github = "shard77";
githubId = 106669955;
name = "Léon Gessner";
};
shardy = {
email = "shardul@baral.ca";
github = "shardulbee";
@ -16965,6 +17036,11 @@
githubId = 50401154;
name = "Simone Ruffini";
};
simonhammes = {
github = "simonhammes";
githubId = 10352679;
name = "Simon Hammes";
};
simonkampe = {
email = "simon.kampe+nix@gmail.com";
github = "simonkampe";
@ -17160,6 +17236,12 @@
fingerprint = "897E 6BE3 0345 B43D CADD 05B7 290F CF08 1AED B3EC";
}];
};
smrehman = {
name = "Syed Moiz Ur Rehman";
email = "smrehman@proton.me";
github = "syedmoizurrehman";
githubId = 17818950;
};
sna = {
email = "abouzahra.9@wright.edu";
github = "S-NA";
@ -17280,6 +17362,13 @@
githubId = 151924;
name = "John Anderson";
};
soopyc = {
name = "Cassie Cheung";
email = "me@soopy.moe";
github = "soopyc";
githubId = 13762043;
matrix = "@sophie:nue.soopy.moe";
};
sophrosyne = {
email = "joshuaortiz@tutanota.com";
github = "sophrosyne97";
@ -18988,6 +19077,13 @@
matrix = "@ty:tjll.net";
name = "Tyler Langlois";
};
tylervick = {
email = "nix@tylervick.com";
github = "tylervick";
githubId = 1395852;
name = "Tyler Vick";
matrix = "@tylervick:matrix.org";
};
tymscar = {
email = "oscar@tymscar.com";
github = "tymscar";
@ -19008,7 +19104,7 @@
};
uakci = {
name = "uakci";
email = "uakci@uakci.pl";
email = "git@uakci.space";
github = "uakci";
githubId = 6961268;
};
@ -20173,7 +20269,7 @@
};
yana = {
email = "yana@riseup.net";
github = "yanalunaterra";
github = "yanateras";
githubId = 1643293;
name = "Yana Timoshenko";
};
@ -20702,6 +20798,12 @@
githubId = 8100652;
name = "David Mell";
};
zshipko = {
email = "zachshipko@gmail.com";
github = "zshipko";
githubId = 332534;
name = "Zach Shipko";
};
ztzg = {
email = "dd@crosstwine.com";
github = "ztzg";


@ -54,8 +54,8 @@ if ! gh auth status 2>/dev/null ; then
fi
# Make sure this is configured before we start doing anything
push_remote="$(git config branch.haskell-updates.pushRemote \
|| die 'Can'\''t determine pushRemote for haskell-updates. Please set using `git config branch.haskell-updates.pushremote <remote name>`.')"
push_remote="$(git config branch.haskell-updates.pushRemote)" \
|| die 'Can'\''t determine pushRemote for haskell-updates. Please set using `git config branch.haskell-updates.pushremote <remote name>`.'
# Fetch nixpkgs to get an up-to-date origin/haskell-updates branch.
echo "Fetching origin..."


@ -17,6 +17,7 @@ import http
import json
import logging
import os
import re
import subprocess
import sys
import time
@ -192,6 +193,11 @@ class RepoGitHub(Repo):
with urllib.request.urlopen(commit_req, timeout=10) as req:
self._check_for_redirect(commit_url, req)
xml = req.read()
# Filter out illegal XML characters
illegal_xml_regex = re.compile(b"[\x00-\x08\x0B-\x0C\x0E-\x1F\x7F]")
xml = illegal_xml_regex.sub(b"", xml)
root = ET.fromstring(xml)
latest_entry = root.find(ATOM_ENTRY)
assert latest_entry is not None, f"No commits found in repository {self}"


@ -96,6 +96,16 @@ with lib.maintainers; {
shortName = "Blockchains";
};
buildbot = {
members = [
lopsided98
mic92
zowoq
];
scope = "Maintain Buildbot CI framework";
shortName = "Buildbot";
};
c = {
members = [
matthewbauer
@ -524,7 +534,6 @@ with lib.maintainers; {
dtzWill
ericson2314
lovek323
primeos
qyliss
raitobezarius
rrbutani


@ -28,9 +28,13 @@ In addition to numerous new and upgraded packages, this release has the followin
- [rspamd-trainer](https://gitlab.com/onlime/rspamd-trainer), script triggered by a helper which reads mails from a specific mail inbox and feeds them into rspamd for spam/ham training.
- [ollama](https://ollama.ai), server for running large language models locally.
- [Anki Sync Server](https://docs.ankiweb.net/sync-server.html), the official sync server built into recent versions of Anki. Available as [services.anki-sync-server](#opt-services.anki-sync-server.enable).
The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been marked deprecated and will be dropped after 24.05 due to lack of maintenance of the anki-sync-server software.
- [ping_exporter](https://github.com/czerwonk/ping_exporter), a Prometheus exporter for ICMP echo requests. Available as [services.prometheus.exporters.ping](#opt-services.prometheus.exporters.ping.enable).
- [Clevis](https://github.com/latchset/clevis), a pluggable framework for automated decryption, used to unlock encrypted devices in initrd. Available as [boot.initrd.clevis.enable](#opt-boot.initrd.clevis.enable).
- [TuxClocker](https://github.com/Lurkki14/tuxclocker), a hardware control and monitoring program. Available as [programs.tuxclocker](#opt-programs.tuxclocker.enable).
@ -52,6 +56,8 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
- Invidious has changed its default database username from `kemal` to `invidious`. Setups involving an externally provisioned database (i.e. `services.invidious.database.createLocally == false`) should adjust their configuration accordingly. The old `kemal` user will not be removed automatically even when the database is provisioned automatically. (https://github.com/NixOS/nixpkgs/pull/265857)
- `paperless`' `services.paperless.extraConfig` setting has been removed and converted to the freeform type and option named `services.paperless.settings`.
- `mkosi` was updated to v19. Parts of the user interface have changed. Consult the
[release notes](https://github.com/systemd/mkosi/releases/tag/v19) for a list of changes.
@ -74,6 +80,19 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
`CONFIG_FILE_NAME` includes `bpf_pinning`, `ematch_map`, `group`, `nl_protos`, `rt_dsfield`, `rt_protos`, `rt_realms`, `rt_scopes`, and `rt_tables`.
- The executable file names for `firefox-devedition`, `firefox-beta`, `firefox-esr` now match their package names, which is consistent with the `firefox-*-bin` packages. The desktop entries are also updated so that you can have multiple editions of Firefox in your app launcher.
- The `systemd.oomd` module behavior has changed as follows:
- Raise ManagedOOMMemoryPressureLimit from 50% to 80%. This should make systemd-oomd kill things less often, and fix issues like [this](https://pagure.io/fedora-workstation/issue/358).
Reference: [commit](https://src.fedoraproject.org/rpms/systemd/c/806c95e1c70af18f81d499b24cd7acfa4c36ffd6?branch=806c95e1c70af18f81d499b24cd7acfa4c36ffd6)
- Remove the swap policy. This helps prevent killing processes when the user's swap is small.
- Expand the memory pressure policy to system.slice, user-.slice, and all user-owned slices. Reference: [commit](https://src.fedoraproject.org/rpms/systemd/c/7665e1796f915dedbf8e014f0a78f4f576d609bb)
- `systemd.oomd.enableUserServices` is renamed to `systemd.oomd.enableUserSlices`.
## Other Notable Changes {#sec-release-24.05-notable-changes}
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
@ -89,8 +108,23 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
The `nimPackages` and `nim2Packages` sets have been removed.
See https://nixos.org/manual/nixpkgs/unstable#nim for more information.
- [Portunus](https://github.com/majewsky/portunus) has been updated to major version 2.
This version of Portunus supports strong password hashes, but the legacy hash SHA-256 is also still supported to ensure a smooth migration of existing user accounts.
After upgrading, follow the instructions on the [upstream release notes](https://github.com/majewsky/portunus/releases/tag/v2.0.0) to upgrade all user accounts to strong password hashes.
Support for weak password hashes will be removed in NixOS 24.11.
- `libass` now uses the native CoreText backend on Darwin, which may fix subtitle rendering issues with `mpv`, `ffmpeg`, etc.
- The following options of the Nextcloud module were moved into [`services.nextcloud.extraOptions`](#opt-services.nextcloud.extraOptions) and renamed to match the name from Nextcloud's `config.php`:
- `logLevel` -> [`loglevel`](#opt-services.nextcloud.extraOptions.loglevel),
- `logType` -> [`log_type`](#opt-services.nextcloud.extraOptions.log_type),
- `defaultPhoneRegion` -> [`default_phone_region`](#opt-services.nextcloud.extraOptions.default_phone_region),
- `overwriteProtocol` -> [`overwriteprotocol`](#opt-services.nextcloud.extraOptions.overwriteprotocol),
- `skeletonDirectory` -> [`skeletondirectory`](#opt-services.nextcloud.extraOptions.skeletondirectory),
- `globalProfiles` -> [`profile.enabled`](#opt-services.nextcloud.extraOptions._profile.enabled_),
- `extraTrustedDomains` -> [`trusted_domains`](#opt-services.nextcloud.extraOptions.trusted_domains) and
- `trustedProxies` -> [`trusted_proxies`](#opt-services.nextcloud.extraOptions.trusted_proxies).
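A sketch of the corresponding migration, using two of the renamed settings listed above (the values are placeholders):
```nix
{
  services.nextcloud.extraOptions = {
    log_type = "file";           # was services.nextcloud.logType
    default_phone_region = "DE"; # was services.nextcloud.defaultPhoneRegion
  };
}
```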
- The Yama LSM is now enabled by default in the kernel, which prevents ptracing
non-child processes. This means you will not be able to attach gdb to an
existing process, but will need to start that process from gdb (so it is a
@ -100,6 +134,8 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
`globalRedirect` can now have redirect codes other than 301 through
`redirectCode`.
- The source of the `mockgen` package has changed to the [go.uber.org/mock](https://github.com/uber-go/mock) fork because [the original repository is no longer maintained](https://github.com/golang/mock#gomock).
- [](#opt-boot.kernel.sysctl._net.core.wmem_max_) changed from a string to an integer because of the addition of a custom merge option (taking the highest value defined to avoid conflicts between 2 services trying to set that value), just as [](#opt-boot.kernel.sysctl._net.core.rmem_max_) since 22.11.
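For example, two modules can now both raise such a limit and the highest value wins after merging; a sketch with placeholder values:
```nix
{
  imports = [
    { boot.kernel.sysctl."net.core.rmem_max" = 4194304; }
    { boot.kernel.sysctl."net.core.rmem_max" = 8388608; } # the higher value ends up in the final configuration
  ];
}
```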
- `services.zfs.zed.enableMail` now uses the global `sendmail` wrapper defined by an email module


@ -226,18 +226,6 @@ in
"ldap.conf" = ldapConfig;
};
system.activationScripts = mkIf (!cfg.daemon.enable) {
ldap = stringAfter [ "etc" "groups" "users" ] ''
if test -f "${cfg.bind.passwordFile}" ; then
umask 0077
conf="$(mktemp)"
printf 'bindpw %s\n' "$(cat ${cfg.bind.passwordFile})" |
cat ${ldapConfig.source} - >"$conf"
mv -fT "$conf" /etc/ldap.conf
fi
'';
};
system.nssModules = mkIf cfg.nsswitch (singleton (
if cfg.daemon.enable then nss_pam_ldapd else nss_ldap
));
@ -258,42 +246,63 @@ in
};
};
systemd.services = mkIf cfg.daemon.enable {
nslcd = {
wantedBy = [ "multi-user.target" ];
preStart = ''
umask 0077
conf="$(mktemp)"
{
cat ${nslcdConfig}
test -z '${cfg.bind.distinguishedName}' -o ! -f '${cfg.bind.passwordFile}' ||
printf 'bindpw %s\n' "$(cat '${cfg.bind.passwordFile}')"
test -z '${cfg.daemon.rootpwmoddn}' -o ! -f '${cfg.daemon.rootpwmodpwFile}' ||
printf 'rootpwmodpw %s\n' "$(cat '${cfg.daemon.rootpwmodpwFile}')"
} >"$conf"
mv -fT "$conf" /run/nslcd/nslcd.conf
'';
restartTriggers = [
nslcdConfig
cfg.bind.passwordFile
cfg.daemon.rootpwmodpwFile
];
serviceConfig = {
ExecStart = "${nslcdWrapped}/bin/nslcd";
Type = "forking";
Restart = "always";
User = "nslcd";
Group = "nslcd";
RuntimeDirectory = [ "nslcd" ];
PIDFile = "/run/nslcd/nslcd.pid";
AmbientCapabilities = "CAP_SYS_RESOURCE";
systemd.services = mkMerge [
(mkIf (!cfg.daemon.enable) {
ldap-password = {
wantedBy = [ "sysinit.target" ];
before = [ "sysinit.target" "shutdown.target" ];
conflicts = [ "shutdown.target" ];
unitConfig.DefaultDependencies = false;
serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = true;
script = ''
if test -f "${cfg.bind.passwordFile}" ; then
umask 0077
conf="$(mktemp)"
printf 'bindpw %s\n' "$(cat ${cfg.bind.passwordFile})" |
cat ${ldapConfig.source} - >"$conf"
mv -fT "$conf" /etc/ldap.conf
fi
'';
};
};
})
};
(mkIf cfg.daemon.enable {
nslcd = {
wantedBy = [ "multi-user.target" ];
preStart = ''
umask 0077
conf="$(mktemp)"
{
cat ${nslcdConfig}
test -z '${cfg.bind.distinguishedName}' -o ! -f '${cfg.bind.passwordFile}' ||
printf 'bindpw %s\n' "$(cat '${cfg.bind.passwordFile}')"
test -z '${cfg.daemon.rootpwmoddn}' -o ! -f '${cfg.daemon.rootpwmodpwFile}' ||
printf 'rootpwmodpw %s\n' "$(cat '${cfg.daemon.rootpwmodpwFile}')"
} >"$conf"
mv -fT "$conf" /run/nslcd/nslcd.conf
'';
restartTriggers = [
nslcdConfig
cfg.bind.passwordFile
cfg.daemon.rootpwmodpwFile
];
serviceConfig = {
ExecStart = "${nslcdWrapped}/bin/nslcd";
Type = "forking";
Restart = "always";
User = "nslcd";
Group = "nslcd";
RuntimeDirectory = [ "nslcd" ];
PIDFile = "/run/nslcd/nslcd.pid";
AmbientCapabilities = "CAP_SYS_RESOURCE";
};
};
})
];
};


@ -12,7 +12,6 @@ let
mkDefault
mkIf
mkOption
stringAfter
types
;


@ -34,6 +34,7 @@ with lib;
ffmpeg_5 = super.ffmpeg_5.override { ffmpegVariant = "headless"; };
# dep of graphviz, libXpm is optional for Xpm support
gd = super.gd.override { withXorg = false; };
ghostscript = super.ghostscript.override { cupsSupport = false; x11Support = false; };
gobject-introspection = super.gobject-introspection.override { x11Support = false; };
gpsd = super.gpsd.override { guiSupport = false; };
graphviz = super.graphviz-nox;


@ -31,16 +31,18 @@ in
};
in types.submodule {
freeformType = types.attrsOf sysctlOption;
options."net.core.rmem_max" = mkOption {
type = types.nullOr highestValueType;
default = null;
description = lib.mdDoc "The maximum socket receive buffer size. In case of conflicting values, the highest will be used.";
};
options = {
"net.core.rmem_max" = mkOption {
type = types.nullOr highestValueType;
default = null;
description = lib.mdDoc "The maximum receive socket buffer size in bytes. In case of conflicting values, the highest will be used.";
};
options."net.core.wmem_max" = mkOption {
type = types.nullOr highestValueType;
default = null;
description = lib.mdDoc "The maximum socket send buffer size. In case of conflicting values, the highest will be used.";
"net.core.wmem_max" = mkOption {
type = types.nullOr highestValueType;
default = null;
description = lib.mdDoc "The maximum send socket buffer size in bytes. In case of conflicting values, the highest will be used.";
};
};
};
default = {};


@ -39,9 +39,10 @@ in
hardware.firmware = [ package.fw ];
system.activationScripts.setup-amdgpu-pro = ''
ln -sfn ${package}/opt/amdgpu{,-pro} /run
'';
systemd.tmpfiles.settings.amdgpu-pro = {
"/run/amdgpu"."L+".argument = "${package}/opt/amdgpu";
"/run/amdgpu-pro"."L+".argument = "${package}/opt/amdgpu-pro";
};
system.requiredKernelConfig = with config.lib.kernelConfig; [
(isYes "DEVICE_PRIVATE")


@ -19,6 +19,14 @@ in
Enabled Fcitx5 addons.
'';
};
waylandFrontend = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
Use the Wayland input method frontend.
See [Using Fcitx 5 on Wayland](https://fcitx-im.org/wiki/Using_Fcitx_5_on_Wayland).
'';
};
quickPhrase = mkOption {
type = with types; attrsOf str;
default = { };
@ -118,10 +126,11 @@ in
];
environment.variables = {
GTK_IM_MODULE = "fcitx";
QT_IM_MODULE = "fcitx";
XMODIFIERS = "@im=fcitx";
QT_PLUGIN_PATH = [ "${fcitx5Package}/${pkgs.qt6.qtbase.qtPluginPrefix}" ];
} // lib.optionalAttrs (!cfg.waylandFrontend) {
GTK_IM_MODULE = "fcitx";
QT_IM_MODULE = "fcitx";
} // lib.optionalAttrs cfg.ignoreUserConfig {
SKIP_FCITX_USER_PATH = "1";
};
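A usage sketch for the new option, assuming the module's options live under `i18n.inputMethod.fcitx5` as in the rest of this module:
```nix
{
  i18n.inputMethod = {
    enabled = "fcitx5";
    # Prefer the Wayland frontend; GTK_IM_MODULE and QT_IM_MODULE are then left unset, as implemented above.
    fcitx5.waylandFrontend = true;
  };
}
```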


@ -231,7 +231,8 @@ in
# even if you've upgraded your system to a new NixOS release.
#
# This value does NOT affect the Nixpkgs version your packages and OS are pulled from,
# so changing it will NOT upgrade your system.
# so changing it will NOT upgrade your system - see https://nixos.org/manual/nixos/stable/#sec-upgrading for how
# to actually do that.
#
# This value being lower than the current NixOS release does NOT mean your system is
# out of date, out of support, or vulnerable.


@ -77,7 +77,11 @@ let
libPath = filter (pkgs.path + "/lib");
pkgsLibPath = filter (pkgs.path + "/pkgs/pkgs-lib");
nixosPath = filter (pkgs.path + "/nixos");
modules = map (p: ''"${removePrefix "${modulesPath}/" (toString p)}"'') docModules.lazy;
modules =
"[ "
+ concatMapStringsSep " " (p: ''"${removePrefix "${modulesPath}/" (toString p)}"'') docModules.lazy
+ " ]";
passAsFile = [ "modules" ];
} ''
export NIX_STORE_DIR=$TMPDIR/store
export NIX_STATE_DIR=$TMPDIR/state
@ -87,7 +91,7 @@ let
--argstr libPath "$libPath" \
--argstr pkgsLibPath "$pkgsLibPath" \
--argstr nixosPath "$nixosPath" \
--arg modules "[ $modules ]" \
--arg modules "import $modulesPath" \
--argstr stateVersion "${options.system.stateVersion.default}" \
--argstr release "${config.system.nixos.release}" \
$nixosPath/lib/eval-cacheable-options.nix > $out \


@ -723,6 +723,7 @@
./services/misc/nzbget.nix
./services/misc/nzbhydra2.nix
./services/misc/octoprint.nix
./services/misc/ollama.nix
./services/misc/ombi.nix
./services/misc/osrm.nix
./services/misc/owncast.nix


@ -18,7 +18,7 @@ in
settings = mkOption {
type = settingsFormat.type;
default = {};
default = { };
description = lib.mdDoc ''
System-wide configuration for GameMode (/etc/gamemode.ini).
See gamemoded(8) man page for available settings.


@ -14,7 +14,7 @@ with lib;
description = "Linux Audit daemon";
wantedBy = [ "basic.target" ];
before = [ "shutdown.target" ];
conflicts = [ "shutdown.target "];
conflicts = [ "shutdown.target" ];
unitConfig = {
ConditionVirtualization = "!container";


@ -181,25 +181,33 @@ in {
'';
};
system.activationScripts.ipa = stringAfter ["etc"] ''
# libcurl requires a hard copy of the certificate
if ! ${pkgs.diffutils}/bin/diff ${cfg.certificate} /etc/ipa/ca.crt > /dev/null 2>&1; then
rm -f /etc/ipa/ca.crt
cp ${cfg.certificate} /etc/ipa/ca.crt
fi
systemd.services."ipa-activation" = {
wantedBy = [ "sysinit.target" ];
before = [ "sysinit.target" "shutdown.target" ];
conflicts = [ "shutdown.target" ];
unitConfig.DefaultDependencies = false;
serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = true;
script = ''
# libcurl requires a hard copy of the certificate
if ! ${pkgs.diffutils}/bin/diff ${cfg.certificate} /etc/ipa/ca.crt > /dev/null 2>&1; then
rm -f /etc/ipa/ca.crt
cp ${cfg.certificate} /etc/ipa/ca.crt
fi
if [ ! -f /etc/krb5.keytab ]; then
cat <<EOF
if [ ! -f /etc/krb5.keytab ]; then
cat <<EOF
In order to complete FreeIPA integration, please join the domain by completing the following steps:
1. Authenticate as an IPA user authorized to join new hosts, e.g. kinit admin@${cfg.realm}
2. Join the domain and obtain the keytab file: ipa-join
3. Install the keytab file: sudo install -m 600 krb5.keytab /etc/
4. Restart sssd systemd service: sudo systemctl restart sssd
In order to complete FreeIPA integration, please join the domain by completing the following steps:
1. Authenticate as an IPA user authorized to join new hosts, e.g. kinit admin@${cfg.realm}
2. Join the domain and obtain the keytab file: ipa-join
3. Install the keytab file: sudo install -m 600 krb5.keytab /etc/
4. Restart sssd systemd service: sudo systemctl restart sssd
EOF
fi
'';
EOF
fi
'';
};
services.sssd.config = ''
[domain/${cfg.domain}]


@ -280,6 +280,7 @@ in
wantedBy = [ "sysinit.target" ];
before = [ "sysinit.target" "shutdown.target" ];
conflicts = [ "shutdown.target" ];
after = [ "systemd-sysusers.service" ];
unitConfig.DefaultDependencies = false;
unitConfig.RequiresMountsFor = [ "/nix/store" "/run/wrappers" ];
serviceConfig.Type = "oneshot";


@ -143,20 +143,15 @@ let
};
# Paths listed in ReadWritePaths must exist before service is started
mkActivationScript = name: cfg:
mkTmpfiles = name: cfg:
let
install = "install -o ${cfg.user} -g ${cfg.group}";
in
nameValuePair "borgbackup-job-${name}" (stringAfter [ "users" ] (''
# Ensure that the home directory already exists
# We can't assert createHome == true because that's not the case for root
cd "${config.users.users.${cfg.user}.home}"
# Create each directory separately to prevent root owned parent dirs
${install} -d .config .config/borg
${install} -d .cache .cache/borg
'' + optionalString (isLocalPath cfg.repo && !cfg.removableDevice) ''
${install} -d ${escapeShellArg cfg.repo}
''));
settings = { inherit (cfg) user group; };
in lib.nameValuePair "borgbackup-job-${name}" ({
"${config.users.users."${cfg.user}".home}/.config/borg".d = settings;
"${config.users.users."${cfg.user}".home}/.cache/borg".d = settings;
} // optionalAttrs (isLocalPath cfg.repo && !cfg.removableDevice) {
"${cfg.repo}".d = settings;
});
mkPassAssertion = name: cfg: {
assertion = with cfg.encryption;
@ -760,7 +755,7 @@ in {
++ mapAttrsToList mkSourceAssertions jobs
++ mapAttrsToList mkRemovableDeviceAssertions jobs;
system.activationScripts = mapAttrs' mkActivationScript jobs;
systemd.tmpfiles.settings = mapAttrs' mkTmpfiles jobs;
systemd.services =
# A job named "foo" is mapped to systemd.services.borgbackup-job-foo


@ -305,5 +305,5 @@ in {
'')
];
meta.maintainers = with lib.maintainers; [ mic92 lopsided98 ];
meta.maintainers = lib.teams.buildbot.members;
}


@ -188,6 +188,6 @@ in {
};
};
meta.maintainers = with lib.maintainers; [ ];
meta.maintainers = lib.teams.buildbot.members;
}


@ -108,6 +108,11 @@ in
};
users.groups.aerospike.gid = config.ids.gids.aerospike;
boot.kernel.sysctl = {
"net.core.rmem_max" = mkDefault 15728640;
"net.core.wmem_max" = mkDefault 5242880;
};
systemd.services.aerospike = rec {
description = "Aerospike server";
@ -131,14 +136,6 @@ in
echo "kernel.shmmax too low, setting to 1GB"
${pkgs.procps}/bin/sysctl -w kernel.shmmax=1073741824
fi
if [ $(echo "$(cat /proc/sys/net/core/rmem_max) < 15728640" | ${pkgs.bc}/bin/bc) == "1" ]; then
echo "increasing socket buffer limit (/proc/sys/net/core/rmem_max): $(cat /proc/sys/net/core/rmem_max) -> 15728640"
echo 15728640 > /proc/sys/net/core/rmem_max
fi
if [ $(echo "$(cat /proc/sys/net/core/wmem_max) < 5242880" | ${pkgs.bc}/bin/bc) == "1" ]; then
echo "increasing socket buffer limit (/proc/sys/net/core/wmem_max): $(cat /proc/sys/net/core/wmem_max) -> 5242880"
echo 5242880 > /proc/sys/net/core/wmem_max
fi
install -d -m0700 -o ${serviceConfig.User} -g ${serviceConfig.Group} "${cfg.workDir}"
install -d -m0700 -o ${serviceConfig.User} -g ${serviceConfig.Group} "${cfg.workDir}/smd"
install -d -m0700 -o ${serviceConfig.User} -g ${serviceConfig.Group} "${cfg.workDir}/udf"


@ -78,7 +78,13 @@ let
mkName = name: "kanata-${name}";
mkDevices = devices:
optionalString ((length devices) > 0) "linux-dev ${concatStringsSep ":" devices}";
let
devicesString = pipe devices [
(map (device: "\"" + device + "\""))
(concatStringsSep " ")
];
in
optionalString ((length devices) > 0) "linux-dev (${devicesString})";
mkConfig = name: keyboard: pkgs.writeText "${mkName name}-config.kdb" ''
(defcfg


@ -4,7 +4,7 @@ with lib;
let
pkg = pkgs.sane-backends.override {
pkg = config.hardware.sane.backends-package.override {
scanSnapDriversUnfree = config.hardware.sane.drivers.scanSnap.enable;
scanSnapDriversPackage = config.hardware.sane.drivers.scanSnap.package;
};
@ -57,6 +57,13 @@ in
'';
};
hardware.sane.backends-package = mkOption {
type = types.package;
default = pkgs.sane-backends;
defaultText = literalExpression "pkgs.sane-backends";
description = lib.mdDoc "Backends driver package to use.";
};
hardware.sane.snapshot = mkOption {
type = types.bool;
default = false;


@ -19,6 +19,12 @@ in
'';
};
ignoreCpuidCheck = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc "Whether to ignore the cpuid check to allow running on unsupported platforms";
};
configFile = mkOption {
type = types.nullOr types.path;
default = null;
@ -42,6 +48,7 @@ in
${cfg.package}/sbin/thermald \
--no-daemon \
${optionalString cfg.debug "--loglevel=debug"} \
${optionalString cfg.ignoreCpuidCheck "--ignore-cpuid-check"} \
${optionalString (cfg.configFile != null) "--config-file ${cfg.configFile}"} \
--dbus-enable \
--adaptive
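A usage sketch for the new flag, assuming the option path `services.thermald` used elsewhere in this module:
```nix
{
  services.thermald = {
    enable = true;
    ignoreCpuidCheck = true; # passes --ignore-cpuid-check, per the ExecStart above
  };
}
```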


@ -1,18 +1,15 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.vdr;
libDir = "/var/lib/vdr";
in {
###### interface
inherit (lib)
mkEnableOption mkPackageOption mkOption types mkIf optional mdDoc;
in
{
options = {
services.vdr = {
enable = mkEnableOption (lib.mdDoc "VDR. Please put config into ${libDir}");
enable = mkEnableOption (mdDoc "Start VDR");
package = mkPackageOption pkgs "vdr" {
example = "wrapVdr.override { plugins = with pkgs.vdrPlugins; [ hello ]; }";
@ -21,58 +18,84 @@ in {
videoDir = mkOption {
type = types.path;
default = "/srv/vdr/video";
description = lib.mdDoc "Recording directory";
description = mdDoc "Recording directory";
};
extraArguments = mkOption {
type = types.listOf types.str;
default = [];
description = lib.mdDoc "Additional command line arguments to pass to VDR.";
default = [ ];
description = mdDoc "Additional command line arguments to pass to VDR.";
};
enableLirc = mkEnableOption (lib.mdDoc "LIRC");
enableLirc = mkEnableOption (mdDoc "LIRC");
user = mkOption {
type = types.str;
default = "vdr";
description = mdDoc ''
User under which the VDR service runs.
'';
};
group = mkOption {
type = types.str;
default = "vdr";
description = mdDoc ''
Group under which the VDR service runs.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
config = mkIf cfg.enable (mkMerge [{
systemd.tmpfiles.rules = [
"d ${cfg.videoDir} 0755 vdr vdr -"
"Z ${cfg.videoDir} - vdr vdr -"
"d ${cfg.videoDir} 0755 ${cfg.user} ${cfg.group} -"
"Z ${cfg.videoDir} - ${cfg.user} ${cfg.group} -"
];
systemd.services.vdr = {
description = "VDR";
wantedBy = [ "multi-user.target" ];
wants = optional cfg.enableLirc "lircd.service";
after = [ "network.target" ]
++ optional cfg.enableLirc "lircd.service";
serviceConfig = {
ExecStart = ''
${cfg.package}/bin/vdr \
--video="${cfg.videoDir}" \
--config="${libDir}" \
${escapeShellArgs cfg.extraArguments}
'';
User = "vdr";
ExecStart =
let
args = [
"--video=${cfg.videoDir}"
]
++ optional cfg.enableLirc "--lirc=${config.passthru.lirc.socket}"
++ cfg.extraArguments;
in
"${cfg.package}/bin/vdr ${lib.escapeShellArgs args}";
User = cfg.user;
Group = cfg.group;
CacheDirectory = "vdr";
StateDirectory = "vdr";
RuntimeDirectory = "vdr";
Restart = "on-failure";
};
};
users.users.vdr = {
group = "vdr";
home = libDir;
isSystemUser = true;
environment.systemPackages = [ cfg.package ];
users.users = mkIf (cfg.user == "vdr") {
vdr = {
inherit (cfg) group;
home = "/run/vdr";
isSystemUser = true;
extraGroups = [
"video"
"audio"
]
++ optional cfg.enableLirc "lirc";
};
};
users.groups.vdr = {};
}
users.groups = mkIf (cfg.group == "vdr") { vdr = { }; };
(mkIf cfg.enableLirc {
services.lirc.enable = true;
users.users.vdr.extraGroups = [ "lirc" ];
services.vdr.extraArguments = [
"--lirc=${config.passthru.lirc.socket}"
];
})]);
};
}
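A usage sketch exercising the reworked module (values are placeholders; `user` and `group` keep their `vdr` defaults here):
```nix
{
  services.vdr = {
    enable = true;
    enableLirc = true;              # pulls in lircd.service and the --lirc argument above
    videoDir = "/srv/vdr/video";    # the default shown above
    extraArguments = [ "--log=3" ]; # placeholder VDR command-line argument
  };
}
```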


@ -220,10 +220,16 @@ in
logcheck = {};
};
system.activationScripts.logcheck = ''
mkdir -m 700 -p /var/{lib,lock}/logcheck
chown ${cfg.user} /var/{lib,lock}/logcheck
'';
systemd.tmpfiles.settings.logcheck = {
"/var/lib/logcheck".d = {
mode = "700";
inherit (cfg) user;
};
"/var/lock/logcheck".d = {
mode = "700";
inherit (cfg) user;
};
};
services.cron.systemCronJobs =
let withTime = name: {timeArgs, ...}: timeArgs != null;


@ -747,7 +747,7 @@ in
${concatStringsSep "\n" (mapAttrsToList (to: from: ''
ln -sf ${from} /var/lib/postfix/conf/${to}
${pkgs.postfix}/bin/postalias /var/lib/postfix/conf/${to}
${pkgs.postfix}/bin/postalias -o -p /var/lib/postfix/conf/${to}
'') cfg.aliasFiles)}
${concatStringsSep "\n" (mapAttrsToList (to: from: ''
ln -sf ${from} /var/lib/postfix/conf/${to}


@ -1,10 +1,14 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.matrix-synapse.sliding-sync;
cfg = config.services.matrix-sliding-sync;
in
{
options.services.matrix-synapse.sliding-sync = {
imports = [
(lib.mkRenamedOptionModule [ "services" "matrix-synapse" "sliding-sync" ] [ "services" "matrix-sliding-sync" ])
];
options.services.matrix-sliding-sync = {
enable = lib.mkEnableOption (lib.mdDoc "sliding sync");
package = lib.mkPackageOption pkgs "matrix-sliding-sync" { };
@ -83,6 +87,7 @@ in
systemd.services.matrix-sliding-sync = rec {
after =
lib.optional cfg.createDatabase "postgresql.service"
++ lib.optional config.services.dendrite.enable "dendrite.service"
++ lib.optional config.services.matrix-synapse.enable config.services.matrix-synapse.serviceUnit;
wants = after;
wantedBy = [ "multi-user.target" ];


@ -22,11 +22,19 @@ let
})
(builtins.genList guixBuildUser numberOfUsers));
# A set of Guix user profiles to be linked at activation.
# A set of Guix user profiles to be linked at activation. All of these should
# be default profiles managed by Guix CLI and the profiles are located in
# `${cfg.stateDir}/profiles/per-user/$USER/$PROFILE`.
guixUserProfiles = {
# The current Guix profile that is created through `guix pull`.
# The default Guix profile managed by `guix pull`. Take note this should be
# the profile with the most precedence in `PATH` env to let users use their
# updated versions of `guix` CLI.
"current-guix" = "\${XDG_CONFIG_HOME}/guix/current";
# The default Guix home profile. This profile contains more than just exports,
# such as an activation script at `$GUIX_HOME_PROFILE/activate`.
"guix-home" = "$HOME/.guix-home/profile";
# The default Guix profile similar to $HOME/.nix-profile from Nix.
"guix-profile" = "$HOME/.guix-profile";
};
@ -256,20 +264,31 @@ in
# ephemeral setups where only certain part of the filesystem is
# persistent (e.g., "Erase my darlings"-type of setup).
system.userActivationScripts.guix-activate-user-profiles.text = let
guixProfile = profile: "${cfg.stateDir}/guix/profiles/per-user/\${USER}/${profile}";
linkProfile = profile: location: let
userProfile = guixProfile profile;
in ''
[ -d "${userProfile}" ] && ln -sfn "${userProfile}" "${location}"
'';
linkProfileToPath = acc: profile: location: let
guixProfile = "${cfg.stateDir}/guix/profiles/per-user/\${USER}/${profile}";
in acc + ''
[ -d "${guixProfile}" ] && [ -L "${location}" ] || ln -sf "${guixProfile}" "${location}"
'';
in acc + (linkProfile profile location);
activationScript = lib.foldlAttrs linkProfileToPath "" guixUserProfiles;
# This should contain export-only Guix user profiles. The rest of it is
# handled manually in the activation script.
guixUserProfiles' = lib.attrsets.removeAttrs guixUserProfiles [ "guix-home" ];
linkExportsScript = lib.foldlAttrs linkProfileToPath "" guixUserProfiles';
in ''
# Don't export this please! It is only expected to be used for this
# activation script and nothing else.
XDG_CONFIG_HOME=''${XDG_CONFIG_HOME:-$HOME/.config}
# Linking the usual Guix profiles into the home directory.
${activationScript}
${linkExportsScript}
# Activate all of the default Guix non-exports profiles manually.
${linkProfile "guix-home" "$HOME/.guix-home"}
[ -L "$HOME/.guix-home" ] && "$HOME/.guix-home/activate"
'';
# GUIX_LOCPATH is basically LOCPATH but for Guix libc which in turn used by

View File

@ -0,0 +1,42 @@
{ config, lib, pkgs, ... }: let
cfg = config.services.ollama;
in {
options = {
services.ollama = {
enable = lib.mkEnableOption (
lib.mdDoc "Server for local large language models"
);
package = lib.mkPackageOption pkgs "ollama" { };
};
};
config = lib.mkIf cfg.enable {
systemd = {
services.ollama = {
wantedBy = [ "multi-user.target" ];
description = "Server for local large language models";
after = [ "network.target" ];
environment = {
HOME = "%S/ollama";
OLLAMA_MODELS = "%S/ollama/models";
};
serviceConfig = {
ExecStart = "${lib.getExe cfg.package} serve";
WorkingDirectory = "/var/lib/ollama";
StateDirectory = [ "ollama" ];
DynamicUser = true;
};
};
};
environment.systemPackages = [ cfg.package ];
};
meta.maintainers = with lib.maintainers; [ onny ];
}
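
A minimal sketch of enabling the new module; models end up under `/var/lib/ollama/models` via `StateDirectory` and `OLLAMA_MODELS` as configured above:

```nix
{
  # The `ollama` CLI is also added to environment.systemPackages by the module.
  services.ollama.enable = true;
}
```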

View File

@ -10,7 +10,7 @@ let
defaultFont = "${pkgs.liberation_ttf}/share/fonts/truetype/LiberationSerif-Regular.ttf";
# Don't start a redis instance if the user sets a custom redis connection
enableRedis = !hasAttr "PAPERLESS_REDIS" cfg.extraConfig;
enableRedis = !(cfg.settings ? PAPERLESS_REDIS);
redisServer = config.services.redis.servers.paperless;
env = {
@ -24,9 +24,11 @@ let
PAPERLESS_TIME_ZONE = config.time.timeZone;
} // optionalAttrs enableRedis {
PAPERLESS_REDIS = "unix://${redisServer.unixSocket}";
} // (
lib.mapAttrs (_: toString) cfg.extraConfig
);
} // (lib.mapAttrs (_: s:
if (lib.isAttrs s || lib.isList s) then builtins.toJSON s
else if lib.isBool s then lib.boolToString s
else toString s
) cfg.settings);
manage = pkgs.writeShellScript "manage" ''
set -o allexport # Export the following env vars
@ -82,6 +84,7 @@ in
imports = [
(mkRenamedOptionModule [ "services" "paperless-ng" ] [ "services" "paperless" ])
(mkRenamedOptionModule [ "services" "paperless" "extraConfig" ] [ "services" "paperless" "settings" ])
];
options.services.paperless = {
@ -160,32 +163,30 @@ in
description = lib.mdDoc "Web interface port.";
};
# FIXME this should become an RFC42-style settings attr
extraConfig = mkOption {
type = types.attrs;
settings = mkOption {
type = lib.types.submodule {
freeformType = with lib.types; attrsOf (let
typeList = [ bool float int str path package ];
in oneOf (typeList ++ [ (listOf (oneOf typeList)) (attrsOf (oneOf typeList)) ]));
};
default = { };
description = lib.mdDoc ''
Extra paperless config options.
See [the documentation](https://docs.paperless-ngx.com/configuration/)
for available options.
See [the documentation](https://docs.paperless-ngx.com/configuration/) for available options.
Note that some options such as `PAPERLESS_CONSUMER_IGNORE_PATTERN` expect JSON values. Use `builtins.toJSON` to ensure proper quoting.
Note that some settings such as `PAPERLESS_CONSUMER_IGNORE_PATTERN` expect JSON values.
Settings declared as lists or attrsets will automatically be serialised into JSON strings for your convenience.
'';
example = literalExpression ''
{
PAPERLESS_OCR_LANGUAGE = "deu+eng";
PAPERLESS_DBHOST = "/run/postgresql";
PAPERLESS_CONSUMER_IGNORE_PATTERN = builtins.toJSON [ ".DS_STORE/*" "desktop.ini" ];
PAPERLESS_OCR_USER_ARGS = builtins.toJSON {
optimize = 1;
pdfa_image_compression = "lossless";
};
example = {
PAPERLESS_OCR_LANGUAGE = "deu+eng";
PAPERLESS_DBHOST = "/run/postgresql";
PAPERLESS_CONSUMER_IGNORE_PATTERN = [ ".DS_STORE/*" "desktop.ini" ];
PAPERLESS_OCR_USER_ARGS = {
optimize = 1;
pdfa_image_compression = "lossless";
};
'';
};
};
user = mkOption {
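
A hedged sketch of migrating from `extraConfig` to the new `settings` option (`enable` is assumed and not shown in the hunk above); lists and attrsets are now serialised to JSON automatically:

```nix
{
  services.paperless = {
    enable = true;  # assumed
    settings = {
      PAPERLESS_OCR_LANGUAGE = "deu+eng";
      # No builtins.toJSON needed any more:
      PAPERLESS_CONSUMER_IGNORE_PATTERN = [ ".DS_STORE/*" "desktop.ini" ];
    };
  };
}
```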

View File

@ -102,7 +102,9 @@ in
ldap = {
package = mkOption {
type = types.package;
# needs openldap built with a libxcrypt that support crypt sha256 until https://github.com/majewsky/portunus/issues/2 is solved
# needs openldap built with a libxcrypt that supports crypt sha256 until users have had time to migrate to newer hashes
# Ref: <https://github.com/majewsky/portunus/issues/2>
# TODO: remove in NixOS 24.11 (cf. same note on pkgs/servers/portunus/default.nix)
default = pkgs.openldap.override { libxcrypt = pkgs.libxcrypt-legacy; };
defaultText = lib.literalExpression "pkgs.openldap.override { libxcrypt = pkgs.libxcrypt-legacy; }";
description = lib.mdDoc "The OpenLDAP package to use.";
@ -247,6 +249,7 @@ in
acmeDirectory = config.security.acme.certs."${cfg.domain}".directory;
in
{
PORTUNUS_SERVER_HTTP_SECURE = "true";
PORTUNUS_SLAPD_TLS_CA_CERTIFICATE = "/etc/ssl/certs/ca-certificates.crt";
PORTUNUS_SLAPD_TLS_CERTIFICATE = "${acmeDirectory}/cert.pem";
PORTUNUS_SLAPD_TLS_DOMAIN_NAME = cfg.domain;

View File

@ -53,7 +53,7 @@ in
enable = mkEnableOption (lib.mdDoc "Redmine");
package = mkPackageOption pkgs "redmine" {
example = "redmine.override { ruby = pkgs.ruby_2_7; }";
example = "redmine.override { ruby = pkgs.ruby_3_2; }";
};
user = mkOption {

View File

@ -64,6 +64,7 @@ let
"pgbouncer"
"php-fpm"
"pihole"
"ping"
"postfix"
"postgres"
"process"

View File

@ -0,0 +1,48 @@
{ config, lib, pkgs, options }:
with lib;
let
cfg = config.services.prometheus.exporters.ping;
settingsFormat = pkgs.formats.yaml {};
configFile = settingsFormat.generate "config.yml" cfg.settings;
in
{
port = 9427;
extraOpts = {
telemetryPath = mkOption {
type = types.str;
default = "/metrics";
description = ''
Path under which to expose metrics.
'';
};
settings = mkOption {
type = settingsFormat.type;
default = {};
description = lib.mdDoc ''
Configuration for ping_exporter, see
<https://github.com/czerwonk/ping_exporter>
for supported values.
'';
};
};
serviceOpts = {
serviceConfig = {
# ping-exporter needs `CAP_NET_RAW` to run as non root https://github.com/czerwonk/ping_exporter#running-as-non-root-user
CapabilityBoundingSet = [ "CAP_NET_RAW" ];
AmbientCapabilities = [ "CAP_NET_RAW" ];
ExecStart = ''
${pkgs.prometheus-ping-exporter}/bin/ping_exporter \
--web.listen-address ${cfg.listenAddress}:${toString cfg.port} \
--web.telemetry-path ${cfg.telemetryPath} \
--config.path="${configFile}" \
${concatStringsSep " \\\n " cfg.extraFlags}
'';
};
};
}
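
A minimal sketch of wiring up the new exporter; the `enable` flag comes from the shared exporter framework rather than this file, and the target name is a placeholder:

```nix
{
  services.prometheus.exporters.ping = {
    enable = true;
    settings.targets = [
      { "example.org" = { alias = "placeholder target"; }; }
    ];
  };
}
```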

View File

@ -394,9 +394,8 @@ let
Maximum number of queries processed concurrently by query node.
'';
query.replica-labels = mkAttrsParam "query.replica-label" ''
query.replica-labels = mkListParam "query.replica-label" ''
Labels to treat as a replica indicator along which data is
deduplicated.
Still you will be able to query without deduplication using

View File

@ -3,6 +3,7 @@
let
cfg = config.services.eris-server;
stateDirectoryPath = "\${STATE_DIRECTORY}";
nullOrStr = with lib.types; nullOr str;
in {
options.services.eris-server = {
@ -26,7 +27,7 @@ in {
};
listenCoap = lib.mkOption {
type = lib.types.str;
type = nullOrStr;
default = ":5683";
example = "[::1]:5683";
description = ''
@ -39,8 +40,8 @@ in {
};
listenHttp = lib.mkOption {
type = lib.types.str;
default = "";
type = nullOrStr;
default = null;
example = "[::1]:8080";
description = "Server HTTP listen address. Do not listen by default.";
};
@ -58,8 +59,8 @@ in {
};
mountpoint = lib.mkOption {
type = lib.types.str;
default = "";
type = nullOrStr;
default = null;
example = "/eris";
description = ''
Mountpoint for FUSE namespace that exposes "urn:eris:" files.
@ -69,33 +70,44 @@ in {
};
config = lib.mkIf cfg.enable {
assertions = [{
assertion = lib.strings.versionAtLeast cfg.package.version "20231219";
message =
"Version of `config.services.eris-server.package` is incompatible with this module";
}];
systemd.services.eris-server = let
cmd =
"${cfg.package}/bin/eris-go server --coap '${cfg.listenCoap}' --http '${cfg.listenHttp}' ${
lib.optionalString cfg.decode "--decode "
}${
lib.optionalString (cfg.mountpoint != "")
''--mountpoint "${cfg.mountpoint}" ''
}${lib.strings.escapeShellArgs cfg.backends}";
cmd = "${cfg.package}/bin/eris-go server"
+ (lib.optionalString (cfg.listenCoap != null)
" --coap '${cfg.listenCoap}'")
+ (lib.optionalString (cfg.listenHttp != null)
" --http '${cfg.listenHttp}'")
+ (lib.optionalString cfg.decode " --decode")
+ (lib.optionalString (cfg.mountpoint != null)
" --mountpoint '${cfg.mountpoint}'");
in {
description = "ERIS block server";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
script = lib.mkIf (cfg.mountpoint != "") ''
environment.ERIS_STORE_URL = toString cfg.backends;
script = lib.mkIf (cfg.mountpoint != null) ''
export PATH=${config.security.wrapperDir}:$PATH
${cmd}
'';
serviceConfig = let
umounter = lib.mkIf (cfg.mountpoint != "")
umounter = lib.mkIf (cfg.mountpoint != null)
"-${config.security.wrapperDir}/fusermount -uz ${cfg.mountpoint}";
in {
ExecStartPre = umounter;
ExecStart = lib.mkIf (cfg.mountpoint == "") cmd;
ExecStopPost = umounter;
Restart = "always";
RestartSec = 20;
AmbientCapabilities = "CAP_NET_BIND_SERVICE";
};
in if (cfg.mountpoint == null) then {
ExecStart = cmd;
} else
{
ExecStartPre = umounter;
ExecStopPost = umounter;
} // {
Restart = "always";
RestartSec = 20;
AmbientCapabilities = "CAP_NET_BIND_SERVICE";
};
};
};
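
With `null` now meaning "disabled", a configuration that only serves CoAP might look like this sketch (`backends` omitted):

```nix
{
  services.eris-server = {
    enable = true;
    listenCoap = ":5683";  # unchanged default
    listenHttp = null;     # previously "" meant "do not listen"
    mountpoint = null;     # previously "" meant "no FUSE mountpoint"
  };
}
```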

View File

@ -217,7 +217,7 @@ with lib;
inherit RuntimeDirectory;
inherit StateDirectory;
Type = "oneshot";
ExecStartPre = "!${pkgs.writeShellScript "ddclient-prestart" preStart}";
ExecStartPre = [ "!${pkgs.writeShellScript "ddclient-prestart" preStart}" ];
ExecStart = "${lib.getExe cfg.package} -file /run/${RuntimeDirectory}/ddclient.conf";
};
};

View File

@ -674,7 +674,11 @@ in
(lport: "sshd -G -T -C lport=${toString lport} -f ${sshconf} > /dev/null")
cfg.ports}
${concatMapStringsSep "\n"
(la: "sshd -G -T -C ${escapeShellArg "laddr=${la.addr},lport=${toString la.port}"} -f ${sshconf} > /dev/null")
(la:
concatMapStringsSep "\n"
(port: "sshd -G -T -C ${escapeShellArg "laddr=${la.addr},lport=${toString port}"} -f ${sshconf} > /dev/null")
(if la.port != null then [ la.port ] else cfg.ports)
)
cfg.listenAddresses}
touch $out
'')
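
The validation change above means a `listenAddresses` entry without an explicit `port` is now checked against every entry of `ports`; for example (addresses are placeholders):

```nix
{
  services.openssh = {
    enable = true;
    ports = [ 22 2222 ];
    listenAddresses = [
      { addr = "192.0.2.1"; }               # validated for lport=22 and lport=2222
      { addr = "192.0.2.2"; port = 2022; }  # validated only for lport=2022
    ];
  };
}
```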

View File

@ -100,8 +100,8 @@ in {
};
systemd.services.tailscaled-autoconnect = mkIf (cfg.authKeyFile != null) {
after = ["tailscale.service"];
wants = ["tailscale.service"];
after = ["tailscaled.service"];
wants = ["tailscaled.service"];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";

View File

@ -137,16 +137,24 @@ in
message = "networking.enableIPv6 must be true for yggdrasil to work";
}];
system.activationScripts.yggdrasil = mkIf cfg.persistentKeys ''
if [ ! -e ${keysPath} ]
then
mkdir --mode=700 -p ${builtins.dirOf keysPath}
${binYggdrasil} -genconf -json \
| ${pkgs.jq}/bin/jq \
'to_entries|map(select(.key|endswith("Key")))|from_entries' \
> ${keysPath}
fi
'';
# This needs to be a separate service. The yggdrasil service fails if
# this is put into its preStart.
systemd.services.yggdrasil-persistent-keys = lib.mkIf cfg.persistentKeys {
wantedBy = [ "multi-user.target" ];
before = [ "yggdrasil.service" ];
serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = true;
script = ''
if [ ! -e ${keysPath} ]
then
mkdir --mode=700 -p ${builtins.dirOf keysPath}
${binYggdrasil} -genconf -json \
| ${pkgs.jq}/bin/jq \
'to_entries|map(select(.key|endswith("Key")))|from_entries' \
> ${keysPath}
fi
'';
};
systemd.services.yggdrasil = {
description = "Yggdrasil Network Service";

View File

@ -45,19 +45,25 @@ in
systemd.services.munged = {
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
wants = [
"network-online.target"
"time-sync.target"
];
after = [
"network-online.target"
"time-sync.target"
];
path = [ pkgs.munge pkgs.coreutils ];
serviceConfig = {
ExecStartPre = "+${pkgs.coreutils}/bin/chmod 0400 ${cfg.password}";
ExecStart = "${pkgs.munge}/bin/munged --syslog --key-file ${cfg.password}";
PIDFile = "/run/munge/munged.pid";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
ExecStart = "${pkgs.munge}/bin/munged --foreground --key-file ${cfg.password}";
User = "munge";
Group = "munge";
StateDirectory = "munge";
StateDirectoryMode = "0711";
Restart = "on-failure";
RuntimeDirectory = "munge";
};

View File

@ -854,7 +854,7 @@ in
BridgeRelay = true;
ExtORPort.port = mkDefault "auto";
ServerTransportPlugin.transports = mkDefault ["obfs4"];
ServerTransportPlugin.exec = mkDefault "${pkgs.obfs4}/bin/obfs4proxy managed";
ServerTransportPlugin.exec = mkDefault "${lib.getExe pkgs.obfs4} managed";
} // optionalAttrs (cfg.relay.role == "private-bridge") {
ExtraInfoStatistics = false;
PublishServerDescriptor = false;

View File

@ -23,6 +23,14 @@ in
'';
};
signingKeyFile = mkOption {
type = types.nullOr types.path;
description = lib.mdDoc ''
Optional file containing a self-managed signing key to sign uploaded store paths.
'';
default = null;
};
compressionLevel = mkOption {
type = types.nullOr types.int;
description = lib.mdDoc "The compression level for ZSTD compression (between 0 and 16)";
@ -69,7 +77,8 @@ in
DynamicUser = true;
LoadCredential = [
"cachix-token:${toString cfg.cachixTokenFile}"
];
]
++ lib.optional (cfg.signingKeyFile != null) "signing-key:${toString cfg.signingKeyFile}";
};
script =
let
@ -80,6 +89,7 @@ in
in
''
export CACHIX_AUTH_TOKEN="$(<"$CREDENTIALS_DIRECTORY/cachix-token")"
${lib.optionalString (cfg.signingKeyFile != null) ''export CACHIX_SIGNING_KEY="$(<"$CREDENTIALS_DIRECTORY/signing-key")"''}
${lib.escapeShellArgs command}
'';
};

View File

@ -27,7 +27,7 @@ in
config = lib.mkIf cfg.enable {
system.requiredKernelConfig = with config.lib.kernelConfig; [
(isModule "ZRAM")
(isEnabled "ZRAM")
];
systemd.packages = [ cfg.package ];

View File

@ -155,8 +155,9 @@ let
to work, the username used to connect to PostgreSQL must match the database name, that is
services.invidious.settings.db.user must match services.invidious.settings.db.dbname.
This is the default since NixOS 24.05. For older systems, it is normally safe to manually set
services.invidious.database.user to "invidious" as the new user will be created with permissions
for the existing database. `REASSIGN OWNED BY kemal TO invidious;` may also be needed.
the user to "invidious" as the new user will be created with permissions
for the existing database. `REASSIGN OWNED BY kemal TO invidious;` may also be needed; it can be
run as `sudo -u postgres env psql --user=postgres --dbname=invidious -c 'reassign OWNED BY kemal to invidious;'`.
'';
}
];

View File

@ -51,7 +51,7 @@ to ensure that changes can be applied by changing the module's options.
In case the application serves multiple domains (those are checked with
[`$_SERVER['HTTP_HOST']`](https://www.php.net/manual/en/reserved.variables.server.php))
it's needed to add them to
[`services.nextcloud.config.extraTrustedDomains`](#opt-services.nextcloud.config.extraTrustedDomains).
[`services.nextcloud.extraOptions.trusted_domains`](#opt-services.nextcloud.extraOptions.trusted_domains).
Auto updates for Nextcloud apps can be enabled using
[`services.nextcloud.autoUpdateApps`](#opt-services.nextcloud.autoUpdateApps.enable).
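
A hedged sketch of the renamed options referenced here (hostnames are placeholders):

```nix
{
  services.nextcloud = {
    hostName = "cloud.example.org";
    extraOptions = {
      # Formerly services.nextcloud.config.extraTrustedDomains:
      trusted_domains = [ "nextcloud.example.org" ];
      # Formerly services.nextcloud.config.defaultPhoneRegion:
      default_phone_region = "DE";
    };
  };
}
```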

View File

@ -9,6 +9,7 @@ let
jsonFormat = pkgs.formats.json {};
defaultPHPSettings = {
output_buffering = "0";
short_open_tag = "Off";
expose_php = "Off";
error_reporting = "E_ALL & ~E_DEPRECATED & ~E_STRICT";
@ -23,6 +24,43 @@ let
catch_workers_output = "yes";
};
appStores = {
# default apps bundled with pkgs.nextcloudXX, e.g. files, contacts
apps = {
enabled = true;
writable = false;
};
# apps installed via cfg.extraApps
nix-apps = {
enabled = cfg.extraApps != { };
linkTarget = pkgs.linkFarm "nix-apps"
(mapAttrsToList (name: path: { inherit name path; }) cfg.extraApps);
writable = false;
};
# apps installed via the app store.
store-apps = {
enabled = cfg.appstoreEnable == null || cfg.appstoreEnable;
linkTarget = "${cfg.home}/store-apps";
writable = true;
};
};
webroot = pkgs.runCommand
"${cfg.package.name or "nextcloud"}-with-apps"
{ }
''
mkdir $out
ln -sfv "${cfg.package}"/* "$out"
${concatStrings
(mapAttrsToList (name: store: optionalString (store.enabled && store?linkTarget) ''
if [ -e "$out"/${name} ]; then
echo "Didn't expect ${name} already in $out!"
exit 1
fi
ln -sfTv ${store.linkTarget} "$out"/${name}
'') appStores)}
'';
inherit (cfg) datadir;
phpPackage = cfg.phpPackage.buildEnv {
@ -45,7 +83,7 @@ let
occ = pkgs.writeScriptBin "nextcloud-occ" ''
#! ${pkgs.runtimeShell}
cd ${cfg.package}
cd ${webroot}
sudo=exec
if [[ "$USER" != nextcloud ]]; then
sudo='exec /run/wrappers/bin/sudo -u nextcloud --preserve-env=NEXTCLOUD_CONFIG_DIR --preserve-env=OC_PASS'
@ -94,6 +132,22 @@ in {
(mkRemovedOptionModule [ "services" "nextcloud" "disableImagemagick" ] ''
Use services.nextcloud.enableImagemagick instead.
'')
(mkRenamedOptionModule
[ "services" "nextcloud" "logLevel" ] [ "services" "nextcloud" "extraOptions" "loglevel" ])
(mkRenamedOptionModule
[ "services" "nextcloud" "logType" ] [ "services" "nextcloud" "extraOptions" "log_type" ])
(mkRenamedOptionModule
[ "services" "nextcloud" "config" "defaultPhoneRegion" ] [ "services" "nextcloud" "extraOptions" "default_phone_region" ])
(mkRenamedOptionModule
[ "services" "nextcloud" "config" "overwriteProtocol" ] [ "services" "nextcloud" "extraOptions" "overwriteprotocol" ])
(mkRenamedOptionModule
[ "services" "nextcloud" "skeletonDirectory" ] [ "services" "nextcloud" "extraOptions" "skeletondirectory" ])
(mkRenamedOptionModule
[ "services" "nextcloud" "globalProfiles" ] [ "services" "nextcloud" "extraOptions" "profile.enabled" ])
(mkRenamedOptionModule
[ "services" "nextcloud" "config" "extraTrustedDomains" ] [ "services" "nextcloud" "extraOptions" "trusted_domains" ])
(mkRenamedOptionModule
[ "services" "nextcloud" "config" "trustedProxies" ] [ "services" "nextcloud" "extraOptions" "trusted_proxies" ])
];
options.services.nextcloud = {
@ -157,32 +211,6 @@ in {
Set this to false to disable the installation of apps from the global appstore. App management is always enabled regardless of this setting.
'';
};
logLevel = mkOption {
type = types.ints.between 0 4;
default = 2;
description = lib.mdDoc ''
Log level value between 0 (DEBUG) and 4 (FATAL).
- 0 (debug): Log all activity.
- 1 (info): Log activity such as user logins and file activities, plus warnings, errors, and fatal errors.
- 2 (warn): Log successful operations, as well as warnings of potential problems, errors and fatal errors.
- 3 (error): Log failed operations and fatal errors.
- 4 (fatal): Log only fatal errors that cause the server to stop.
'';
};
logType = mkOption {
type = types.enum [ "errorlog" "file" "syslog" "systemd" ];
default = "syslog";
description = lib.mdDoc ''
Logging backend to use.
systemd requires the php-systemd package to be added to services.nextcloud.phpExtraExtensions.
See the [nextcloud documentation](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/logging_configuration.html) for details.
'';
};
https = mkOption {
type = types.bool;
default = false;
@ -206,16 +234,6 @@ in {
'';
};
skeletonDirectory = mkOption {
default = "";
type = types.str;
description = lib.mdDoc ''
The directory where the skeleton files are located. These files will be
copied to the data directory of new users. Leave empty to not copy any
skeleton files.
'';
};
webfinger = mkOption {
type = types.bool;
default = false;
@ -315,7 +333,6 @@ in {
};
config = {
dbtype = mkOption {
type = types.enum [ "sqlite" "pgsql" "mysql" ];
@ -380,53 +397,6 @@ in {
setup of Nextcloud by the systemd service `nextcloud-setup.service`.
'';
};
extraTrustedDomains = mkOption {
type = types.listOf types.str;
default = [];
description = lib.mdDoc ''
Trusted domains from which the Nextcloud installation will be
accessible. You don't need to add
`services.nextcloud.hostname` here.
'';
};
trustedProxies = mkOption {
type = types.listOf types.str;
default = [];
description = lib.mdDoc ''
Trusted proxies to provide if the Nextcloud installation is being
proxied to secure against, e.g. spoofing.
'';
};
overwriteProtocol = mkOption {
type = types.nullOr (types.enum [ "http" "https" ]);
default = null;
example = "https";
description = lib.mdDoc ''
Force Nextcloud to always use HTTP or HTTPS i.e. for link generation.
Nextcloud uses the currently used protocol by default, but when
behind a reverse-proxy, it may use `http` for everything although
Nextcloud may be served via HTTPS.
'';
};
defaultPhoneRegion = mkOption {
default = null;
type = types.nullOr types.str;
example = "DE";
description = lib.mdDoc ''
An [ISO 3166-1](https://www.iso.org/iso-3166-country-codes.html)
country code which replaces automatic phone-number detection
without a country code.
As an example, with `DE` set as the default phone region,
the `+49` prefix can be omitted for phone numbers.
'';
};
objectstore = {
s3 = {
enable = mkEnableOption (lib.mdDoc ''
@ -609,30 +579,109 @@ in {
The nextcloud-occ program preconfigured to target this Nextcloud instance.
'';
};
globalProfiles = mkEnableOption (lib.mdDoc "global profiles") // {
description = lib.mdDoc ''
Makes user-profiles globally available under `nextcloud.tld/u/user.name`.
Even though it's enabled by default in Nextcloud, it must be explicitly enabled
here because it has the side-effect that personal information is even accessible to
unauthenticated users by default.
By default, the following properties are set to Show to everyone
if this flag is enabled:
- About
- Full name
- Headline
- Organisation
- Profile picture
- Role
- Twitter
- Website
Only has an effect in Nextcloud 23 and later.
'';
};
extraOptions = mkOption {
type = jsonFormat.type;
type = types.submodule {
freeformType = jsonFormat.type;
options = {
loglevel = mkOption {
type = types.ints.between 0 4;
default = 2;
description = lib.mdDoc ''
Log level value between 0 (DEBUG) and 4 (FATAL).
- 0 (debug): Log all activity.
- 1 (info): Log activity such as user logins and file activities, plus warnings, errors, and fatal errors.
- 2 (warn): Log successful operations, as well as warnings of potential problems, errors and fatal errors.
- 3 (error): Log failed operations and fatal errors.
- 4 (fatal): Log only fatal errors that cause the server to stop.
'';
};
log_type = mkOption {
type = types.enum [ "errorlog" "file" "syslog" "systemd" ];
default = "syslog";
description = lib.mdDoc ''
Logging backend to use.
systemd requires the php-systemd package to be added to services.nextcloud.phpExtraExtensions.
See the [nextcloud documentation](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/logging_configuration.html) for details.
'';
};
skeletondirectory = mkOption {
default = "";
type = types.str;
description = lib.mdDoc ''
The directory where the skeleton files are located. These files will be
copied to the data directory of new users. Leave empty to not copy any
skeleton files.
'';
};
trusted_domains = mkOption {
type = types.listOf types.str;
default = [];
description = lib.mdDoc ''
Trusted domains from which the Nextcloud installation will be
accessible. You don't need to add
`services.nextcloud.hostName` here.
'';
};
trusted_proxies = mkOption {
type = types.listOf types.str;
default = [];
description = lib.mdDoc ''
Trusted proxies to provide if the Nextcloud installation is served
behind a reverse proxy, to guard against e.g. spoofing.
'';
};
overwriteprotocol = mkOption {
type = types.enum [ "" "http" "https" ];
default = "";
example = "https";
description = lib.mdDoc ''
Force Nextcloud to always use HTTP or HTTPS i.e. for link generation.
Nextcloud uses the currently used protocol by default, but when
behind a reverse-proxy, it may use `http` for everything although
Nextcloud may be served via HTTPS.
'';
};
default_phone_region = mkOption {
default = "";
type = types.str;
example = "DE";
description = lib.mdDoc ''
An [ISO 3166-1](https://www.iso.org/iso-3166-country-codes.html)
country code which replaces automatic phone-number detection
without a country code.
As an example, with `DE` set as the default phone region,
the `+49` prefix can be omitted for phone numbers.
'';
};
"profile.enabled" = mkEnableOption (lib.mdDoc "global profiles") // {
description = lib.mdDoc ''
Makes user-profiles globally available under `nextcloud.tld/u/user.name`.
Even though it's enabled by default in Nextcloud, it must be explicitly enabled
here because it has the side-effect that personal information is even accessible to
unauthenticated users by default.
By default, the following properties are set to Show to everyone
if this flag is enabled:
- About
- Full name
- Headline
- Organisation
- Profile picture
- Role
- Twitter
- Website
Only has an effect in Nextcloud 23 and later.
'';
};
};
};
default = {};
description = lib.mdDoc ''
Extra options which should be appended to Nextcloud's config.php file.
@ -766,11 +815,10 @@ in {
# When upgrading the Nextcloud package, Nextcloud can report errors such as
# "The files of the app [all apps in /var/lib/nextcloud/apps] were not replaced correctly"
# Restarting phpfpm on Nextcloud package update fixes these issues (but this is a workaround).
phpfpm-nextcloud.restartTriggers = [ cfg.package ];
phpfpm-nextcloud.restartTriggers = [ webroot ];
nextcloud-setup = let
c = cfg.config;
writePhpArray = a: "[${concatMapStringsSep "," (val: ''"${toString val}"'') a}]";
requiresReadSecretFunction = c.dbpassFile != null || c.objectstore.s3.enable;
objectstoreConfig = let s3 = c.objectstore.s3; in optionalString s3.enable ''
'objectstore' => [
@ -800,6 +848,10 @@ in {
nextcloudGreaterOrEqualThan = req: versionAtLeast cfg.package.version req;
mkAppStoreConfig = name: { enabled, writable, ... }: optionalString enabled ''
[ 'path' => '${webroot}/${name}', 'url' => '/${name}', 'writable' => ${boolToString writable} ],
'';
overrideConfig = pkgs.writeText "nextcloud-config.php" ''
<?php
${optionalString requiresReadSecretFunction ''
@ -828,17 +880,10 @@ in {
}
$CONFIG = [
'apps_paths' => [
${optionalString (cfg.extraApps != { }) "[ 'path' => '${cfg.home}/nix-apps', 'url' => '/nix-apps', 'writable' => false ],"}
[ 'path' => '${cfg.home}/apps', 'url' => '/apps', 'writable' => false ],
[ 'path' => '${cfg.home}/store-apps', 'url' => '/store-apps', 'writable' => true ],
${concatStrings (mapAttrsToList mkAppStoreConfig appStores)}
],
${optionalString (showAppStoreSetting) "'appstoreenabled' => ${renderedAppStoreSetting},"}
'datadirectory' => '${datadir}/data',
'skeletondirectory' => '${cfg.skeletonDirectory}',
${optionalString cfg.caching.apcu "'memcache.local' => '\\OC\\Memcache\\APCu',"}
'log_type' => '${cfg.logType}',
'loglevel' => '${builtins.toString cfg.logLevel}',
${optionalString (c.overwriteProtocol != null) "'overwriteprotocol' => '${c.overwriteProtocol}',"}
${optionalString (c.dbname != null) "'dbname' => '${c.dbname}',"}
${optionalString (c.dbhost != null) "'dbhost' => '${c.dbhost}',"}
${optionalString (c.dbport != null) "'dbport' => '${toString c.dbport}',"}
@ -851,10 +896,6 @@ in {
''
}
'dbtype' => '${c.dbtype}',
'trusted_domains' => ${writePhpArray ([ cfg.hostName ] ++ c.extraTrustedDomains)},
'trusted_proxies' => ${writePhpArray (c.trustedProxies)},
${optionalString (c.defaultPhoneRegion != null) "'default_phone_region' => '${c.defaultPhoneRegion}',"}
${optionalString (nextcloudGreaterOrEqualThan "23") "'profile.enabled' => ${boolToString cfg.globalProfiles},"}
${objectstoreConfig}
];
@ -907,7 +948,7 @@ in {
(i: v: ''
${occ}/bin/nextcloud-occ config:system:set trusted_domains \
${toString i} --value="${toString v}"
'') ([ cfg.hostName ] ++ cfg.config.extraTrustedDomains));
'') ([ cfg.hostName ] ++ cfg.extraOptions.trusted_domains));
in {
wantedBy = [ "multi-user.target" ];
@ -935,17 +976,16 @@ in {
exit 1
fi
ln -sf ${cfg.package}/apps ${cfg.home}/
# Install extra apps
ln -sfT \
${pkgs.linkFarm "nix-apps"
(mapAttrsToList (name: path: { inherit name path; }) cfg.extraApps)} \
${cfg.home}/nix-apps
${concatMapStrings (name: ''
if [ -d "${cfg.home}"/${name} ]; then
echo "Cleaning up ${name}; these are now bundled in the webroot store-path!"
rm -r "${cfg.home}"/${name}
fi
'') [ "nix-apps" "apps" ]}
# create nextcloud directories.
# if the directories exist already with wrong permissions, we fix that
for dir in ${datadir}/config ${datadir}/data ${cfg.home}/store-apps ${cfg.home}/nix-apps; do
for dir in ${datadir}/config ${datadir}/data ${cfg.home}/store-apps; do
if [ ! -e $dir ]; then
install -o nextcloud -g nextcloud -d $dir
elif [ $(stat -c "%G" $dir) != "nextcloud" ]; then
@ -982,7 +1022,7 @@ in {
environment.NEXTCLOUD_CONFIG_DIR = "${datadir}/config";
serviceConfig.Type = "oneshot";
serviceConfig.User = "nextcloud";
serviceConfig.ExecStart = "${phpPackage}/bin/php -f ${cfg.package}/cron.php";
serviceConfig.ExecStart = "${phpPackage}/bin/php -f ${webroot}/cron.php";
};
nextcloud-update-plugins = mkIf cfg.autoUpdateApps.enable {
after = [ "nextcloud-setup.service" ];
@ -1043,22 +1083,25 @@ in {
user = "nextcloud";
};
services.nextcloud = lib.mkIf cfg.configureRedis {
caching.redis = true;
extraOptions = {
services.nextcloud = {
caching.redis = lib.mkIf cfg.configureRedis true;
extraOptions = mkMerge [({
datadirectory = lib.mkDefault "${datadir}/data";
trusted_domains = [ cfg.hostName ];
}) (lib.mkIf cfg.configureRedis {
"memcache.distributed" = ''\OC\Memcache\Redis'';
"memcache.locking" = ''\OC\Memcache\Redis'';
redis = {
host = config.services.redis.servers.nextcloud.unixSocket;
port = 0;
};
};
})];
};
services.nginx.enable = mkDefault true;
services.nginx.virtualHosts.${cfg.hostName} = {
root = cfg.package;
root = webroot;
locations = {
"= /robots.txt" = {
priority = 100;
@ -1075,14 +1118,6 @@ in {
}
'';
};
"~ ^/store-apps" = {
priority = 201;
extraConfig = "root ${cfg.home};";
};
"~ ^/nix-apps" = {
priority = 201;
extraConfig = "root ${cfg.home};";
};
"^~ /.well-known" = {
priority = 210;
extraConfig = ''

View File

@ -352,10 +352,11 @@ let
# The acme-challenge location doesn't need to be added if we are not using any automated
# certificate provisioning and can also be omitted when we use a certificate obtained via a DNS-01 challenge
acmeLocation = optionalString (vhost.enableACME || (vhost.useACMEHost != null && config.security.acme.certs.${vhost.useACMEHost}.dnsProvider == null)) ''
acmeLocation = optionalString (vhost.enableACME || (vhost.useACMEHost != null && config.security.acme.certs.${vhost.useACMEHost}.dnsProvider == null))
# Rule for legitimate ACME Challenge requests (like /.well-known/acme-challenge/xxxxxxxxx)
# We use ^~ here, so that we don't check any regexes (which could
# otherwise easily override this intended match accidentally).
''
location ^~ /.well-known/acme-challenge/ {
${optionalString (vhost.acmeFallbackHost != null) "try_files $uri @acme-fallback;"}
${optionalString (vhost.acmeRoot != null) "root ${vhost.acmeRoot};"}
@ -375,10 +376,11 @@ let
${concatMapStringsSep "\n" listenString redirectListen}
server_name ${vhost.serverName} ${concatStringsSep " " vhost.serverAliases};
${acmeLocation}
location / {
return ${toString vhost.redirectCode} https://$host$request_uri;
}
${acmeLocation}
}
''}
@ -392,13 +394,6 @@ let
http3 ${if vhost.http3 then "on" else "off"};
http3_hq ${if vhost.http3_hq then "on" else "off"};
''}
${acmeLocation}
${optionalString (vhost.root != null) "root ${vhost.root};"}
${optionalString (vhost.globalRedirect != null) ''
location / {
return ${toString vhost.redirectCode} http${optionalString hasSSL "s"}://${vhost.globalRedirect}$request_uri;
}
''}
${optionalString hasSSL ''
ssl_certificate ${vhost.sslCertificate};
ssl_certificate_key ${vhost.sslCertificateKey};
@ -421,6 +416,14 @@ let
${mkBasicAuth vhostName vhost}
${optionalString (vhost.root != null) "root ${vhost.root};"}
${optionalString (vhost.globalRedirect != null) ''
location / {
return ${toString vhost.redirectCode} http${optionalString hasSSL "s"}://${vhost.globalRedirect}$request_uri;
}
''}
${acmeLocation}
${mkLocations vhost.locations}
${vhost.extraConfig}
@ -1129,14 +1132,6 @@ in
'';
}
{
assertion = any (host: host.kTLS) (attrValues virtualHosts) -> versionAtLeast cfg.package.version "1.21.4";
message = ''
services.nginx.virtualHosts.<name>.kTLS requires nginx version
1.21.4 or above; see the documentation for services.nginx.package.
'';
}
{
assertion = all (host: !(host.enableACME && host.useACMEHost != null)) (attrValues virtualHosts);
message = ''
@ -1345,6 +1340,8 @@ in
nginx.gid = config.ids.gids.nginx;
};
boot.kernelModules = optional (versionAtLeast config.boot.kernelPackages.kernel.version "4.17") "tls";
# do not delete the default temp directories created upon nginx startup
systemd.tmpfiles.rules = [
"X /tmp/systemd-private-%b-nginx.service-*/tmp/nginx_*"

View File

@ -7,7 +7,7 @@ let
cfg = dmcfg.sddm;
xEnv = config.systemd.services.display-manager.environment;
sddm = pkgs.libsForQt5.sddm;
sddm = cfg.package;
iniFmt = pkgs.formats.ini { };
@ -108,6 +108,8 @@ in
'';
};
package = mkPackageOption pkgs [ "plasma5Packages" "sddm" ] {};
enableHidpi = mkOption {
type = types.bool;
default = true;

View File

@ -130,9 +130,9 @@ let cfg = config.services.xserver.libinput;
default = true;
description =
lib.mdDoc ''
Disables horizontal scrolling. When disabled, this driver will discard any horizontal scroll
events from libinput. Note that this does not disable horizontal scrolling, it merely
discards the horizontal axis from any scroll events.
Enables or disables horizontal scrolling. When disabled, this driver will discard any
horizontal scroll events from libinput. This does not disable horizontal scroll events
from libinput; it merely discards the horizontal axis from any scroll events.
'';
};

View File

@ -11,6 +11,7 @@
let
cfg = config.boot.bootspec;
children = lib.mapAttrs (childName: childConfig: childConfig.configuration.system.build.toplevel) config.specialisation;
hasAtLeastOneInitrdSecret = lib.length (lib.attrNames config.boot.initrd.secrets) > 0;
schemas = {
v1 = rec {
filename = "boot.json";
@ -27,6 +28,7 @@ let
label = "${config.system.nixos.distroName} ${config.system.nixos.codeName} ${config.system.nixos.label} (Linux ${config.boot.kernelPackages.kernel.modDirVersion})";
} // lib.optionalAttrs config.boot.initrd.enable {
initrd = "${config.system.build.initialRamdisk}/${config.system.boot.loader.initrdFile}";
} // lib.optionalAttrs hasAtLeastOneInitrdSecret {
initrdSecrets = "${config.system.build.initialRamdiskSecretAppender}/bin/append-initrd-secrets";
};
}));

View File

@ -1,6 +1,6 @@
{ config, lib, pkgs, ... }:
let
inherit (lib) mkOption mkDefault types optionalString stringAfter;
inherit (lib) mkOption mkDefault types optionalString;
cfg = config.boot.binfmt;

View File

@ -36,7 +36,7 @@ let
# Package set of targeted architecture
if cfg.forcei686 then pkgs.pkgsi686Linux else pkgs;
realGrub = if cfg.zfsSupport then grubPkgs.grub2.override { zfsSupport = true; }
realGrub = if cfg.zfsSupport then grubPkgs.grub2.override { zfsSupport = true; zfs = cfg.zfsPackage; }
else grubPkgs.grub2;
grub =
@ -614,6 +614,16 @@ in
'';
};
zfsPackage = mkOption {
type = types.package;
internal = true;
default = pkgs.zfs;
defaultText = literalExpression "pkgs.zfs";
description = lib.mdDoc ''
Which ZFS package to use if `config.boot.loader.grub.zfsSupport` is true.
'';
};
efiSupport = mkOption {
default = false;
type = types.bool;

View File

@ -20,13 +20,13 @@ from dataclasses import dataclass
class BootSpec:
init: str
initrd: str
initrdSecrets: str
kernel: str
kernelParams: List[str]
label: str
system: str
toplevel: str
specialisations: Dict[str, "BootSpec"]
initrdSecrets: str | None = None
@ -131,9 +131,8 @@ def write_entry(profile: str | None, generation: int, specialisation: str | None
specialisation=" (%s)" % specialisation if specialisation else "")
try:
subprocess.check_call([bootspec.initrdSecrets, "@efiSysMountPoint@%s" % (initrd)])
except FileNotFoundError:
pass
if bootspec.initrdSecrets is not None:
subprocess.check_call([bootspec.initrdSecrets, "@efiSysMountPoint@%s" % (initrd)])
except subprocess.CalledProcessError:
if current:
print("failed to create initrd secrets!", file=sys.stderr)

View File

@ -396,8 +396,7 @@ in {
ManagerEnvironment=${lib.concatStringsSep " " (lib.mapAttrsToList (n: v: "${n}=${lib.escapeShellArg v}") cfg.managerEnvironment)}
'';
"/lib/modules".source = "${modulesClosure}/lib/modules";
"/lib/firmware".source = "${modulesClosure}/lib/firmware";
"/lib".source = "${modulesClosure}/lib";
"/etc/modules-load.d/nixos.conf".text = concatStringsSep "\n" config.boot.initrd.kernelModules;

View File

@ -3,14 +3,18 @@
cfg = config.systemd.oomd;
in {
imports = [
(lib.mkRenamedOptionModule [ "systemd" "oomd" "enableUserServices" ] [ "systemd" "oomd" "enableUserSlices" ])
];
options.systemd.oomd = {
enable = lib.mkEnableOption (lib.mdDoc "the `systemd-oomd` OOM killer") // { default = true; };
# Fedora enables the first and third option by default. See the 10-oomd-* files here:
# https://src.fedoraproject.org/rpms/systemd/tree/acb90c49c42276b06375a66c73673ac351025597
# https://src.fedoraproject.org/rpms/systemd/tree/806c95e1c70af18f81d499b24cd7acfa4c36ffd6
enableRootSlice = lib.mkEnableOption (lib.mdDoc "oomd on the root slice (`-.slice`)");
enableSystemSlice = lib.mkEnableOption (lib.mdDoc "oomd on the system slice (`system.slice`)");
enableUserServices = lib.mkEnableOption (lib.mdDoc "oomd on all user services (`user@.service`)");
enableUserSlices = lib.mkEnableOption (lib.mdDoc "oomd on all user slices (`user@.slice`) and all user owned slices");
extraConfig = lib.mkOption {
type = with lib.types; attrsOf (oneOf [ str int bool ]);
@ -44,14 +48,24 @@ in {
users.groups.systemd-oom = { };
systemd.slices."-".sliceConfig = lib.mkIf cfg.enableRootSlice {
ManagedOOMSwap = "kill";
ManagedOOMMemoryPressure = "kill";
ManagedOOMMemoryPressureLimit = "80%";
};
systemd.slices."system".sliceConfig = lib.mkIf cfg.enableSystemSlice {
ManagedOOMSwap = "kill";
};
systemd.services."user@".serviceConfig = lib.mkIf cfg.enableUserServices {
ManagedOOMMemoryPressure = "kill";
ManagedOOMMemoryPressureLimit = "50%";
ManagedOOMMemoryPressureLimit = "80%";
};
systemd.slices."user-".sliceConfig = lib.mkIf cfg.enableUserSlices {
ManagedOOMMemoryPressure = "kill";
ManagedOOMMemoryPressureLimit = "80%";
};
systemd.user.units."slice" = lib.mkIf cfg.enableUserSlices {
text = ''
[Slice]
ManagedOOMMemoryPressure=kill
ManagedOOMMemoryPressureLimit=80%
'';
overrideStrategy = "asDropin";
};
};
}
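
A short sketch exercising the renamed option; the old `enableUserServices` name still resolves via the alias added above:

```nix
{
  systemd.oomd = {
    enable = true;            # already the default
    enableRootSlice = true;
    enableUserSlices = true;  # formerly enableUserServices
  };
}
```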

View File

@ -667,6 +667,7 @@ in
# TODO FIXME See https://github.com/NixOS/nixpkgs/pull/99386#issuecomment-798813567. To not break people's bootloader and as probably not everybody would read release notes that thoroughly add inSystem.
boot.loader.grub = mkIf (inInitrd || inSystem) {
zfsSupport = true;
zfsPackage = cfgZfs.package;
};
services.zfs.zed.settings = {

View File

@ -80,10 +80,17 @@ with lib;
ACTION=="add|change", SUBSYSTEM=="input", ATTR{name}=="${cfg.device}", ATTR{device/speed}="${toString cfg.speed}", ATTR{device/sensitivity}="${toString cfg.sensitivity}"
'';
system.activationScripts.trackpoint =
''
${config.systemd.package}/bin/udevadm trigger --attr-match=name="${cfg.device}"
systemd.services.trackpoint = {
wantedBy = [ "sysinit.target" ] ;
before = [ "sysinit.target" "shutdown.target" ];
conflicts = [ "shutdown.target" ];
unitConfig.DefaultDependencies = false;
serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = true;
serviceConfig.ExecStart = ''
${config.systemd.package}/bin/udevadm trigger --attr-match=name="${cfg.device}"
'';
};
})
(mkIf (cfg.emulateWheel) {

View File

@ -58,9 +58,17 @@ in {
systemd.services.lxd-agent = {
enable = true;
wantedBy = [ "multi-user.target" ];
before = [ "shutdown.target" ];
before = [ "shutdown.target" ] ++ lib.optionals config.services.cloud-init.enable [
"cloud-init.target" "cloud-init.service" "cloud-init-local.service"
];
conflicts = [ "shutdown.target" ];
path = [ pkgs.kmod pkgs.util-linux ];
path = [
pkgs.kmod
pkgs.util-linux
# allow `incus exec` to find system binaries
"/run/current-system/sw"
];
preStart = preStartScript;
@ -72,7 +80,6 @@ in {
Description = "LXD - agent";
Documentation = "https://documentation.ubuntu.com/lxd/en/latest";
ConditionPathExists = "/dev/virtio-ports/org.linuxcontainers.lxd";
Before = lib.optionals config.services.cloud-init.enable [ "cloud-init.target" "cloud-init.service" "cloud-init-local.service" ];
DefaultDependencies = "no";
StartLimitInterval = "60";
StartLimitBurst = "10";

View File

@ -85,35 +85,44 @@ in
};
};
###### wrappers activation script
system.activationScripts.vmwareWrappers =
lib.stringAfter [ "specialfs" "users" ]
''
mkdir -p "${parentWrapperDir}"
chmod 755 "${parentWrapperDir}"
# We want to place the tmpdirs for the wrappers to the parent dir.
wrapperDir=$(mktemp --directory --tmpdir="${parentWrapperDir}" wrappers.XXXXXXXXXX)
chmod a+rx "$wrapperDir"
${lib.concatStringsSep "\n" (vmwareWrappers)}
if [ -L ${wrapperDir} ]; then
# Atomically replace the symlink
# See https://axialcorps.com/2013/07/03/atomically-replacing-files-and-directories/
old=$(readlink -f ${wrapperDir})
if [ -e "${wrapperDir}-tmp" ]; then
rm --force --recursive "${wrapperDir}-tmp"
fi
ln --symbolic --force --no-dereference "$wrapperDir" "${wrapperDir}-tmp"
mv --no-target-directory "${wrapperDir}-tmp" "${wrapperDir}"
rm --force --recursive "$old"
else
# For initial setup
ln --symbolic "$wrapperDir" "${wrapperDir}"
fi
'';
# Services
systemd.services."vmware-wrappers" = {
description = "Create VMVare Wrappers";
wantedBy = [ "multi-user.target" ];
before = [
"vmware-authdlauncher.service"
"vmware-networks-configuration.service"
"vmware-networks.service"
"vmware-usbarbitrator.service"
];
after = [ "systemd-sysusers.service" ];
serviceConfig.Type = "oneshot";
serviceConfig.RemainAfterExit = true;
script = ''
mkdir -p "${parentWrapperDir}"
chmod 755 "${parentWrapperDir}"
# We want to place the tmpdirs for the wrappers to the parent dir.
wrapperDir=$(mktemp --directory --tmpdir="${parentWrapperDir}" wrappers.XXXXXXXXXX)
chmod a+rx "$wrapperDir"
${lib.concatStringsSep "\n" (vmwareWrappers)}
if [ -L ${wrapperDir} ]; then
# Atomically replace the symlink
# See https://axialcorps.com/2013/07/03/atomically-replacing-files-and-directories/
old=$(readlink -f ${wrapperDir})
if [ -e "${wrapperDir}-tmp" ]; then
rm --force --recursive "${wrapperDir}-tmp"
fi
ln --symbolic --force --no-dereference "$wrapperDir" "${wrapperDir}-tmp"
mv --no-target-directory "${wrapperDir}-tmp" "${wrapperDir}"
rm --force --recursive "$old"
else
# For initial setup
ln --symbolic "$wrapperDir" "${wrapperDir}"
fi
'';
};
systemd.services."vmware-authdlauncher" = {
description = "VMware Authentication Daemon";
serviceConfig = {

View File

@ -5,7 +5,7 @@
{ nixpkgs ? { outPath = (import ../lib).cleanSource ./..; revCount = 56789; shortRev = "gfedcba"; }
, stableBranch ? false
, supportedSystems ? [ "aarch64-linux" "x86_64-linux" ]
, limitedSupportedSystems ? [ "i686-linux" ]
, limitedSupportedSystems ? [ ]
}:
let
@ -90,6 +90,7 @@ in rec {
(onSystems ["x86_64-linux"] "nixos.tests.installer.btrfsSubvols")
(onSystems ["x86_64-linux"] "nixos.tests.installer.luksroot")
(onSystems ["x86_64-linux"] "nixos.tests.installer.lvm")
(onSystems ["x86_64-linux"] "nixos.tests.installer.separateBootZfs")
(onSystems ["x86_64-linux"] "nixos.tests.installer.separateBootFat")
(onSystems ["x86_64-linux"] "nixos.tests.installer.separateBoot")
(onSystems ["x86_64-linux"] "nixos.tests.installer.simpleLabels")
@ -167,6 +168,7 @@ in rec {
(onFullSupported "nixos.tests.xfce")
(onFullSupported "nixpkgs.emacs")
(onFullSupported "nixpkgs.jdk")
(onSystems ["x86_64-linux"] "nixpkgs.mesa_i686") # i686 sanity check + useful
["nixpkgs.tarball"]
# Ensure that nixpkgs-check-by-name is available in all release channels and nixos-unstable,

View File

@ -164,7 +164,7 @@ in {
btrbk-no-timer = handleTest ./btrbk-no-timer.nix {};
btrbk-section-order = handleTest ./btrbk-section-order.nix {};
budgie = handleTest ./budgie.nix {};
buildbot = handleTestOn [ "x86_64-linux" ] ./buildbot.nix {};
buildbot = handleTest ./buildbot.nix {};
buildkite-agents = handleTest ./buildkite-agents.nix {};
c2fmzq = handleTest ./c2fmzq.nix {};
caddy = handleTest ./caddy.nix {};
@ -257,6 +257,7 @@ in {
dolibarr = handleTest ./dolibarr.nix {};
domination = handleTest ./domination.nix {};
dovecot = handleTest ./dovecot.nix {};
drawterm = discoverTests (import ./drawterm.nix);
drbd = handleTest ./drbd.nix {};
dublin-traceroute = handleTest ./dublin-traceroute.nix {};
earlyoom = handleTestOn ["x86_64-linux"] ./earlyoom.nix {};
@ -786,6 +787,7 @@ in {
spark = handleTestOn [ "x86_64-linux" "aarch64-linux" ] ./spark {};
sqlite3-to-mysql = handleTest ./sqlite3-to-mysql.nix {};
sslh = handleTest ./sslh.nix {};
ssh-agent-auth = handleTest ./ssh-agent-auth.nix {};
ssh-audit = handleTest ./ssh-audit.nix {};
sssd = handleTestOn [ "x86_64-linux" "aarch64-linux" ] ./sssd.nix {};
sssd-ldap = handleTestOn [ "x86_64-linux" "aarch64-linux" ] ./sssd-ldap.nix {};

View File

@ -15,7 +15,7 @@
test-support.displayManager.auto.user = "alice";
virtualisation.anbox.enable = true;
boot.kernelPackages = pkgs.linuxPackages_5_15;
boot.kernelPackages = pkgs.linuxKernel.packages.linux_5_15;
virtualisation.memorySize = 2500;
};

View File

@ -112,10 +112,39 @@ in
bootspec = json.loads(machine.succeed("jq -r '.\"org.nixos.bootspec.v1\"' /run/current-system/boot.json"))
assert all(key in bootspec for key in ('initrd', 'initrdSecrets')), "Bootspec should contain initrd or initrdSecrets field when initrd is enabled"
assert 'initrd' in bootspec, "Bootspec should contain initrd field when initrd is enabled"
assert 'initrdSecrets' not in bootspec, "Bootspec should not contain initrdSecrets when there's no initrdSecrets"
'';
};
# Check that initrd secrets create corresponding entries in bootspec.
initrd-secrets = makeTest {
name = "bootspec-with-initrd-secrets";
meta.maintainers = with pkgs.lib.maintainers; [ raitobezarius ];
nodes.machine = {
imports = [ standard ];
environment.systemPackages = [ pkgs.jq ];
# initrd is most likely enabled by default, but we want to make that explicit here.
boot.initrd.enable = true;
boot.initrd.secrets."/some/example" = pkgs.writeText "example-secret" "test";
};
testScript = ''
import json
machine.start()
machine.wait_for_unit("multi-user.target")
machine.succeed("test -e /run/current-system/boot.json")
bootspec = json.loads(machine.succeed("jq -r '.\"org.nixos.bootspec.v1\"' /run/current-system/boot.json"))
assert 'initrdSecrets' in bootspec, "Bootspec should contain an 'initrdSecrets' field given there's an initrd secret"
'';
};
# Check that specialisations create corresponding entries in bootspec.
specialisation = makeTest {
name = "bootspec-with-specialisation";

View File

@ -104,5 +104,5 @@ import ./make-test-python.nix ({ pkgs, ... }: {
bbworker.fail("nc -z bbmaster 8011")
'';
meta.maintainers = with pkgs.lib.maintainers; [ ];
meta.maintainers = pkgs.lib.teams.buildbot.members;
})

nixos/tests/drawterm.nix (new file, 58 lines)
View File

@ -0,0 +1,58 @@
{ system, pkgs }:
let
tests = {
xorg = {
node = { pkgs, ... }: {
imports = [ ./common/user-account.nix ./common/x11.nix ];
services.xserver.enable = true;
services.xserver.displayManager.sessionCommands = ''
${pkgs.drawterm}/bin/drawterm -g 1024x768 &
'';
test-support.displayManager.auto.user = "alice";
};
systems = [ "x86_64-linux" "aarch64-linux" ];
};
wayland = {
node = { pkgs, ... }: {
imports = [ ./common/wayland-cage.nix ];
services.cage.program = "${pkgs.drawterm-wayland}/bin/drawterm";
};
systems = [ "x86_64-linux" ];
};
};
mkTest = name: machine:
import ./make-test-python.nix ({ pkgs, ... }: {
inherit name;
nodes = { "${name}" = machine; };
meta = with pkgs.lib.maintainers; {
maintainers = [ moody ];
};
enableOCR = true;
testScript = ''
@polling_condition
def drawterm_running():
machine.succeed("pgrep drawterm")
start_all()
machine.wait_for_unit("graphical.target")
drawterm_running.wait() # type: ignore[union-attr]
machine.wait_for_text("cpu")
machine.send_chars("cpu\n")
machine.wait_for_text("auth")
machine.send_chars("cpu\n")
machine.wait_for_text("ending")
machine.screenshot("out.png")
'';
});
mkTestOn = systems: name: machine:
if pkgs.lib.elem system systems then mkTest name machine
else { ... }: { };
in
builtins.mapAttrs (k: v: mkTestOn v.systems k v.node { inherit system; }) tests

View File

@ -29,7 +29,7 @@ import ./make-test-python.nix ({ pkgs, ... }:
name = "frr";
meta = with pkgs.lib.maintainers; {
maintainers = [ hexa ];
maintainers = [ ];
};
nodes = {

View File

@ -4,12 +4,11 @@ import ./make-test-python.nix ({ pkgs, ... }: {
maintainers = [ fgaz ];
};
nodes.machine = { config, pkgs, ... }: {
nodes.machine = { pkgs, ... }: {
imports = [
./common/x11.nix
];
services.xserver.enable = true;
sound.enable = true;
environment.systemPackages = [ pkgs.ft2-clone ];
};
@ -30,4 +29,3 @@ import ./make-test-python.nix ({ pkgs, ... }: {
machine.screenshot("screen")
'';
})

View File

@ -13,9 +13,9 @@ import ./make-test-python.nix ({ pkgs, lib, ... }:
'';
# ensure the directory to be monitored exists before incron is started
system.activationScripts.incronTest = ''
mkdir /test
'';
systemd.tmpfiles.settings.incron-test = {
"/test".d = { };
};
};
testScript = ''

View File

@ -53,5 +53,8 @@ in
with subtest("lxd-agent is started"):
machine.succeed("incus exec ${instance-name} systemctl is-active lxd-agent")
with subtest("lxd-agent has a valid path"):
machine.succeed("incus exec ${instance-name} -- bash -c 'true'")
'';
})

View File

@ -22,6 +22,7 @@
# lvm
separateBoot
separateBootFat
separateBootZfs
simple
simpleLabels
simpleProvided

View File

@ -878,6 +878,78 @@ in {
'';
};
# Same as the previous, but with ZFS /boot.
separateBootZfs = makeInstallerTest "separateBootZfs" {
extraInstallerConfig = {
boot.supportedFilesystems = [ "zfs" ];
};
extraConfig = ''
# Using by-uuid overrides the default of by-id, and is unique
# to the qemu disks, as they don't produce by-id paths for
# some reason.
boot.zfs.devNodes = "/dev/disk/by-uuid/";
networking.hostId = "00000000";
'';
createPartitions = ''
machine.succeed(
"flock /dev/vda parted --script /dev/vda -- mklabel msdos"
+ " mkpart primary ext2 1M 256MB" # /boot
+ " mkpart primary linux-swap 256MB 1280M"
+ " mkpart primary ext2 1280M -1s", # /
"udevadm settle",
"mkswap /dev/vda2 -L swap",
"swapon -L swap",
"mkfs.ext4 -L nixos /dev/vda3",
"mount LABEL=nixos /mnt",
# Use as many ZFS features as possible to verify that GRUB can handle them
"zpool create"
" -o compatibility=grub2"
" -O utf8only=on"
" -O normalization=formD"
" -O compression=lz4" # Activate the lz4_compress feature
" -O xattr=sa"
" -O acltype=posixacl"
" bpool /dev/vda1",
"zfs create"
" -o recordsize=1M" # Prepare activating the large_blocks feature
" -o mountpoint=legacy"
" -o relatime=on"
" -o quota=1G"
" -o filesystem_limit=100" # Activate the filesystem_limits features
" bpool/boot",
# Snapshotting the top-level dataset would trigger a bug in GRUB2: https://github.com/openzfs/zfs/issues/13873
"zfs snapshot bpool/boot@snap-1", # Prepare activating the livelist and bookmarks features
"zfs clone bpool/boot@snap-1 bpool/test", # Activate the livelist feature
"zfs bookmark bpool/boot@snap-1 bpool/boot#bookmark", # Activate the bookmarks feature
"zpool checkpoint bpool", # Activate the zpool_checkpoint feature
"mkdir -p /mnt/boot",
"mount -t zfs bpool/boot /mnt/boot",
"touch /mnt/boot/empty", # Activate zilsaxattr feature
"dd if=/dev/urandom of=/mnt/boot/test bs=1M count=1", # Activate the large_blocks feature
# Print out all enabled and active ZFS features (and some other stuff)
"sync /mnt/boot",
"zpool get all bpool >&2",
# Abort early if GRUB2 doesn't like the disks
"grub-probe --target=device /mnt/boot >&2",
)
'';
# umount & export bpool before shutdown
# this is a fix for "cannot import 'bpool': pool was previously in use from another system."
postInstallCommands = ''
machine.succeed("umount /mnt/boot")
machine.succeed("zpool export bpool")
'';
};
# zfs on / with swap
zfsroot = makeInstallerTest "zfs-root" {
extraInstallerConfig = {
@ -897,7 +969,7 @@ in {
createPartitions = ''
machine.succeed(
"flock /dev/vda parted --script /dev/vda -- mklabel msdos"
+ " mkpart primary 1M 100MB" # bpool
+ " mkpart primary 1M 100MB" # /boot
+ " mkpart primary linux-swap 100M 1024M"
+ " mkpart primary 1024M -1s", # rpool
"udevadm settle",
@ -909,20 +981,12 @@ in {
"zfs create -o mountpoint=legacy rpool/root/usr",
"mkdir /mnt/usr",
"mount -t zfs rpool/root/usr /mnt/usr",
"zpool create -o compatibility=grub2 bpool /dev/vda1",
"zfs create -o mountpoint=legacy bpool/boot",
"mkfs.vfat -n BOOT /dev/vda1",
"mkdir /mnt/boot",
"mount -t zfs bpool/boot /mnt/boot",
"mount LABEL=BOOT /mnt/boot",
"udevadm settle",
)
'';
# umount & export bpool before shutdown
# this is a fix for "cannot import 'bpool': pool was previously in use from another system."
postInstallCommands = ''
machine.succeed("umount /mnt/boot")
machine.succeed("zpool export bpool")
'';
};
# Create two physical LVM partitions combined into one volume group

View File

@ -13,10 +13,12 @@ in {
# The only thing the client needs to do is download a file.
client = { ... }: {
services.davfs2.enable = true;
system.activationScripts.davfs2-secrets = ''
echo "http://nextcloud/remote.php/dav/files/${adminuser} ${adminuser} ${adminpass}" > /tmp/davfs2-secrets
chmod 600 /tmp/davfs2-secrets
'';
systemd.tmpfiles.settings.nextcloud = {
"/tmp/davfs2-secrets"."f+" = {
mode = "0600";
argument = "http://nextcloud/remote.php/dav/files/${adminuser} ${adminuser} ${adminpass}";
};
};
virtualisation.fileSystems = {
"/mnt/dav" = {
device = "http://nextcloud/remote.php/dav/files/${adminuser}";

View File

@ -32,7 +32,6 @@ in {
adminpassFile = toString (pkgs.writeText "admin-pass-file" ''
${adminpass}
'');
trustedProxies = [ "::1" ];
};
notify_push = {
enable = true;
@ -42,6 +41,7 @@ in {
extraApps = {
inherit (pkgs."nextcloud${lib.versions.major config.services.nextcloud.package.version}Packages".apps) notify_push;
};
extraOptions.trusted_proxies = [ "::1" ];
};
services.redis.servers."nextcloud".enable = true;

View File

@ -1053,6 +1053,50 @@ let
'';
};
ping = {
exporterConfig = {
enable = true;
settings = {
targets = [ {
"localhost" = {
alias = "local machine";
env = "prod";
type = "domain";
};
} {
"127.0.0.1" = {
alias = "local machine";
type = "v4";
};
} {
"::1" = {
alias = "local machine";
type = "v6";
};
} {
"google.com" = {};
} ];
dns = {};
ping = {
interval = "2s";
timeout = "3s";
history-size = 42;
payload-size = 56;
};
log = {
level = "warn";
};
};
};
exporterTest = ''
wait_for_unit("prometheus-ping-exporter.service")
wait_for_open_port(9427)
succeed("curl -sSf http://localhost:9427/metrics | grep 'ping_up{.*} 1'")
'';
};
postfix = {
exporterConfig = {
enable = true;

View File

@ -39,8 +39,8 @@ import ./make-test-python.nix ({ pkgs, ...} : {
with subtest("squeezelite player successfully connects to slimserver"):
machine.wait_for_unit("squeezelite.service")
machine.wait_until_succeeds("journalctl -u squeezelite.service | grep 'slimproto:937 connected'")
player_mac = machine.wait_until_succeeds("journalctl -eu squeezelite.service | grep 'sendHELO:148 mac:'").strip().split(" ")[-1]
machine.wait_until_succeeds("journalctl -u squeezelite.service | grep -E 'slimproto:[0-9]+ connected'")
player_mac = machine.wait_until_succeeds("journalctl -eu squeezelite.service | grep -E 'sendHELO:[0-9]+ mac:'").strip().split(" ")[-1]
player_id = machine.succeed(f"curl http://localhost:9000/jsonrpc.js -g -X POST -d '{json.dumps(rpc_get_player)}'")
assert player_mac == json.loads(player_id)["result"]["_id"], "squeezelite player not found"
'';

View File

@ -0,0 +1,51 @@
import ./make-test-python.nix ({ lib, pkgs, ... }:
let
inherit (import ./ssh-keys.nix pkgs) snakeOilPrivateKey snakeOilPublicKey;
in {
name = "ssh-agent-auth";
meta.maintainers = with lib.maintainers; [ nicoo ];
nodes = let nodeConfig = n: { ... }: {
users.users = {
admin = {
isNormalUser = true;
extraGroups = [ "wheel" ];
openssh.authorizedKeys.keys = [ snakeOilPublicKey ];
};
foo.isNormalUser = true;
};
security.pam.enableSSHAgentAuth = true;
security.${lib.replaceStrings [ "_" ] [ "-" ] n} = {
enable = true;
wheelNeedsPassword = true; # We are checking `pam_ssh_agent_auth(8)` works for a sudoer
};
# Necessary for pam_ssh_agent_auth >_>'
services.openssh.enable = true;
};
in lib.genAttrs [ "sudo" "sudo_rs" ] nodeConfig;
testScript = let
privateKeyPath = "/home/admin/.ssh/id_ecdsa";
userScript = pkgs.writeShellScript "test-script" ''
set -e
ssh-add -q ${privateKeyPath}
# faketty needed to ensure `sudo` doesn't write to the controlling PTY,
# which would break the test-driver's line-oriented protocol.
${lib.getExe pkgs.faketty} sudo -u foo -- id -un
'';
in ''
for vm in (sudo, sudo_rs):
sudo_impl = vm.name.replace("_", "-")
with subtest(f"wheel user can auth with ssh-agent for {sudo_impl}"):
vm.copy_from_host("${snakeOilPrivateKey}", "${privateKeyPath}")
vm.succeed("chmod -R 0700 /home/admin")
vm.succeed("chown -R admin:users /home/admin")
# Run `userScript` in an environment with an SSH-agent available
assert vm.succeed("sudo -u admin -- ssh-agent ${userScript} 2>&1").strip() == "foo"
'';
}
)
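Outside the test harness, the behaviour this new test checks — `sudo` accepting a key from a forwarded SSH agent instead of a password for wheel users — comes down to a few options. A minimal sketch, reusing the option names that appear in the test above:

```nix
{ ... }:

{
  # Let pam_ssh_agent_auth satisfy sudo's authentication for users whose
  # agent holds a key listed in their authorized_keys.
  security.pam.enableSSHAgentAuth = true;

  security.sudo = {
    enable = true;
    # Keep the password requirement for wheel; the agent is what answers it.
    wheelNeedsPassword = true;
  };

  # The test notes that sshd is necessary for pam_ssh_agent_auth, so it is
  # enabled here as well.
  services.openssh.enable = true;
}
```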

View File

@ -134,7 +134,7 @@ import ./make-test-python.nix ({ pkgs, lib, ... }: {
machine.wait_for_file("/tmp/sway-ipc.sock")
# Test XWayland (foot does not support X):
swaymsg("exec WINIT_UNIX_BACKEND=x11 WAYLAND_DISPLAY=invalid alacritty")
swaymsg("exec WINIT_UNIX_BACKEND=x11 WAYLAND_DISPLAY= alacritty")
wait_for_window("alice@machine")
machine.send_chars("test-x11\n")
machine.wait_for_file("/tmp/test-x11-exit-ok")

View File

@ -210,6 +210,7 @@ in {
enableSystemdStage1 = true;
};
installerBoot = (import ./installer.nix { }).separateBootZfs;
installer = (import ./installer.nix { }).zfsroot;
expand-partitions = makeTest {

View File

@ -19,7 +19,7 @@
stdenv.mkDerivation rec {
pname = "contrast";
version = "0.0.8";
version = "0.0.10";
src = fetchFromGitLab {
domain = "gitlab.gnome.org";
@ -27,13 +27,13 @@ stdenv.mkDerivation rec {
owner = "design";
repo = "contrast";
rev = version;
hash = "sha256-5OFmLsP+Xk3sKJcUG/s8KwedvfS8ri+JoinliyJSmrY=";
hash = "sha256-Y0CynBvnCOBesONpxUicR7PgMJgmM0ZQX/uOwIppj7w=";
};
cargoDeps = rustPlatform.fetchCargoTarball {
inherit src;
name = "${pname}-${version}";
hash = "sha256-8WukhoKMyApkwqPQ6KeWMsL40sMUcD4I4l7UqXf2Ld0=";
hash = "sha256-BdwY2YDJyDApGgE0Whz3xRU/0gRbkwbKUvPbWEObXE8=";
};
nativeBuildInputs = [
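Rust package bumps like the contrast update above need two hashes refreshed: the source hash and the vendored cargo dependencies. One common workflow — a sketch, not part of this diff — is to set both to `lib.fakeHash` and copy the correct values from the resulting hash-mismatch errors, one rebuild at a time:

```nix
{ lib, stdenv, fetchFromGitLab, rustPlatform }:

stdenv.mkDerivation rec {
  pname = "contrast";
  version = "0.0.10";

  src = fetchFromGitLab {
    domain = "gitlab.gnome.org";
    owner = "design";
    repo = "contrast";
    rev = version;
    hash = lib.fakeHash; # build once, then paste the "got:" hash Nix reports
  };

  cargoDeps = rustPlatform.fetchCargoTarball {
    inherit src;
    name = "${pname}-${version}";
    hash = lib.fakeHash; # refresh after src, since it vendors the new source's deps
  };

  # Build inputs and phases omitted; this sketch only illustrates the
  # two-hash refresh, not the full package.
}
```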

View File

@ -1,4 +1,4 @@
{ lib, stdenv, fetchurl, libcdio-paranoia, cddiscid, wget, which, vorbis-tools, id3v2, eyeD3
{ lib, stdenv, fetchurl, libcdio-paranoia, cddiscid, wget, which, vorbis-tools, id3v2, eyed3
, lame, flac, glyr
, perlPackages
, makeWrapper }:
@ -40,7 +40,7 @@ in
--prefix PERL5LIB : "$PERL5LIB" \
--prefix PATH ":" ${lib.makeBinPath [
"$out" which libcdio-paranoia cddiscid wget
vorbis-tools id3v2 eyeD3 lame flac glyr
vorbis-tools id3v2 eyed3 lame flac glyr
]}
done
'';

View File

@ -64,14 +64,14 @@
}:
stdenv.mkDerivation rec {
pname = "ardour";
version = "8.1";
version = "8.2";
# We can't use `fetchFromGitea` here, as attempting to fetch release archives from git.ardour.org
# result in an empty archive. See https://tracker.ardour.org/view.php?id=7328 for more info.
src = fetchgit {
url = "git://git.ardour.org/ardour/ardour.git";
rev = version;
hash = "sha256-T1o1E5+974dNUwEFW/Pw0RzbGifva2FdJPrCusWMk0E=";
hash = "sha256-Ito1gy7k7nzTN7Co/ddXYbAvobiZO0V0J5uymsm756k=";
};
bundledContent = fetchzip {
@ -169,7 +169,12 @@ stdenv.mkDerivation rec {
"--ptformat"
"--run-tests"
"--test"
"--use-external-libs"
# since we don't have https://github.com/agfline/LibAAF yet,
# we need to use some of ardours internal libs, see:
# https://discourse.ardour.org/t/ardour-8-2-released/109615/6
# and
# https://discourse.ardour.org/t/ardour-8-2-released/109615/8
# "--use-external-libs"
] ++ lib.optional optimize "--optimize";
postInstall = ''

View File

@ -104,7 +104,17 @@ stdenv.mkDerivation rec {
patches = [
./option-debugging.patch
# ffmpeg 6 fix https://github.com/cmus/cmus/pull/1254/
(fetchpatch { url = "https://github.com/cmus/cmus/commit/07b368ff1500e1d2957cad61ced982fa10243fbc.patch"; hash = "sha256-5gsz3q8R9FPobHoLj8BQPsa9s4ULEA9w2VQR+gmpmgA="; })
(fetchpatch {
name = "ffmpeg-6-compat.patch";
url = "https://github.com/cmus/cmus/commit/07b368ff1500e1d2957cad61ced982fa10243fbc.patch";
hash = "sha256-5gsz3q8R9FPobHoLj8BQPsa9s4ULEA9w2VQR+gmpmgA=";
})
# function detection breaks with clang 16
(fetchpatch {
name = "clang-16-function-detection.patch";
url = "https://github.com/cmus/cmus/commit/4123b54bad3d8874205aad7f1885191c8e93343c.patch";
hash = "sha256-YKqroibgMZFxWQnbmLIHSHR5sMJduyEv6swnKZQ33Fg=";
})
];
nativeBuildInputs = [ pkg-config ];
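The cmus change above also illustrates a small convention: giving each `fetchpatch` an explicit `name` so the patch's store path is descriptive rather than derived from the URL. A generic sketch of that pattern, with placeholder package name, URLs, and hashes:

```nix
{ lib, stdenv, fetchurl, fetchpatch }:

stdenv.mkDerivation rec {
  pname = "example";
  version = "1.0";

  src = fetchurl {
    url = "https://example.org/${pname}-${version}.tar.gz";
    hash = lib.fakeHash; # placeholder
  };

  patches = [
    # An explicit name keeps the patch's store path readable and independent
    # of the URL's final path component.
    (fetchpatch {
      name = "fix-build-with-newer-toolchain.patch";
      url = "https://example.org/commit/0123abc.patch";
      hash = lib.fakeHash; # placeholder
    })
  ];
}
```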

View File

@ -57,7 +57,7 @@ python3Packages.buildPythonApplication rec {
propagatedBuildInputs = with python3Packages; [
pygobject3
eyeD3
eyed3
pillow
mutagen
pytaglib

View File

@ -15,14 +15,14 @@
}:
stdenv.mkDerivation (finalAttrs: {
pname = "g4music";
version = "3.3";
version = "3.4-1";
src = fetchFromGitLab {
domain = "gitlab.gnome.org";
owner = "neithern";
repo = "g4music";
rev = "v${finalAttrs.version}";
hash = "sha256-sajA8+G1frQA0p+8RK84hvh2P36JaarmSZx/sxMoFqo=";
hash = "sha256-uklgxhyrnFQSUcttXvYQtm2BybRkdTK1IfaRpOp0sOE=";
};
nativeBuildInputs = [

View File

@ -60,7 +60,7 @@ python3Packages.buildPythonApplication rec {
mygpoclient
requests
pygobject3
eyeD3
eyed3
podcastparser
html5lib
mutagen

View File

@ -5,14 +5,14 @@
stdenv.mkDerivation rec {
pname = "helio-workstation";
version = "3.11";
version = "3.12";
src = fetchFromGitHub {
owner = "helio-fm";
repo = pname;
rev = version;
fetchSubmodules = true;
sha256 = "sha256-ec4ueg6TNo3AaZ81j01OQZzhgOfSzG1/Vd0QhEXOUl0=";
sha256 = "sha256-U5F78RlM6+R+Ms00Z3aTh3npkbgL+FhhFtc9OpGvbdY=";
};
buildInputs = [

View File

@ -5,18 +5,18 @@
python3.pkgs.buildPythonPackage rec {
pname = "ledfx";
version = "2.0.80";
version = "2.0.86";
pyproject= true;
src = fetchPypi {
inherit pname version;
hash = "sha256-vwLk3EpXqUSAwzY2oX0ZpXrmH2cT0GdYdL/Mifav6mU=";
hash = "sha256-miOGMsrvK3A3SYnd+i/lqB+9GOHtO4F3RW8NkxDgFqU=";
};
postPatch = ''
substituteInPlace setup.py \
--replace "'rpi-ws281x>=4.3.0; platform_system == \"Linux\"'," "" \
--replace "sentry-sdk==1.14.0" "sentry-sdk" \
--replace "sentry-sdk==1.38.0" "sentry-sdk" \
--replace "~=" ">="
'';
@ -32,12 +32,14 @@ python3.pkgs.buildPythonPackage rec {
cython
flux-led
icmplib
mss
multidict
numpy
openrgb-python
paho-mqtt
pillow
psutil
pybase64
pyserial
pystray
python-mbedtls

View File

@ -0,0 +1,41 @@
{ lib
, qt5
, stdenv
, git
, fetchFromGitHub
, cmake
, alsa-lib
, qttools
}:
stdenv.mkDerivation rec {
pname = "lpd8editor";
version = "0.0.16";
src = fetchFromGitHub {
owner = "charlesfleche";
repo = "lpd8editor";
rev = "v${version}";
hash = "sha256-lRp2RhNiIf1VrryfKqYFSbKG3pktw3M7B49fXVoj+C8=";
};
buildInputs = [
qttools
alsa-lib
];
nativeBuildInputs = [
cmake
git
qt5.wrapQtAppsHook
];
meta = with lib; {
description = "A linux editor for the Akai LPD8";
homepage = "https://github.com/charlesfleche/lpd8editor";
license = licenses.mit;
maintainers = with maintainers; [ pinpox ];
mainProgram = "lpd8editor";
platforms = platforms.all;
};
}
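Since the new `lpd8editor` expression above is a self-contained function, it can be built on its own before being wired into the package set. A quick local harness, assuming the expression was saved as `./lpd8editor.nix`:

```nix
# test.nix — hypothetical local harness:  nix-build test.nix
let
  pkgs = import <nixpkgs> { };
in
# Plain callPackage is shown as a sketch; depending on how the channel
# exposes Qt 5 tooling, libsForQt5.callPackage may be the better fit.
pkgs.callPackage ./lpd8editor.nix { }
```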

Some files were not shown because too many files have changed in this diff.