nix-prefetch-git is either run as part of a build, usually sandboxed,
or outside a build, unsandboxed, to prefetch something that will later
be used in a build. It's important that the latter use produces
hashes that can be reproduced by the former.
One way they can differ is if the user's git config does something that
changes the result of `git clone`. I ran into this because my global git
config automatically enables git-lfs, whereas nix-prefetch-git otherwise
only uses git-lfs if it is specifically requested. This led to very
confusing hash mismatches.
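For illustration only, not part of this change: a minimal Nix sketch of making the LFS requirement explicit in the fetcher, assuming the `fetchLFS` argument of `fetchgit` and the `--fetch-lfs` flag of nix-prefetch-git; URL, revision, and hash are placeholders.

```nix
{ fetchgit, lib }:

# Requesting git-lfs explicitly keeps the sandboxed fetch and an
# unsandboxed `nix-prefetch-git --fetch-lfs` run in agreement,
# independent of whatever the user's ~/.gitconfig enables.
fetchgit {
  url = "https://example.org/some/repo.git";                # placeholder
  rev = "0000000000000000000000000000000000000000";         # placeholder
  fetchLFS = true;
  hash = lib.fakeHash;  # replace with the hash reported by nix-prefetch-git
}
```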
The default only recently changed in 23.11. Users who had swraid enabled
implicitly by NixOS in previous releases were surprised by warnings even
though they do not actually use software RAID.
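As a sketch, assuming the option introduced in 23.11 is `boot.swraid.enable`, systems that actually use software RAID now opt in explicitly:

```nix
{ ... }:
{
  # Explicitly enable mdraid support instead of relying on the old
  # implicit default; systems without software RAID leave this off.
  boot.swraid.enable = true;
}
```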
Fixes #254807
PR #155414 introduced an option to support enabling the FCC unlock scripts
that ModemManager provides but, since version 1.18.4, no longer executes by
default.
However, this option only covers the unlock scripts shipped with ModemManager
so far. Rename the option to make this more obvious.
Clarify that the monochrome font is not included, per #221181.
The new name is also consistent with the name of the font according to
`fontconfig`: Noto Color Emoji.
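A minimal usage sketch, assuming the renamed attribute is `noto-fonts-color-emoji` and the host uses the `fonts.packages` option:

```nix
{ pkgs, ... }:
{
  # Install the color emoji font under its new, fontconfig-matching name.
  fonts.packages = [ pkgs.noto-fonts-color-emoji ];
}
```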
For NVLink topology systems we need fabricmanager. Fabricmanager itself
depends on the datacenter driver set, not the regular x11 one, and it is
tightly tied to the driver version. Furthermore, the current cudaPackages
defaults to version 11.8, which corresponds to the 520 datacenter drivers.
A future improvement would be to switch the main nvidia datacenter driver
version on `config.cudaVersion`, since these are well known from:
> https://docs.nvidia.com/deploy/cuda-compatibility/index.html#use-the-right-compat-package
This adds the NixOS configuration options `hardware.nvidia.datacenter.enable` and
`hardware.nvidia.datacenter.settings` (the settings configure fabricmanager).
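A configuration sketch for an NVLink system; the `settings` attribute set is rendered into the fabricmanager configuration, and the key shown is only an illustrative assumption:

```nix
{ ... }:
{
  hardware.nvidia.datacenter.enable = true;
  hardware.nvidia.datacenter.settings = {
    # Hypothetical fabricmanager option, shown only to illustrate the
    # shape of the settings attribute set.
    LOG_LEVEL = 4;
  };
}
```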
Other interesting external links related to this commit are:
* Fabricmanager download site:
- https://developer.download.nvidia.com/compute/cuda/redist/fabricmanager/linux-x86_64/
* Data Center drivers:
- https://www.nvidia.com/Download/driverResults.aspx/193711/en-us/
Implementation-specific details:
* Fabricmanager is added as a passthru package, similar to settings and
persistenced.
* Adds `use{Settings,Persistenced,Fabricmanager}` with defaults to preserve x11
expressions.
* Utilizes mkMerge to split the `hardware.nvidia` module into three
comment-delimited sections:
1. Common
2. X11/xorg
3. Data Center
* Uses asserts to make the configurations mutually exclusive.
Notes:
* Data Center Drivers are `x86_64` only.
* Reuses the `nvidia_x11` attribute in nixpkgs on enable, i.e. it is not
renamed to `nvidia_driver` and set to either `nvidia_x11` or `nvidia_dc`.
* A follow-up should add a helper function switched on `config.cudaVersion`,
analogous to `selectHighestVersion` but more like `selectCudaCompatibleVersion`.
My system does not use `bcache` and I would prefer my `systemPackages`
not to include the bcache tools.
The change does not alter the default but provides the usual `enable` knob.
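A sketch of the resulting knob, assuming the option ends up as `boot.bcache.enable`:

```nix
{ ... }:
{
  # Drop the bcache tools and udev rules on systems without bcache devices.
  boot.bcache.enable = false;
}
```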
Add new command `nixos-rebuild list-generations`. It will show an output
like
```
$ nixos-rebuild list-generations
Generation    Build-date               NixOS version           Kernel  Configuration Revision                    Specialisations
52 (current)  Fri 2023-08-18 08:17:27  23.11.20230817.0f46300  6.4.10  448160aeccf6a7184bd8a84290d527819f1c552c  *
51            Mon 2023-08-07 17:56:41  23.11.20230807.31b1eed  6.4.8   99ef480007ca51e3d440aa4fa6558178d63f9c42  *
```
This also mentions the change in the upcoming release notes.
Fixes #232505
Implements the new option `security.acme.maxConcurrentRenewals` to limit
the number of certificate generation (or renewal) jobs that can run in
parallel. This avoids overloading the system resources with many
certificates or running into acme registry rate limits and network
timeouts.
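A minimal usage sketch; the value is arbitrary and only illustrates the knob:

```nix
{ ... }:
{
  # Allow at most three certificate generation/renewal jobs to run at once.
  security.acme.maxConcurrentRenewals = 3;
}
```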
Architecture considerations:
- simplicity, lightweight: Concerns have been voiced about making this
already rather complex module even more convoluted. Additionally, the
locking solution should not noticeably degrade the performance or increase
the footprint of individual job runs.
To accommodate these concerns, this solution is implemented purely in
Nix, bash, and using the light-weight `flock` util. To reduce
complexity, jobs are already assigned their lockfile slot at system
build time instead of dynamic locking and retrying. This comes at the
cost of not always maxing out the permitted concurrency at runtime.
- no stale locks: Limiting concurrency via a locking mechanism is usually
approached with semaphores. Unfortunately, both SysV and POSIX semaphores
are *not* released when the process currently holding them is SIGKILLed.
This poses the danger of stale locks staying around and certificate renewal
being blocked from running altogether.
`flock` locks, though, are released when the process holding the file
descriptor of the lock file is killed or terminated.
- lockfile generation: Lock files could either be created at build time
in the Nix store or at script runtime in an idempotent manner.
While the latter would be simpler to achieve, we might exceed the number
of permitted concurrent runs during a system switch: Already running
jobs are still locked on the existing lock files, while jobs started
after the system switch will acquire locks on freshly created files,
not being blocked by the still running services.
For this reason, locks are generated and managed at runtime in the
shared state directory `/var/lib/locks/`.
nixos/security/acme: move locks to /run
also, move over permission and directory management to systemd-tmpfiles
nixos/security/acme: fix some linter remarks in my code
there are some remarks left for existing code, not touching that
nixos/security/acme: redesign script locking flow
- get rid of subshell
- provide function for wrapping scripts in a locked environment
nixos/acme: improve visibility of blocking on locks
nixos/acme: add smoke test for concurrency limitation
heavily inspired by m1cr0man
nixos/acme: release notes entry on new concurrency limits
nixos/acme: cleanup, clarifications
This avoids the possible confusion of `passwordFile` sounding like the file
version of `password`, when it actually has to contain the password hash.
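A sketch of the renamed option, assuming the new name is `hashedPasswordFile`; the user and path are placeholders, and the referenced file must contain a password hash (e.g. the output of `mkpasswd`), not the plain-text password:

```nix
{ ... }:
{
  users.users.alice = {
    isNormalUser = true;
    # Placeholder path; the file holds a password *hash*, not the password.
    hashedPasswordFile = "/etc/secrets/alice.password-hash";
  };
}
```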
Fixes issue #165858.
This patch packages mu4e as an Emacs lisp package based on the mu4e
output of the multiple-output package mu, which makes mu4e a good
citizen of Emacs lisp packages in two aspects.
First, mu4e now utilizes the Emacs lisp package infrastructure in
Nixpkgs. This allows users who want to do AOT native compilation for
non-default Emacs variants[0] to build only mu4e itself instead of the
whole mu package[1].
Second, mu4e now conforms to the Emacs builtin package manager[2].
Without this patch, mu4e autoloaded commands do not work
out-of-the-box[3] because its directory is added to load-path by
site-start.el after the initialization of package-directory-list,
which causes package-activate-all to not load mu4e-autoloads.el. This
patch fixes this issue when mu4e is installed to Emacs using the
withPackages wrapper[4].
[0]: such as emacs-pgtk
[1]: mu.override { emacs = emacs-pgtk; }
[2]: package.el
[3]: either (require 'mu4e) or (require 'mu4e-autoloads) is needed to
be called before an autoloaded command is called
[4]: emacs-pgtk.pkgs.withPackages (epkgs: [ epkgs.mu4e ])
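Spelled out, footnote [4] corresponds to something like the following sketch (the NixOS `environment.systemPackages` wrapping is an assumption for illustration):

```nix
{ pkgs, ... }:
{
  # mu4e comes from the Emacs package set, so it is byte-compiled,
  # native-compiled, and activated by package.el like any other package.
  environment.systemPackages = [
    (pkgs.emacs-pgtk.pkgs.withPackages (epkgs: [ epkgs.mu4e ]))
  ];
}
```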
The free version of Aseprite has a maintained fork, LibreSprite, which is
already packaged in nixpkgs. The only version of Aseprite that offers
anything over LibreSprite is the unfree one, and the free version will
never receive updates.
password-store.el is on MELPA so it is available in Nixpkgs as
emacs.pkgs.password-store.
Using emacs.pkgs.password-store is preferred because of better package
quality:
- Emacs lisp package dependencies are automatically installed
- byte-compilation is done
- native-compilation is done
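For comparison, a sketch of pulling it from the Emacs package set (the `environment.systemPackages` wrapping and use of the default Emacs are assumptions for illustration):

```nix
{ pkgs, ... }:
{
  # password-store.el from MELPA, with its Emacs lisp dependencies and
  # byte-/native-compilation handled by the Emacs package infrastructure.
  environment.systemPackages = [
    (pkgs.emacs.pkgs.withPackages (epkgs: [ epkgs.password-store ]))
  ];
}
```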
We should sometimes restart the units rather than reloading them so the
changes are actually applied. / and /nix are explicitly excluded because
there was some very old issue where these were unmounted. I don't think
this will affect many people, since most people use fstab mounts instead,
but I plan to adapt this behavior for fstab mounts as well in the future
(once I have written a test for the fstab handling).