Xen is a trademark of the Cloud Software Group; we're not packaging
Xen(Server), but rather the Xen Project Hypervisor, which is open
source and owned by the Linux Foundation.
This is based on advice from Kelly Choi, the Xen Project Community
Manager, who has assisted us with the branding aspects of packaging.
Signed-off-by: Fernando Rodrigues <alpha@sigmasquadron.net>
We currently package all CUDA versions from 10.0 onwards. In
some cases, CUDA is the only thing preventing us from removing old
versions of GCC. Since we currently don’t deprecate or remove CUDA
versions, this will be an increasing drag on compiler maintenance in
Nixpkgs going forward unless we establish a sensible policy. After
discussing this with @SomeoneSerge in the context of old versions
of GCC, I learned that there was already a desire to remove at least
versions prior to 11.3, as those versions were only packaged in the
old “runfile” format, but that it was blocked on someone doing
the work to warn about the upcoming deprecation for a release cycle.
This change adds a release note and warnings indicating that CUDA 10.x
and 11.x will be removed in Nixpkgs 25.05, about 8 months from now.
I chose this version cut‐off because these versions of CUDA require
GCC < 12. GCC releases a major version every year, and seems to
support about four releases at a time, releasing the last update to
the oldest version and marking it as unsupported on their site around
the time of the release of the next major version. Therefore, by the
time of the 25.05 release, we should expect GCC 15 to be released
and GCC 11 to become unsupported. Adding a warning and communicating
the policy of only shipping CUDA versions that work with supported
compilers in the release notes means that we should be able to
clean up old versions as required in future, without any issue or
extensive deprecation period, while not obligating us to do so if there
is a strongly compelling reason to be more lenient. That should help
address both the indefinitely‐growing list of CUDA versions and the
indefinitely‐growing list of GCC and LLVM versions that we ship.
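As a rough sketch of what such a warning can look like (the exact
wording and plumbing in this change may differ; the overlay below is
only illustrative), an evaluation‐time warning can be attached to the
old package sets with `lib.warn`:

```nix
# Illustrative sketch only: emit an evaluation-time deprecation warning
# whenever the old package set is used, without breaking it before 25.05.
final: prev: {
  cudaPackages_11 = prev.lib.warn
    "cudaPackages_11 will be removed in Nixpkgs 25.05; it requires GCC < 12, which is unsupported upstream."
    prev.cudaPackages_11;
}
```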
As I’m not a user of CUDA myself, I can’t be sure of how sensible
this version support policy is, but I think it’s fair to say that
it’s reasonable for Nixpkgs to choose not to maintain compiler
versions that are unsupported upstream just for the sake of versions
of CUDA that are also unmaintained. CUDA 11.x has not received an
update for two years already, and would only become unsupported in
Nixpkgs in over half a year’s time.
CUDA 10.x is currently unused in‐tree except for the unmaintained
Caffe and NVIDIA DCGM, which depends on multiple CUDA versions solely
so that it can provide plugins for those versions. The latest DCGM
version has already removed support for CUDA 10.x and is just awaiting
an update in Nixpkgs. They maintain a list of supported versions to
build plugins for in their CMake build system, so it should be simple
enough for us to only build support for the versions of CUDA that we
support in Nixpkgs.
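Purely as an illustration of that idea (both the package attribute and
the CMake variable name below are assumptions, not taken from Nixpkgs
or DCGM's actual build system), the restriction could be expressed as a
simple override:

```nix
# Hypothetical sketch: build DCGM plugins only for the CUDA versions that
# Nixpkgs still supports. "SUPPORTED_CUDA_VERSIONS" is an assumed name for
# whatever list DCGM's CMakeLists.txt actually maintains.
final: prev: {
  dcgm = prev.dcgm.overrideAttrs (old: {
    cmakeFlags = (old.cmakeFlags or [ ]) ++ [
      "-DSUPPORTED_CUDA_VERSIONS=12"
    ];
  });
}
```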
From what I can tell, CUDA 11.x is currently used by the following
packages other than DCGM:
* `catboost`, because of
<https://github.com/catboost/catboost/issues/2540>. It looks like
upstream has since redesigned this part of their build system, so
perhaps the problem is no longer present, or would be easier to fix.
* `magma_2_6_2`, an old version from before upstream added CUDA
12 support. This seems okay to break to me; that version is not
maintained and will never be updated for new CUDA versions, and
the CUDA support is optional.
* `paddlepaddle`, which, uh, also requires OpenSSL 1.1 of all
things. <https://github.com/PaddlePaddle/Paddle/issues/67571>
states that PaddlePaddle supports up to 12.3.
* `python3Packages.cupy`, which is listed as “possibly incompatible
with cutensor 2.0 that comes with `cudaPackages_12`”. I’m
not sure what the “possibly” means here, but according to
<https://github.com/cupy/cupy/tree/v13.3.0?tab=readme-ov-file#installation>
they ship binary wheels using CUDA 12.x so I think this should
be fine.
* `python3Packages.tensorrt`, which supports CUDA 12.x going by
<https://github.com/NVIDIA/TensorRT/blob/release/10.4/CMakeLists.txt#L111>.
* TensorFlow, which has a link to
<https://www.tensorflow.org/install/source#gpu> above the
`python3Packages.tensorflow-bin` definition, but that page lists
the versions we package as supporting CUDA 12.x.
Given the years since CUDA 11.x received any update upstream, and the
seemingly very limited set of packages that truly require it, I think
the policy of being able to drop versions that require unsupported
compilers starting from the next Nixpkgs release is a reasonable
one, but of course I’m open to feedback from the CUDA maintainers
about this.
The package has been updated to 0.4, which will result in an auto-migration of the config. This updates our config to match the new expected format. Assertions have been added to warn users that they need to migrate their configuration.
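A minimal sketch of what such a migration assertion can look like in a
NixOS module (the service and option names below are placeholders, not
the module's real ones):

```nix
{ config, lib, ... }:
let
  cfg = config.services.example;  # placeholder service name
in
{
  config = lib.mkIf cfg.enable {
    assertions = [
      {
        # Abort evaluation with a pointer to the new 0.4 format if a
        # pre-0.4 option is still set (option name is illustrative).
        assertion = cfg.legacySettings == null;
        message = ''
          services.example.legacySettings is no longer supported after the
          0.4 update; please migrate your configuration to the new format.
        '';
      }
    ];
  };
}
```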
The GUI of GlobalProtect-openconnect is unfree software, while the CLI is
licensed as GPLv3-only. This packaging work focuses on the CLI, and
components required for the CLI.
Link: https://github.com/yuezk/GlobalProtect-openconnect
Signed-off-by: Rahul Rameshbabu <sergeantsagara@protonmail.com>
The 1.x iteration of globalprotect-openconnect is no longer being
developed. Remove related components from nixpkgs.
Signed-off-by: Rahul Rameshbabu <sergeantsagara@protonmail.com>
- Cleans up downstream systemd units in favour of using upstream units.
- Xen 4.18 on Nixpkgs now supports EFI booting, so we have an EFI boot
builder here that runs after systemd-boot-builder.py.
- Adds more options for setting up dom0 resource limits.
- Adds options for the declarative configuration of oxenstored (see the
  sketch after this list).
- Disables the automatic bridge configuration, as it was broken.
- Drops legacy BIOS boot.
- Adds an EFI boot entry builder script.
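A rough usage sketch of the new surface (the option paths below are
assumptions meant to show the shape of the module, not verbatim option
names):

```nix
# Illustrative only: exact option names and defaults are defined by the
# Xen module itself; treat these paths as assumptions.
{
  virtualisation.xen = {
    enable = true;
    # dom0 resource limits (sketch).
    dom0Resources = {
      memory = 4096;   # MiB reserved for dom0
      maxVCPUs = 4;
    };
    # Declarative oxenstored configuration (sketch).
    store.settings = {
      quota.max-entity = 1000;
    };
  };
}
```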
Signed-off-by: Fernando Rodrigues <alpha@sigmasquadron.net>
Co-authored-by: Yaroslav Bolyukin <iam@lach.pw>
Using zfs.latestCompatibleLinuxPackages can result in downgrades to the kernel on a system, potentially causing breakage.
This breakage may not be apparent during build and switch, but only after attempting to reboot into the updated generation.
By forcing users to explicitly manage their kernel version, we can ensure that the breakage will be apparent at build time instead.
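For example, instead of the removed option, a configuration can pin the
kernel explicitly and bump it deliberately (the specific kernel
attribute here is only an example; choose one that the ZFS package in
your channel supports):

```nix
{ pkgs, ... }:
{
  # Pin the kernel explicitly; upgrading it becomes a conscious decision,
  # and an incompatible ZFS/kernel combination fails at build time rather
  # than silently downgrading the kernel.
  boot.kernelPackages = pkgs.linuxPackages_6_6;
  boot.supportedFilesystems = [ "zfs" ];
}
```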
Also recommends the usage of sudo's -E flag if --use-remote-sudo cannot
be used. This should still be discouraged IMO, as it means Nix may write
root-owned files to the user's home directory.
Signed-off-by: Fernando Rodrigues <alpha@sigmasquadron.net>
After a discussion on Matrix, it has become clear that building as root
is discouraged, and the (inappropriately named) --use-remote-sudo flag
should be encouraged as the de facto way to selectively escalate to root
after a system build has finished.
Signed-off-by: Fernando Rodrigues <alpha@sigmasquadron.net>
Since `connectionStringFile` reads the file and puts its contents into
the invocation of the exporter, the connection string ends up on the
command line and is thus effectively world-readable.
Added a new `connectionEnvFile` option, which points to an environment
file of the form
PGBOUNCER_EXPORTER_CONNECTION_STRING=...
that will be added to the systemd service. The exporter then reads the
connection string from that environment variable.
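Usage then looks roughly like this (the secret path is just a
placeholder):

```nix
{
  services.prometheus.exporters.pgbouncer = {
    enable = true;
    # An environment file containing
    #   PGBOUNCER_EXPORTER_CONNECTION_STRING=...
    # kept out of the Nix store and off the command line.
    connectionEnvFile = "/run/secrets/pgbouncer-exporter.env";
  };
}
```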
These take up 2 GiB every time anything in the minimal installer
changes, or up to 4 GiB per day. We already stopped building Amazon
images in 9426d90c67. Meaningful
installer changes are rare enough, and the couple of days it takes
for them to trickle down to the large channel is acceptable enough,
that this is mostly a waste of space.
This should buy enough slack to build `stdenv` on `staging` without
contributing to cache size growth.
The Rust `switch-to-configuration-ng` rewrite was carefully written
to be compatible with the original Perl script, has been checked
against NixOS VM tests, and has been available on an opt‐in basis
for testing for the 24.05 release cycle.
The next step towards replacing the Perl script entirely is to
switch it on by default so that we can get real‐world testing from
a much greater number of users. Maintaining two implementations in
parallel is becoming a burden; we are having to adjust the systemd
service activation behaviour slightly to fix a long‐standing bug,
and backporting the changes to the Perl script is an unpleasant
process. We will do it anyway to ensure that the Rust and Perl
implementations keep parity with each other throughout the 24.11
release cycle, but we think the time has come to flip the switch.
Taking this step now will give us two to three months to test this in
the wild before the 24.11 release and gain confidence that there are
no regressions. If any non‐trivial problems arise before the final
release, we will revert to the Perl implementation by default. Doing
this switch ASAP will help to disentangle any problems that might
arise from the Rust implementation from problems that arise from the
systemd service activation changes, or the upcoming switch to using
systemd in stage 1 by default.
The main concern that was raised about replacing the Perl script in the
PR that added `switch-to-configuration-ng` was that it is currently
possible to run NixOS on systems that cannot natively host a Rust
compiler. This does not apply to any platforms that have official
support from NixOS, and as far as I know we do not know of any such
systems with users that are not cross‐compiling anyway.
My understanding is that these systems are already broken by default
anyway, as `systemd.shutdownRamfs.enable` is on by default and uses
`make-initrd-ng`, which is also written in Rust. Switching the default
while keeping the Perl implementation around will give us at least
an entire release cycle to find out if there are any users that will
be affected by this and decide what to do about it if so.
There is currently one known inconsistency between
the Perl and Rust implementations, as documented in
<https://github.com/NixOS/nixpkgs/issues/312297>; the Rust
implementation has more accurate handling of failed systemd units.
We slightly adjust the semantics of `system.switch.enable{,Ng}` to
not conflict with each other, so that `system.switch.enableNg` is
on by default, but turning off `system.switch.enable` still results
in no `switch-to-configuration` implementation being used. This
won’t break the configuration of anyone who already opted in to
`system.switch.enableNg` and is probably how the option should have
worked to begin with.
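Based on those semantics, a user who hits problems can opt back into
the Perl implementation, or disable switching entirely, with a couple
of lines of configuration:

```nix
{
  # Fall back to the Perl switch-to-configuration implementation.
  system.switch.enableNg = false;
  system.switch.enable = true;   # default; shown here for clarity

  # Or build a system with no switch-to-configuration script at all:
  # system.switch.enable = false;
}
```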
ReiserFS has not been actively maintained for many years. It has been
marked as obsolete since Linux 6.6, and is scheduled for removal
in 2025. A warning is logged informing users of this every time a
ReiserFS file system is mounted. It suffers from unfixable issues
like the year 2038 problem.
JFS is a slightly more ambiguous case. It also has not been actively
maintained for years; even in 2008 questions were being raised
about its maintenance state and IBM’s commitment to it, and some
enterprise distributions were opting not to ship support for it as
a result. It will [indefinitely postpone journal writes], leading
to data loss over potentially arbitrary amounts of time. Kernel
developers [considered marking it as deprecated] last year, but
no concrete decision was made. There have been [occasional fixes]
to the code since then, but even the author of most of those fixes was
not opposed to deprecating it.
[considered marking it as deprecated]: https://lore.kernel.org/lkml/Y8DvK281ii6yPRcW@infradead.org/
[indefinitely postpone journal writes]: https://www.usenix.org/legacy/events/usenix05/tech/general/full_papers/prabhakaran/prabhakaran.pdf
[occasional fixes]: https://www.phoronix.com/news/JFS-Linux-6.7-Improvements
Regardless of whether JFS should be removed from the kernel, with all
the implications for existing installations that entails, I think
it’s safe to say that no new Linux installation should be using
either of these file systems, and that it’s a waste of space and
potential footgun to be shipping support for them on our standard
installation media. We’re lagging behind other distributions on
this decision; neither is supported by Fedora’s installation media.
(It also just so happens that `jfsutils` is the one remaining package
in the minimal installer ISO that has reproducibility issues, due to
some cursed toolchain bug, but I’m not trying to Goodhart’s law
this or anything. I just think we shouldn’t be shipping it anyway.)
Exposes as module options all currently available command-line
arguments that were previously missing, including some that were
impossible to use with the catch-all option `extraArgs` alone because
they required changes to other parts of the system.
Those are now all self-contained in the module.
The service now uses systemd's `DynamicUser=`.
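As a generic sketch of that pattern (the service, package, and flag
names below are placeholders, since the module itself is not quoted
here), dedicated options render to command-line flags and the unit runs
under `DynamicUser=`:

```nix
{ config, lib, ... }:
let
  cfg = config.services.example;  # placeholder name
in
{
  config = lib.mkIf cfg.enable {
    systemd.services.example = {
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        # systemd allocates a transient user/group for the service at
        # start-up, so no static user needs to be created.
        DynamicUser = true;
        ExecStart = lib.escapeShellArgs ([
          "${cfg.package}/bin/example"
          "--listen-address"
          cfg.listenAddress
        ] ++ cfg.extraArgs);
      };
    };
  };
}
```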