We currently package all CUDA versions from 10.0 onwards. In
some cases, CUDA is the only thing preventing us from removing old
versions of GCC. Since we currently don’t deprecate or remove CUDA
versions, this will be an increasing drag on compiler maintenance in
Nixpkgs going forward unless we establish a sensible policy. After
discussing this with @SomeoneSerge in the context of old versions
of GCC, I learned that there was already a desire to remove at least
versions prior to 11.3, as those versions were only packaged in the
old “runfile” format, but that it was blocked on someone doing
the work to warn about the upcoming deprecation for a release cycle.
This change adds a release note and warnings indicating that CUDA 10.x
and 11.x will be removed in Nixpkgs 25.05, about 8 months from now.
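
To illustrate the kind of warning I mean (a hedged sketch, not necessarily how this change implements it; the attribute names and wording here are mine):

```nix
# Sketch: attach an evaluation-time deprecation warning where the old
# package sets are exposed. Attribute names and message wording are
# illustrative, not the exact code added by this change.
cudaPackages_11 = lib.warn ''
  cudaPackages_11 will be removed in Nixpkgs 25.05, because it requires
  compilers that are no longer supported upstream; please migrate to
  cudaPackages_12.
'' cudaPackages_11_8;
```
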
I chose this version cut‐off because these versions of CUDA require
GCC < 12. GCC releases a major version every year, and seems to
support about four releases at a time, releasing the last update to
the oldest version and marking it as unsupported on their site around
the time of the release of the next major version. Therefore, by the
time of the 25.05 release, we should expect GCC 15 to be released
and GCC 11 to become unsupported. Adding a warning and communicating
the policy of only shipping CUDA versions that work with supported
compilers in the release notes means that we should be able to
clean up old versions as required in future releases without an
extensive deprecation period each time, while not obligating us to do
so if there is a compelling reason to be more lenient. That should
address both the indefinitely‐growing list of CUDA versions and the
indefinitely‐growing list of GCC and LLVM versions.

As I’m not a user of CUDA myself, I can’t be sure of how sensible
this version support policy is, but I think it’s fair to say that
it’s reasonable for Nixpkgs to choose not to maintain compiler
versions that are unsupported upstream just for the sake of versions
of CUDA that are also unmaintained. CUDA 11.x has not received an
update for two years already, and would only become unsupported in
Nixpkgs in over half a year’s time.

CUDA 10.x is currently unused in‐tree except for the unmaintained
Caffe, and for NVIDIA DCGM, which depends on multiple CUDA versions solely
so that it can provide plugins for those versions. The latest DCGM
version has already removed support for CUDA 10.x and is just awaiting
an update in Nixpkgs. They maintain a list of supported versions to
build plugins for in their CMake build system, so it should be simple
enough for us to only build support for the versions of CUDA that we
support in Nixpkgs.
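
For example, once the DCGM update lands, restricting the plugin builds could be a small override along these lines (a sketch only; the attribute name `dcgm` and the CMake variable name are placeholders and would need to be checked against DCGM’s actual build system):

```nix
# Sketch: only build DCGM plugins for the CUDA major versions Nixpkgs still
# ships. "SUPPORTED_CUDA_VERSIONS" is a hypothetical variable name, not a
# confirmed DCGM CMake option.
dcgm.overrideAttrs (old: {
  cmakeFlags = (old.cmakeFlags or [ ]) ++ [
    (lib.cmakeFeature "SUPPORTED_CUDA_VERSIONS" "12")
  ];
})
```
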
From what I can tell, CUDA 11.x is currently used by the following
packages other than DCGM:
* `catboost`, because of
<https://github.com/catboost/catboost/issues/2540>. It looks like
upstream has since redesigned this part of their build system, so
perhaps the problem is no longer present, or would be easier to fix.
* `magma_2_6_2`, an old version from before upstream added CUDA
12 support. This seems okay to break to me; that version is not
maintained and will never be updated for new CUDA versions, and
the CUDA support is optional.
* `paddlepaddle`, which also requires OpenSSL 1.1, of all
things. <https://github.com/PaddlePaddle/Paddle/issues/67571>
states that PaddlePaddle supports CUDA versions up to 12.3.
* `python3Packages.cupy`, which is listed as “possibly incompatible
with cutensor 2.0 that comes with `cudaPackages_12`”. I’m
not sure what the “possibly” means here, but according to
<https://github.com/cupy/cupy/tree/v13.3.0?tab=readme-ov-file#installation>
they ship binary wheels built against CUDA 12.x, so I think this should
be fine (a quick way to check is sketched after this list).
* `python3Packages.tensorrt`, which supports CUDA 12.x going by
<https://github.com/NVIDIA/TensorRT/blob/release/10.4/CMakeLists.txt#L111>.
* TensorFlow, which has a link to
<https://www.tensorflow.org/install/source#gpu> above the
`python3Packages.tensorflow-bin` definition, but that page lists
the versions we package as supporting CUDA 12.x.
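
As a quick sanity check for cases like `cupy`, any of these packages can be rebuilt against the newer package set, assuming its expression takes `cudaPackages` as an argument (an untested sketch):

```nix
# Sketch: build cupy against cudaPackages_12 instead of the default, to
# check the "should be fine" claim above. Assumes cupy's expression
# accepts a cudaPackages argument.
let
  pkgs = import <nixpkgs> {
    config.allowUnfree = true;
    config.cudaSupport = true;
  };
in
pkgs.python3Packages.cupy.override {
  cudaPackages = pkgs.cudaPackages_12;
}
```
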
Given the years since CUDA 11.x received any update upstream, and the
seemingly very limited set of packages that truly require it, I think
the policy of being able to drop versions that require unsupported
compilers starting from the next Nixpkgs release is a reasonable
one, but of course I’m open to feedback from the CUDA maintainers
about this.

cudaPackages.cuda_compat: ignore missing libs provided at runtime
cudaPackages.gpus: Jetson should never build by default
cudaPackages.flags: don't build Jetson capabilities by default
cudaPackages: re-introduce filter for pre-existing CUDA redist packages in overrides
cudaPackages: only recurseIntoAttrs for the latest of each major version
cudaPackages.nvccCompatibilities: use GCC 10 through CUDA 11.5 to avoid a GLIBC incompatibility
cudaPackages.cutensor: acquire libcublas through cudatoolkit prior to 11.4
cudaPackages.cuda_compat: mark as broken on aarch64-linux if not targeting Jetson
cudaPackages.cutensor_1_4: fix build
cudaPackages: adjust use of autoPatchelfIgnoreMissingDeps
cudaPackages.cuda_nvprof: remove unnecessary override to add addOpenGLRunpath
cudaPackages: use getExe' to avoid patchelf warning about missing meta.mainProgram
cudaPackages: fix evaluation with Nix 2.3
cudaPackages: fix platform detection for Jetson/non-Jetson aarch64-linux
python3Packages.tensorrt: mark as broken if required packages are missing
Note: evaluating the name of the derivation will fail if tensorrt is not present,
which is why we wrap the value in `lib.optionalString`.
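
A minimal sketch of that pattern (names are illustrative rather than the exact Nixpkgs code):

```nix
# Guard the version so that evaluating the derivation name does not force
# cudaPackages.tensorrt when the non-redistributable tarball is absent.
{ lib, cudaPackages, buildPythonPackage }:

buildPythonPackage {
  pname = "tensorrt";
  version = lib.optionalString (cudaPackages ? tensorrt)
    cudaPackages.tensorrt.version;
  format = "wheel";
  src = cudaPackages.tensorrt.src or null;
  meta.broken = !(cudaPackages ? tensorrt);
}
```
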
cudaPackages.flags.getNixSystem: add guard based on jetsonTargets
cudaPackages.cudnn: use explicit path to patchelf
cudaPackages.tensorrt: use explicit path to patchelf
Update cutensor to the latest version compatible with its only dependent
in nixpkgs, cupy. Future work might wish to make multiple versions
available. Also update the URL format for all known versions.
Refactor derivation to pick the version that supports the current CUDA version.
Based on the implementation of the same concept in the cudnn derivation.
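
Roughly, the selection logic looks like this (the release table and version bounds below are placeholders, not the real cutensor support matrix):

```nix
# Sketch: pick the newest cutensor release whose supported CUDA range
# covers the CUDA version in scope, mirroring the cudnn approach.
{ lib, cudaVersion }:

let
  releases = [
    { version = "1.4.0"; minCudaVersion = "10.1"; maxCudaVersion = "11.6"; }
    { version = "1.5.0"; minCudaVersion = "10.2"; maxCudaVersion = "11.7"; }
  ];
  supported = lib.filter
    (r:
      lib.versionAtLeast cudaVersion r.minCudaVersion
      && lib.versionAtLeast r.maxCudaVersion cudaVersion)
    releases;
in
# Newest supported release wins.
lib.last (lib.sort (a: b: lib.versionOlder a.version b.version) supported)
```
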
Add a derivation for TensorRT 8, a high-performance deep learning inference SDK
from NVIDIA, which is at this point non-redistributable. The current version
also requires CUDA 11, so this is left out of the cudaPackages_10* scopes.
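
Conditionally exposing it only in the newer scopes can be as simple as the following (an illustrative sketch; the path and predicate are placeholders):

```nix
# Sketch: only add tensorrt to package sets whose CUDA version is >= 11.
lib.optionalAttrs (lib.versionAtLeast cudaVersion "11") {
  tensorrt = callPackage ./tensorrt.nix { };
}
```
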
There are many different versions of the `cudatoolkit` and related
cuda packages, and it can be tricky to ensure they remain compatible.
- `cudaPackages` is now a package set with `cudatoolkit`, `cudnn`, `cutensor`, `nccl`, as well as `cudatoolkit` split into smaller packages ("redist");
- expressions should now use `cudaPackages` as parameter instead of the individual cuda packages;
- `makeScope` is now used, so it is possible to use `.overrideScope'` to set e.g. a different `cudnn` version;
- `release-cuda.nix` is introduced to make it easy to evaluate CUDA packages on Hydra.
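
As a usage sketch of the new scope (the `cudnn_8_3_2` attribute name below is illustrative):

```nix
let
  pkgs = import <nixpkgs> { config.allowUnfree = true; };

  # Swap a single member of the scope; everything in the set that takes
  # cudnn from the scope will see the replacement.
  myCudaPackages = pkgs.cudaPackages.overrideScope' (final: prev: {
    cudnn = prev.cudnn_8_3_2;
  });
in
# Downstream expressions take the whole set as a parameter, so passing
# myCudaPackages where cudaPackages is expected picks up the override.
myCudaPackages.cutensor
```
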