It sometimes takes a while for upstream to publish on PyPI, so switch to using the GitHub source for master, pkg and worker, and GitHub releases for the plugins which require built assets.
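For illustration, switching the source means fetching a tagged revision from GitHub instead of the sdist from PyPI; a rough sketch, where the owner/repo, tag and hash are placeholders and not the actual pins:

```nix
# Placeholder values only; not the actual pins used in the package.
src = fetchFromGitHub {
  owner = "buildbot";    # assumed upstream organisation
  repo = "buildbot";
  rev = "v${version}";
  hash = lib.fakeHash;   # replace with the real hash after the first build attempt
};
```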
ChangeLog: https://github.com/grafana/grafana/releases/tag/v11.1.0
A few additional changes were necessary:
* Grafana now refuses to listen on non-IP values and aborts with
Error: ✗ *apiserver.service run error: invalid IP address: localhost
* packages/grafana-e2e doesn't exist anymore, so the build fixes for
that could be removed.
* Make sure we always compile the binary parts of cypress.
* Grafana now tends to set the minimum Go version to the latest Go
version currently available [1].
* The `url` of a datasource was set to `localhost` by default. I don't
expect anybody to have relied on that default where a URL was actually
needed, and Grafana now aborts if `url` is non-empty for a random walk
datasource (which broke the VM tests); see the sketch below.
[1] https://github.com/grafana/grafana/pull/88794#discussion_r1630563467
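Regarding the datasource `url` change above, a sketch of a compliant provisioned test datasource (the option path follows the NixOS grafana module; the name and `type` id are illustrative):

```nix
{
  services.grafana.provision.datasources.settings.datasources = [
    {
      name = "Random Walk";
      type = "testdata";  # illustrative type id for the test data source
      # no `url` here: Grafana 11.1 aborts if it is non-empty for this type
    }
  ];
}
```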
It was wrong to use StateDirectory to keep the scion-control and
scion-router runtime databases on disk for the next run. I observed
that a reboot or power outage can corrupt these temporary runtime
databases for the next service start, leading scion ping and other
functionality to stop working permanently, since those files are not
managed atomically by the Go code.
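A minimal sketch of the idea, assuming the fix is to keep these files under RuntimeDirectory (a tmpfs path recreated on every boot) rather than StateDirectory:

```nix
{
  # Illustrative only: RuntimeDirectory does not survive a reboot, so a crash
  # or power loss cannot hand a corrupted database to the next service start.
  systemd.services.scion-router.serviceConfig = {
    RuntimeDirectory = "scion-router";
    # StateDirectory = "scion-router";  # previously: persisted across reboots
  };
}
```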
`gitlab` >= 17.0 requires `postgresql` >= 14.9. GitLab users
are advised to follow the steps mentioned in the manual to upgrade their
PostgreSQL installation.
Importing PATH into the systemd environment is done by default in
Hyprland v0.41.2+ (https://github.com/hyprwm/Hyprland/pull/6640)
We soft-deprecate this option here for versions >= 0.41.2.
This hopefully clarifies that the preset configures the hook to expose
"nvidia devices", which includse both the userspace driver and the
device nodes.
The derivations still declare `requiredSystemFeatures = [ "cuda" ]` to
explicitly indicate they need to use the CUDA functionality and expect a
libcuda.so and a CUDA-capable device. Ideally, we'd also include the
specific CUDA architectures (sm_86, etc) in feature names.
Derivations that use a co-processor but do not care about the vendor or
even the particular interface may ask for the more generic "opengl",
"vulkan", or "gpu" features. It is then responsibility of the host
declaring the support for this feature to ensure the drivers and
hardware are appropriately set up.
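As a sketch of the host side of this handshake (the feature list is purely illustrative): a derivation opts in via `requiredSystemFeatures`, and the build host advertises only the features it can actually satisfy.

```nix
{
  # Build host side: advertise "cuda" (and the generic "gpu") only if a
  # working libcuda.so and a CUDA-capable device are actually present.
  nix.settings.system-features = [ "kvm" "big-parallel" "cuda" "gpu" ];
}
```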
When `services.resolved` is enabled, `resolve [!UNAVAIL=return]`
is added to `system.nssDatabases.hosts` with priority 501,
which prevents NSS modules ordered after it from running
unless systemd-resolved is unavailable.
Quoting from `man nss-resolve`:
> To activate the NSS module, add "resolve [!UNAVAIL=return]" to the line
> starting with "hosts:" in /etc/nsswitch.conf. Specifically, it is
> recommended to place "resolve" early in /etc/nsswitch.conf's "hosts:"
> line. It should be before the "files" entry, since systemd-resolved
> supports /etc/hosts internally, but with caching. To the contrary, it
> should be after "mymachines", to give hostnames given to local VMs and
> containers precedence over names received over DNS. Finally, we
> recommend placing "dns" somewhere after "resolve", to fall back to
> nss-dns if systemd-resolved.service is not available.
Note that the man page merely recommends placing "resolve" early,
meaning before the "files" and "dns" entries. It does not insist on it
being first or on excluding other modules.
For this reason, libvirt NSS modules should run before the `resolve`
module. They should come right next to `mymachines` because both are
conceptually very similar -- they resolve local VMs/containers.
Since the data sources of the libvirt NSS modules are local
plain text files (see the source code of the libvirt NSS module),
no performance impact is expected from this increase in priority.
Other NSS modules in NixOS also explicitly set their priority, which is
why this change increases consistency.
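A sketch of the resulting `hosts` ordering; only resolve's 501 is the value mentioned above, the other priorities are illustrative:

```nix
{ lib, ... }:
{
  system.nssDatabases.hosts = [
    (lib.mkOrder 400 "mymachines")                # local VMs/containers first
    (lib.mkOrder 450 "libvirt libvirt_guest")     # libvirt's plain-text sources
    (lib.mkOrder 501 "resolve [!UNAVAIL=return]") # stops here when resolved is up
    (lib.mkOrder 600 "files")
    (lib.mkOrder 650 "dns")                       # only reached when it is not
  ];
}
```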
Fixes #322022
Support for *runner registration tokens* is deprecated since GitLab
16.0, has been disabled by default in GitLab 17.0 and will be removed in
GitLab 18.0, as outlined in the [GitLab documentation].
It is possible to [re-enable support for runner registration tokens]
until GitLab 18.0, to prevent the registration workflow from
breaking.
*Runner authentication tokens*, the replacement for registration tokens,
have been available since GitLab 16.0 and are expected to be defined in
the `CI_SERVER_TOKEN` environment variable, instead of the previous
`REGISTRATION_TOKEN` variable.
This commit adds a new option
`services.gitlab-runner.services.<name>.authenticationTokenConfigFile`.
Defining this option next to
`services.gitlab-runner.services.<name>.registrationConfigFile` brings
the following benefits:
- A warning message can be emitted to notify module users about the
upcoming breaking change with GitLab 17.0, where *runner registration
tokens* will be disabled by default, potentially disrupting
operations.
- Some configuration options are no longer supported with *runner
authentication tokens* since they will be defined when creating a new
token in the GitLab UI instead. New warning messages can be emitted to
notify users to remove the affected options from their configuration.
- Once support for *registration tokens* has been removed in GitLab 18,
we can remove
`services.gitlab-runner.services.<name>.registrationConfigFile` as
well and make module users configure an *authentication token*
instead.
This commit changes the option type of
`services.gitlab-runner.services.<name>.registrationConfigFile` to
`with lib.types; nullOr str` to allow configuring an authentication
token in
`services.gitlab-runner.services.<name>.authenticationTokenConfigFile`
instead.
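For illustration, a runner switched to the new flow might be configured like this (the service name and secret path are placeholders):

```nix
{
  services.gitlab-runner.services.default = {
    # File expected to define at least CI_SERVER_TOKEN (and typically
    # CI_SERVER_URL); kept out of the Nix store.
    authenticationTokenConfigFile = "/run/secrets/gitlab-runner-default";
    # registrationConfigFile is left unset; the two options are mutually
    # exclusive, as described below.
  };
}
```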
A new assertion will make sure that
`services.gitlab-runner.services.<name>.registrationConfigFile` and
`services.gitlab-runner.services.<name>.authenticationTokenConfigFile`
are mutually exclusive. Setting both at the same time would not make
much sense in this case.
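The rough shape of that assertion (exact wording illustrative):

```nix
{ config, lib, ... }:
{
  assertions = lib.mapAttrsToList (name: service: {
    assertion = service.registrationConfigFile == null
             || service.authenticationTokenConfigFile == null;
    message = ''
      services.gitlab-runner.services.${name}: registrationConfigFile and
      authenticationTokenConfigFile are mutually exclusive.
    '';
  }) config.services.gitlab-runner.services;
}
```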
[GitLab documentation]: https://docs.gitlab.com/17.0/ee/ci/runners/new_creation_workflow.html#estimated-time-frame-for-planned-changes
[re-enable support for runner registration tokens]: https://docs.gitlab.com/17.0/ee/ci/runners/new_creation_workflow.html#prevent-your-runner-registration-workflow-from-breaking
My reasons for following Mint are:
1. Geary is a signatory of https://stopthemingmy.app; per that request we shouldn't pre-ship it under a themed desktop environment.
See also b7937b4509
2. Hexchat is still gtk2 and is no longer maintained; Mint encourages switching to Matrix instead.
See also https://blog.linuxmint.com/?p=4675 ("Joining the Matrix")
The spook package includes two separate integrations and the module was
adapted to account for that scenario. Add a test to ensure the change
keeps working correctly going forward.
frenck/spook includes a second manifest for an integration. The current
copyCustomComponents script assumed that only one component directory
would be found, which in this case resulted in a malformed symlink
destination:
lrwxrwxrwx 1 hass hass 224 Jun 23 17:23 spook -> '/nix/store/r41ics22zs578avzqf7x86plcgn2q71h-python3.12-frenck-spook-v3.0.1/custom_components/spook/integrations/spook_inverse'$'\n''/nix/store/r41ics22zs578avzqf7x86plcgn2q71h-python3.12-frenck-spook-v3.0.1/custom_components/spook'
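A hypothetical sketch of the per-manifest approach (the hook, paths and variable names shown are made up, not the module's actual script); the point is that each directory containing a `manifest.json` gets its own symlink instead of one link built from a multi-line `find` result:

```nix
{ lib, ... }:
{
  systemd.services.home-assistant.preStart = lib.mkAfter ''
    # Link every integration shipped by the package individually.
    for manifest in /nix/store/...-frenck-spook/custom_components/*/manifest.json; do
      component_dir=$(dirname "$manifest")
      ln -fns "$component_dir" "/var/lib/hass/custom_components/$(basename "$component_dir")"
    done
  '';
}
```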
Since stalwart-mail 0.6.0, queue and report files are located in
the shared `storage.{data,blob}` stores. The `{queue,report}.path`
settings have had no effect since then.
I'm also removing the creation of the associated extra directories
in the `preStart` script. This should not cause any issue with old
setups since 0.6.0 was already packaged when 24.05 was released.
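For orientation, the stores in question are selected roughly like this in the settings (store id and paths are illustrative); the old `queue.path`/`report.path` keys are simply ignored:

```nix
{
  services.stalwart-mail.settings = {
    store.db = {
      type = "rocksdb";
      path = "/var/lib/stalwart-mail/db";
    };
    # Queued messages and reports now live in whichever stores are selected here:
    storage.data = "db";
    storage.blob = "db";
  };
}
```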
The GIT_PROJECT_ROOT directory is now created at runtime instead of
being assembled at build time.
This fixes ownership issues which prevented those repositories from
being read by users other than root. This also avoids creating symlinks in
the nix store pointing to the outside.
This adds a few options to properly set the ownership and permissions
of the UNIX local sockets, which are private by default.
Previously, the created UNIX local sockets could be used by any local
user. This was especially problematic when fcgiwrap was running as root
(the default).
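Illustration of the intent, with hypothetical option names (the exact layout is reworked in the following commits): the socket is owned by, and only accessible to, an explicitly configured user and group.

```nix
{
  # Hypothetical option names for illustration only.
  services.fcgiwrap.socket = {
    user = "nginx";
    group = "nginx";
    mode = "0600";  # private by default; other local users cannot connect
  };
}
```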
Since we're already introducing a backward-incompatible change in
the previous commit, let's tidy up the options, also preparing
for the introduction of more options.
This also fixes the documentation of the user and group options, which
apply to the user the service runs as, not to the socket.
This allows configuring and starting independent instances of the
fcgiwrap service, each with its own settings and running user,
instead of having to share a global one.
I could not use `mkRenamedOptionModule` on the previous options
because the aliases conflict with `attrsOf submodule` now defined at
`services.fcgiwrap`. This makes the change backward-incompatible.
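A sketch of the resulting per-instance layout (instance names and nested option names are illustrative, following the `attrsOf submodule` shape described above):

```nix
{
  # Each instance gets its own running user and settings.
  services.fcgiwrap = {
    gitweb.process = { user = "gitweb"; group = "gitweb"; };
    cgit.process = { user = "cgit"; group = "cgit"; };
  };
}
```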
There are several GPUs that ROCm doesn't officially support but that
work correctly if ROCm is directed to treat the GPU as a different,
supported one with a similar architecture.
This can be done by setting `HSA_OVERRIDE_GFX_VERSION`.
Ollama has documentation on this topic: https://github.com/ollama/ollama/blob/main/docs/gpu.md#amd-radeon
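One way to apply such an override on NixOS, assuming the ollama module's `acceleration` option; the `10.3.0` value is only a common example, pick the version matching your GPU per the linked Ollama documentation:

```nix
{
  services.ollama = {
    enable = true;
    acceleration = "rocm";
  };
  # Tell ROCm to treat the GPU as a supported gfx1030-class device (example value).
  systemd.services.ollama.environment.HSA_OVERRIDE_GFX_VERSION = "10.3.0";
}
```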