In practice, almost all requests to Hydra take longer than the default
timeout of 30 seconds.
This commit bumps all requests to the maximum timeout of 15 minutes. This
should hopefully make the hydra-report.hs script more reliable and fail
less often.
* luarocks-packages-updater: init
The goal is to make it possible to maintain out-of-tree luarocks packages
without needing to clone nixpkgs.
maintainers/scripts/update-luarocks-packages gets renamed to
pkgs/development/lua-modules/updater/updater.py
Once merged, you can run, for instance:
nix run nixpkgs#luarocks-packages-updater -- -i contrib/luarocks-packages.csv -o contrib/generated-packages.nix
I also set the parallelism (--proc) to 1 by default, since luarocks
otherwise fails because of https://github.com/luarocks/luarocks/issues/1540
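For illustration, an invocation with that default spelled out explicitly
(reusing the paths from the example above; passing --proc=1 is redundant
now that it is the default) might look like:
nix run nixpkgs#luarocks-packages-updater -- --proc=1 -i contrib/luarocks-packages.csv -o contrib/generated-packages.nix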
* Update maintainers/scripts/pluginupdate.py
Co-authored-by: Marc Jakobi <mrcjkb89@outlook.com>
---------
Co-authored-by: Marc Jakobi <mrcjkb89@outlook.com>
* use attrname in log messages instead of github handle
* don't remove users simply because their github handle is empty, as long
as the user still exists (prevents #259555)
This will allow building bootstrap tools for platforms with
non-default libcs, like *-unknown-linux-musl.
This gets rid of limitedSupportedSystems/systemsWithAnySupport. There
was no need to use systemsWithAnySupport for supportDarwin, because it
was always equivalent to supportedSystems for that purpose. The only
other way it was used was for determining which platforms to build the
bootstrap tools for, so we might as well use a more explicit parameter
for that; then we can change how it works without affecting the rest of
the Hydra jobs.
Not affecting the rest of the Hydra jobs is important, because if we
changed all jobs to use config triples, we'd end up renaming every
Hydra job. That might still be worth thinking about eventually, but it's
unnecessary at this point (and would be a lot of work).
I've checked by running
nix-eval-jobs --force-recurse pkgs/top-level/release.nix
that the actual bootstrap tools derivations are unaffected by this
change, and that the only other jobs that change are ones that depend
on the hash of all of Nixpkgs. Of the other jobset entrypoints that
end up importing pkgs/top-level/release.nix, none used the
limitedSupportedSystems parameter, so they should all be unaffected as
well.
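As a rough sketch of how such a before/after comparison can be done (the
jq field names follow the usual nix-eval-jobs JSON output; before.json and
after.json are just hypothetical capture files):
nix-eval-jobs --force-recurse pkgs/top-level/release.nix > before.json
# apply the change, then:
nix-eval-jobs --force-recurse pkgs/top-level/release.nix > after.json
diff <(jq -r '[.attr, .drvPath] | @tsv' before.json | sort) \
     <(jq -r '[.attr, .drvPath] | @tsv' after.json | sort)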
The nixpkgs documentation mentions how to update out-of-tree plugins, but
one problem is that it requires a nixpkgs clone.
This makes it more convenient.
I've had the need to generate vim plugins and lua overlays for other
projects unrelated to nix, and this will make updates easier (i.e., just
run `nix run nixpkgs#vimPluginsUpdater -- --proc=1`, or with the legacy
commands: `nix-shell -p vimPluginsUpdater --run vim-plugins-updater`).
I added an optional "nixpkgs" argument to the command line parser, which
is the path to a nixpkgs checkout; it defaults to the current folder.
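A sketch of how this could be invoked (the exact flag spelling is an
assumption, not taken verbatim from the parser):
nix run nixpkgs#vimPluginsUpdater -- --nixpkgs ~/dev/my-nixpkgs --proc=1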
update-luarocks-packages: format with black
This change adds a flag --slow to hydra-report.hs get-report which
causes it to fetch the cheap evaluation overview endpoint (which only
contains build ids and metadata). The gathered information is then used
to request each build's status individually instead of in bulk. This is
very slow, but useful as a last resort if the bulk endpoint times out.
Since every failure in the jobset means one request to get the log when
generating the list of newly broken packages, we need to add an option
to disable log requesting in case a lot of new breakage needs to be
entered.
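For example (grounded only in the subcommand and flag named above; any
other arguments the script may need, including the option for skipping
log fetching, are omitted here):
maintainers/scripts/haskell/hydra-report.hs get-report --slow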
If we want to push only one branch, we'll have to specify branch and
remote explicitly. Pushing to origin doesn't work for everyone, since
some of us have an origin remote that can't be pushed to. Using plain
`git push` has the problem that it'll try pushing all checked-out
branches, which fails e.g. if some branches (staging, staging-next, …)
are behind their remote counterparts.
The solution is to require everyone to configure a per-branch pushRemote
for haskell-updates, which will then be used by merge-and-open-pr.sh.
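A minimal sketch of that configuration, assuming your push-able remote is
called "fork":
git config branch.haskell-updates.pushRemote fork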
* update.py: introduce subparsers for plugin updaters
This is preliminary work to help create more powerful plugin updaters.
Namely, I would like to be able to "just add" plugins without refreshing
the older ones (helpful when github temporarily removes a user due to
automated bot detection); see the sketch below.
Also, concerning the lua updater: we pin some of the dependencies, and I
would like to be able to unpin a package without editing the csv
(coming in later PRs).
* doc/updaters: update command to update editor plugins
including vim, kakoune and lua packages
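As a purely hypothetical sketch of where the subparser work is heading
(neither subcommand name below is taken from this change):
nix run nixpkgs#vimPluginsUpdater -- add "someowner/some-plugin.nvim"
nix run nixpkgs#vimPluginsUpdater -- update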
Co-authored-by: figsoda
cabal-install 3.10 has some quirky new logic for config, cache, …
directory discovery. We reimplement this in this simple bash script,
additionally respecting the CABAL_DIR environment variable.
This update adds support for $CABAL_DIR as well as the new
$XDG_CACHE_HOME location of the hackage db.
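A minimal sketch of the idea, not the actual script (variable names are
made up; only the hackage db cache location is shown):
# honour CABAL_DIR first, then the legacy ~/.cabal layout, then XDG
if [ -n "${CABAL_DIR:-}" ]; then
  hackage_db_dir="$CABAL_DIR/packages"
elif [ -d "$HOME/.cabal" ]; then
  hackage_db_dir="$HOME/.cabal/packages"
else
  hackage_db_dir="${XDG_CACHE_HOME:-$HOME/.cache}/cabal/packages"
fi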
Since we maintain hackage-db, it is nice to always have the latest
version, even though it has more reverse dependencies than the other
libraries we maintain.
When DEBUG is defined, the script just prints the URLs without actually
checking whether they're already cached or downloading/uploading anything.
That got broken because connecting to S3 now fails fast. This PR makes sure
we skip connecting to S3 in DEBUG mode.
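Schematically, the intended behaviour is something like this (the helper
name is invented for illustration):
if [ -n "${DEBUG:-}" ]; then
  echo "$url"                        # just print the URL, touch nothing
else
  check_s3_cache_and_transfer "$url" # real work: connect to S3, check cache, up/download
fi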
Add a newtype for a package name and a package set. This is less for
correctness, and more just to make the code a little easier to read
through without having to keep in mind what each Text refers to.
This script is heavily based on the script used to update all python
libraries at
pkgs/development/interpreters/python/update-python-libraries/update-python-libraries.py
The Octave Packages website uses YAML as its basis, so we must
reformat to use YAML instead of JSON.
The script can output a list of sed commands to create the order it
expects to find. This was mainly useful for initially sorting the list,
but we'll keep it here for later reference.
Co-authored-by: Jörg Thalheim <Mic92@users.noreply.github.com>
nixpkgs:trunk also builds aarch64-darwin these days, so this forces our
hand a little bit. We can still refuse to care about failures _too_
much, but at least we will stop merging rebuilds as big as the ones we
currently do.
Previously, when packages that required the git fetcher were updated, we
would wrongly rely on `nix-prefetch-url`, which would reliably break the
hash.
Instead, we need to use `nix-prefetch-git` to determine the proper hash
when the relevant attributes are present.
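For reference, a typical nix-prefetch-git invocation (repository and
revision are placeholders); it prints JSON that includes the hash:
nix-prefetch-git --url https://github.com/someowner/somerepo --rev v1.2.3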
`ghc-pkg list` tells us everything hackage2nix needs to know. In the
past, the core-packages list and compiler setting in hackage2nix were
maintained manually, which inevitably leads to them being forgotten once
in a while – this will then mess with flag resolution when generating the
package set in some cases. Luckily, we can just let a simple derivation
do this for us.
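For instance, the core package information can essentially be read off
something like the following (flags are an assumption; the actual
derivation may differ):
ghc-pkg list --global --simple-output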
Resolves #202621.
Since we now have a versioned configuration-ghc-*.nix file for GHC HEAD,
we don't need to add a super special case to the package set logic in
test-configurations.nix anymore. We can just create a versioned
attribute for the ghcHEAD package set (which is not exposed) and keep
using the normal discovery logic.
The only tricky bit is that GHC HEAD's configuration file is named after
the GHC release that will be branched off from it, so a little bit of
arithmetic is involved.
This will allow tests.pkg-config.defaultPkgConfigPackages to run on
Hydra without breaking the tarball job.
Regarding the use of null, I'll quote 473ac96, which does lib.hydraJob:
By allowing null, we allow code to avoid filterAttrs, improving
laziness in real world use cases.
Specifically, this strategy prevents infinite recursion errors,
performance issues and possibly other errors that are unrelated to
the user's code.
The Haskell Hydra report generator
(`maintainers/scripts/haskell/hydra-report.hs`) uses this
`maintainer-handles.nix` script for generating a mapping of email
addresses to GitHub handles.
This `maintainer-handles.nix` script is necessary because the Haskell
Hydra report generator gets Hydra job status info as input, but needs to
ping users on GitHub. Hydra job status info only contains user emails (not
GitHub handles). So the `maintainer-handles.nix` script is necessary
for looking up GitHub handles from email addresses.
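A sketch of how such a mapping can be inspected from the command line
(the directory is assumed to match hydra-report.hs, and this assumes the
file evaluates to a plain attribute set):
nix-instantiate --eval --strict --json maintainers/scripts/haskell/maintainer-handles.nix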
This commit fixes the `maintainer-handles.nix` code to ignore
maintainers that don't have email addresses. The code originally
assumed that all maintainers have email addresses, but a maintainer
without one was recently added.
Move the manpage-to-URL mapping to `doc/manpage-urls.json` so that we can
reuse that file elsewhere, and generate the `link-manpages.lua` filter from
that file.
Also modify the Pandoc filter so that it doesn't wrap manpages that are
already inside a link.
Keeping a Lua filter is essential for speed: a Python filter would
increase the runtime of `md-to-db.sh` from ~20s to ~30s (but Python is not
to blame; marshalling Pandoc types to and from JSON is a costly operation).
Parsing in Lua seems tedious, so I went with the Nix way.
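Assuming `doc/manpage-urls.json` is a flat object mapping manpage
references to URLs (an assumption based on the description above), its
contents can be inspected with, e.g.:
jq -r 'to_entries[] | "\(.key) -> \(.value)"' doc/manpage-urls.json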
- use `restrict-eval` so that we're not affected by the user's environment
- use jq instead of the horrible echo+sed hack
The second point also changes the indentation before each line from one
space to two, so I set it back to one space to avoid a diff.
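For reference, restricted evaluation is just an ordinary Nix option; a
standalone illustration (unrelated to the script itself):
nix-instantiate --option restrict-eval true --eval -E '1 + 1'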