- Prevent store collision with the xserver for two files
- Stop gcc from complaining at build time about C and CXX flags
- Enable parallel building for this expression
- Move to the new way of calling Xorg and its dependencies
This reduces diffoscope's closure size from 2470 MiB to 579 MiB by
leaving out some less crucial dependencies (like GHC and Free
Pascal). These can be re-enabled by turning on enableBloat.
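For reference, a quick way to get the full-fat build back from the command
line (this assumes `enableBloat` is exposed as an ordinary package argument,
so the standard `.override` mechanism applies):
````
$ nix-build -E 'with import <nixpkgs> {}; diffoscope.override { enableBloat = true; }'
````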
This will eventually become the new stable branch (as unstable ones
are wont to do), but is worth having if you want to patch yesterday's
‘large’ files today, or need to apply patches already created with it.
“First release of the 3.1.x series. This is taken from the
"64bithash" branch.
- Adds support for -B values greater than 2GB, enabled by
-DXD3_USE_LARGESIZET=1 variable. [Enabled in nixpkgs.]
- Adds new performance and speed regression test, written in Golang.
[Not enabled in nixpkgs.]
When compiled for large sizes, xdelta3 uses a 64bit checksum function.
This impacts both compression and speed.
Relative to 3.0.11, the new branch is currently 3-5% slower and
has 1-2% worse compression. Performance will be addressed in
future 3.1.x releases.”
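For illustration, with the large-size build enabled you can ask for a source
window bigger than 2 GB on the command line (file names are placeholders;
-e encodes, -s names the source file, -B sets the source window size in
bytes):
````
$ xdelta3 -e -B 4294967296 -s old.img new.img delta.vcdiff
````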
Before the gnuplot executable is run, the environment variable `GDFONTPATH`
is populated with a list of font directories, which is obtained from `fc-list`.
Previously we iterated over each line of that output and called `dirname` on
it, which incurred the cost of loading and executing the external `dirname`
executable for every single line.
The new version avoids the loop.
The author of this patch measured a 42-fold performance improvement:
old version:
$ time ./gnuplot_old/bin/gnuplot -e ''
real 0m3.828s
user 0m0.392s
sys 0m0.465s
new version:
$ time ./gnuplot_new2/bin/gnuplot -e ''
real 0m0.091s
user 0m0.112s
sys 0m0.014s
The correctness of the value of `GDFONTPATH` was confirmed with the following
command and comparing its output between versions:
$ gnuplot -e 'print system("echo $GDFONTPATH")'
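To make the change concrete, here is a sketch of the two approaches (not the
exact wrapper code; it assumes fc-list's --format option is available):
````
# old: spawn one dirname process per font file
GDFONTPATH=
for font in $(fc-list --format='%{file}\n'); do
  GDFONTPATH="$GDFONTPATH:$(dirname "$font")"
done

# new: a single pipeline, no per-line external processes
GDFONTPATH=$(fc-list --format='%{file}\n' | sed 's|/[^/]*$||' | sort -u | tr '\n' ':')
````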
Broken since Aug 2015, but upstream has been dead for donkey's
years. Its only dependent was systemtap. No reasonable way (or indeed
reason) to artificially keep this alive.
Aim for the head.
Additional changes:
- tmate now depends on external libmsgpack and libssh
- postPatch is no longer needed, as it only applied to the embedded msgpack
- regular automake can now be used
‘When upgrading to 0.29.0 you need to upgrade client as well as server
installations due to the locking and commandline interface changes
otherwise you’ll get an error msg about a RPC protocol mismatch or a
wrong commandline option. if you run a server that needs to support both
old and new clients, it is suggested that you have a “borg-0.28.2” and a
“borg-0.29.0” command. clients then can choose via e.g. “borg
--remote-path=borg-0.29.0 ...”.’
‘The default waiting time for a lock changed from infinity to 1 second
for a better interactive user experience. if the repo you want to access
is currently locked, borg will now terminate after 1s with an error
message. if you have scripts that shall wait for the lock for a longer
time, use --lock-wait N (with N being the maximum wait time in seconds).’
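For example, a client that still has to talk to servers carrying both
versions, and that is willing to wait up to 30 seconds for the lock, would
run something like (repository and paths are placeholders):
````
$ borg --remote-path=borg-0.29.0 --lock-wait 30 create user@backuphost:main.borg::docs-2015-12-01 ~/Documents
````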
All changes: http://borgbackup.readthedocs.org/en/stable/changes.html
This patch is borrowed verbatim from Debian, where it is actively
maintained for each openssh update. It's also included in Fedora's
openssh package, in Arch Linux as openssh-gssapi in the AUR, in Mac OS
X, and presumably various other platforms and Linux distros.
The main relevant parts of this patch:
- Adds several ssh_config options:
GSSAPIKeyExchange, GSSAPITrustDNS,
GSSAPIClientIdentity, GSSAPIServerIdentity,
GSSAPIRenewalForcesRekey
- Optionally use an in-memory credentials cache api for security
My primary motivation for wanting the patch is the GSSAPIKeyExchange
and GSSAPITrustDNS features. My user ssh_config is shared across
several OSes, and it's a lot easier to manage if they all support the
same options.
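As an illustration of the shared-config angle, a fragment like the one below
only parses on clients that carry the patch (host pattern is hypothetical;
GSSAPIAuthentication is stock OpenSSH, the other two options come from the
patch):
````
cat >> ~/.ssh/config <<'EOF'
Host *.example.org
  GSSAPIAuthentication yes
  GSSAPIKeyExchange yes
  GSSAPITrustDNS yes
EOF
````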
- Added tldr to all-packages.nix
- Cleaned up style
- Added metadata
- Fixed semicolons
- Removed the Darwin platform (not tested on a Mac)
- Fixed wrong types
- Fixed duplication of the version
Currently the package is built with /var in $out/var. That fails when it
tries to create/write things at runtime (the Nix store is read-only).
Instead, tell it to use /var (the global directory) and fix up the
installation phase so it doesn't touch /var (leave that for runtime).
This unbreaks the colord D-Bus service, which apparently is needed by
CUPS to create color profiles for printers.
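The usual autotools way to express that split looks roughly like this (a
sketch of the pattern, not necessarily the exact flags colord's build system
takes):
````
# configure for the real, writable location...
./configure --prefix=$out --localstatedir=/var
# ...but keep `make install` from writing into /var at build time
make install localstatedir=$TMPDIR/var
````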
Otherwise it will try to guess the log directory, and the guess may
differ depending on whether chroot builds are enabled.
The gruesome details from m4/sudo.m4:
````
dnl
dnl Where the I/O log files go, use /var/log/sudo-io if
dnl /var/log exists, else /{var,usr}/adm/sudo-io
dnl
AC_DEFUN([SUDO_IO_LOGDIR], [
    AC_MSG_CHECKING(for I/O log dir location)
    if test "${with_iologdir-yes}" != "yes"; then
        iolog_dir="$with_iologdir"
    elif test -d "/var/log"; then
        iolog_dir="/var/log/sudo-io"
    elif test -d "/var/adm"; then
        iolog_dir="/var/adm/sudo-io"
    else
        iolog_dir="/usr/adm/sudo-io"
    fi
    if test "${with_iologdir}" != "no"; then
        SUDO_DEFINE_UNQUOTED(_PATH_SUDO_IO_LOGDIR, "$iolog_dir")
    fi
    AC_MSG_RESULT($iolog_dir)
])dnl
````
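Since the macro honours with_iologdir, passing the flag explicitly from the
expression sidesteps the directory probing entirely, e.g.:
````
./configure --with-iologdir=/var/log/sudo-io
````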
Running zpaq on an older but not ancient 64-bit Intel server aborts
with an ‘Illegal instruction’ error. Turns out the build expression
was using -march=native to generate distribution binaries...
Change this to more conservative, portable settings which should
cover ‘all’ CPUs. It may run slightly slower — but that at least
implies running.
As a nice side effect, all common compile flags are now back in
`compileFlags` whence they came, and actually used consistently.
http://hydra.nixos.org/eval/1234895
The mass errors on Hydra seem transient; I verified ghc on i686-linux.
Only darwin jobs are queued ATM. There's a libpng security update
included in this merge, so I don't want to wait too long.
This improves our Bundler integration (i.e. `bundlerEnv`).
Before describing the implementation differences, I'd like to point out a
breaking change: buildRubyGem now expects `gemName` and `version` as
arguments, rather than a `name` attribute in the form of
"<gem-name>-<version>".
Now for the differences in implementation.
The previous implementation installed all gems at once in a single
derivation. This was made possible by using a set of monkey-patches to
prevent Bundler from downloading gems impurely, and to help Bundler
find and activate all required gems prior to installation. This had
several downsides:
* The patches were really hard to understand, and required subtle
interaction with the rest of the build environment.
* A single install failure would cause the entire derivation to fail.
The new implementation takes a different approach: we install gems into
separate derivations, and then present Bundler with a symlink forest
thereof. This has a couple of benefits over the previous approach:
* Fewer patches are required, with less interplay with the rest of the
build environment.
* Changes to one gem no longer cause a rebuild of the entire dependency
graph.
* Builds take 20% less time (using gitlab as a reference).
It's unfortunate that we still have to muck with Bundler's internals,
though it's unavoidable with the way that Bundler is currently designed.
There are a number of improvements that could be made in Bundler that would
simplify our packaging story:
* Bundler requires all installed gems to reside within the same prefix
(GEM_HOME), unlike RubyGems, which allows multiple prefixes to be
specified through GEM_PATH (see the sketch after this list). It would be
ideal if Bundler allowed for packages to be installed and sourced from
multiple prefixes.
* Bundler installs git sources very differently from how RubyGems
installs gem packages, and, unlike RubyGems, it doesn't provide a
public interface (CLI or programmatic) to guide the installation of a
single gem. We are presented with the options of either reimplementing
a considerable portion of Bundler, or patching and using parts of its
internals; I chose the latter. Ideally, there would be a way
to install gems from git sources in a manner similar to how we drive
`gem` to install gem packages.
* When a bundled program is executed (via `bundle exec` or a
binstub that does `require 'bundler/setup'`), the setup process reads
the Gemfile.lock, activates the dependencies, re-serializes the lock
file it read earlier, and then attempts to overwrite the Gemfile.lock
if the contents aren't bit-identical. I think the reasoning is that
by merely running an application with a newer version of Bundler, you'll
automatically keep the Gemfile.lock up-to-date with any changes in the
format. Unfortunately, that doesn't play well with any form of
packaging, because Bundler will immediately cause the application to
abort when it attempts to write to the read-only Gemfile.lock in the
store. We work around this by normalizing the Gemfile.lock with the
version of Bundler that we'll use at runtime before we copy it into
the store. This feels fragile, but it's the best we can do without
changes upstream, or resorting to more delicate hacks.
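To make the GEM_HOME/GEM_PATH point from the first item concrete, this is
roughly what RubyGems already permits on its own (store paths are
placeholders; each entry has to be a gem repository, i.e. a directory
containing gems/ and specifications/):
````
# RubyGems can activate gems from several prefixes at once; Bundler cannot.
export GEM_PATH=/nix/store/aaaa-gem-prefix-one:/nix/store/bbbb-gem-prefix-two
ruby -e "gem 'rack'; puts Gem.loaded_specs['rack'].version"
````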
With all of the challenges in using Bundler, one might wonder why we
can't just cut Bundler out of the picture and use RubyGems. After all,
Nix provides most of the isolation that Bundler is used for anyway.
The problem, however, is that almost every Rails application calls
`Bundler::require` at startup (by way of the default project templates).
Because Bundler will then, by default, `require` each gem listed in the
Gemfile, Rails applications are almost always written such that none of
the source files explicitly require their dependencies. That leaves us
with two options: support and use Bundler, or maintain massive patches
for every Rails application that we package.
Closes #8612
The configure script tries to probe whether /var/run exists when
determining the location for the pid file, which is not very nice when
doing chroot builds. Just set it explicitly to avoid the problem.
For reference, the culprit in configure.ac:
````
piddir=/var/run
if test ! -d $piddir ; then
    piddir=`eval echo ${sysconfdir}`
    case $piddir in
        NONE/*) piddir=`echo $piddir | sed "s~NONE~$ac_default_prefix~"` ;;
    esac
fi
AC_ARG_WITH([pid-dir],
    [  --with-pid-dir=PATH     Specify location of ssh.pid file],
    ...
````
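Passing the flag explicitly keeps the result independent of whatever happens
to exist in the build environment, e.g. (the exact directory chosen is beside
the point here):
````
./configure --with-pid-dir=/run
````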
Also, use the `install-nokeys` target in installPhase so we avoid
installing useless host keys into $out/etc/ssh and improve build purity
as well.
Relevant changes:
- Python version switched to Python 3
- ssdeep library got replaced with tlsh
- the 'magic' Python package got replaced with a different one
- Minor build system improvements == less work for us
Currently, building RPM with `python = python3` causes this:
checking for a Python interpreter with version >= 2.6... python3
checking for python3... /nix/store/dykqxnrwiz9drlcv2wy8lpvl3xvklx0g-python3-3.4.3/bin/python3
checking for python3 version... 3.4
checking for Python.h... yes
checking for library containing Py_Main... no
configure: error: missing python library
That comes from this snippet in configure.ac:
AC_SEARCH_LIBS([Py_Main],[python${PYTHON_VERSION} python],[
WITH_PYTHON_LIB="$ac_res"
],[AC_MSG_ERROR([missing python library])
])
So it's looking for (e.g.) `libpython3.4.so` whereas we have `libpython3.4m.so`.
Patching the configure script to match seems to make that work (although
I don't really understand what the heck this 'm' business is about).
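One way to express that tweak in the expression (a sketch, not necessarily
the exact patch applied) is to have the generated configure script also try
the "m"-suffixed library name:
````
substituteInPlace configure \
  --replace 'python${PYTHON_VERSION} python' \
            'python${PYTHON_VERSION}m python${PYTHON_VERSION} python'
````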
Removed path substitutions from setup.py because these should be handled
by the setuptools install prefix.
Except that the install prefix won't quite work until issue #4968 is
resolved.
In the meantime there are preInstall and postInstall scripts so that
this package continues to work with the nix python packaging
improvements.
It's not included in upstream beets but is linked in the documentation
under "Other plugins", see:
http://beets.readthedocs.org/en/v1.3.15/plugins/index.html#other-plugins
I found this one particularly useful for syncing files to various media
players that refuse to read my FLAC files properly.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
After trying with a dozen files, it seems the bs1770gain backend is much
more reliable than the audiotools backend and especially does a better
job (well, compared to audiotools which either does doing nothing at all
or throws an exception) when used on alboms that contain different
sample rates/sizes.
Additionally, we already had a few issues regarding the audiotools
backend, even to the extent that @sampsyo almost wanted to drop it
upstream (see sampsyo/beets#1342).
Also related issues are #10376 and sampsyo/beets#1592.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
When a new version of colordiff is released the old tarball is moved to
the archive directory. This breaks builds until the derivation is
updated to the new version. This commit lets fetchurl know about the
archive URL.