This is to address a regression introduced in #131118.
When syncing the first dataset, syncoid expects the target dataset
not to exist, so that it has a clean slate to work with. So during
runtime we'll check whether the target dataset exists, and if it
doesn't, delegate the permissions to the parent dataset instead.
But then, on unallow, we do the unallow on both the target and the
parent, since the target dataset should have been created at this
point and the unallow can't tell which dataset was granted the
permissions just by checking which datasets exist.
I noticed this minor grammar mistake when running update.nix, and then
while grepping to find the source I noticed we had it a few times in
Nixpkgs. Just as easy to fix treewide as it was to fix the one
occurrence I noticed.
This option allows basic configuration of the compression technique
used in the backup script. Specifically, it adds `none` and `zstd` as
new alternatives, keeping `gzip` as the default.
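A minimal usage sketch, assuming the option added here is
`services.postgresqlBackup.compression`:
```
services.postgresqlBackup = {
  enable = true;
  compression = "zstd";   # new alternatives: "none", "zstd"; "gzip" stays the default
};
```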
When sending or receiving datasets with the old implementation, it
wouldn't matter which dataset we were sending or receiving; we would
always delegate permissions to the entire pool.
Previously, a failed backup would always overwrite ${db}.sql.gz,
because the bash `>` redirect truncates the file, even if the
backup was going to fail.
On the next run, the ${db}.prev.sql.gz backup would be
overwritten by the bad ${db}.sql.gz.
Now, if the backup fails, the ${db}.in-progress.sql.gz is in an
unknown state, but ${db}.sql.gz will not be written.
On the next run, ${db}.prev.sql.gz (our only good backup) will
not be overwritten because ${db}.sql.gz does not exist.
Or … none! Because forcing a string always results in an OnCalendar=
setting, while an empty string leads to an empty value.
> postgresqlBackup-hass.timer: Timer unit lacks value setting. Refusing.
or
> postgresqlBackup-miniflux.timer: Cannot add dependency job, ignoring: Unit postgresqlBackup-miniflux.timer has a bad unit file setting.
I require the postgresqlBackup in my borgbackup unit, so I don't
strictly need the timer and could previously set it to an empty list.
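With this change, a configuration along these lines should work again
(a sketch; assumes the relevant option is `services.postgresqlBackup.startAt`):
```
# No timer at all: the backup is only pulled in as a dependency of
# another unit (e.g. a borgbackup job).
services.postgresqlBackup.startAt = [ ];
```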
The current module adds backups forever, with no way to prune old ones.
Add an option to remove backups after n full backups or after some
amount of time.
Also run duplicity cleanup to clean unused files in case some previous
backup was improperly interrupted.
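Rough sketch of the intended usage; the option names below (`root`,
`targetUrl`, `cleanup.maxAge`, `cleanup.maxFull`) are written from memory
and may not match the module exactly:
```
services.duplicity = {
  enable = true;
  root = "/home";                     # illustrative
  targetUrl = "file:///mnt/backup";   # illustrative
  cleanup.maxAge = "6M";              # drop backup sets older than six months
  cleanup.maxFull = 2;                # keep at most two full backups (plus increments)
};
```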
Since the only consequence of isSystemUser is that a null uid gets
allocated below 500, we don't require isSystemUser to be set if a user
already has an explicit uid below 500.
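For illustration, a hypothetical user entry that no longer triggers the
assertion:
```
users.users.backup-runner = {   # hypothetical user
  uid = 499;                    # explicit uid below 500
  # isSystemUser does not need to be set here, since the uid is fixed anyway
};
```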
Motivation: https://github.com/NixOS/nixpkgs/issues/112647
By default, restic determines the location of the cache based on the XDG
base dir specification, which is `~/.cache/restic` when the environment
variable `$XDG_CACHE_HOME` isn't set.
As restic is executed as root by default, this resulted in the cache being
written to `/root/.cache/restic`, which is not quite right for a system
service and also meant that multiple backup services would use the same
cache directory, potentially causing issues with locking, data corruption,
etc.
The goal was to ensure that restic uses the correct cache location for a
system service: one cache per backup specification, using `/var/cache`
as the base directory.
systemd sets the environment variable `$CACHE_DIRECTORY` once
`CacheDirectory=` is defined, but restic doesn't change its behavior
based on the presence of this environment variable.
Instead, the specifier [1] `%C` can be used to point restic explicitly
towards the correct cache location using the `--cache-dir` argument.
Furthermore, `CacheDirectoryMode=` was set to `0700`, as the default
of `0755` is far too open in this case; the cache might contain
sensitive data.
[1] https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Specifiers
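Roughly, the resulting unit configuration looks like this (a sketch; the
unit name and exact flag spelling are illustrative, not copied from the
module):
```
systemd.services."restic-backups-mybackup" = {
  serviceConfig = {
    CacheDirectory = "restic-backups-mybackup";   # => /var/cache/restic-backups-mybackup
    CacheDirectoryMode = "0700";                  # cache may contain sensitive data
  };
  # restic itself is pointed at the cache via the %C specifier, e.g.
  #   --cache-dir=%C/restic-backups-mybackup
};
```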
After 733acfa140, syncoid would fail to
run if commonArgs did not include [ "--no-sync-snap" ], since it would
not have permissions to create or destroy snapshots.
The configured mbuffer path will be called on both the source and target
system. If you use pkgs.mbuffer from the source host and the target host
does not have this exact derivation, you will get a broken pipe when
sending snapshots. This is the case when transferring to a non-NixOS
system or to a host with a different mbuffer version.
Many options define their example to be a Nix value without using
literalExample. This sometimes gets rendered incorrectly in the manual,
causing confusion like in https://github.com/NixOS/nixpkgs/issues/25516
This fixes it by using literalExample for such options. The list of
options to fix was determined with this expression:
let
  nixos = import ./nixos { configuration = {}; };
  lib = import ./lib;
  valid = d: {
    # escapeNixIdentifier from https://github.com/NixOS/nixpkgs/pull/82461
    set = lib.all (n: lib.strings.escapeNixIdentifier n == n) (lib.attrNames d)
      && lib.all (v: valid v) (lib.attrValues d);
    list = lib.all (v: valid v) d;
  }.${builtins.typeOf d} or true;

  optionList = lib.optionAttrSetToDocList nixos.options;
in map (opt: {
  file = lib.elemAt opt.declarations 0;
  loc = lib.options.showOption opt.loc;
}) (lib.filter (opt: if opt ? example then ! valid opt.example else false) optionList)
which when evaluated will output all options that use a Nix identifier
that would need escaping as an attribute name.
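For illustration, the kind of change this implies for an affected option
(the attribute names are invented):
```
# before: rendered incorrectly when attribute names would need escaping
example = { "example.com".enable = true; };

# after: rendered verbatim in the manual
example = literalExample ''
  { "example.com".enable = true; }
'';
```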
* creating a local backup
* creating a borgbackup server
* backing up to a borgbackup server
* hints and documentation about the Vorta graphical desktop client
Tested the examples locally and with my borgbase.com account.
Currently, to run a borg job manually, you have to use systemctl:
```
$ systemctl start borgbackup-job-jobname.service
```
This commit makes wrappers around borg jobs available in $PATH, which have
BORG_REPO and connection args set correctly:
```
$ borg-job-jobname list
$ borg-job-jobname mount ::jobname-archive-2019-12-25T00:01:29 /mnt/some-path
$ borg-job-jobname create ::test /some/path
```
Closes: https://github.com/NixOS/nixpkgs/pull/64888
Co-authored-by: Danylo Hlynskyi <abcz2.uprola@gmail.com>
+ Fixed interrupted descriptions
+ Added more verbose descriptions
+ Added <literal> to the descriptions
+ Uniformly reformatted descriptions to break at 80 chars
(cherry picked from commit c7945c8a97)
A centralized list for these renames is not good because:
- It breaks disabledModules for modules that have a rename defined
- Adding/removing renames for a module means having to find them in the
central file
- Merge conflicts due to multiple people editing the central file
When having backup jobs that persist to a removable device like an
external HDD, the directory shouldn't be created by an activation script
as this might confuse auto-mounting tools such as udiskie(8).
In this case the job will simply fail. With the former approach,
udiskie ran into some issues because the path `/run/media/ma27/backup`
was already there and owned by root.
This adds a simple configuration for sending snapshots to a remote
system using zfs-replicate that ties into the autoSnapshot settings
already present in services.zfs.autoSnapshot.
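A rough configuration sketch; the option names under
`services.zfs.autoReplication` are written from memory and may differ from
the module:
```
services.zfs.autoSnapshot.enable = true;
services.zfs.autoReplication = {
  enable = true;
  host = "backup.example.org";                 # illustrative
  username = "zfs-send";                       # illustrative
  identityFilePath = "/etc/zfs/replication.key";
  localFilesystem = "tank/data";
  remoteFilesystem = "backup/tank/data";
};
```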
Patch by @Baughn, who noticed that these imports were very slow when run
serially with many datasets, so much so that the service would time out
and fail. This fixes it.
For large setups it is useful to list all databases explicitly
(for example if temporary databases are also present) and store them in
separate files.
For smaller setups it is more convenient to just back up all databases at
once, because it is easy to forget to update the configuration when
adding/renaming databases. pg_dumpall also has the advantage that it backs
up users/passwords.
As a result the module becomes easier to use because it is sufficient
in the default case to just set one option (services.postgresqlBackup.enable).
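Sketch of both styles (database names are illustrative):
```
# Small setups: one pg_dumpall of everything.
services.postgresqlBackup.enable = true;

# Larger setups: dump selected databases into separate files.
services.postgresqlBackup = {
  enable = true;
  databases = [ "app_production" "grafana" ];
};
```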
This service will never run automatically, but it encapsulates the
necessary logic and configuration to run a restore of the latest
archive, and allows hooking in more specific logic, such as loading
a database dump, via `postStart`.
A new option `explicitSymlinks` will set `-H` when creating an archive.
This option makes tarsnap follow any symlinks specified explicitly on
the commandline, but not any found inside the file tree.
A new option `followSymlinks` will set `-L` when creating an archive.
This option makes tarsnap follow any symlinks found anywhere in the file
tree instead of storing them as-is.
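Example usage (archive name and paths are illustrative):
```
services.tarsnap.archives.home = {
  directories = [ "/home" ];
  explicitSymlinks = true;   # -H: follow symlinks given on the command line
  followSymlinks = false;    # -L would follow symlinks anywhere in the tree
};
```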
When the znapzend module was enabled for the first time with
`pure = true`, the list of previous entries was empty, but xargs still
tried to execute a `znapzendzetup delete` command with no arguments,
which made it fail.
This enables znapzend users to specify its full configuration through
NixOS options, without ever needing to use the stateful `znapzendzetup`
command.
This works by running znapzendzetup with the specified config in
ExecPre, just before the znapzend daemon is started.
There is also the `pure` option, which will clear all previous znapzend
setups, making it as stateless as it can get, as only the setup declared
in configuration.nix will be persisted.
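A sketch of what this enables; the `zetup` attribute layout below is from
memory and may not match the module exactly:
```
services.znapzend = {
  enable = true;
  pure = true;   # wipe any setups not declared here
  zetup."tank/home" = {
    plan = "1d=>1h,1m=>1d,1y=>1m";
    recursive = true;
    destinations.offsite = {
      host = "backup@example.org";   # illustrative
      dataset = "backup/home";
    };
  };
};
```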
* Grants enough privileges to the configured user so that it can run
mysqldump.
* Adds a nixos test.
* Use systemd timers instead of a cronjob (by @fadenb).
* Creates a new user for backups by default, instead of using the mysql
user.
* Ensures that backup user has write permissions on backup location.
* Write backup to a temporary file before renaming so that a failed
backup won't overwrite the previous backup, and so that the backup
location will never contain a partial backup.
Breaking changes:
* Renamed period to calendar to reflect the change in how to
configure the backup time.
* A failed backup will no longer result in cron sending an e-mail --
users' monitoring systems must be updated.
Resolves #24728
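Sketch of the new-style configuration (database name illustrative;
`calendar` replaces the old cron-based `period`):
```
services.mysqlBackup = {
  enable = true;
  databases = [ "nextcloud" ];   # illustrative
  calendar = "01:15:00";         # systemd calendar expression instead of a cron period
};
```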
* The major change is to set TARGETDIR=${vardir}, and symlink from
${vardir} back to ${out} instead of the other way around. This
gives CP more liberty to write to more directories -- in particular
it seems to want to write some configuration files outside of conf?
* run.conf does not need 'export'
* Minor tweaks to CrashPlanDesktop.patch
- add missing types in module definitions
- add missing 'defaultText' in module definitions
- wrap example with 'literalExample' where necessary in module definitions
Two tarsnap backups cannot be run at the same time with the same keys;
completely separate sets of keys must be generated for each archive if
you want backups to overlap.
This extends the archives attrset to support a 'keyfile' option, which
defaults to /root/tarsnap.key like the top-level attribute.
With this change, if you generate two keys with tarsnap-keygen(1) and
use each of those separately for each archive, you can backup
concurrently.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
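Example with per-archive keys, each generated separately with
tarsnap-keygen(1) (names and paths illustrative):
```
services.tarsnap.archives = {
  nightly = {
    keyfile = "/root/tarsnap-nightly.key";
    directories = [ "/var/lib" ];
  };
  hourly = {
    keyfile = "/root/tarsnap-hourly.key";
    directories = [ "/home" ];
  };
};
```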
Tarsnap locks the cachedir during backup, meaning if you specify
multiple backups with a shared cache that might overlap (for example,
one backup may take an hour), secondary backups will fail. This isn't
very nice behavior for the obvious reasons.
This splits the cache dirs for each archive appropriately. Note that
this will require a rebuild of your archive caches (although if you were
only using one archive for your whole system, you can just move the
directory).
Signed-off-by: Austin Seipp <aseipp@pobox.com>
A machine may not always be active (or online!) when a backup timer
triggers, meaning backups can be missed - now we properly set the
tarsnap timer's Persistent option so systemd will run the command even
when the machine wasn't online at that exact time.
However, we also need to make sure that we can contact the tarsnap
server reliably before we start the backup. So, we attempt to ping the
access endpoint in a loop with a sleep, before continuing.
This fixes #8823.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
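Roughly what the generated units now contain (a sketch; the unit name and
endpoint hostname are illustrative):
```
systemd.timers."tarsnap-nightly".timerConfig.Persistent = true;

systemd.services."tarsnap-nightly".preStart = ''
  # wait until the tarsnap access endpoint is reachable before backing up
  while ! ping -q -c 1 v1-0-0-server.tarsnap.com > /dev/null; do
    sleep 3
  done
'';
```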
Major changes
- Port to systemd timers: for each archive configuration a
tarsnap@archive-name.timer is created, which triggers the instanced service unit
- Rename the `config` option to `archives`
Minor/superficial improvements
- Restrict tarsnap service capabilities
- Use dirOf builtin
- Set executable bit for owner of tarsnap cache directory
- Set IOSchedulingClass to idle
- Humanize numbers when printing stats
- Rewrite most option descriptions
- Simplify assertion
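Sketch of the renamed option and the resulting units (archive name
illustrative):
```
# previously services.tarsnap.config.<name>, now:
services.tarsnap.archives.nightly = {
  directories = [ "/home" ];
};
# which yields a tarsnap@nightly.timer triggering the instanced service unit
```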
Should bring most of the examples into better consistency regarding
their syntactic representation in the manual.
Thanks to @devhell for reporting.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The Tarsnap module is now far more flexible, allowing individual
archives with individual options to be specified at will, allowing
granular backup schedules, etc.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
Using pkgs.lib on the spine of module evaluation is problematic
because the pkgs argument depends on the result of module
evaluation. To prevent an infinite recursion, pkgs and some of the
modules are evaluated twice, which is inefficient. Using ‘with lib’
prevents this problem.
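Illustration of the change in a module header (sketch):
```
# before: getting lib forces an evaluation of pkgs, which itself
# depends on the result of module evaluation
{ config, pkgs, ... }:
with pkgs.lib;
{ }

# after: lib is passed to modules directly, independently of pkgs
{ config, lib, pkgs, ... }:
with lib;
{ }
```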