If the host is shutting down, machinectl may fail because it's
bus-activated and D-Bus will be shutting down. So just send a signal
to the leader process directly.
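A hedged sketch of that fallback, assuming $leader_pid already holds the PID of the container's init process (e.g. read from the machine's cgroup or state file):

  # Per systemd(1), SIGRTMIN+4 tells the systemd running as PID 1 inside
  # the container to start poweroff.target; no D-Bus is involved.
  kill -s SIGRTMIN+4 "$leader_pid"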
Fixes #6212.
The NixOS QEMU VMs that are used for VM tests can now start without a
boot menu, even when using a bootloader.
The NixOS QEMU VM with a bootloader can now emulate an EFI boot.
- Create a container NixOS profile
- Create an lxc-container NixOS config that uses the container NixOS profile
- Make the Docker NixOS image use the NixOS profile for its base config
New users especially could be confused by this, so we're now marking
services.virtualbox.enable as obsolete and defaulting to
services.virtualboxGuest.enable instead. I believe this makes it
clear that this option is for the guest additions only.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Regression introduced in f496c3cbe4.
Previously, when we used security.initialRootPassword, the default
priority for this option was 1001, because it was a default value set by
the option itself.
With the mentioned commit, it is no longer an option default but a
mkDefault, which has priority 1000.
I'm setting this to 150 now, because test-instrumentation.nix uses this
for overriding other options, and because it still makes the option easy
to override: an assignment without an explicit priority gets priority
100, and lower values win.
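For reference, a sketch of the priority ladder in Nix (mkDefault/mkOverride
semantics per nixpkgs; the "!" value is just an example):

  security.initialRootPassword = lib.mkOverride 150 "!";
  # lib.mkDefault v  is equivalent to  lib.mkOverride 1000 v
  # a plain assignment has priority 100, so it still beats the 150 above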
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This fixes an issue where the LXC emulator binary is garbage-collected
and breaks libvirtd containers, because the libvirtd XML file still
refers to the GC'ed store path.
We already have a fix for QEMU; this commit extends it to cover LXC
too.
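A minimal sketch of the pattern, mirroring the existing QEMU handling (the
XML location and libexec path are assumptions; this would run from a
Nix-templated script before libvirtd starts):

  # Rewrite the <emulator> element of each persisted LXC domain so it
  # points at the current libvirt_lxc instead of a possibly GC'ed path.
  for xml in /var/lib/libvirt/lxc/*.xml; do
    sed -i "s|/nix/store/[^/]*/libexec/libvirt_lxc|${pkgs.libvirt}/libexec/libvirt_lxc|" "$xml"
  done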
This tells the sad tale of @the-kenny who had bind-mounted his home
directory into a container. After doing `nixos-container destroy` he
discovered that his home directory went from "full of precious data" to
"no more data".
We want to avoid similar sad tales in the future, so we now also
check for this in the containers VM test.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Fixes a leftover from 330fadb706.
We're using systemd D-Bus notifications now, and this leftover caused the
startup notification to fail.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This allows creating a container from an existing system store path,
which is especially nice for NixOps-deployed hosts because they don't
need a Nixpkgs tree anymore.
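For example (hedged; the exact flag name is an assumption about the new
interface):

  $ nixos-container create database --system-path /nix/store/…-nixos-system-database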
Systemd in a container will call sd_notify when it has finished
booting, so we can use that to signal that the container is
ready. This does require some fiddling with $NOTIFY_SOCKET.
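A hedged sketch of the host-side service unit (Type=notify and
NotifyAccess=all are real systemd directives; the ExecStart is elided):

  [Service]
  Type=notify       # don't consider the unit started until READY=1 arrives
  NotifyAccess=all  # accept sd_notify from the container's init, not just the main PID
  ExecStart=…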
Previously "machinectl reboot/poweroff" brutally killed the container,
as did "systemctl stop/restart". And reboot didn't actually work. Now
everything is fine.
Previously "machinectl reboot/poweroff" brutally killed the container,
as did "systemctl stop/restart". And reboot didn't actually work. Now
everything is fine.
curl does not retry if it is unable to connect to the metadata server.
For some reason, when creating a new AMI with a recent Nixpkgs, the
metadata server would not be available by the time fetch-ec2-data ran.
Switching to wget, which can retry even on TCP connection errors, solved
this problem. I also made fetch-ec2-data depend on ip-up.target, so that
it starts a bit later.
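A hedged sketch of the retry pattern (the exact flags used by fetch-ec2-data
are assumptions; the endpoint is the standard EC2 metadata address):

  # Unlike curl, wget can keep retrying even when the TCP connection
  # is refused outright.
  wget --retry-connrefused --tries=10 --waitretry=2 -q -O /dev/null \
    http://169.254.169.254/latest/meta-data/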
Removed the 'wait for GCE metadata service' job, as it was causing
issues with the metadata service (likely some firewall or something).
Instead, use wget with retries (including on connection refused)
instead of curl for fetching the SSH keys. Also made the stdout/stderr
of this job appear on the console.
This version of the module has socketActivation disabled, because until
NixOS upgrades systemd to at least version 214, systemd does not support
SocketGroup, so the socket is created with the "root" group when
socketActivation is enabled. This should be fixed as soon as systemd is
upgraded.
Includes changes from #3015 and supersedes #3028
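For reference, a sketch of what the socket unit could set once systemd >= 214
is available (SocketGroup landed in systemd 214; the socket path and group
name are assumptions):

  [Socket]
  ListenStream=/run/docker.sock
  SocketMode=0660
  SocketGroup=docker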
By setting a line like
MACVLANS="eno1"
in /etc/containers/<name>.conf, the container will get an Ethernet
interface named mv-eno1, which represents an additional MAC address on
the physical eno1 interface. Thus the container has direct access to
the physical network. You can specify multiple interfaces in MACVLANS.
Unfortunately, you can't do this with wireless interfaces.
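Under the hood this corresponds roughly to the following iproute2 commands
on the host (a sketch; the macvlan mode and the PID variable are
assumptions):

  # Create a macvlan device on top of the physical interface and move it
  # into the container's network namespace.
  ip link add link eno1 name mv-eno1 type macvlan mode bridge
  ip link set mv-eno1 netns "$container_pid"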
Note that dhcpcd is disabled in containers by default, so you'll
probably want to set
networking.useDHCP = true;
in the container, or configure a static IP address.
To do: add a containers.* option for this, and a flag for
"nixos-container create".
Fixes #2379.
The new name was a misnomer because the values really are X11 video
drivers (e.g. ‘cirrus’ or ‘nvidia’), not OpenGL implementations. That
it's also used to set an OpenGL implementation for kmscon is just
confusing overloading.
By default, socat only waits 0.5s for the remote side to finish after
getting EOF on the local side. So don't close the local side; instead,
wait for socat to exit when the remote side finishes.
http://hydra.nixos.org/build/10663282
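For comparison, that timeout is also tunable with socat's -t option; a
sketch (not what this commit does):

  # Wait up to 60 seconds after local EOF for the remote side to finish.
  socat -t 60 unix:<path-to-container>/var/lib/root-shell.socket -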
By enabling ‘services.openssh.startWhenNeeded’, sshd is started
on-demand by systemd using socket activation. This is particularly
useful if you have a zillion containers and don't want to have sshd
running permanently. Note that socket activation is not noticeably
slower, contrary to what the manpage for ‘sshd -i’ says, so we might
want to make this the default one day.
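To opt in (option name from this change):

  services.openssh.startWhenNeeded = true;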
The ability for unprivileged users to mount external media is useful
regardless of the desktop environment. Also, since udisks2 is
activated on-demand, it doesn't add any overhead if you're not using it.
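For reference, enabling it explicitly looks like this (a sketch; you
normally wouldn't need to set it yourself if a desktop environment pulls
it in):

  services.udisks2.enable = true;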
Apparently systemd is now smart enough to figure out predictable names
for QEMU network interfaces. But since our tests expect them to be
named eth0/eth1..., this is not desirable at the moment.
http://hydra.nixos.org/build/10418789
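The usual way to keep the old names is to disable predictable naming on the
kernel command line; whether this commit uses exactly this mechanism is an
assumption:

  boot.kernelParams = [ "net.ifnames=0" ];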
This used to work with systemd-nspawn 203, because it bind-mounted
/etc/resolv.conf (so openresolv couldn't overwrite it). Now it's just
copied, so we need some special handling.
Using pkgs.lib on the spine of module evaluation is problematic
because the pkgs argument depends on the result of module
evaluation. To prevent an infinite recursion, pkgs and some of the
modules are evaluated twice, which is inefficient. Using ‘with lib’
prevents this problem.
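The resulting module header pattern (a sketch with a hypothetical option):

  { config, lib, pkgs, ... }:

  with lib;  # mkOption, types, etc. come from lib, not from pkgs.lib

  {
    options.services.example.enable = mkEnableOption "example service";
  }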
Update VirtualBox (and implicitly VirtualBox Guest Additions) to 4.3.6
and Oracle VM VirtualBox Extension Pack to 91406
Conflicts due to a minor upgrade in the meantime.
Conflicts:
nixos/modules/virtualisation/virtualbox-guest.nix
pkgs/applications/virtualization/virtualbox/default.nix
pkgs/applications/virtualization/virtualbox/guest-additions/default.nix
Needed for the installer tests, since otherwise mounting a filesystem
may fail as it has a last-mounted date in the future.
http://hydra.nixos.org/build/9846712
The command nixos-container can now create containers. For instance,
the following creates and starts a container named ‘database’:
$ nixos-container create database
The configuration of the container is stored in
/var/lib/containers/<name>/etc/nixos/configuration.nix. After editing
the configuration, you can make the changes take effect by doing
$ nixos-container update database
The container can also be destroyed:
$ nixos-container destroy database
Containers are now executed using a template unit,
‘container@.service’, so the unit in this example would be
‘container@database.service’.
For example, the following sets up a container named ‘foo’. The
container will have a single network interface eth0, with IP address
10.231.136.2. The host will have an interface c-foo with IP address
10.231.136.1.
systemd.containers.foo =
  { privateNetwork = true;
    hostAddress = "10.231.136.1";
    localAddress = "10.231.136.2";
    config =
      { services.openssh.enable = true; };
  };
With ‘privateNetwork = true’, the container has the CAP_NET_ADMIN
capability, allowing it to do arbitrary network configuration, such as
setting up firewall rules. This is secure because it cannot touch the
interfaces of the host.
The helper program ‘run-in-netns’ is needed at the moment because ‘ip
netns exec’ doesn't quite do the right thing (it remounts /sys without
bind-mounting the original /sys/fs/cgroup).
These are stored on the host in
/nix/var/nix/{profiles,gcroots}/per-container/<container-name> to
ensure that container profiles/roots are not garbage-collected.
On the host, you can run
$ socat unix:<path-to-container>/var/lib/login.socket -,echo=0,raw
to get a login prompt. So this allows logging in even if the
container has no SSH access enabled.
You can also do
$ socat unix:<path-to-container>/var/lib/root-shell.socket -
to get a plain root shell. (This socket is only accessible by root,
obviously.) This makes it easy to execute commands in the container,
e.g.
$ echo reboot | socat unix:<path-to-container>/var/lib/root-shell.socket -
It is parameterized by a function that takes a name and evaluates to the
option type for the attribute of that name. Together with
submoduleWithExtraArgs, this subsumes nixosSubmodule.
This is a rather large commit that switches user/group creation from using
useradd/groupadd on activation to just generating the contents of /etc/passwd
and /etc/group, and then on activation merging the generated files with the
files that exist in the system. This makes the user activation process much
cleaner, in my opinion.
The users.extraUsers.<user>.uid and users.extraGroups.<group>.gid options
must all be properly defined (if <user>.createUser is true, which it is by
default). My pull request adds a lot of uids/gids to config.ids to solve
this problem for existing NixOS services, but there might be configurations
that break because of this change. However, this will be discovered during
the build.
Option changes introduced by this commit:
* Remove the options <user>.isSystemUser and <user>.isAlias since
they don't make sense when generating /etc/passwd statically.
* Add <group>.members as a complement to <user>.extraGroups.
* Add <user>.passwordFile for setting a user's password from an encrypted
(shadow-style) file.
* Add users.mutableUsers, which is true by default. This means you can keep
managing your users as before, by using useradd/groupadd manually. This is
accomplished by merging the generated passwd/group files with the existing
files in /etc on system activation. The merging of the files is simplistic:
it just looks at the user/group names. If a user/group exists both on the
system and in the generated files, the system entry is kept unchanged and
the generated entry is ignored. The merging itself is performed with the
help of vipw/vigr to properly lock the account files during editing.
If mutableUsers is set to false, the generated passwd and group files will
not be merged with the system files on activation. Instead, they will simply
replace the system files, overwriting any changes made on the running
system. The same logic holds for user passwords: if the <user>.password or
<user>.passwordFile options are used and mutableUsers is false, passwords
will simply be replaced on activation. If it is true, the initial user
passwords will be set according to the configuration, but existing passwords
will not be touched.
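A hedged sketch of a configuration using the new options (the user name,
uid/gid values, and file path are made up for illustration):

  users.mutableUsers = false;  # generated files replace /etc/passwd and /etc/group

  users.extraUsers.alice = {
    uid = 1000;                # uids must now be defined explicitly
    group = "users";
    extraGroups = [ "wheel" ];
    # Encrypted (shadow-style) password read from a file:
    passwordFile = "/etc/nixos/alice.passwd";
  };

  users.extraGroups.media = {
    gid = 1001;                # gids must be defined as well
    members = [ "alice" ];     # the new complement to <user>.extraGroups
  };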
I have tested this on a couple of different systems and it seems to work fine
so far. If you think this is a good idea, please test it. This way of adding
local users has been discussed in issue #103 (and this commit solves that
issue).