now kills its process group when it exits. Without setsid, this
ends up killing the parent (i.e., the builder).
* Use port 445 instead of 139 because the CIFS kernel module tries
port 445 first. If an actual Samba server is running on the host,
the guest would end up connecting to it instead of our own and fail.
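For illustration, a hedged sketch of the guest-side mount (host
address and share name hypothetical):
$ mount -t cifs //10.0.2.4/store /nix/store -o port=445,guest,ro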
svn path=/nixos/trunk/; revision=25016
- Added a backdoor option to the interactive run-vms script. This allows me to integrate the virtual network approach with Disnix
- Small documentation fixes
Some explanation:
The nixos-build-vms command line tool can be used to build a virtual network from a network.nix specification.
For example, a network configuration (network.nix) could look like this:
{
  test1 =
    {pkgs, config, ...}:
    {
      services.openssh.enable = true;
      ...
    };

  test2 =
    {pkgs, config, ...}:
    {
      services.openssh.enable = true;
      services.xserver.enable = true;
    };
}
By typing the following instruction:
$ nixos-build-vms -n network.nix
a virtual network is built, which can be started by typing:
$ ./result/bin/run-vms
It is also possible to enable a backdoor. In this case, *.socket files are stored in the current
directory, which the end user can use to invoke remote instructions on a VM in the network through
a Unix domain socket.
For example, after building the network with:
$ nixos-build-vms -n network.nix --use-backdoor
and launching the virtual network:
$ ./result/bin/run-vms
you will find two socket files in your current directory: test1.socket and test2.socket.
These Unix domain sockets can be used to remotely administer the test1 and test2 machines
in the virtual network.
For example, by running:
$ socat ./test1.socket stdio
ls /root
you can retrieve the contents of the /root directory of the virtual machine with identifier test1.
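The socket can also be driven non-interactively. A sketch, assuming
the backdoor simply executes shell commands received over the socket:
$ echo "ls /root" | socat - ./test1.socket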
svn path=/nixos/trunk/; revision=24410
init script. This removes the need for the `systemConfig' boot
parameter; `init=<stage-2-init>' is enough. However, the GRUB menu
builder still needs to add `systemConfig' to the kernel command line
for compatibility with old configurations.
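For illustration, a hypothetical GRUB kernel line carrying both
parameters (store paths are placeholders):
kernel /nix/store/<hash>-linux/bzImage init=/nix/store/<hash>-nixos/init systemConfig=/nix/store/<hash>-nixos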
svn path=/nixos/trunk/; revision=23775
like `build-vm', but boots using the regular boot loader (i.e. GRUB
1 or 2) rather than booting directly from the kernel/initrd. Thus
it allows testing of GRUB.
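A usage sketch, assuming the subcommand follows the `build-vm'
naming:
$ nixos-rebuild build-vm-with-bootloader
$ ./result/bin/run-*-vm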
svn path=/nixos/trunk/; revision=23747
higher than 800x600 work.
* Add a "Monitor" statement to the "Screen" section, because otherwise
the Monitor section is ignored.
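A minimal sketch of the resulting xorg.conf fragment (identifiers
hypothetical):
Section "Monitor"
  Identifier "Monitor[0]"
EndSection

Section "Screen"
  Identifier "Screen[0]"
  Device     "Device[0]"
  Monitor    "Monitor[0]"
EndSection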
svn path=/nixos/trunk/; revision=23068
guest connect to a Unix domain socket on the host rather than the
other way around. The former is a QEMU feature (guestfwd to a
socket) while the latter requires a patch (which we can now get rid
of).
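A hedged sketch of the QEMU flag in question (guest address, port
and socket path hypothetical):
$ qemu-kvm -net 'user,guestfwd=tcp:10.0.2.4:445-cmd:socat stdio unix-connect:./samba.socket' ...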
svn path=/nixos/branches/boot-order/; revision=22331
caching. This makes a huge performance difference (e.g. from 4 MB/s
`dd' throughput to 140 MB/s on the Hydra machines). As the QEMU
manual says: "Some block drivers perform badly with
‘cache=writethrough’, most notably, qcow2."
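A sketch of the corresponding -drive flag (image name hypothetical):
-drive file=./machine.qcow2,if=virtio,cache=writeback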
svn path=/nixos/branches/boot-order/; revision=22248
When starting multiple VMs, some will have perfectly synchronised
clocks, while others will have their clocks run much slower (say, a
factor of 5).
svn path=/nixos/branches/boot-order/; revision=22195
to use the standard (coreutils) tools.
* Use util-linux's `switch_root' to switch over to the target root
FS. It automatically moves the /dev, /proc and /sys mounts over
from stage 1, so stage 2 doesn't need to set them up again.
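A sketch of the handover at the end of stage 1 (paths hypothetical):
exec switch_root /mnt-root /nix/store/<hash>-nixos/init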
svn path=/nixos/trunk/; revision=22085
machine can now declare an option `virtualisation.vlans' that causes
it to have network interfaces connected to each listed virtual
network. For instance,
virtualisation.vlans = [ 1 2 ];
causes the machine to have two interfaces (in addition to eth0, used
by the test driver to control the machine): eth1 connected to
network 1 with IP address 192.168.1.<i>, and eth2 connected to
network 2 with address 192.168.2.<i> (where <i> is the index of the
machine in the `nodes' attribute set). On the other hand,
virtualisation.vlans = [ 2 ];
causes the machine to only have an eth1 connected to network 2 with
address 192.168.2.<i>. So each virtual network <n> is assigned the
IP range 192.168.<n>.0/24.
Each virtual network is implemented using a separate multicast
address on the host, so guests really cannot talk to networks to
which they are not connected.
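For instance, a sketch of a two-machine `nodes' set (machine names
hypothetical): router gets eth1 on network 1 and eth2 on network 2,
while client only gets eth1 on network 2.
  {
    router =
      { config, pkgs, ... }:
      { virtualisation.vlans = [ 1 2 ]; };

    client =
      { config, pkgs, ... }:
      { virtualisation.vlans = [ 2 ]; };
  }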
* Added a simple NAT test to demonstrate this.
* Added an option `virtualisation.qemu.options' to specify QEMU
command-line options. Used to factor out some commonality between
the test driver script and the interactive test script.
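A sketch of the new option in use (assuming it takes a list of
strings; the value is hypothetical):
  virtualisation.qemu.options = [ "-vga std" ];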
svn path=/nixos/trunk/; revision=21928
its default behaviour is to stop the emulator (i.e. suspend the VM).
For automated tests, this is bad, because it makes the VM appear to
hang without any error message. The "werror=report" flag causes
QEMU to report the problem to the VM. As a side effect QEMU exits
very elegantly:
[ 2.308668] end_request: I/O error, dev vda, sector 534400
[ 2.309611] Buffer I/O error on device vda, logical block 66800
...
*** glibc detected *** /nix/store/yhngqrww53j0aw7z7v4bv948x5g5fc3d-qemu-kvm-0.12.1.2/bin/qemu-system-x86_64: double free or corruption (!prev): 0x08e3e040 ***
Aborted
So I guess we now depend on a bug in QEMU :-)
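For reference, a sketch of how the flag would appear on the -drive
option (image name hypothetical):
-drive file=./machine.qcow2,if=virtio,werror=report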
svn path=/nixos/trunk/; revision=19703
modules that should be added to the initrd, but should only be
loaded on demand (e.g. by the kernel or by udev). This is
especially useful in the installation CD, where we now only load the
modules needed by the hardware.
* Enable automatic modprobing by udev in the initrd.
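A hypothetical configuration fragment (option name and module list
assumed):
  boot.initrd.availableKernelModules = [ "ehci_hcd" "usb_storage" ];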
svn path=/nixos/trunk/; revision=18975
statically configured interface (i.e. we're not running dhclient).
Otherwise the ntpd job won't be triggered.
* Use the "-n" flag of "initctl emit" to send the event
asynchronously.
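A sketch of the asynchronous emit (event name hypothetical):
$ initctl emit -n ip-up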
svn path=/nixos/branches/upstart-0.6/; revision=18227
driver (in services.xserver.videoDriver), the X server is now given
a set of drivers, and will use PCI ids to find the right one.
The only problem is that the choice of OpenGL driver (the
/var/run/opengl-driver symlink) depends on what driver is selected
at runtime (i.e. the NVIDIA implementation for "nvidia", and Mesa
for all other drivers). However, this isn't a big problem right now
since "nvidia" isn't included in the default set of drivers anyway
for legal reasons.
* `services.xserver.resolutions' now defaults to [], meaning that the
X server should figure out the desired resolution(s) itself.
Likewise, `services.xserver.defaultDepth' defaults to 0 to let the X
server figure it out.
* Removed some options from xorg.conf that no longer appear needed
("Composite" and the DRI "Mode").
svn path=/nixos/trunk/; revision=18176
machine containing a replica (minus the state) of the system
configuration. This is mostly useful for testing configuration
changes prior to doing an actual "nixos-rebuild switch" (or even
"nixos-rebuild test"). The VM can be started as follows:
$ nixos-rebuild build-vm
$ ./result/bin/run-*-vm
which starts a KVM/QEMU instance. Additional QEMU options can be
passed through the QEMU_OPTS environment variable
(e.g. QEMU_OPTS="-redir tcp:8080::80" to forward a host port to the
guest). The fileSystems attribute of the regular system
configuration is ignored (using mkOverride), because obviously we
can't allow the VM to access the host's block devices. Instead, at
startup the VM creates an empty disk image in ./<hostname>.qcow2 to
store the VM's root filesystem.
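As a concrete QEMU_OPTS example, forwarding a host port to the
guest's SSH daemon (port numbers hypothetical):
$ QEMU_OPTS="-redir tcp:2222::22" ./result/bin/run-*-vm
$ ssh -p 2222 root@localhost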
Building a VM in this way is efficient because the VM shares its Nix
store with the host (through a CIFS mount). However, because the
Nix store of the host is mounted read-only in the guest, you cannot
run Nix build actions inside the VM. Therefore the VM can only be
reconfigured by re-running "nixos-rebuild build-vm" on the host and
restarting the VM.
svn path=/nixos/trunk/; revision=16662
broken httpd.conf to be generated. We should really have a merge
function that appends newlines to every value of options like
services.httpd.extraConfig.
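A sketch of what such a merge function might look like (hypothetical;
assumes the definitions arrive as a list of strings):
  merge = defs: lib.concatMapStrings (def: def + "\n") defs;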
svn path=/nixos/branches/modular-nixos/; revision=16404