Wednesday, April 10, 2013

Setting up a Hydra build cluster for continuous integration and testing (part 2)

In the previous blog post, I described Hydra -- a Nix-based continuous integration server -- and gave an incomplete tour of its features.

In order to be able to use Hydra, we have to set up a build cluster consisting of one or more build machines. In this blog post, I will describe how I have set up a Hydra cluster of three machines.

Prerequisites


To set up a cluster, we need two kinds of machines:

  • We need a build coordinator with the Nix package manager installed, running the three Hydra components: the evaluator, the queue runner and the server. Hydra can be installed on any Linux distribution, but it's more convenient to use NixOS, as it provides all the configuration steps as a NixOS module. Other distributions require more manual installation and configuration steps.
  • We need one or more build slaves, or we have to use the coordinator machine itself as a build slave. Various types of build slaves can be used, such as machines running Linux, Mac OS X, FreeBSD and Cygwin.

I have set up three machines: a Linux coordinator, a Linux build slave and a Mac OS X build slave, which I will describe in the next sections.

Installing a NixOS build slave


To set up a build slave (regardless of the operating system that we want to use), we have to install two system services -- the Nix package manager and an OpenSSH server, so that the machine can be remotely accessed from the build coordinator.

Setting up a Linux build slave running NixOS is straightforward. I have used the NixOS Live CD to install NixOS. After booting from the Live CD, I first had to configure my hard drive partitions:
$ fdisk /dev/sda
I have created a swap partition (/dev/sda1) and root partition (/dev/sda2). Then I had to initialize the filesystems:
$ mkswap -L nixosswap /dev/sda1
$ mke2fs -j -L nixos /dev/sda2
Then I had to mount the root partition on /mnt:
$ mount LABEL=nixos /mnt
Then I created a NixOS configuration.nix file and stored it in /mnt/etc/nixos/configuration.nix. My NixOS configuration looks roughly as follows:
{pkgs, ...}:

{
  boot.initrd.kernelModules = [ "uhci_hcd" "ehci_hcd" "ata_piix" ];

  nix.maxJobs = 2;
    
  boot.loader.grub.enable = true;
  boot.loader.grub.version = 2;
  boot.loader.grub.device = "/dev/sda";

  networking.hostName = "i686linux";

  fileSystems."/".device = "/dev/disk/by-label/nixos";

  swapDevices =
    [ { device = "/dev/disk/by-label/nixosswap"; } ];

  services.openssh.enable = true;
}

By running the following instruction, NixOS gets installed. It takes care of downloading and installing all required packages and composing the entire system configuration:
$ nixos-install
After the installation has succeeded, we can reboot the machine and boot into our freshly installed NixOS system. The new installation has a root user account without any password. Therefore, it's smart to change the root password to something that's slightly more difficult to guess:
$ passwd
And then the installation is done :-). However, apart from the steps that I have described, it may also be convenient to add a non-privileged user account and install some administration tools.
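For example, a non-privileged user account and a few tools can be added declaratively to configuration.nix with settings along the following lines (a sketch, not literally my configuration; the user name and the package selection are just placeholders):
users.extraUsers.sander = {
  description = "Sander";
  group = "users";
  extraGroups = [ "wheel" ];
  home = "/home/sander";
  createHome = true;
  useDefaultShell = true;
};

environment.systemPackages = [ pkgs.htop pkgs.wget ];
These settings become active after running nixos-rebuild switch.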

Upgrading the NixOS installation can be done by running:
$ nixos-rebuild --upgrade switch
The above command-line instruction fetches the latest channel expressions and manifests containing the latest releases of the packages, and then rebuilds the entire system configuration and finally activates it.

As a sidenote: we can also use ordinary Linux distributions as build slaves, but this requires more manual installation and configuration, especially if you want to use Nix's more advanced features, such as multi-user builds. Moreover, since NixOS is almost a "pure" system, it reduces the chance of side effects, which is harder to guarantee with conventional Linux distributions.

Installing a Mac OS X build slave


Mac OS X was already pre-installed on our Mac machine, so I only had to set up a user account and perform some basic settings.

On the Mac OS X machine, I had to install the Nix package manager manually. To do this, I have obtained the Nix bootstrap binaries for x86_64-darwin (the system architecture identifier for a 64-bit Mac OS X machine) and installed them by running the following commands on the terminal:
$ sudo -i
# cd /
# tar xfvj /Users/sander/Downloads/nix-1.5.1-x86_64-darwin.tar.bz2
# chown -R sander /nix
# exit
$ nix-finish-install
By running the above command-line instructions, the /nix directory has been set up, containing a Nix store with the Nix package manager and all its dependencies. The last command, nix-finish-install, takes care of initializing the Nix database. We run this command as an ordinary user, since we don't want to use the superuser for Nix installations.

If we add a Nix package to the user's profile, we also want it to be in the user's PATH, so that we can start a program without specifying its full path. I have appended the following line to both ~/.profile and ~/.bashrc:
$ cat >> ~/.profile <<EOF
source $HOME/.nix-profile/etc/profile.d/nix.sh
EOF
By adding the above code fragment to the user's shell profile, the Nix profile is appended to PATH allowing you to conveniently launch packages without specifying their full path including their hash-codes.

You may wonder why this line needs to be added to both .profile and .bashrc. The former covers login shells, e.g. when launching a terminal from the Mac OS X desktop. The latter is needed for non-login shells. If the Hydra coordinator remotely executes a command-line instruction through SSH, the shell is a non-login shell. If we don't add this line to .bashrc, then we're unable to run the Nix package manager, because it's not in the PATH.
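For completeness, the same line can be appended to ~/.bashrc in exactly the same way:
$ cat >> ~/.bashrc <<EOF
source $HOME/.nix-profile/etc/profile.d/nix.sh
EOF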

After performing the above steps, we can run a simple sanity check to see if the Nix package manager works as expected. The following instructions add a Nixpkgs channel and fetch the latest expressions and manifests:
$ nix-channel --add http://nixos.org/channels/nixpkgs-unstable
$ nix-channel --update
After updating the Nix channel, we should be able to install a Nix package into our profile, such as GNU Hello:
$ nix-env -i hello
And we should be able to run it from the command-line as the user's profile is supposed to be in our PATH:
$ hello
Hello, world!
After installing the Nix package manager, some other steps may be desirable. In order to build iOS apps or Mac OS X applications, Apple's Xcode needs to be installed, which must be done through Apple's App Store. We cannot use Nix for this purpose, unfortunately. I have given some instructions in a previous blog post about building iOS apps with the Nix package manager.

It may also be desirable to build OpenGL applications for Mac OS X. To make this possible, you need to manually install XQuartz first.

Finally, to be able to use the Mac OS X machine as a build slave, we need to configure two other things. First, we must enable the SSH server so that the build machine can be remotely invoked from the coordinator machine. We need to open Mac OS X's system preferences for this, which can be found by clicking on the Apple logo and picking 'System preferences':


The System preferences screen looks as follows:


By picking the 'Sharing' icon we can configure various services that make the machine remotely accessible:

As can be observed from the above screenshot, we can enable remote SSH access by enabling the 'Remote login' option. Furthermore, we must set the hostname to something that we can remember.
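As far as I know, the hostname can also be set from the terminal with the scutil command, which may be convenient when configuring the machine remotely (a sketch; 'macosx' is the hostname I use in the coordinator configuration later on):
$ sudo scutil --set ComputerName macosx
$ sudo scutil --set LocalHostName macosx
$ sudo scutil --set HostName macosx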

Another issue is that we need to turn some power management settings off, because otherwise the Mac machine will go into standby after a while and cannot be used to perform builds. Power management settings can be adapted by picking 'Energy saver' from the System preferences screen, which will show you the following:

I have set the 'Computer sleep' time to 'Never' and I've disabled putting the hard disks to sleep.
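These settings can presumably also be adapted from the terminal with the pmset command (a sketch; it disables computer sleep and disk sleep for all power sources):
$ sudo pmset -a sleep 0 disksleep 0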

Setting up the NixOS build coordinator machine


Then comes the most complicated part -- setting up the build coordinator machine. First, I performed a basic NixOS installation, whose procedure is exactly the same as for the NixOS build slave described earlier. After performing the basic installation, I have adapted its configuration to turn it into a build coordinator.

Since Hydra is not part of the standard NixOS distribution, we have to obtain it ourselves from Git and store the code in a directory on the filesystem (such as the /root folder):
$ git clone https://github.com/NixOS/hydra.git
Then I have extended the machine's configuration, by adding a number of settings to the attribute set body of /etc/nixos/configuration.nix:

  • To be able to use Hydra's NixOS configuration properties, we must include the Hydra NixOS module:
    require = [ /root/hydra/hydra-module.nix ];
    
  • We must enable the Hydra server and configure some of its mandatory and optional properties:
    services.hydra = {
      enable = true;
      package = (import /root/hydra/release.nix {}).build {
        system = pkgs.stdenv.system;
      };
      logo = ./logo.png;
      dbi = "dbi:Pg:dbname=hydra;host=localhost;user=hydra;";
      hydraURL = "http://nixos";
      notificationSender = "yes@itsme.com";
    };
    

    In the above code fragment, package refers to the actual Hydra package. The logo is an optional parameter that can be used to show a logo in the web front-end's header. dbi is a Perl DBI database connection string, configured to make a localhost connection to a PostgreSQL database named hydra, using the hydra user. hydraURL contains the URL of the web front-end, and notificationSender contains the administrator's e-mail address.
  • In order to be able to delegate builds to build slaves for scalability and portability, we have to enable Nix's distributed builds feature:
    nix.distributedBuilds = true;
    nix.buildMachines = [
      { hostName = "i686linux";
        maxJobs = 2;
        sshKey = "/root/.ssh/id_buildfarm";
        sshUser = "root";
        system = "i686-linux";
      }
      
      { hostName = "macosx";
        maxJobs = 2;
        sshKey = "/root/.ssh/id_buildfarm";
        sshUser = "sander";
        system = "x86_64-darwin";
      }
    ];
    
    The above code fragment allows us to delegate 32-bit Linux builds to the NixOS build slave and 64-bit Mac OS X builds to the Mac OS X machine.

  • Hydra needs to store its data, such as projects, jobsets and builds, in a database. For production use it's recommended to use PostgreSQL, which can be enabled by adding the following line to the configuration:
    services.postgresql.enable = true;
    
  • The Hydra server runs its own small webserver on TCP port 3000. In production environments, it's better to put a reverse proxy in front of it. We can do this by adding the following Apache HTTP server configuration settings:
    services.httpd = {
      enable = true;
      adminAddr = "yes@itsme.com";
          
      extraConfig = ''
        <Proxy *>
        Order deny,allow
        Allow from all
        </Proxy>
            
        ProxyRequests     Off
        ProxyPreserveHost On
        ProxyPass         /    http://localhost:3000/ retry=5 disablereuse=on
        ProxyPassReverse  /    http://localhost:3000/
      '';
    };
    
  • To allow e-mail notifications to be sent, we must configure a default mail-server. For example, the following does direct delivery through sendmail:
    networking.defaultMailServer = {
      directDelivery = true;
      hostName = "nixos";
      domain = "nixos.local";
    };
    
  • As you may know from earlier blog posts, Nix always stores versions of components next to each other, and components never get overwritten or removed automatically. At some point we may run out of disk space. Therefore, it's a good idea to enable garbage collection:
    nix.gc = {
      automatic = true;
      dates = "15 03 * * *";
    };
    
    services.cron = {
      enable = true;
          
      systemCronJobs =
        let
          gcRemote = { machine, gbFree ? 4, df ? "df" }:
            "15 03 * * *  root  ssh -x -i /root/.ssh/id_buildfarm ${machine} " +
            ''nix-store --gc --max-freed '$((${toString gbFree} * 1024**3 - 1024 * ''+
            ''$(${df} -P -k /nix/store | tail -n 1 | awk "{ print \$4 }")))' ''+
            ''> "/var/log/gc-${machine}.log" 2>&1'';
        in
        [ (gcRemote { machine = "root@i686linux"; gbFree = 50; })
          (gcRemote { machine = "sander@macosx"; gbFree = 50; })
        ];
    };
    

  • The nix.gc config attribute generates a cron job that runs the garbage collector at 3:15 AM every night. The services.cron configuration also remotely connects to the build slave machines and runs the garbage collector there, freeing disk space until at least the specified amount (50 GB in this case) is available in their Nix stores.

  • It may also be worth enabling some advanced features of Nix. For example, in our situation we have many large components that are very similar to each other, consuming a lot of disk space. It may be helpful to enable hard-link sharing, so that identical files are stored only once.

    Moreover, in the default configuration we also download substitutes from NixOS' Hydra build instance, so that we don't have to build the complete Nixpkgs collection ourselves. It may also be desirable to disable this and take full control, which is what we do below.

    Another interesting option is to enable chroot builds, reducing the chance of side effects even further:
    nix.extraOptions = ''
      auto-optimise-store = true
      build-use-substitutes = false
    '';
    nix.useChroot = true;
    
    The nix.conf manual page has more information about these extra options.


After adapting the coordinator's configuration.nix, we must activate it by running:
$ nixos-rebuild switch
The above command-line instruction downloads/installs all the required packages and generates all configuration files, such as the webserver and cron jobs.

After rebuilding, we don't have a working Hydra instance yet. We still have to set up its storage, by creating a PostgreSQL database and a hydra database user. To do this, we must perform the following instructions as the root user:
# createuser -S -D -R -P hydra
# createdb -O hydra hydra
By running the hydra-init job, we can set up its database schema or migrate it to a new version:
# start hydra-init
Then we must create a configuration file that allows the unprivileged hydra user to connect to the database:
# su hydra
$ echo "localhost:*:hydra:hydra:password" > ~/.pgpass
$ chmod 600 ~/.pgpass
The .pgpass file contains the hostname, port, database, username and password; the last field must, of course, be replaced by the hydra user's real password.

We also need to set up a user account, as Hydra's user database is completely empty. The following command creates an administration user named 'root' with password 'foobar':

$ hydra-create-user root --password foobar --role admin
And finally, we can start the three Hydra processes, which allows us to use Hydra and to access the web front-end:
$ exit
# start hydra-{evaluator,queue-runner,server}

Setting up connections


We now have a running Hydra instance, but there is still one detail missing. In order to allow the coordinator to connect to the build slaves, we need SSH keys without passphrases, allowing the coordinator to connect automatically. Generating an SSH keypair can be done as follows:
$ ssh-keygen -t rsa
The above command asks you a couple of questions. Keep in mind that we should not specify a passphrase.
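Alternatively, the destination file and the empty passphrase can be passed on the command line directly (a sketch):
$ ssh-keygen -t rsa -f /root/.ssh/id_buildfarm -N ""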

Assuming that we have called the file id_buildfarm, we now have two files: a private key called id_buildfarm and a public key called id_buildfarm.pub. We must copy the public key to all build slaves and run the following instruction on each slave machine, under the user that performs the builds (which is root on the Linux machine and sander on the Mac OS X machine):
$ cat id_buildfarm.pub >> ~/.ssh/authorized_keys
The above command adds the public key to a list of authorized keys, allowing the coordinator to connect to it with the private key.

After installing the public keys, we can try connecting to the build slaves from the coordinator through the private key, by running the following command as root user:
$ ssh -i ~/.ssh/id_buildfarm root@i686linux
If we run the above command, we should be able to connect to the machine without being asked for any credentials. Moreover, the first time that you connect to a machine, its host key is added to the known_hosts list, which is necessary because otherwise we won't be able to connect.
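To avoid the interactive host key prompt altogether, the host keys can presumably also be collected in advance with ssh-keyscan (a sketch; the hostnames are the ones configured earlier):
$ ssh-keyscan i686linux macosx >> /root/.ssh/known_hosts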

Another issue that I have encountered with the Mac OS X machine is that it may stall connections after no input has been received from the coordinator for a while. To remedy this, I added the following lines to the SSH config of the root user on the coordinator machine:
$ cat > /root/.ssh/config <<EOF
ServerAliveCountMax 3
ServerAliveInterval 60
EOF

Conclusion


In this blog post, I have described the steps that I have performed to set up a cluster of three Hydra machines, consisting of two Linux machines and one Mac OS X machine. To get an impression of how Hydra can be used, I recommend reading my previous blog post.

In the past, I have also set up some more "exotic" build slaves, such as the three BSDs (FreeBSD, OpenBSD and NetBSD) and an OpenSolaris machine. To install these platforms, we can roughly repeat the procedure that I have used to install the Mac OS X build slave: first install the host OS itself, then the Nix package manager (bootstrap binaries are available for several platforms; otherwise you must do a source install), and then set up SSH.

Friday, April 5, 2013

Setting up a Hydra build cluster for continuous integration and testing (part 1)

Recently, I have set up a continuous integration facility at work, because we think that there is a need for such a facility.

While developing software systems, there are all kinds of changes that may (potentially) break something -- for example, committing a broken piece of source code to the master repository, or upgrading one of its dependencies, such as a library, to an incompatible version.

All these potentially breaking changes may result in a product that behaves incorrectly and that may not be delivered in time, because there is usually a very large time window in which it's not known whether the system still behaves correctly. In such cases, it is quite an effort to get all bugs fixed (because the effort of fixing every bug is cumulative) and to predict how long it will take to deliver a new version.

A solution for this is continuous integration (CI) or daily builds -- a practice in which all components of a software system are integrated at least once every day (preferably more often) and automatically verified by an automated build solution reporting about the impact of these changes as quickly as possible. There are many automated integration solutions available, such as Buildbot, Jenkins, Tinderbox, CruiseControl, and Hydra.

Hydra: the Nix-based continuous integration server


As some of you may know, I have written many articles about Nix and its applications on this blog. One of the applications that I have mentioned a few times, but didn't cover in detail, is Hydra, a Nix-based continuous integration facility.

For people that know me, it's obvious that I have picked Hydra, but apart from the fact that I'm familiar with it, there are a few other reasons to pick Hydra over other continuous integration solutions:

  • In Hydra, the build environments are managed, which is typically not the case with other CI solutions. In other CI solutions, dependencies happen to be already there and it's up to the system administrator and the tools that are invoked to ensure that they are present and correct. Since Hydra builds on top of Nix, it uses the Nix package manager to install all its required dependencies automatically and on-demand, such as compilers, linkers etc. Moreover, because of Nix's unique facilities, all dependencies are ensured to be present and package installations do not affect other packages, as they are stored safely in isolation from each other.

    Apart from building components, this is also extremely useful for system integration tests. The Nix package manager ensures that all the required prerequisites of a test are present and that they exactly conform to the versions that we need.

    Similarly, because in Nix every dependency is known, we can also safely garbage collect components that are no longer in use without having to worry that the components that are still in use are removed.
  • Hydra has strong support for specifying and managing variability. For us it's important to be able to build a certain package for many kinds of platforms and various versions of the same platform. We don't want to accidentally use a wrong version.

    Since Hydra uses Nix to perform builds, dependencies can only be found when they are specified. For example, if we insist on using an older version of the GCC compiler, then Nix ensures that only the older compiler version can be found. This can be done for any build-time dependency, such as the linker, the Java runtime environment, libraries etc.

    With other CI solutions it's much harder to ensure that an already installed version of e.g. the GCC compiler does not conflict with a different version, as it's harder to safely install two versions next to each other and to ensure that the right version is used. For example, if the default compiler in the PATH is GCC 4.7, how can we be absolutely sure that we use an older version without invoking any of the newer compiler's components?
  • Hydra also has good facilities to perform builds and tests on multiple operating systems. When the Nix package manager is requested to build something, it always takes the requested architecture identifier into account. If the Nix package manager is unable to perform a build for the requested architecture on a given machine, it can delegate the build to another machine capable of building it, giving the caller the impression that the build is performed locally.
  • Another interesting feature is scalability. Since every build is performed by the Nix package manager, which invokes functions that are referentially transparent (under normal circumstances) due to its underlying purely functional nature, we can easily divide the builds of a package and its dependencies among machines in a network.

Hydra components


Hydra is composed of several components. Obviously, Nix is one of them and does the majority of the work -- it's used to take job definitions specified in the Nix expression language and to execute them in a reliable and reproducible manner (which is analogous to building packages through Nix). Moreover, it can delegate job executions to machines in a network.

Besides Nix, Hydra consists of three other major components:

  • The evaluator (also known as the scheduler) regularly checks out the configured source code repositories (e.g. managed by a VCS, such as Git or Subversion) and evaluates the Nix expressions that define jobsets. Each derivation in a Nix expression (in many cases a direct or indirect function invocation of stdenv.mkDerivation) gets appended to the build queue. Every derivation corresponds to a job.
  • The queue runner builds the derivations that are queued by the evaluator through the Nix package manager.
  • The server component is a web application providing end-users a pretty interface which they can use to configure projects and jobsets and inspect the status of the builds.

Usage: Projects


The following screenshot shows the entry page of the Hydra web front-end, which gives an overview of projects. At the highest level, every job that Hydra executes belongs to a certain project:

Administrators are free to choose any name and description of a project. Projects can be created, edited and deleted through the Hydra web interface. In this example, I have defined four projects: Disnix (a tool part of the Nix project which I have developed), the Nix android test case, the Nix xcode test case, and KitchenSink -- a showcase example for the Titanium SDK, a cross-platform mobile development kit.

Clicking on a project will show you its jobsets.

Usage: Jobsets


Every project contains zero or more jobsets. A jobset contains one or more jobs that execute something, typically a build or a test case. The following screenshot shows you the jobsets that I have defined for the nix-androidenvtests project:


As we can see, we have defined only one jobset, which regularly checks out the master Git repository of the example case and builds it. Jobsets can be defined to build from any source code repository and can have any name. Quite frequently, I use jobsets to define sub projects or to make a distinction between branches of the same repository, e.g. it may be desirable to execute the same jobs on a specific experimental branch of a source code repository.

In the Disnix project, for example, I define jobsets for every sub project:


Besides a listing of jobsets, the screen also shows their statuses. The green icons indicate how many jobs have succeeded, the red icons show the number of jobs that have failed, and the grey icons show the number of jobs that are queued and still have to be executed.

To create jobsets, we have to perform two steps. First, we must define a jobset Nix expression and store it in a source code repository, such as a Git repository or on the local filesystem. The nix-androidenvtests-master jobset expression looks as follows:

{nixpkgs, system}:

let
  pkgs = import nixpkgs { inherit system; };
in
rec {
  myfirstapp_debug = import ./myfirstapp {
    inherit (pkgs) androidenv;
    release = false;
  };
  
  myfirstapp_release = import ./myfirstapp {
    inherit (pkgs) androidenv;
    release = true;
  };
  
  emulate_myfirstapp_debug = import ./emulate-myfirstapp {
    inherit (pkgs) androidenv;
    myfirstapp = myfirstapp_debug;
  };
  
  emulate_myfirstapp_release = import ./emulate-myfirstapp {
    inherit (pkgs) androidenv;
    myfirstapp = myfirstapp_release;
  };
}

The above Nix expression defines the nix-androidenvtests-master jobset, shown in the first jobsets screenshot. A jobset Nix expression is a function returning an attribute set, in which each attribute defines a job. Every job is either a direct invocation of a function that builds/executes something, or a function that takes the dependencies of the given job and returns a function invocation that builds/executes something.

Functions in a jobset expression are used to define its variation points. For example, in our above expression we have only two of them -- the nixpkgs parameter refers to a checkout of the Nixpkgs repository, which contains a collection of common packages, such as compilers, linkers, libraries, end-user software etc. The system parameter refers to the system architecture that we want to build for, which can be (for example) 32-bit Linux machines, 64-bit Linux machines, 64-bit Mac OS X and so on.

Nix expressions defining packages and Nix expressions defining jobsets are quite similar. Both define a function (or functions) that build something from given function arguments. Neither can be used to build something directly; we must also compose the builds by calling these functions with the right arguments. Function arguments specify (for example) which versions of packages we want to use and for what kind of system architectures we want to build. For ordinary Nix packages, this is done in the all-packages.nix composition expression. For jobsets, we use the Hydra web interface to configure them.
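To illustrate the analogy: outside Hydra, we could compose the jobset expression shown earlier ourselves with an ordinary Nix expression along the following lines (a hypothetical sketch, evaluated from the root of the testcase's checkout; the Nixpkgs path is a placeholder for what Hydra provides as a build input):

let
  jobs = import ./deployment {
    nixpkgs = /path/to/nixpkgs-checkout;
    system = "x86_64-linux";
  };
in
jobs.myfirstapp_release

Building the selected job would then simply be a matter of passing this expression to nix-build. Hydra essentially automates this composition step based on the configured build inputs.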

In the following screenshot, we configure the nix-androidenvtests jobset (done by picking the 'Create jobset' function from the Projects menu) and its parameters through the Hydra web interface:

As can be observed from the screenshot, we configure the nix-androidenvtests jobset as follows:

  • The identifier is a unique name identifying the jobset, and a description can be anything we want.
  • The Nix expression field defines which jobset Nix expression file we must use and in which input it is stored. The Nix expression that we have just shown is stored inside the Nix android tests GitHub repository; its relative path is deployment/default.nix. The input that refers to this Git checkout is named nixandroidenvtests, and it is declared later in the build inputs section.
  • We can configure e-mail notifications so that you will be informed if a certain build succeeds or fails.
  • We can also specify how many successful builds we want to keep. Older builds or failed builds will be automatically garbage collected once in a while.
  • In the remaining section of the screen, we can configure the jobset's build inputs, which basically provide parameters to the functions that we have seen in the jobset Nix expression.

    The first build input is a Git checkout of the master branch of the Android testcase. As we have seen earlier, this build input provides the Nix jobset expression. The second parameter provides a checkout of the Nixpkgs repository. In our case it's configured to take the latest checkout of the master branch, but we can also configure it to take e.g. a specific branch. The third parameter specifies for which platforms we want to build. We have specified three values: 32-bit Linux (i686-linux), 64-bit Linux (x86_64-linux), and 64-bit Mac OS X (x86_64-darwin). If multiple values are specified for a specific input, Hydra will evaluate the Cartesian product of all inputs, meaning that (in this case) we build the same jobs for 32-bit Linux, 64-bit Linux and 64-bit Mac OS X simultaneously.

The Android testcase is a relatively simple project with simple parameters. Hydra has many more available options. The following screenshot shows the jobset configuration of disnix-trunk:


In the above configuration, apart from strings and VCS checkouts, we use a number of other interesting parameter types:

  • The disnix jobset contains a job named tarball that checks out the source code and packages it in a compressed tarball, which is useful for source releases. To build the binary package, we use the tarball instead of the checkout directly.

    We can reuse the output of an existing Hydra job (named tarball) by setting a build input type to: 'Build output'. The build output is allowed to be of any platform type, which is not a problem as the tarball should always be the same.
  • We can also refer to a build of a different jobset, which we have done for disnix_activation_scripts, which is a dependency of Disnix. Since we have to run that package, it must be built for the same system architecture, which can be enforced by setting the type to: 'Build output (same system)'.

The queue


If we have defined a jobset and it's enabled, then the evaluator should regularly check it, and its corresponding jobs should appear in the queue view (which can be accessed from the menu by picking Status -> Queue):

Inspecting job statuses


To inspect the build results, we can pick a jobset and open the 'Job status' tab. The following image shows the results of the nix-androidenvtests-master jobset:


As can be observed from the screenshot, we have executed its jobs on three platforms and all the builds seem to have succeeded.

Obtaining build products


In addition to building jobs and running test cases, it is also desirable to obtain the build artifacts produced by Hydra, so that we can run or inspect them locally. There are various ways to do this. The most obvious way is to click on a build result, which will redirect you to its status page.

The following page shows the result of the emulate_myfirstapp_release job, which produces a script that starts the Android emulator running the release version of the demo app:

By taking the URL to which the one-click install link points and running the following instruction (Nix is required to be present on that machine):

$ nix-install-package --url http://nixos/build/169/nix/pkg/MyFirstApp-x86_64-linux.nixpkg

The package gets downloaded and imported into the Nix store of the machine and can be accessed under the same Nix store path as the status page shows (which is /nix/store/z8zn57...-MyFirstApp). By running the following command-line instruction:

$ /nix/store/z8zn57...-MyFirstApp/bin/run-test-emulator
We can automatically launch an emulator instance running the recently built app.

For mobile apps (which are typically packaged inside an APK bundle for Android or an IPA bundle for iOS) or documents (such as PDF files), it may be a bit inconvenient to fetch and import the remote builds into the Nix store of the local machine in order to view or test them. Therefore, it is also possible to declare build products inside a build job.

For example, I have adapted the buildApp {} functions for Android and iOS apps, shown in my earlier blog posts, to declare the resulting bundle as a build product, by appending the following instructions to the build procedure:

mkdir -p $out/nix-support
echo "file binary-dist \"$(echo $out/*.apk)\"" \
    > $out/nix-support/hydra-build-products

The above shell commands add a nix-support/hydra-build-products file to the resulting package, which is a file that Hydra can parse. The first two columns define the type and subtype of the build product, and the third column specifies the path to the build product.

As a result, building an App with Hydra produces the following status page:

As can be observed, the page provides a convenient download link that allows us to download the APK file. Another interesting benefit is that I can use my Android phone to browse to this result page and to install it automatically, saving me a lot of effort. ;)

The most powerful mechanism to obtain builds is probably the Nix channel mechanism. Every project and jobset has a channel, whose info page can be accessed by selecting the 'Channel' option from the project or jobset menu. The following screenshot shows the Nix channel page for the nix-androidenvtests-master jobset:


The above page displays some instructions that users having the Nix package manager installed must follow. By adding the Nix channel and updating it, the user receives a collection of Nix expressions and a manifest with a list of binaries. By installing a package that's defined in the channel, the Nix package gets downloaded automatically and added to the user's Nix profile.
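In a nutshell, these instructions come down to something like the following (a sketch; the exact channel URL and package name must be taken from the channel page itself):
$ nix-channel --add http://nixos/jobset/<project>/<jobset>/channel/latest
$ nix-channel --update
$ nix-env -i MyFirstApp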

Experience


We have been using Hydra for a short time now at Conference Compass, primarily for producing mobile apps for Android and iPhone (which is obvious to people that know me). It's already showing us very promising results. To make this possible, I'm using the Nix functions that I have described in earlier blog posts.

I have also worked with the Hydra developers to make some small changes to make my life more convenient. I have contributed a patch that allows build products with spaces in them (as this is common for Apps), and I have adapted/fixed the included NixOS configuration module so that Hydra can be conveniently installed in NixOS using a reusable module. I'd like to thank the Hydra developers: Eelco Dolstra, Rob Vermaas, Ludovic Courtès, and Shea Levy for their support.

Conclusion


In this blog post, I have given an incomplete tour of Hydra's features with some examples I care about. Apart from the features that I have described, Hydra has a few more facilities, such as creating views and releases and various management facilities. The main reason for me to write this blog post is to keep the explanation that I have given to my colleagues as a reference.

This blog post describes Hydra from a user/developer perspective, but I also had to deploy a cluster of build machines, which is also interesting to report about. In the next blog post, I will describe what I did to set up a Hydra cluster.

References


For people that are eager to try Hydra: Hydra is part of the Nix project and available as free and open-source software under the GPLv3 license.

There are also a number of publications available about Nix buildfarms. The earliest prototype is described in Eelco Dolstra's PhD thesis. Furthermore, on Eelco's publications page a few papers about the Nix buildfarm can be found.

Another thing that may be interesting to read is an old blog post of mine titled: 'Using NixOS for declarative deployment and testing', describing how we can execute distributed system integration tests using NixOS. Hydra can be used to call NixOS' system integration test facilities. In fact, we already do this quite extensively to test the NixOS distribution itself. Moreover, Disnix also has a very comprehensive distributed integration test suite.