Monday, June 20, 2016

Using Disnix as a remote package deployer

Recently, I was asked whether it is possible to use Disnix as a tool for remote package deployment.

As described in a number of earlier blog posts, Disnix's primary purpose is not remote (or distributed) package management, but deploying systems that can be decomposed into services to networks of machines. To deploy these kinds of systems, Disnix executes all required deployment activities, including building services from source code, distributing them to target machines in the network and activating or deactivating them.

However, a service deployment process is basically a superset of an "ordinary" package deployment process. In this blog post, I will describe how we can do remote package deployment by instructing Disnix to only use a relevant subset of features.

Specifying packages as services


In the Nix packages collection, it is a common habit to write each package specification as a function in which the parameters denote the (local) build and runtime dependencies (something that Disnix's manual refers to as intra-dependencies) that the package needs. The remainder of the function describes how to build the package from source code and its provided dependencies.

Disnix has adopted this habit and extended this convention to services. The main difference between Nix package expressions and Disnix service expressions is that the latter also take inter-dependencies into account: run-time dependencies on services that may have been deployed to other machines in the network. For services that have no inter-dependencies, a Disnix expression is identical to an ordinary package expression.

This means that, for example, an expression for a package such as the Midnight Commander is also a valid Disnix service with no inter-dependencies:

{ stdenv, fetchurl, pkgconfig, glib, gpm, file, e2fsprogs
, libX11, libICE, perl, zip, unzip, gettext, slang
}:

stdenv.mkDerivation {
  name = "mc-4.8.12";
  
  src = fetchurl {
    url = http://www.midnight-commander.org/downloads/mc-4.8.12.tar.bz2;
    sha256 = "15lkwcis0labshq9k8c2fqdwv8az2c87qpdqwp5p31s8gb1gqm0h";
  };
  
  buildInputs = [ pkgconfig perl glib gpm slang zip unzip file gettext
      libX11 libICE e2fsprogs ];

  meta = {
    description = "File Manager and User Shell for the GNU Project";
    homepage = http://www.midnight-commander.org;
    license = "GPLv2+";
    maintainers = [ stdenv.lib.maintainers.sander ];
  };
}
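
For comparison, a service that does have inter-dependencies is written as a nested function: the first argument set declares the intra-dependencies and the second argument set the inter-dependencies. The fragment below is a hypothetical sketch (the webapp and database names are made up for illustration purposes; the exact structure of the inter-dependency arguments is described in the Disnix manual):

{stdenv}:
{database}:

stdenv.mkDerivation {
  name = "webapp";
  buildCommand = ''
    mkdir -p $out/etc
    # Record which database service this web application depends on. The
    # database argument also exposes the properties of the machine(s) the
    # database has been deployed to, so that the web application can reach it.
    echo "database=${database.name}" > $out/etc/settings.properties
  '';
}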

Composing packages locally


Package and service expressions are functions that do not specify the versions or variants of the dependencies that should be used. To allow services to be deployed, we must compose them by providing the desired versions or variants of the dependencies as function parameters.

As with ordinary Nix packages, Disnix has also adopted this convention for services. In addition, we have to compose a Disnix service twice -- first its intra-dependencies and later its inter-dependencies.

Intra-dependency composition in Disnix is done in a similar way as in the Nix packages collection:

{pkgs, system}:

let
  callPackage = pkgs.lib.callPackageWith (pkgs // self);

  self = {
    pkgconfig = callPackage ./pkgs/pkgconfig { };
  
    gpm = callPackage ./pkgs/gpm { };
  
    mc = callPackage ./pkgs/mc { };
  };
in
self

The above expression (custom-packages.nix) composes the Midnight Commander package by providing its intra-dependencies as function parameters. The third attribute (mc) invokes a function named callPackage that imports the previous package expression and automatically provides arguments with the same names as the function's parameters.

The callPackage function first consults the self attribute set (which also composes some of the Midnight Commander's dependencies, such as gpm and pkgconfig) and then falls back to the Nixpkgs repository for any remaining parameters.
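
To make this more concrete: for the Midnight Commander, the callPackage invocation is roughly equivalent to the following manual composition, a sketch that spells out where each argument comes from:

mc = import ./pkgs/mc {
  inherit (self) pkgconfig gpm;
  inherit (pkgs) stdenv fetchurl glib file e2fsprogs
    libX11 libICE perl zip unzip gettext slang;
};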

Writing a minimal services model


In the previous sections, we have shown how to build packages from source code and their dependencies, and how to compose packages locally. For the deployment of services, more information is needed. For example, we need to compose their inter-dependencies so that services know how to reach them.

Furthermore, Disnix's end objective is to get a running service-oriented system, so it carries out extra deployment activities for services to accomplish this, such as activation and deactivation. These latter two steps are executed by a Dysnomia plugin that is selected by annotating a service with a type attribute.

For package deployment, specifying these extra attributes and executing these remaining activities are in principle not required. Nonetheless, we still need to provide a minimal services model so that Disnix knows which units can be deployed.

Exposing the Midnight Commander package as a service can be done as follows:

{pkgs, system, distribution, invDistribution}:

let
  customPkgs = import ./custom-packages.nix {
    inherit pkgs system;
  };
in
{
  mc = {
    name = "mc";
    pkg = customPkgs.mc;
    type = "package";
  };
}

In the above expression, we import our intra-dependency composition expression (custom-packages.nix) and use the pkg sub-attribute to refer to the intra-dependency composition of the Midnight Commander. We annotate the Midnight Commander service with the package type to instruct Disnix that no additional deployment steps, such as activation or deactivation, need to be performed beyond the installation of the package.

Since the above pattern is common to all packages, we can also automatically generate services for any package in the composition expression:

{pkgs, system, distribution, invDistribution}:

let
  customPkgs = import ./custom-packages.nix {
    inherit pkgs system;
  };
in
pkgs.lib.mapAttrs (name: pkg: {
  inherit name pkg;
  type = "package";
}) customPkgs

The above services model exposes all packages in our composition expression as services.

Configuring the remote machine's search paths


With the services models shown in the previous section, we have all the ingredients available to deploy packages with Disnix. To allow users on the remote machines to conveniently access their packages, we must add Disnix's Nix profile to their PATH:

$ export PATH=/nix/var/nix/profiles/disnix/default/bin:$PATH

When using NixOS, this variable can be extended by adding the following line to /etc/nixos/configuration.nix:

environment.variables.PATH = [ "/nix/var/nix/profiles/disnix/default/bin" ];

Deploying packages with Disnix


In addition to a services model, Disnix needs an infrastructure and distribution model to deploy packages. For example, we can define an infrastructure model that may look as follows:

{
  test1.properties.hostname = "test1";
  test2 = {
    properties.hostname = "test2";
    system = "x86_64-darwin";
  };
}

The above infrastructure model describes two machines that have hostnames test1 and test2. Furthermore, machine test2 has a specific system architecture, x86_64-darwin, which corresponds to a 64-bit Intel-based Mac OS X machine.

We can distribute packages to these two machines with the following distribution model:

{infrastructure}:

{
  gpm = [ infrastructure.test1 ];
  pkgconfig = [ infrastructure.test2 ];
  mc = [ infrastructure.test1 infrastructure.test2 ];
}

In the above distribution model, we distribute package gpm to machine test1, pkgconfig to machine test2 and mc to both machines.

When running the following command-line instruction:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Disnix executes all activities required to get the packages in the distribution model deployed to the machines, such as building them from source code (including their dependencies) and distributing their dependency closures to the target machines.

Because machine test2 may have a different system architecture than the coordinator machine responsible for carrying out the deployment, Disnix can use Nix's delegation mechanism to forward a build to a machine that is capable of performing it.
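
For example, when the coordinator machine runs NixOS, a remote x86_64-darwin build machine could be configured roughly as follows (a sketch; the host name, SSH user and key file are illustrative):

nix.distributedBuilds = true;
nix.buildMachines = [
  { hostName = "macosx.example.org";
    system = "x86_64-darwin";
    sshUser = "nix";
    sshKey = "/root/.ssh/id_buildfarm";
    maxJobs = 2;
  }
];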

Alternatively, packages can also be built on the target machines through Disnix:

$ disnix-env --build-on-targets \
  -s services.nix -i infrastructure.nix -d distribution.nix

After the above command-line instructions have succeeded, we should be able to start the Midnight Commander on any of the target machines by running:

$ mc

Deploying any package from the Nixpkgs repository


Besides deploying a custom set of packages, it is also possible to use Disnix to remotely deploy any package in the Nixpkgs repository, but doing so is a bit tricky.

The main challenge lies in the fact that the Nix packages set is a nested set of attributes, whereas Disnix expects services to be addressed in one attribute set only. Fortunately, the Nix expression language and Disnix models are flexible enough to implement a solution. For example, we can define a distribution model as follows:

{infrastructure}:

{
  mc = [ infrastructure.test1 ];
  git = [ infrastructure.test1 ];
  wget = [ infrastructure.test1 ];
  "xlibs.libX11" = [ infrastructure.test1 ];
}

Note that we use dot notation (xlibs.libX11) as an attribute name to refer to libX11, which can only be referenced as a sub-attribute in Nixpkgs.

We can write a services model that uses the attribute names in the distribution model to refer to the corresponding package in Nixpkgs:

{pkgs, system, distribution, invDistribution}:

pkgs.lib.mapAttrs (name: targets:
  let
    attrPath = pkgs.lib.splitString "." name;
  in
  { inherit name;
    pkg = pkgs.lib.attrByPath attrPath
      (throw "package: ${name} cannot be referenced in the package set")
      pkgs;
    type = "package";
  }
) distribution

With the above services model, we can deploy any Nix package to any remote machine with Disnix.
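
To illustrate what the two library functions accomplish, the following standalone expression evaluates to the libX11 package by splitting the attribute name into a path and looking it up in the nested package set:

let
  pkgs = import <nixpkgs> {};
  attrPath = pkgs.lib.splitString "." "xlibs.libX11"; # yields: [ "xlibs" "libX11" ]
in
pkgs.lib.attrByPath attrPath
  (throw "package cannot be referenced in the package set")
  pkgs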

Multi-user package management


Besides single-user installations, Nix also supports multi-user installations in which every user has a private Nix profile with their own set of packages. With Disnix we can also manage multiple profiles. For example, by adding the --profile parameter, we can deploy another Nix profile that contains a set of packages for the user sander:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix \
  --profile sander

The user sander can access their own set of packages by setting the PATH environment variable as follows:

$ export PATH=/nix/var/nix/profiles/disnix/sander/bin:$PATH

Conclusion


Although Disnix has not been strictly designed for this purpose, I have described in this blog post how Disnix can be used as a remote package deployer by using a relevant subset of Disnix features.

Moreover, I now consider the underlying Disnix primitives to be mature enough. As such, I am announcing the release of Disnix 0.6!

Acknowledgements


I gained the inspiration for writing this blog post from discussions with Matthias Beyer on the #nixos IRC channel.

Saturday, June 11, 2016

Deploying containers with Disnix as primitives for multi-layered service deployments

As explained in an earlier blog post, Disnix is a service deployment tool that can only be used after a collection of machines have been predeployed providing a number of container services, such as a service manager (e.g. systemd), a DBMS (e.g. MySQL) or an application server (e.g. Apache Tomcat).

To deploy these machines, we need an external solution. Some solutions are:

  • Manual installations, requiring somebody to obtain a few machines, manually install operating systems (e.g. a Linux distribution), and finally install all required software packages, such as Nix, Dysnomia, Disnix and any additional container services. Manually configuring a machine is typically tedious, time consuming and error prone.
  • NixOps. NixOps is capable of automatically instantiating networks of virtual machines in the cloud (such as Amazon EC2) and deploying entire NixOS system configurations to them. These NixOS configurations can be used to automatically deploy Dysnomia, Disnix and any container service that we need. A drawback is that NixOps is NixOS-based and not really useful if you want to deploy services to machines running different kinds of operating systems.
  • disnixos-deploy-network. In a Disnix-context, services are basically undefined units of deployment, and we can also automatically deploy entire NixOS configurations to target machines as services. A major drawback of this approach is that it requires predeployed machines running Disnix first.

Although there are several ways to manage the underlying infrastructure of services, these are basically all-or-nothing solutions with regards to automation -- we either have to manually deploy entire machine configurations ourselves or we are stuck with a NixOS-based solution that completely automates it.

In some scenarios (e.g. when it is desired to deploy services to non-Linux operating systems), the initial deployment phase becomes quite tedious. For example, it took me quite a bit of effort to set up the heterogeneous network deployment demo I gave at NixCon2015.

In this blog post, I will describe an approach that serves as an in-between solution -- since services in a Disnix-context can be (almost) any kind of deployment unit, we can also use Disnix to deploy container configurations as services. These container services can also be deployed to non-NixOS systems, which means that we can reduce the effort of setting up the initial configuration of the target systems that Disnix deploys services to.

Deploying containers as services with Disnix


As with services, containers in a Disnix-context could take any form. For example, in addition to MySQL databases (that we can deploy as services with Disnix), we can also deploy the corresponding container -- the MySQL DBMS server -- as a Disnix service:

{ stdenv, mysql, dysnomia
, name ? "mysql-database"
, mysqlUsername ? "root", mysqlPassword ? "secret"
, user ? "mysql-database", group ? "mysql-database"
}:

let
  # Hard-coded state directories in this simplified example
  dataDir = "/var/db/mysql";
  pidDir = "/run/mysqld";
in
stdenv.mkDerivation {
  inherit name;
  
  buildCommand = ''
    mkdir -p $out/bin
      
    # Create wrapper script
    cat > $out/bin/wrapper <<EOF
    #! ${stdenv.shell} -e
      
    case "\$1" in
        activate)
            # Create the group, user and initial database if they do not exist
            # ...

            # Run the MySQL server
            ${mysql}/bin/mysqld_safe --user=${user} --datadir=${dataDir} --basedir=${mysql} --pid-file=${pidDir}/mysqld.pid &
            
            # Change root password
            # ...
            ;;
        deactivate)
            ${mysql}/bin/mysqladmin -u ${mysqlUsername} --password="${mysqlPassword}" shutdown
            
            # Delete the user and group
            # ...
            ;;
    esac
    EOF
    
    chmod +x $out/bin/wrapper

    # Add Dysnomia container configuration file for the MySQL DBMS
    mkdir -p $out/etc/dysnomia/containers

    cat > $out/etc/dysnomia/containers/${name} <<EOF
    mysqlUsername="${mysqlUsername}"
    mysqlPassword="${mysqlPassword}"
    EOF
    
    # Copy the Dysnomia module that manages MySQL databases
    mkdir -p $out/etc/dysnomia/modules
    cp ${dysnomia}/libexec/dysnomia/mysql-database $out/etc/dysnomia/modules
  '';
}

The above code fragment is a simplified Disnix expression that can be used to deploy a MySQL server. It produces a wrapper script that carries out a set of deployment activities when invoked by Disnix:

  • On activation, the wrapper script starts the MySQL server by spawning the mysqld_safe daemon process in background mode. Before starting the daemon, it also initializes some of the server's state, such as creating the user accounts under which the daemon runs and setting up the system database if it does not exist (these steps are left out of the example for simplicity).
  • On deactivation it shuts down the MySQL server and removes some of the attached state, such as the user accounts.

Besides composing a wrapper script, we must allow Dysnomia (and Disnix) to deploy databases as Disnix services to the MySQL server that we have just deployed:

  • We generate a Dysnomia container configuration file with the MySQL server settings, so that a database (that gets deployed as a service) knows which credentials it should use to connect to the DBMS.
  • We bundle a Dysnomia plugin module that implements the deployment activities for MySQL databases, such as activation and deactivation. Because Dysnomia offers this plugin as part of its software distribution, we make a copy of it, but we could also compose our own plugin from scratch.
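
For reference, a database that gets deployed as a service to this container is essentially a package providing the SQL code that the bundled Dysnomia module imports on activation. A sketch of such a service expression (the rooms name and source path are illustrative):

{stdenv}:

stdenv.mkDerivation {
  name = "rooms";
  src = ../../data/rooms;
  buildCommand = ''
    # The mysql-database Dysnomia module imports the *.sql files residing in
    # the mysql-databases/ subfolder of the package when the database service
    # gets activated
    mkdir -p $out/mysql-databases
    cp $src/*.sql $out/mysql-databases
  '';
}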

With the earlier shown Disnix expression, we can define the MySQL server as a service in a Disnix services model:

mysql-database = {
  name = "mysql-database";
  pkg = customPkgs.mysql-database;
  dependsOn = {};
  type = "wrapper";
};

and distribute it to a target machine in the network by adding an entry to the distribution model:

mysql-database = [ infrastructure.test2 ];

Configuring Disnix and Dysnomia


Once we have deployed containers as Disnix services, Disnix (and Dysnomia) must know about their availability so that we can deploy services to these recently deployed containers.

Each time Disnix has successfully deployed a configuration, it generates Nix profiles on the target machines in which the contents of all services can be accessed from a single location. This means that we can simply extend Dysnomia's module and container search paths:

export DYSNOMIA_MODULES_PATH=$DYSNOMIA_MODULES_PATH:/nix/var/nix/profiles/disnix/containers/etc/dysnomia/modules
export DYSNOMIA_CONTAINERS_PATH=$DYSNOMIA_CONTAINERS_PATH:/nix/var/nix/profiles/disnix/containers/etc/dysnomia/containers

with the paths to the Disnix profiles that have containers deployed.

A simple example scenario


I have modified the Java variant of the ridiculous Disnix StaffTracker example to support a deployment scenario with containers as Disnix services.

First, we need to start with a collection of machines having a very basic configuration without any additional containers. The StaffTracker package contains a bare network configuration that we can deploy with NixOps, as follows:

$ nixops create ./network-bare.nix ./network-virtualbox.nix -d vbox
$ nixops deploy -d vbox

By configuring the following environment variables, we can connect Disnix to the machines in the network that we have just deployed with NixOps:

$ export NIXOPS_DEPLOYMENT=vbox
$ export DISNIX_CLIENT_INTERFACE=disnix-nixops-client

We can write a very simple bootstrap infrastructure model (infrastructure-bootstrap.nix), to dynamically capture the configuration of the target machines:

{
  test1.properties.hostname = "test1";
  test2.properties.hostname = "test2";
}

Running the following command:

$ disnix-capture-infra infrastructure-bootstrap.nix > infrastructure-bare.nix

yields an infrastructure model (infrastructure-bare.nix) that may have the following structure:

{
  "test1" = {
    properties = {
      "hostname" = "test1";
      "system" = "x86_64-linux";
    };
    containers = {
      process = {
      };
      wrapper = {
      };
    };
    "system" = "x86_64-linux";
  };
  "test2" = {
    properties = {
      "hostname" = "test2";
      "system" = "x86_64-linux";
    };
    containers = {
      process = {
      };
      wrapper = {
      };
    };
    "system" = "x86_64-linux";
  };
}

As may be observed in the captured infrastructure model shown above, we have a very minimal configuration that only hosts the process and wrapper containers, which integrate with the host system's service manager, such as systemd.

We can deploy a Disnix configuration having Apache Tomcat and the MySQL DBMS as services, by running:

$ disnix-env -s services-containers.nix \
  -i infrastructure-bare.nix \
  -d distribution-containers.nix \
  --profile containers

Note that we have provided an extra parameter to Disnix: --profile to isolate the containers from the default deployment environment. If the above command succeeds, we have a deployment architecture that looks as follows:


Both machines have Apache Tomcat deployed as a service and machine test2 also runs a MySQL server.

When capturing the target machines' configurations again:

$ disnix-capture-infra infrastructure-bare.nix > infrastructure-containers.nix

we will receive an infrastructure model (infrastructure-containers.nix) that may have the following structure:

{
  "test1" = {
    properties = {
      "hostname" = "test1";
      "system" = "x86_64-linux";
    };
    containers = {
      tomcat-webapplication = {
        "tomcatPort" = "8080";
      };
      process = {
      };
      wrapper = {
      };
    };
    "system" = "x86_64-linux";
  };
  "test2" = {
    properties = {
      "hostname" = "test2";
      "system" = "x86_64-linux";
    };
    containers = {
      mysql-database = {
        "mysqlUsername" = "root";
        "mysqlPassword" = "secret";
        "mysqlPort" = "3306";
      };
      tomcat-webapplication = {
        "tomcatPort" = "8080";
      };
      process = {
      };
      wrapper = {
      };
    };
    "system" = "x86_64-linux";
  };
}

As may be observed in the above infrastructure model, both machines provide a tomcat-webapplication container exposing the TCP port number that the Apache Tomcat server has been bound to. Machine test2 exposes the mysql-database container with its connectivity settings.

We can now deploy the StaffTracker system (that consists of multiple MySQL databases and Apache Tomcat web applications) by running:

$ disnix-env -s services.nix \
  -i infrastructure-containers.nix \
  -d distribution.nix \
  --profile services

Note that I use a different --profile parameter, to tell Disnix that the StaffTracker components belong to a different environment than the containers. If I were to use --profile containers again, Disnix would undeploy the previously shown containers environment (with the MySQL DBMS and Apache Tomcat) and deploy the databases and web applications instead, which would lead to a failure.

If the above command succeeds, we have the following deployment architecture:


The result is that we have all the service components of the StaffTracker example deployed to containers that are also deployed by Disnix.

An advanced example scenario: multi-containers


We can go even one step further than the example shown in the previous section. In the first example, we deploy no more than one instance of each container to a machine in the network -- this is quite common, as it rarely happens that you want to run two MySQL or Apache Tomcat servers on a single machine. Most Linux distributions (including NixOS) do not support deploying multiple instances of system services out of the box.

However, with a few relatively simple modifications to the Disnix expressions of the MySQL DBMS and Apache Tomcat services, it becomes possible to allow multiple instances to co-exist on the same machine. What we basically have to do is identify the conflicting runtime resources, make them configurable, and change their values in such a way that they no longer conflict.

{ stdenv, mysql, dysnomia
, name ? "mysql-database"
, mysqlUsername ? "root", mysqlPassword ? "secret"
, user ? "mysql-database", group ? "mysql-database"
, dataDir ? "/var/db/mysql", pidDir ? "/run/mysqld"
, port ? 3306
}:

stdenv.mkDerivation {
  inherit name;
  
  buildCommand = ''
    mkdir -p $out/bin
    
    # Create wrapper script
    cat > $out/bin/wrapper <<EOF
    #! ${stdenv.shell} -e
       
    case "\$1" in
        activate)
            # Create the group, user and initial database if they do not exist
            # ...

            # Run the MySQL server
            ${mysql}/bin/mysqld_safe --port=${toString port} --user=${user} --datadir=${dataDir} --basedir=${mysql} --pid-file=${pidDir}/mysqld.pid --socket=${pidDir}/mysqld.sock &
            
            # Change root password
            # ...
            ;;
        deactivate)
            ${mysql}/bin/mysqladmin --socket=${pidDir}/mysqld.sock -u ${mysqlUsername} --password="${mysqlPassword}" shutdown
            
            # Delete the user and group
            # ...
            ;;
    esac
    EOF
    
    chmod +x $out/bin/wrapper
  
    # Add Dysnomia container configuration file for the MySQL DBMS
    mkdir -p $out/etc/dysnomia/containers
    
    cat > $out/etc/dysnomia/containers/${name} <<EOF
    mysqlUsername="${mysqlUsername}"
    mysqlPassword="${mysqlPassword}"
    mysqlPort=${toString port}
    mysqlSocket=${pidDir}/mysqld.sock
    EOF
    
    # Copy the Dysnomia module that manages MySQL databases
    mkdir -p $out/etc/dysnomia/modules
    cp ${dysnomia}/libexec/dysnomia/mysql-database $out/etc/dysnomia/modules
  '';
}

For example, I have revised the MySQL server Disnix expression with additional parameters that change the TCP port the service binds to, the UNIX domain socket that is used by the administration utilities and the filesystem location where the databases are stored. Moreover, these additional configuration properties are also exposed by the Dysnomia container configuration file.
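
In the intra-dependency composition expression (all-packages.nix, imported by the services model below), we can then instantiate two variants of the container by passing different values for these parameters. The following is a sketch, assuming a callPackage helper (analogous to pkgs.lib.callPackageWith) resolves the remaining intra-dependencies (stdenv, mysql and dysnomia); the exact file names and paths in the example repository may differ:

mysql-production = callPackage ../services/mysql-database {
  name = "mysql-production";
  port = 3306;
  dataDir = "/var/db/mysql-production";
  pidDir = "/run/mysqld-production";
};

mysql-test = callPackage ../services/mysql-database {
  name = "mysql-test";
  port = 3307;
  dataDir = "/var/db/mysql-test";
  pidDir = "/run/mysqld-test";
};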

These additional parameters make it possible to define multiple variants of container services in the services model:

{distribution, invDistribution, system, pkgs}:

let
  customPkgs = import ../top-level/all-packages.nix {
    inherit system pkgs;
  };
in
rec {
  mysql-production = {
    name = "mysql-production";
    pkg = customPkgs.mysql-production;
    dependsOn = {};
    type = "wrapper";
  };
  
  mysql-test = {
    name = "mysql-test";
    pkg = customPkgs.mysql-test;
    dependsOn = {};
    type = "wrapper";
  };
  
  tomcat-production = {
    name = "tomcat-production";
    pkg = customPkgs.tomcat-production;
    dependsOn = {};
    type = "wrapper";
  };
  
  tomcat-test = {
    name = "tomcat-test";
    pkg = customPkgs.tomcat-test;
    dependsOn = {};
    type = "wrapper";
  };
}

I can, for example, map two MySQL DBMS instances and two Apache Tomcat servers to the same machines in the distribution model:

{infrastructure}:

{
  mysql-production = [ infrastructure.test1 ];
  mysql-test = [ infrastructure.test1 ];
  tomcat-production = [ infrastructure.test2 ];
  tomcat-test = [ infrastructure.test2 ];
}

Deploying the above configuration:

$ disnix-env -s services-multicontainers.nix \
  -i infrastructure-bare.nix \
  -d distribution-multicontainers.nix \
  --profile containers

yields the following deployment architecture:


As can be observed, we have two instances of the same container hosted on the same machine. When capturing the configuration:

$ disnix-capture-infra infrastructure-bare.nix > infrastructure-multicontainers.nix

we will receive a Nix expression that may look as follows:

{
  "test1" = {
    properties = {
      "hostname" = "test1";
      "system" = "x86_64-linux";
    };
    containers = {
      mysql-production = {
        "mysqlUsername" = "root";
        "mysqlPassword" = "secret";
        "mysqlPort" = "3306";
        "mysqlSocket" = "/run/mysqld-production/mysqld.sock";
      };
      mysql-test = {
        "mysqlUsername" = "root";
        "mysqlPassword" = "secret";
        "mysqlPort" = "3307";
        "mysqlSocket" = "/run/mysqld-test/mysqld.sock";
      };
      process = {
      };
      wrapper = {
      };
    };
    "system" = "x86_64-linux";
  };
  "test2" = {
    properties = {
      "hostname" = "test2";
      "system" = "x86_64-linux";
    };
    containers = {
      tomcat-production = {
        "tomcatPort" = "8080";
        "catalinaBaseDir" = "/var/tomcat-production";
      };
      tomcat-test = {
        "tomcatPort" = "8081";
        "catalinaBaseDir" = "/var/tomcat-test";
      };
      process = {
      };
      wrapper = {
      };
    };
    "system" = "x86_64-linux";
  };
}

In the above expression, there are two instances of MySQL and two instances of Apache Tomcat, each pair deployed to the same machine. These containers have their resources configured in such a way that they do not conflict. For example, both MySQL instances bind to different TCP ports (3306 and 3307) and use different UNIX domain sockets (/run/mysqld-production/mysqld.sock and /run/mysqld-test/mysqld.sock).

After deploying the containers, we can also deploy the StaffTracker components (databases and web applications) to them. As described in my previous blog post, we can use an alternative (and more verbose) notation in the distribution model to directly map services to containers:

{infrastructure}:

{
  GeolocationService = {
    targets = [
      { target = infrastructure.test2; container = "tomcat-test"; }
    ];
  };
  RoomService = {
    targets = [
      { target = infrastructure.test2; container = "tomcat-production"; }
    ];
  };
  StaffService = {
    targets = [
      { target = infrastructure.test2; container = "tomcat-test"; }
    ];
  };
  StaffTracker = {
    targets = [
      { target = infrastructure.test2; container = "tomcat-production"; }
    ];
  };
  ZipcodeService = {
    targets = [
      { target = infrastructure.test2; container = "tomcat-test"; }
    ];
  };
  rooms = {
    targets = [
      { target = infrastructure.test1; container = "mysql-production"; }
    ];
  };
  staff = {
    targets = [
      { target = infrastructure.test1; container = "mysql-test"; }
    ];
  };
  zipcodes = {
    targets = [
      { target = infrastructure.test1; container = "mysql-production"; }
    ];
  };
}

As may be observed in the distribution model above, we deploy databases and web applications to both container instances hosted on the same machine.

We can deploy the services of which the StaffTracker consists, as follows:

$ disnix-env -s services.nix \
  -i infrastructure-multicontainers.nix \
  -d distribution-advanced.nix \
  --profile services

and the result is the following deployment architecture:


As may be observed in the picture above, we now have a running StaffTracker system that uses two MySQL servers and two Apache Tomcat servers, with each pair hosted on a single machine. Isn't it awesome? :-)

Conclusion


In this blog post, I have demonstrated an approach in which we deploy containers as services with Disnix. Containers serve as potential deployment targets for other Disnix services.

Previously, we only had NixOS-based solutions to manage the configuration of containers, which makes using Disnix on other platforms than NixOS painful, as the containers had to be deployed manually. The approach described in this blog post serves as an in-between solution.

In theory, the process in which we deploy containers as services first followed by the "actual" services, could be generalized and extended into a layered service deployment model, with a new tool automating the process and declarative specifications capturing the properties of the layers.

However, I have decided not to implement this new model any time soon for practical reasons -- in nearly all of my experiences with service deployment, I have almost never encountered the need to have more than two layers supported. The only exception I can think of is the deployment of Axis2 web services to an Axis2 container -- the Axis2 container is a Java web application that must be deployed to Apache Tomcat first, which in turn requires the presence of the Apache Tomcat server.

Availability


I have integrated the two container deployment examples into the Java variant of the StaffTracker example.

The new concepts described in this blog post are part of the development version of Disnix and will become available in the next release.