As described in a number of earlier blog posts, Disnix's primary purpose is not remote (or distributed) package management, but deploying systems that can be decomposed into services to networks of machines. To deploy these kinds of systems, Disnix executes all required deployment activities, including building services from source code, distributing them to target machines in the network and activating or deactivating them.
However, a service deployment process is basically a superset of an "ordinary" package deployment process. In this blog post, I will describe how we can do remote package deployment by instructing Disnix to use only a relevant subset of its features.
Specifying packages as services
In the Nix packages collection, it is common practice to write each package specification as a function, in which the parameters denote the (local) build-time and run-time dependencies (what the Disnix manual refers to as intra-dependencies) that the package needs. The body of the function describes how to build the package from source code and its provided dependencies.
Disnix has adopted this habit and extended the convention to services. The main difference between Nix package expressions and Disnix service expressions is that the latter also take inter-dependencies into account: run-time dependencies on services that may have been deployed to other machines in the network. For services that have no inter-dependencies, a Disnix expression is identical to an ordinary package expression.
This means that, for example, an expression for a package such as the Midnight Commander is also a valid Disnix service with no inter-dependencies:
{ stdenv, fetchurl, pkgconfig, glib, gpm, file, e2fsprogs
, libX11, libICE, perl, zip, unzip, gettext, slang
}:

stdenv.mkDerivation {
  name = "mc-4.8.12";

  src = fetchurl {
    url = http://www.midnight-commander.org/downloads/mc-4.8.12.tar.bz2;
    sha256 = "15lkwcis0labshq9k8c2fqdwv8az2c87qpdqwp5p31s8gb1gqm0h";
  };

  buildInputs = [ pkgconfig perl glib gpm slang zip unzip file gettext
    libX11 libICE e2fsprogs ];

  meta = {
    description = "File Manager and User Shell for the GNU Project";
    homepage = http://www.midnight-commander.org;
    license = "GPLv2+";
    maintainers = [ stdenv.lib.maintainers.sander ];
  };
}
Composing packages locally
Package and service expressions are functions that do not specify the versions or variants of the dependencies that should be used. To allow services to be deployed, we must compose them by providing the desired versions or variants of the dependencies as function parameters.
As with ordinary Nix packages, Disnix has also adopted this convention for services. In addition, we have to compose a Disnix service twice -- first its intra-dependencies and later its inter-dependencies.
Intra-dependency composition in Disnix is done in a similar way as in the Nix packages collection:
{pkgs, system}:

let
  callPackage = pkgs.lib.callPackageWith (pkgs // self);

  self = {
    pkgconfig = callPackage ./pkgs/pkgconfig { };

    gpm = callPackage ./pkgs/gpm { };

    mc = callPackage ./pkgs/mc { };
  };
in
self
The above expression (custom-packages.nix) composes the Midnight Commander package by providing its intra-dependencies as function parameters. The third attribute (mc) invokes the callPackage { } function, which imports the previous package expression and automatically provides the arguments that have the same names as the function's parameters.
The callPackage { } function first consults the self attribute set (which also composes some of Midnight Commander's dependencies, such as gpm and pkgconfig) and then falls back to the Nixpkgs repository for any remaining parameters.
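To make the automatic wiring more tangible: for the mc attribute, the callPackage { } invocation is roughly equivalent to importing the package expression and passing each of its parameters by hand, taking dependencies from self when they are defined there and from Nixpkgs otherwise. A sketch (the remaining parameters are provided in the same way):

mc = import ./pkgs/mc {
  # provided by the self attribute set defined in this expression
  inherit (self) pkgconfig gpm;

  # provided by Nixpkgs; the other parameters declared in the function
  # header of ./pkgs/mc (zip, unzip, gettext and so on) are passed
  # following the same pattern
  inherit (pkgs) stdenv fetchurl glib slang;
};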
Writing a minimal services model
Previously, we have shown how to build packages from source code and their dependencies, and how to compose packages locally. For the deployment of services, more information is needed. For example, we also need to compose their inter-dependencies so that services know how to reach the services they depend on.
Furthermore, Disnix's end objective is to get a running service-oriented system, so it carries out extra deployment activities for services to accomplish this, such as activation and deactivation. The latter two steps are executed by a Dysnomia plugin that is determined by annotating a service with a type attribute.
For package deployment, specifying these extra attributes and executing these remaining activities are in principle not required. Nonetheless, we still need to provide a minimal services model so that Disnix knows which units can be deployed.
Exposing the Midnight Commander package as a service can be done as follows:
{pkgs, system, distribution, invDistribution}:

let
  customPkgs = import ./custom-packages.nix {
    inherit pkgs system;
  };
in
{
  mc = {
    name = "mc";
    pkg = customPkgs.mc;
    type = "package";
  };
}
In the above expression, we import our intra-dependency composition expression (custom-packages.nix), and we use the pkg sub attribute to refer to the intra-dependency composition of the Midnight Commander. We annotate the Midnight Commander service with the package type to instruct Disnix that no additional deployment steps need to be performed beyond installing the package, such as activation or deactivation.
Since the above pattern is common to all packages, we can also automatically generate services for any package in the composition expression:
{pkgs, system, distribution, invDistribution}:

let
  customPkgs = import ./custom-packages.nix {
    inherit pkgs system;
  };
in
pkgs.lib.mapAttrs (name: pkg: {
  inherit name pkg;
  type = "package";
}) customPkgs
The above services model exposes all packages in our composition expression as services.
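For the three packages in custom-packages.nix, this generated services model is equivalent to writing out the following attribute set by hand:

{
  pkgconfig = {
    name = "pkgconfig";
    pkg = customPkgs.pkgconfig;
    type = "package";
  };

  gpm = {
    name = "gpm";
    pkg = customPkgs.gpm;
    type = "package";
  };

  mc = {
    name = "mc";
    pkg = customPkgs.mc;
    type = "package";
  };
}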
Configuring the remote machine's search paths
With the services models shown in the previous section, we have all the ingredients available to deploy packages with Disnix. To allow users on the remote machines to conveniently access their packages, we must add Disnix's Nix profile to the PATH of a user on the remote machines:
$ export PATH=/nix/var/nix/profiles/disnix/default/bin:$PATH
When using NixOS, this variable can be extended by adding the following line to /etc/nixos/configuration.nix:
environment.variables.PATH = [ "/nix/var/nix/profiles/disnix/default/bin" ];
Deploying packages with Disnix
In addition to a services model, Disnix needs an infrastructure and distribution model to deploy packages. For example, we can define an infrastructure model that may look as follows:
{
  test1.properties.hostname = "test1";

  test2 = {
    properties.hostname = "test2";
    system = "x86_64-darwin";
  };
}
The above infrastructure model describes two machines with hostnames test1 and test2. Furthermore, machine test2 has a specific system architecture: x86_64-darwin, which corresponds to a 64-bit Intel-based Mac OS X system.
We can distribute packages to these two machines with the following distribution model:
{infrastructure}:

{
  gpm = [ infrastructure.test1 ];
  pkgconfig = [ infrastructure.test2 ];
  mc = [ infrastructure.test1 infrastructure.test2 ];
}
In the above distribution model, we distribute package gpm to machine test1, pkgconfig to machine test2 and mc to both machines.
When running the following command-line instruction:
$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix
Disnix executes all activities needed to get the packages in the distribution model deployed to the machines, such as building them from source code (including their dependencies) and distributing their dependency closures to the target machines.
Because machine test2 may have a different system architecture than the coordinator machine responsible for carrying out the deployment, Disnix can use Nix's delegation mechanism to forward a build to a machine that is capable of performing it.
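For example, the coordinator machine can be told about remote build machines through Nix's machines file. A hypothetical entry that delegates x86_64-darwin builds to a Mac OS X machine reachable over SSH could look as follows (the user name, host name, key path and job count are just examples, and the exact column format depends on the Nix version in use):

# /etc/nix/machines
# <user@host> <system type> <SSH identity file> <max parallel jobs>
sander@test2 x86_64-darwin /root/.ssh/id_buildfarm 2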
Alternatively, packages can also be built on the target machines through Disnix:
$ disnix-env --build-on-targets \
  -s services.nix -i infrastructure.nix -d distribution.nix
After the above command-line instructions have succeeded, we should be able to start the Midnight Commander on any of the target machines by running:
$ mc
Deploying any package from the Nixpkgs repository
Besides deploying a custom set of packages, it is also possible to use Disnix to remotely deploy any package in the Nixpkgs repository, but doing so is a bit tricky.
The main challenge lies in the fact that the Nix packages set is a nested set of attributes, whereas Disnix expects services to be addressed in one attribute set only. Fortunately, the Nix expression language and Disnix models are flexible enough to implement a solution. For example, we can define a distribution model as follows:
{infrastructure}:

{
  mc = [ infrastructure.test1 ];
  git = [ infrastructure.test1 ];
  wget = [ infrastructure.test1 ];
  "xlibs.libX11" = [ infrastructure.test1 ];
}
Note that we use dot notation (xlibs.libX11) as an attribute name to refer to libX11, which can only be referenced as a sub attribute in Nixpkgs.
We can write a services model that uses the attribute names in the distribution model to refer to the corresponding package in Nixpkgs:
{pkgs, system, distribution, invDistribution}:

pkgs.lib.mapAttrs (name: targets:
  let
    attrPath = pkgs.lib.splitString "." name;
  in
  { inherit name;
    pkg = pkgs.lib.attrByPath attrPath
      (throw "package: ${name} cannot be referenced in the package set")
      pkgs;
    type = "package";
  }
) distribution
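The two library functions used above behave as follows (a small illustration, assuming pkgs refers to an imported Nixpkgs set):

pkgs.lib.splitString "." "xlibs.libX11"
# evaluates to: [ "xlibs" "libX11" ]

pkgs.lib.attrByPath [ "xlibs" "libX11" ] (throw "not found") pkgs
# evaluates to pkgs.xlibs.libX11, or to the default value (here: an
# expression that throws an error) if no attribute exists at that path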
With the above services model, we can deploy any Nix package to any remote machine with Disnix.
Multi-user package management
Besides single-user installations, Nix also supports multi-user installations, in which every user has their own private Nix profile with their own set of packages. With Disnix, we can also manage multiple profiles. For example, by adding the --profile parameter, we can deploy a separate Nix profile that contains a set of packages for the user sander:
$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix \
  --profile sander
The user sander can access this set of packages by adding the corresponding profile to the PATH environment variable:
$ export PATH=/nix/var/nix/profiles/disnix/sander/bin:$PATH
Conclusion
Although Disnix has not been strictly designed for this purpose, I have described in this blog post how Disnix can be used as a remote package deployer by using a relevant subset of its features.
Moreover, I now consider the underlying Disnix primitives to be mature enough. As such, I am announcing the release of Disnix 0.6!
Acknowledgements
I gained the inspiration for writing this blog post from discussions with Matthias Beyer on the #nixos IRC channel.