Monday, August 12, 2019

A new input model transformation pipeline for Disnix

As explained in earlier blog posts, Disnix (as well as other tools in the Nix project) is driven by declarative specifications -- instead of describing the activities that need to be carried out to deploy a system (such as building and distributing packages), we specify all the relevant properties of a service-oriented system:

  • The services model describes all the services that can be deployed to target machines in a network, how they can be built from their sources, how they depend on each other and what their types are, so that the deployment system knows how they can be activated.
  • The infrastructure model captures all target machines in the network, their properties, and the containers they provide. Containers in a Disnix-context are services that manage the life-cycle of a component, such as an application server, service manager or database management service (DBMS).
  • The distribution model maps services to containers on the target machines.

By running the following command-line instruction:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Disnix infers all the activities that need to be executed to get the system in a running state, such as building packages from source code (or downloading substitutes from a binary cache), distributing packages to the target machines, activating the system, and taking and restoring state snapshots.

Conceptually, this approach may sound very simple, but the implementation that infers the deployment process is not. Whilst the input models are declarative, they are not executable -- there is no one-to-one mapping between properties in the input models and the activities that Disnix needs to carry out.

To be able to execute deployment activities, Disnix transforms the three input models into a single declarative specification (called a deployment manifest file) that contains one-to-one mappings between deployment artifacts (e.g. Nix profiles, Nix packages and snapshots) and deployment targets (the target machines and/or container services). The transformation pipeline fills in the blanks with default settings and transforms the input models into several intermediate representations before producing the final manifest file.

So far, the intermediate representations and final result were never well defined. Instead, they have organically evolved and were heavily revised several times. As a result of adding new features and not having well defined representations, it became very hard to make changes and reason about the correctness of the models.

In my previous blog post, I have developed libnixxml to make the integration between a data model defined in the Nix expression language and external tools (that implement deployment activities that Nix does not support) more convenient. I am primarily using this library to simplify the integration of manifest files with Disnix tools.

As an additional improvement, I have revised the transformation pipeline, with well-defined intermediate representations. Besides a better quality transformation pipeline with well-defined intermediate stages, the Disnix toolset can now also take the intermediate model representations as input parameters, which is quite convenient for integration with external tooling and experimentation purposes. Furthermore, a new input model has been introduced.

In this blog post, I will describe the steps in the transformation pipeline and the intermediate representations of the deployment models.

Separated concerns: services, infrastructure, distribution models


As explained earlier in this blog post, Disnix deployments are primarily driven by three input models: the services, infrastructure and distribution models. The reason why I have picked three input models (as opposed to a single configuration file) is to separate concerns and allow these concerns to be reused in different kinds of deployment scenarios.

For example, we can write a simple services model (services.nix) that describes two services that have an inter-dependency on each other:

{distribution, invDistribution, system, pkgs}:

let customPkgs = import ../top-level/all-packages.nix { 
  inherit system pkgs;
};
in
rec {
  HelloMySQLDB = {
    name = "HelloMySQLDB";
    pkg = customPkgs.HelloMySQLDB;
    dependsOn = {};
    type = "mysql-database";
  };

  HelloDBService = {
    name = "HelloDBService";
    pkg = customPkgs.HelloDBServiceWrapper;
    dependsOn = {
      inherit HelloMySQLDB;
    };
    type = "tomcat-webapplication";
  };
}

The above services model captures two services with the following properties:

  • The HelloMySQLDB service refers to a MySQL database backend that stores data. The type property (mysql-database) specifies which Dysnomia module should be used to manage the life-cycle of the service. For example, the mysql-database Dysnomia module will create the database on initial startup.
  • The HelloDBService is a web service that exposes the data stored in the database backend to the outside world. Since it requires the presence of a MySQL database backend and needs to know where it has been deployed, the database backend has been declared as an inter-dependency of the service (by means of the dependsOn attribute).

    The tomcat-webapplication type specifies that Disnix should use the Apache Tomcat Dysnomia module, to activate the corresponding Java-based web service inside the Apache Tomcat servlet container.

The services model captures the aspects of a service-oriented system from a functional perspective, without exposing much of the details of the environments they may run in. This is intentional -- the services are meant to be deployed to a variety of environments. Target agnostic services make it possible, for example, to write an infrastructure model of a test environment (infrastructure-test.nix):

{
  test1 = {
    properties = {
      hostname = "test1.example.org";
    };
    
    containers = {
      tomcat-webapplication = {
        tomcatPort = 8080;
      };
    };
  };
  
  test2 = {
    properties = {
      hostname = "test2.example.org";
    };
    
    containers = {
      tomcat-webapplication = {
        tomcatPort = 8080;
      };
      
      mysql-database = {
        mysqlPort = 3306;
        mysqlUsername = "mysqluser";
        mysqlPassword = builtins.readFile ./mysqlpw;
      };
    };
  };
}

and a distribution model that maps the services to the target machines in the infrastructure model (distribution-test.nix):

{infrastructure}:

{
  HelloMySQLDB = [ infrastructure.test2 ];
  HelloDBService = [ infrastructure.test1 ];
}

With these three deployment models, we can deploy a system to a test environment, by running:

$ disnix-env -s services.nix \
  -i infrastructure-test.nix \
  -d distribution-test.nix

and later switch to a production environment using the same functional services model, after the system has been properly validated in the test environment:

$ disnix-env -s services.nix \
  -i infrastructure-prod.nix \
  -d distribution-prod.nix

Similarly, we can adjust the distribution model to only deploy a subset of the services of a system, for example, for experimentation purposes.
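
For example, the following trimmed-down distribution model (a hypothetical sketch that reuses the services and infrastructure models shown above) only deploys the database backend and leaves the web service out:

{infrastructure}:

{
  # Only deploy the database backend; the web service is simply omitted
  HelloMySQLDB = [ infrastructure.test2 ];
}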

Unifying the input models into a single specification: the deployment architecture model


The first step in transforming the input models into an executable specification is unifying them into a single declarative specification that I will call the deployment architecture model. The name is derived from the concept of deployment architectures in software architecture terminology: a description that specifies the distribution of software components over hardware nodes.

A Disnix deployment architecture model may look as follows:

{system, pkgs}:

let customPkgs = import ../top-level/all-packages.nix { 
  inherit system pkgs;
};
in
rec {
  services = rec {
    HelloMySQLDB = {
      name = "HelloMySQLDB";
      pkg = customPkgs.HelloMySQLDB;
      dependsOn = {};
      type = "mysql-database";

      targets = [ infrastructure.test2 ];
    };

    HelloDBService = {
      name = "HelloDBService";
      pkg = customPkgs.HelloDBServiceWrapper;
      dependsOn = {
        inherit HelloMySQLDB;
      };
      type = "tomcat-webapplication";

      targets = [ infrastructure.test1 ];
    };
  };

  infrastructure = {
    test1 = {
      properties = {
        hostname = "test1.example.org";
      };
    
      containers = {
        tomcat-webapplication = {
          tomcatPort = 8080;
        };
      };
    };
  
    test2 = {
      properties = {
        hostname = "test2.example.org";
      };
    
      containers = {
        tomcat-webapplication = {
          tomcatPort = 8080;
        };
      
        mysql-database = {
          mysqlPort = 3306;
          mysqlUsername = "mysqluser";
          mysqlPassword = builtins.readFile ./mysqlpw;
        };
      };
    };
  };
}

The above deployment architecture model has the following properties:

  • The services and infrastructure models are unified into a single attribute set in which the services attribute refers to the available services and the infrastructure attribute to the available deployment targets.
  • The separate distribution concern is completely eliminated -- the mappings in the distribution model are attached to the corresponding services by means of the targets attribute. The transformation step checks whether a targets property was already specified, and if not, it takes the targets in the distribution model as the deployment targets of the service.

    The fact that the targets attribute will not be overridden also makes it possible to already specify the targets in the services model, if desired.

In addition to the three deployment models, it is now also possible for end-users to write a deployment architecture model and use that to automate deployments. The following command-line instruction will deploy a service-oriented system from a deployment architecture model:

$ disnix-env -A architecture.nix

Normalizing the deployment architecture model


Unifying models into a single deployment architecture specification is a good first step in producing an executable specification, but more needs to be done to fully reach that goal.

There are certain deployment properties that are unspecified in the examples shown earlier. For some configuration properties, Disnix provides reasonable default values, such as:

  • Each service can indicate whether it wants its state to be managed by Dysnomia (with the deployState property), so that data will automatically be migrated when the service is moved from one machine to another. The default setting is false and can be overridden with the --deploy-state parameter.

    If a service does not specify this property, then Disnix will automatically propagate the default setting as a parameter (a sketch follows after this list).
  • Every target machine in the infrastructure model also has specialized settings for connecting to the machine, building packages, and running tasks concurrently:

    test2 = {
      properties = {
        hostname = "test2.example.org";
      };
        
      containers = {
        tomcat-webapplication = {
          tomcatPort = 8080;
        };
          
        mysql-database = {
          mysqlPort = 3306;
          mysqlUsername = "mysqluser";
          mysqlPassword = builtins.readFile ./mysqlpw;
        };
      };

      clientInterface = "disnix-ssh-client";
      targetProperty = "hostname";
      numOfCores = 1;
      system = "x86_64-linux";
    };
    

    If none of these advanced settings are provided, Disnix will assume that every target machine has the same system architecture (system) as the coordinator machine (so that the Nix package manager does not have to delegate builds to a machine with a compatible architecture), that the Disnix SSH client executable (disnix-ssh-client) is used as the interface (clientInterface) to connect to the target machine (using the hostname property as a connection string), and that only one activity is run per target machine concurrently (numOfCores).

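As an illustration of the first point, a service that wants Disnix to manage (and migrate) its state can simply set the deployState property explicitly. The following fragment is a hypothetical sketch based on the services model shown earlier:

HelloMySQLDB = {
  name = "HelloMySQLDB";
  pkg = customPkgs.HelloMySQLDB;
  dependsOn = {};
  type = "mysql-database";
  deployState = true; # let Dysnomia take and restore snapshots when the service moves
};
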
In addition to unspecified properties (that need to be augmented with default values), we also have properties that are abstract specifications. These specifications need to be translated into more concrete representations:

  • As explained in an older blog post, the targets property -- which maps services to targets -- does not only map services to machines, but also to container services hosted on those machines. In most cases, you will only use one container instance per service type -- for example, running two MySQL DBMS services (e.g. one on TCP port 3306 and another on 3307) is a far less common scenario.

    If no container mapping is provided, Disnix will do an auto-mapping to a container service that corresponds to the service's type property.

    The HelloMySQLDB service's targets property shown in the last deployment architecture model gets translated into the following property:

    {system, pkgs}:
    
    rec {
      services = rec {
        HelloMySQLDB = {
          name = "HelloMySQLDB";
          ...
    
          targets = [
            rec {
              selectedContainer = "mysql-database";
    
              container = {
                mysqlPort = 3306;
                mysqlUsername = "mysqluser";
                mysqlPassword = builtins.readFile ./mysqlpw;
              };
    
              properties = {
                hostname = "test2.example.org";
              };
    
              clientInterface = "disnix-ssh-client";
              targetProperty = "hostname";
              numOfCores = 1;
              system = "x86_64-linux";
            }
          ];
        };
      };
    
      infrastructure = ...
    }
    

    As may be observed, the target provides a selectedContainer property to indicate to which container the service needs to be deployed. The properties of all containers that the service does not need to know about are discarded.
  • Other properties that need to be extended are the inter-dependency specifications (dependsOn and connectsTo). Typically, inter-dependencies are only specified on a functional level -- a service only declares that it depends on another service, disregarding the location where that service may have been deployed to.

    If no target location is specified, then Disnix will assume that the service has an inter-dependency on all possible locations where that dependency may be deployed. If an inter-dependency is redundantly deployed, then that service also has an inter-dependency on all redundant replicas.

    The fact that it is also possible to specify the targets of the inter-dependencies makes it possible to optimize certain deployments. For example, you can improve a service's performance by forcing it to bind to an inter-dependency that is deployed to the same target machine, so that it will not be affected by slow network connectivity.

    The dependsOn property of the HelloDBService will translate to:

    dependsOn = {
      HelloMySQLDB = {
        name = "HelloMySQLDB";
        pkg = customPkgs.HelloMySQLDB;
        dependsOn = {};
        type = "mysql-database";
    
        targets = [
          {
            selectedContainer = "mysql-database";
    
            container = {
              mysqlPort = 3306;
              mysqlUsername = "mysqluser";
              mysqlPassword = builtins.readFile ./mysqlpw;
            };
    
            properties = {
              hostname = "test2.example.org";
            };        
          }
        ];
      };
    };
    

    In the above code fragment, the inter-dependency has been augmented with a targets property corresponding to the targets where that inter-dependency has been deployed. A sketch of how a service expression could consume this structure follows after this list.

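To give an impression of why this normalization matters: the normalized dependsOn structure is what a service expression eventually receives as its inter-dependency argument. The following fragment is a hypothetical sketch of what the HelloDBServiceWrapper expression (referenced via customPkgs) could look like -- it follows the pattern Disnix uses for service expressions (intra-dependencies in the first argument set, inter-dependencies in the second), but the wrapper build steps and property names in the generated file are made up for illustration:

{stdenv}:
{HelloMySQLDB}:

let
  # Pick the first target that the inter-dependency has been deployed to
  dbTarget = builtins.head HelloMySQLDB.targets;
in
stdenv.mkDerivation {
  name = "HelloDBServiceWrapper";
  buildCommand = ''
    mkdir -p $out/conf
    # Propagate the inter-dependency's connection settings into a config file
    echo "dbHostname=${dbTarget.properties.hostname}" > $out/conf/db.properties
    echo "dbPort=${toString dbTarget.container.mysqlPort}" >> $out/conf/db.properties
  '';
}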

The last ingredient to generate an executable specification is building the services from source code so that we can map their build results to the target machines. To accomplish this, Disnix generates two invisible helper attributes for each service:

HelloDBService = {
  name = "HelloDBService";
  pkg = customPkgs.HelloDBServiceWrapper;
  dependsOn = {
    inherit HelloMySQLDB;
  };
  type = "tomcat-webapplication";

  ...

  _systemsPerTarget = [ "x86_64-linux" "x86_64-darwin" ];
  _pkgsPerSystems = {
    "x86_64-linux" = "/nix/store/91abq...-HelloDBService";
    "x86_64-darwin" = "/nix/store/f1ap2...-HelloDBService";
  };
};

The above code example shows the two "hidden" properties added to the HelloDBService:

  • The _systemsPerTarget attribute specifies for which CPU architecture/operating system combinations the service must be built. Normally, services are target agnostic and should always yield the same Nix store path (with a build that is nearly bit-identical), but the system architecture of the target machine is an exception to this property -- it is also possible to deploy the same service to different CPU architectures/operating systems, in which case the build results may differ.
  • The _pkgsPerSystems attribute specifies, for each system architecture, the Nix store path to the build result. A side effect of evaluating the Nix store path is that the service also gets built from source code.

Finally, Disnix composes an attribute in the deployment architecture model named targetPackages that refers to a list of Nix store paths to be distributed to each machine in the network:

{
  targetPackages = {
    test1 = [
      "/nix/store/91abq...-HelloDBService"
    ];

    test2 = [
      "/nix/store/p9af1...-HelloMySQLDB"
    ];
  };

  services = ...
  infrastructure = ...
}

The targetPackages attribute is useful for a variety of reasons, as we will see later.
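
For example, the per-machine Nix profiles that Disnix distributes are derived from it. Conceptually, such a profile could be composed with a buildEnv -- the following is a rough sketch, not Disnix's actual implementation:

{pkgs, targetPackages}:

pkgs.lib.mapAttrs (targetName: paths:
  pkgs.buildEnv {
    name = targetName;
    inherit paths; # the Nix store paths destined for this machine
  }
) targetPackages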

Generating a deployment model


With a normalized architecture model, we can generate an executable specification that I will call a deployment model. The deployment model can be used for executing all remaining activities after the services have been built.

An example of a deployment model could be:

{
  profiles = {
    test1 = "/nix/store/...-test1";
    test2 = "/nix/store/...-test2";
  };

  services = {
    "ekfekrerw..." = {
      name = "HelloMySQLDB";
      pkg = "/nix/store/...";
      type = "mysql-database";
      dependsOn = [
      ];
      connectsTo = [
      ];
    };

    "dfsjs9349..." = {
      name = "HelloDBService";
      pkg = "/nix/store/...";
      type = "tomcat-webapplication";
      dependsOn = [
        { target = "test2";
          container = "mysql-database";
          service = "ekfekrerw...";
        }
      ];
      connectsTo = [
      ];
    };
  };

  infrastructure = {
    test1 = {
      properties = {
        hostname = "test1.example.org";
      };
      containers = {
        tomcat-webapplication = {
          tomcatPort = 8080;
        };
      };
      system = "x86_64-linux";
      numOfCores = 1;
      clientInterface = "disnix-ssh-client";
      targetProperty = "hostname";
    };
    test2 = {
      properties = {
        hostname = "test2.example.org";
      };
      containers = {
        mysql-database = {
          mysqlPort = "3306";
        };
      };
      system = "x86_64-linux";
      numOfCores = 1;
      clientInterface = "disnix-ssh-client";
      targetProperty = "hostname";
    };
  };

  serviceMappings = [
    { service = "ekfekrerw...";
      target = "test2";
      container = "mysql-database";
    }
    { service = "dfsjs9349...";
      target = "test1";
      container = "tomcat-webapplication";
    }
  ];

  snapshotMappings = [
    { service = "ekfekrerw...";
      component = "HelloMySQLDB";
      container = "mysql-database";
      target = "test2";
    }
  ];
}

  • The profiles attribute refers to Nix profiles mapped to target machines and is derived from the targetPackages property in the normalized deployment architecture model. From the profiles property Disnix derives all steps of the distribution phase in which all packages and their intra-dependencies are copied to machines in the network.
  • The services attribute refers to all services that can be mapped to machines. The keys in this attribute set are SHA256 hash codes recursively computed from the Nix store path of the package, the type, and all the inter-dependency mappings. Using hash codes to identify the services makes it easy to see whether a service is identical to another (by comparing hash codes), so that upgrades can be done more efficiently (a conceptual sketch of this idea follows after this list).
  • The infrastructure attribute is unchanged compared to the deployment architecture model and still stores target machine properties.
  • The serviceMappings attribute maps services in the services attribute set, to target machines in the network stored in the infrastructure attribute set and containers hosted on the target machines.

    From these mappings, Disnix can derive the steps to activate and deactivate the services of which a system is composed, ensure that all dependencies are present and that the services are activated or deactivated in the right order.
  • The snapshotMappings attribute states that for each service mapped to a target machine and container, we also want to migrate its state (by taking and restoring snapshots) if the service gets moved from one machine to another.

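The following fragment is a conceptual illustration (not Disnix's actual algorithm) of how such a recursive hash key could be computed with Nix's built-in hashing function:

let
  # Conceptual sketch only -- the key is derived from the package, the type,
  # and (recursively) the keys of all inter-dependencies, so it changes
  # whenever any of those change.
  hashService = service: builtins.hashString "sha256" (builtins.toJSON {
    pkg = toString service.pkg;
    inherit (service) type;
    dependsOn = map hashService (builtins.attrValues service.dependsOn);
  });
in
hashService services.HelloDBService # e.g. applied to a normalized service
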
Although a deployment model is quite low-level, it is now also possible to manually write one, and deploy it by running:

$ disnix-env -D deployment.nix

disnix-env invokes an external executable called disnix-deploy that executes the remaining activities of the deployment process after the build process succeeds. disnix-deploy, as well as the tools that execute individual deployment activities, is driven by manifest files. A manifest file is simply a one-to-one translation of the deployment model from the Nix expression language to XML following the NixXML convention.

Generating a build model


To build the services from source code, Disnix simply uses Nix's build facilities to execute the build. If nothing special has been configured, all builds will be executed on the coordinator machine, but this may not always be desired.

Disnix also supports networks with heterogeneous architectures. For example, if the coordinator machine is a Linux machine and a target machine runs macOS (which is not compatible with the Linux system architecture), then Nix should delegate the build to a remote machine that is capable of building it. This is not something that Disnix handles for you out of the box -- you must configure Nix yourself to allow builds to be delegated.
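
For example (a hedged illustration with a made-up hostname; consult the Nix manual for the exact syntax that applies to your Nix version), an entry in /etc/nix/machines that allows builds to be delegated to a macOS build machine could look as follows:

nix@macos-builder.example.org x86_64-darwin /root/.ssh/id_buildfarm 1 1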

It is also possible to let Disnix delegate builds to the target machines in the network. To make build delegation work, Disnix generates a build model from a normalized deployment architecture model:

{
  derivations = [
    { "/nix/store/HelloMySQLDB-....drv"; interface = "test1"; }
    { "/nix/store/HelloDBService-....drv"; interface = "test2"; }
  ];

  interfaces = {
    test1 = {
      targetAddress = "test1.example.org";
      clientInterface = "disnix-ssh-client";
    };

    test2 = {
      targetAddress = "test2.example.org";
      clientInterface = "disnix-ssh-client";
    };
  };
}

The build model shown above defines the following properties:

  • The derivations attribute maps Nix store derivation files (low-level Nix specifications that capture build procedures and dependencies) to machines in the network that should perform the build. This information is used by Disnix to copy the store derivation closures to the target machines, build the packages remotely with Nix, and fetch the build results back to the coordinator machine.
  • The interfaces attribute is a subset of the infrastructure model that contains the connectivity settings for each target machine.

By running the following command, you can execute a build model to delegate builds to remote machines and fetch their results back:

$ disnix-delegate -B build.nix

If the build delegation option is enabled (for example, by passing the --build-on-targets parameter to disnix-env), then Disnix will work with a so-called distributed derivation file. Similar to a manifest file, a distributed derivation file is a one-to-one translation of the build model written in the Nix expression language to XML using the NixXML convention.

Packages model


In the normalized deployment architecture model and the deployment model, we generate a targetPackages property that can be used to compose Nix profiles containing the packages for each target machine.

For a variety of reasons, I thought it would also be interesting to give users direct control over this property. A new feature in Disnix is that you can now also write a packages model:

{pkgs, system}:

{
  test1 = [
    pkgs.mc
  ];

  test2 = [
    pkgs.wget
    pkgs.curl
  ];
}

The above packages model says that we should distribute the Midnight Commander package to the test1 machine, and wget and curl to the test2 machine.

Running the following command will deploy the packages to the target machines in the network:

$ disnix-env -i infrastructure.nix -P pkgs.nix

You can also combine the three common Disnix models with a packages model:

$ disnix-env -s services.nix \
  -i infrastructure.nix \
  -d distribution.nix \
  -P pkgs.nix

Disnix will then deploy both the services that are distributed to the target machines and the supplemental packages defined in the packages model.

The packages model is useful for a variety of reasons:

  • Although it is already possible to use Disnix as a simple package deployer (by setting the type of a service to package -- see the sketch after this list), the packages model approach makes it even easier. Furthermore, you can more easily specify sets of packages for target machines. The only thing you cannot do is deploy packages that have inter-dependencies on services, e.g. a client that is preconfigured to connect to a service.
  • The hybrid approach makes it possible to more smoothly make a transition to Disnix when automating the deployment process of a system. You can start by managing the dependencies with Nix, then package pieces of the project as Nix packages, then use Disnix to deploy them to remote machines, and finally turn pieces of the system into services that can be managed by Disnix.

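For comparison, this is roughly what deploying an ordinary package as a service looks like with the package type -- a hypothetical sketch of a services model entry that the packages model makes unnecessary:

mc = {
  name = "mc";
  pkg = pkgs.mc;
  dependsOn = {};
  type = "package"; # no Dysnomia life-cycle management, just install the package
};
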
Conclusion


In this blog post, I have described a new transformation pipeline in Disnix with well-defined intermediate steps that transforms the input models to a deployment model that is consumable by the tools that implement the deployment activities.

The following diagram summarizes the input models, intermediate models and output models:


The new transformation pipeline has the following advantages over the old infrastructure:

  • The implementation is much easier to maintain and we can more easily reason about its correctness.
  • We have access to a broader range of configuration properties. For example, it was previously not possible to select the targets of the inter-dependencies.
  • The output models (the deployment and build models) are much more easily consumable by the Disnix tools that execute the remainder of the deployment activities. The domain models in the code also closely resemble the structure of the build and deployment models. This can also be partially attributed to libnixxml, which I have described in my previous blog post.
  • We can more easily implement new input models, such as the packages model.
  • The implementation of the disnix-reconstruct tool, which reconstructs the manifest on the coordinator machine from metadata stored on the target machines, has also become much simpler -- we can get rid of most of the custom code and generate a deployment model instead.

Availability


The new pipeline is available in the current development version of Disnix and will become available for general use in the next Disnix release.

The deployment models described in this blog post are incompatible with the manifest file format used in the last stable release of Disnix. This means that after upgrading Disnix, you need to convert any previous deployment configuration by running the disnix-convert tool.
