Thursday, May 19, 2016

Mapping services to containers with Disnix and a new notational convention

In the last couple of months, I have made a number of major changes to the internals of Disnix. As described in a couple of older blog posts, deployment with Disnix is driven by three models each capturing a specific concern:

  • The services model specifies the available distributable components, how to construct them (from source code, intra-dependencies and inter-dependencies), and their types so that they can be properly activated or deactivated on the target machines.
  • The infrastructure model specifies the available target machines and their relevant deployment properties.
  • The distribution model maps services in the services model to target machines in the infrastructure model.

By running the following command-line instruction with the three models as parameters:

$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Disnix executes all required activities to get the system deployed, including building, distributing, activating and deactivating services.

I have always described the final step, the activation phase, as deactivating obsolete and activating new services on the target machines. However, this is an oversimplification of what really happens.

In reality, Disnix does more than just carrying out an activation step on a target machine -- to get a service activated or deactivated, Disnix invokes Dysnomia, which modifies the state of a so-called container hosting a collection of components. As with components, the definition of a container in Dysnomia is deliberately left abstract and can represent anything, such as a Java Servlet container (e.g. Apache Tomcat), a DBMS (e.g. MySQL) or the operating system's service manager (e.g. systemd).
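
To illustrate what this means: Dysnomia can also be invoked manually to change the state of a container. A sketch of such an invocation (the component and container names are fictional):

$ dysnomia --operation activate --component ./staff-database --container mysql-database

The above command activates the staff-database component (e.g. a directory containing a database schema) in the container named: mysql-database.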

So far, these details were always hidden in Disnix and the container mapping was an implicit operation, which I never really liked. Furthermore, there are situations in which you may want to have more control over this mapping.

In this blog post, I will describe my recent modifications and a new notational convention that can be used to treat containers as first-class citizens.

A modified infrastructure model formalism


Previously, a Disnix infrastructure model had the following structure:

{
  test1 = {
    hostname = "test1.example.org";
    tomcatPort = 8080;
    system = "i686-linux";
  };
  
  test2 = {
    hostname = "test2.example.org";
    tomcatPort = 8080;
    mysqlPort = 3306;
    mysqlUsername = "root";
    mysqlPassword = "admin";
    system = "x86_64-linux";
    numOfCores = 1;
    targetProperty = "hostname";
    clientInterface = "disnix-ssh-client";
  }; 
}

The above Nix expression is an attribute set in which each key corresponds to a target machine in the network and each value is an attribute set containing arbitrary machine properties.

These properties are used for a variety of deployment activities. Disnix makes no hard distinction between them -- some properties have a special meaning, but most of them can be freely chosen, yet this does not become clear from the model.

In the new notational convention, the target machine properties have been categorized:

{
  test1 = {
    properties = {
      hostname = "test1.example.org";
    };
    
    containers = {
      tomcat-webapplication = {
        tomcatPort = 8080;
      };
    };
    
    system = "i686-linux";
  };
  
  test2 = {
    properties = {
      hostname = "test2.example.org";
    };
    
    containers = {
      tomcat-webapplication = {
        tomcatPort = 8080;
      };
      
      mysql-database = {
        mysqlPort = 3306;
        mysqlUsername = "root";
        mysqlPassword = "admin";
      };
    };
    
    system = "x86_64-linux";
    numOfCores = 1;
    targetProperty = "hostname";
    clientInterface = "disnix-ssh-client";
  }; 
}

The above expression has a more structured notation:

  • The properties attribute refers to arbitrary machine-level properties that are used at build time and for connecting from the coordinator to the target machine.
  • The containers attribute set defines the available container services on a target machine and their relevant deployment properties. The container properties are used at build time and at activation time. At activation time, they are passed as parameters to the Dysnomia module that activates a service in the corresponding container.
  • The remainder of the target attributes are optional system properties. For example, targetProperty defines which attribute in properties contains the address to connect to the target machine. clientInterface refers to the executable that establishes a remote connection, system defines the system architecture of the target machine (so that services will be correctly built for it), and numOfCores defines how many concurrent activation operations can be executed on the target machine.

In the new notation, it becomes immediately clear which container services a target machine provides, whereas in the old notation they remained hidden.

An alternative distribution model notation


I have also introduced an alternative notation for mappings in the distribution model. A traditional Disnix distribution model typically looks as follows:

{infrastructure}:

{
  ...
  StaffService = [ infrastructure.test2 ];
  StaffTracker = [ infrastructure.test1 infrastructure.test2 ];
}

In the above expression, each attribute name refers to a service in the service model and each value to a list of machines in the infrastructure model.

As explained earlier, besides deploying a service to a machine, a service also gets deployed to a container hosted on the machine, which is not reflected in the distribution model.

When using the above notation, Disnix executes a so-called auto-mapping strategy to containers: it simply takes a service's type attribute from the services model (which is also used to determine the Dysnomia module that carries out the activation and deactivation steps):

StaffTracker = {
  name = "StaffTracker";
  pkg = customPkgs.StaffTracker;
  dependsOn = {
    inherit GeolocationService RoomService;
    inherit StaffService ZipcodeService;
  };
  type = "tomcat-webapplication";
};

and deploys the service to the container with the same name as the type. For example, all services of type: tomcat-webapplication will be deployed to a container named: tomcat-webapplication (using the Dysnomia module named: tomcat-webapplication to activate or deactivate them).

In most cases auto-mapping suffices -- we typically only run one container service of each kind on a machine, e.g. one MySQL DBMS and one Apache Tomcat application server. That is why the traditional notation remains the default in Disnix.

However, sometimes it may also be desired to have more control over the container mappings. The new Disnix also supports an alternative and more verbose notation. For example, the following mapping of the StaffTracker service is equivalent to the traditional mapping shown in the previous distribution model:

StaffTracker = {
  targets = [
    { target = infrastructure.test1; }
    { target = infrastructure.test2; }
  ];
};

We can use the alternative notation to control the container mapping, for example:

{infrastructure}:

{
  ...

  StaffService = {
    targets = [
      { target = infrastructure.test1;
        container = "tomcat-production";
      }
    ];
  };
  StaffTracker = {
    targets = [
      { target = infrastructure.test1;
        container = "tomcat-test";
      }
    ];
  };
}

By adding the container attribute to a mapping, we can override the auto-mapping strategy and specify the name of the container that we want to deploy to. This alternative notation allows us to deploy to a container whose name does not match the service type, or to manage networks of machines in which multiple instances of the same container are deployed.

For example, in the above distribution model, both services are Apache Tomcat web applications. We map StaffService to a container called: tomcat-production and StaffTracker to a container called: tomcat-test. Both containers are hosted on the same machine: test1.
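
For this mapping to work, both containers must, of course, be defined in the infrastructure model. A sketch of what test1's configuration could look like (the tomcatPort values are fictional):

test1 = {
  properties = {
    hostname = "test1.example.org";
  };

  containers = {
    tomcat-production = {
      tomcatPort = 8080;
    };

    tomcat-test = {
      tomcatPort = 8081;
    };
  };
};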

A modified formalism to refer to inter-dependency parameters


As a consequence of modifying the infrastructure and distribution model notations, the way inter-dependency parameters are referenced in Disnix expressions has also changed slightly:

{stdenv, StaffService}:
{staff}:

let
  contextXML = ''
    <Context>
      <Resource name="jdbc/StaffDB" auth="Container"
        type="javax.sql.DataSource"
        maxActive="100" maxIdle="30" maxWait="10000"
        username="${staff.target.container.mysqlUsername}"
        password="${staff.target.container.mysqlPassword}"
        driverClassName="com.mysql.jdbc.Driver"
        url="jdbc:mysql://${staff.target.properties.hostname}:${toString (staff.target.container.mysqlPort)}/${staff.name}?autoReconnect=true" />
    </Context>
  '';
in
stdenv.mkDerivation {
  name = "StaffService";
  buildCommand = ''
    mkdir -p $out/conf/Catalina
    cat > $out/conf/Catalina/StaffService.xml <<EOF
    ${contextXML}
    EOF
    ln -sf ${StaffService}/webapps $out/webapps
  '';
}

The above example is a Disnix expression that configures the StaffService service. The StaffService connects to a remote MySQL database (named: staff) which is provided as an inter-dependency parameter. The Disnix expression uses the properties of the inter-dependency parameter to configure a so-called context XML file, which Apache Tomcat uses to establish a (remote) JDBC connection so that the web service can connect to the database.

Previously, each inter-dependency parameter provided a targets sub attribute referring to the targets in the infrastructure model to which the inter-dependency had been mapped in the distribution model. Because it is quite common to map a service to a single target only, there is also a target sub attribute that refers to the first element for convenience.

In the new Disnix, the targets now refer to container mappings instead of machine mappings and implement a new formalism to reflect this:

  • The properties sub attribute refers to the machine-level properties in the infrastructure model.
  • The container sub attribute refers to the properties of the container to which the inter-dependency has been deployed.

As can be observed, both sub attributes are used in the expression shown above to allow the service to connect to the remote MySQL database.
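
The targets list remains useful whenever an inter-dependency is mapped to multiple containers. For example, a hypothetical service that needs to know all replicas of the staff database could compose a JDBC URL for each mapping as follows (a sketch, not part of the expression shown above):

jdbcURLs = map (dependency:
  "jdbc:mysql://${dependency.properties.hostname}:${toString dependency.container.mysqlPort}/${staff.name}"
) staff.targets;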

Visualizing containers


Besides modifying the notational conventions and the underlying deployment mechanisms, I have also modified disnix-visualize to display containers. The following picture shows an example:

[Figure: example disnix-visualize output showing machines, containers, services and inter-dependency arrows]

In the above picture, the light grey boxes denote machines, the dark grey boxes denote containers, the ovals denote services, and the arrows denote inter-dependency relationships. In my opinion, these new visualizations are much more intuitive -- I still remember that, in an old blog post summarizing my PhD thesis, I used a hand-drawn diagram to illustrate why deployment of service-oriented systems is complicated. That diagram already showed containers, yet they were missing from the visualizations generated by disnix-visualize. Now this mismatch has finally been removed from the tooling.

(As a sidenote: it is still possible to generate the classic non-containerized visualizations by providing the --no-containers command-line option).
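
For completeness, generating a visualization roughly works as follows (a sketch, assuming the system has been deployed and Graphviz's dot tool is installed):

$ disnix-visualize > deployment.dot
$ dot -Tpng deployment.dot > deployment.png

disnix-visualize emits a graph in the dot format, which can then be converted into an image.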

Capturing the infrastructure model from the machines' Dysnomia container configuration files


The new notational conventions also make it possible to more easily implement yet another use case. As explained in an earlier blog post, before we can deploy services with Disnix, we first need predeployed machines that have Nix, Dysnomia and Disnix installed, as well as a number of container services (such as MySQL and Apache Tomcat).

After deploying the machines, we must hand-write an infrastructure model reflecting their properties. Writing infrastructure models by hand is sometimes tedious and error-prone. In my previous blog post, I have shown that it is possible to automatically generate Dysnomia container configuration files from NixOS configurations that capture properties of entire machine configurations.

We can now also do the opposite: generate an expression from a machine's Dysnomia container configuration files and compose an infrastructure model from it. This takes away the majority of the burden of writing infrastructure models by hand.

For example, we can write a Dysnomia-enabled NixOS configuration:

{config, pkgs, ...}:

{
  services = {
    openssh.enable = true;
    
    mysql = {
      enable = true;
      package = pkgs.mysql;
      rootPassword = ../configurations/mysqlpw;
    };
    
    tomcat = {
      enable = true;
      commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
      catalinaOpts = "-Xms64m -Xmx256m";
    };
  };
  
  dysnomia = {
    enable = true;
    enableAuthentication = true;
    properties = {
      hostname = config.networking.hostName;
      mem = "$(grep 'MemTotal:' /proc/meminfo | sed -e 's/kB//' -e 's/MemTotal://' -e 's/ //g')";
    };
  };
}

The above NixOS configuration deploys two container services: MySQL and Apache Tomcat. Furthermore, it defines some non-functional machine-level properties, such as the hostname and the amount of RAM (mem) the machine has (which is composed dynamically by consulting the kernel's /proc filesystem).

As shown in the previous blog post, when deploying the above configuration with:

$ nixos-rebuild switch

the Dysnomia NixOS module automatically composes the /etc/dysnomia/properties and /etc/dysnomia/containers configuration files. When running the following command:

$ dysnomia-containers --generate-expr
{
  properties = {
    "hostname" = "test1";
    "mem" = "1023096";
    "supportedTypes" = [
      "mysql-database"
      "process"
      "tomcat-webapplication"
    ];
    "system" = "x86_64-linux";
  };
  containers = {
    mysql-database = {
      "mysqlPassword" = "admin";
      "mysqlPort" = "3306";
      "mysqlUsername" = "root";
    };
    tomcat-webapplication = {
      "tomcatPort" = "8080";
    };
  };
}

Dysnomia generates a Nix expression from the general properties and container configuration files.

We can do the same operation in a network of machines by running the disnix-capture-infra tool. First, we need to write a very minimal infrastructure model that only captures the connectivity attributes:

{
  test1.properties.hostname = "test1";
  test2.properties.hostname = "test2";
}

When running:

$ disnix-capture-infra infrastructure-basic.nix
{
  test1 = {
    properties = {
      "hostname" = "test1";
      "mem" = "1023096";
      "supportedTypes" = [
        "mysql-database"
        "process"
        "tomcat-webapplication"
      ];
      "system" = "x86_64-linux";
    };
    containers = {
      mysql-database = {
        "mysqlPassword" = "admin";
        "mysqlPort" = "3306";
        "mysqlUsername" = "root";
      };
      tomcat-webapplication = {
        "tomcatPort" = "8080";
      };
    };
  };
  test2 = ...
}

Disnix captures the configurations of all machines in the basic infrastructure model and returns an augmented infrastructure model containing all their properties.
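
Since the captured infrastructure model is written to the standard output, we can redirect it to a file and use it directly for deployment, for example:

$ disnix-capture-infra infrastructure-basic.nix > infrastructure.nix
$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix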

(As a sidenote: disnix-capture-infra is not the only infrastructure model generator I have developed. In the self-adaptive deployment framework built on top of Disnix, I have developed an Avahi-based discovery service that can also generate infrastructure models. It is also more powerful (but quite hacky and immature) because it dynamically discovers the machines in the network, so it does not require a basic infrastructure model to be written. Moreover, it automatically responds to events when a machine's configuration changes.

I have modified the Avahi-based discovery tool to use Dysnomia's expression generator as well.

Also, the DisnixOS toolset can generate infrastructure models from networked NixOS configurations).

Discussion


In this blog post, I have described the result of a number of major internal changes to Disnix that make the containers concept a first-class citizen. Fortunately, from an external perspective the changes are minor, but still backwards incompatible -- we must follow a new convention for the infrastructure model and refer to the target properties of inter-dependency parameters in a slightly different way.

In return you will get:

  • A more intuitive notation. As explained, we do not only deploy to a machine, but also to a container hosted on the machine. Now the deployment models and corresponding visualizations reflect this concept.
  • More control and power. We can deploy to multiple containers of the same type on the same machine, e.g. we can have two MySQL DBMSes on the same machine.
  • More correctness. Previously, when activating or deactivating a service, all infrastructure properties were propagated as parameters to the corresponding Dysnomia module. Why would the mysql-database module need to know about a postgresql-database, and vice versa? Now, Dysnomia modules only get to know what they need to know.
  • Discovery. We can generate an infrastructure model from the Dysnomia container configuration files hosted on the target machines with relative ease.

A major caveat is that deployment planning (implemented in the Dynamic Disnix framework) could also potentially be extended from machine level to container level.

At the moment, I have not made these modifications yet. This means that Dynamic Disnix can still generate distribution models, but only at machine level. As a consequence, Dynamic Disnix only allows a user to refer to a target's machine-level properties (i.e. the properties attribute in the infrastructure model) for deployment planning purposes, and not to any container-specific properties.

Container-level deployment planning is also something I intend to support at some point in the future.

Availability


The new notational conventions and containers concepts are part of the development version of Disnix and will become available in the next release. Moreover, I have modified the Disnix examples to use the new notations.
