The main reason these model transformations are required is that Disnix takes multiple high-level models as input parameters. Each model captures a specific concern (a minimal sketch of these models follows the list below):
- The services model captures all distributable components of which a service-oriented system consists, including their inter-dependencies.
- The infrastructure model captures all target machines in the network and all their relevant configuration properties.
- The distribution model maps services in the services model to target machines in the infrastructure model (and to container services deployed to a machine).
- The packages model supplements the machines with additional packages that are not required by any specific service.
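To give a rough idea of what these models look like, here is a minimal sketch of an infrastructure and a distribution model (using the hypothetical testServiceA service and test1 machine that also appear later in this blog post, and leaving out most properties):

# infrastructure.nix: declares one target machine that provides a 'process' container
{
  test1 = {
    properties.hostname = "test1";
    containers.process = {};
  };
}

# distribution.nix: maps services to target machines in the infrastructure model
{infrastructure}:

{
  testServiceA = [ infrastructure.test1 ];
}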
The above specifications can be considered declarative specifications, because they specify the relevant properties of a service-oriented system rather than the activities that need to be carried out to deploy it, such as transferring the binary packages of the services to the target machines in the network and activating the services.
The activities that need to be executed to deploy a system are derived automatically from the declarative input models.
To derive these activities more easily, Disnix translates the above input models into a single declarative specification (called a deployment model) that is also an executable specification -- in this model, there is a one-to-one mapping between deployment artifacts (e.g. packages and snapshots) and deployment targets (e.g. machines and container services). For each of these mappings, we can easily derive the deployment activities that need to be executed.
In addition to a deployment model, Disnix can also optionally delegate package builds to target machines in the network by using a build model.
Transforming the input models into a deployment model is not very straightforward. To cope with the complexity, and to give the user more opportunities to experiment and integrate with Disnix, it first converts the input models into two intermediate representations (conceptually sketched after the list below):
- The deployment architecture model unifies the input models into one single declarative specification, and eliminates the distribution concern by attaching the targets in the distribution model to the corresponding services.
- The normalized deployment architecture model augments all services and targets with missing default properties and translates/desugars high-level deployment properties into unambiguous low-level properties.
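Conceptually, the transformation pipeline can be thought of as a chain of functions. The sketch below is purely illustrative -- the file and function names are made up and do not correspond to Disnix's internal implementation:

let
  # Hypothetical transformation functions, named purely for illustration
  generateDeploymentArchitecture = import ./generate-deployment-architecture.nix;
  normalizeDeploymentArchitecture = import ./normalize-deployment-architecture.nix;
  generateDeploymentModel = import ./generate-deployment-model.nix;

  # Unify the input models into a single deployment architecture model
  architecture = generateDeploymentArchitecture {
    services = import ./services.nix;
    infrastructure = import ./infrastructure.nix;
    distribution = import ./distribution.nix;
  };

  # Augment missing defaults and desugar high-level properties
  normalizedArchitecture = normalizeDeploymentArchitecture architecture;
in
# Derive the executable deployment model from the normalized architecture
generateDeploymentModel normalizedArchitecture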
Although I am quite happy with this major revision and with having these well-defined intermediate models (it allowed me to improve quality as well as expose more configuration options), the normalization of the deployment architecture model still remained relatively complicated.
With the addition of service container providers, the normalization process became so complicated that I was forced to revise it again.
"Using references" in the Nix expression language
After some thinking, I realized that my biggest problem was dealing with the absence of "references" and dynamic binding in the Nix expression language. For example, in the deployment architecture model, a service might "refer" to a target machine to indicate where it should be deployed to:
rec {
  services = {
    testServiceA = {
      name = "testServiceA";
      targets = [ infrastructure.test1 ];
      type = "process";
    };
  };

  infrastructure = {
    test1 = {
      properties = {
        hostname = "test1";
      };
      containers.process = {};
    };
  };
}
In the above deployment architecture model, the service: testServiceA has a property: targets that "refers" to machines in the infrastructure section to tell Disnix where the service should be deployed to.
Although the above notation might suggest that the element in the targets list "refers" to the property: infrastructure.test1, what happens from a logical point of view is that we attach a copy of that attribute set to the list. This is caused by the fact that in the Nix expression language, data structures are immutable -- they are not modified, but new instances are provided instead.
(As a sidenote: although the "reference" to the test1 target machine manifests itself as a logical copy to the language user, internally the Nix package manager may still use a data structure that resides in the same place in memory.
Older implementations of Nix used a technique called maximal laziness, which is based on the concept of maximal sharing, and as far as I know, current Nix implementations still share data structures in memory, although not as extensively as the maximal laziness implementation did).
Immutability has a number of consequences. For example, in Disnix we can override the deployment architecture model in the expression shown above to augment the target machines in the infrastructure section with some default properties (advanced machine properties that you would normally only specify for more specialized use cases):
let
  pkgs = import <nixpkgs> {};
  architecture = import ./architecture.nix;
in
pkgs.lib.recursiveUpdate architecture {
  infrastructure.test1.properties = {
    system = "x86_64-linux";
    clientInterface = "disnix-ssh-client";
    targetProperty = "hostname";
  };
}
The Nix expression takes the deployment architecture model shown in the previous code fragment, and augments the test1 target machine configuration in the infrastructure section with default values for some more specialized deployment settings:
- The system attribute specifies the system architecture of the target machine, so that optionally Nix can delegate the build of a package to another machine that is capable of building it (the system architecture of the coordinator machine might be a different operating system, CPU architecture, or both).
- The clientInterface refers to an executable that can establish a remote connection to the machine. By default Disnix uses SSH, but also other communication protocols can be used.
- The targetProperty attribute refers to the property in properties that can be used as a connection string for the client interface.
Due to immutability, the result of evaluating the expression above is, logically speaking, not a modified attribute set, but a new attribute set that contains the properties of the original deployment architecture augmented with the advanced settings.
The result of evaluating the expression shown above is the following:
{
  services = {
    testServiceA = {
      name = "testServiceA";
      targets = [
        {
          properties = {
            hostname = "test1";
          };
          containers.process = {};
        }
      ];
      type = "process";
    };
  };

  infrastructure = {
    test1 = {
      properties = {
        hostname = "test1";
        system = "x86_64-linux";
        clientInterface = "disnix-ssh-client";
        targetProperty = "hostname";
      };
      containers.process = {};
    };
  };
}
As you have probably noticed, the test1 target machine configuration in the infrastructure section has been augmented with the specialized default properties shown earlier, but the "reference" (that in reality is not a reference) to the machine configuration in testServiceA still refers to the old configuration.
Simulating references
The result shown in the output above is not what I want. What I basically want is that the target machine references get updated as well. To cope with this limitation, in older Disnix versions I implemented a very ad-hoc and tedious transformation strategy -- whenever I update a target machine, I update all the target machine "references" as well.
In early implementations of Disnix, applying this strategy was still somewhat manageable, but over the years things have grown considerably in complexity. In addition to target machines, services in Disnix can also have inter-dependency references to other services, and inter-dependencies also depend on the target machines where the dependencies have been deployed to.
Moreover, target references are not only references to target machines, but also references to container services hosted on those machines. By default, if no target container is specified, Disnix uses the service's type property to automatically map the service to a container with the same name.
For example, the following targets mapping of testServiceA:
targets = [ infrastructure.test1 ];
is equivalent to:
targets = { targets = [ { target = infrastructure.test1; container = "process"; } ]; };
In the revised normalization strategy, I first turn all references into reference specifications, which are composed of the attribute keys of the objects they refer to.
For example, consider the following deployment architecture model:
rec {
  services = rec {
    testServiceA = {
      name = "testServiceA";
      targets = [ infrastructure.test1 ];
      type = "process";
    };

    testServiceB = {
      name = "testServiceB";
      dependsOn = {
        inherit testServiceA;
      };
      targets = [ infrastructure.test1 ];
      type = "process";
    };
  };

  infrastructure = {
    test1 = {
      properties = {
        hostname = "test1";
      };
      containers.process = {};
    };
  };
}
The above model declares the following properties:
- There is one target machine: test1 with a container service called: process. The container service is a generic container that starts and stops daemons.
- There are two services defined: testServiceA is a service that corresponds to a running process; testServiceB is a process that has an inter-dependency on testServiceA -- in order to work properly, it needs to know how to reach it, and it should be activated after testServiceA.
In the normalization process, the model shown above gets "referenized" as follows:
{
  services = {
    testServiceA = {
      name = "testServiceA";
      targets = [
        { target = "test1"; container = "process"; }
      ];
      type = "process";
    };

    testServiceB = {
      name = "testServiceB";
      dependsOn = {
        testServiceA = {
          service = "testServiceA";
          targets = [
            { target = "test1"; container = "process"; }
          ];
        };
      };
      targets = [
        { target = "test1"; container = "process"; }
      ];
      type = "process";
    };
  };

  infrastructure = {
    test1 = {
      properties = {
        hostname = "test1";
        system = "x86_64-linux";
        clientInterface = "disnix-ssh-client";
        targetProperty = "hostname";
      };
      containers.process = {};
    };
  };
}
In the above code fragment, the service's targets and inter-dependencies (dependsOn) get translated into reference specifications:
- The targets property of each service gets translated into a list of attribute sets, in which the target attribute refers to the key of the target machine in the infrastructure attribute set, and the container attribute refers to the container in the containers sub attribute of the target machine.
- The dependsOn property of each service gets translated into an inter-dependency reference specification. The service attribute refers to the key of the service in the services section that provides the inter-dependency, and the targets attribute refers to all the target containers where that inter-dependency is deployed to, in the same format as the targets property of a service.
By default, an inter-dependency's targets attribute refers to the same targets as the corresponding service. It is also possible to limit the targets of a dependency to a subset.
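For example (a hypothetical sketch that follows the reference specification format shown above), if testServiceA were deployed to multiple machines, testServiceB could restrict its inter-dependency to the deployment on test1 only by listing a subset of target references:

dependsOn = {
  testServiceA = {
    service = "testServiceA";
    # only consider the deployment of testServiceA to the test1 machine,
    # even if testServiceA is deployed to more targets
    targets = [
      { target = "test1"; container = "process"; }
    ];
  };
};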
To implement the "referenize" strategy shown above, a requirement is imposed on the "logical units" in the model -- every attribute set representing an object (e.g. a service, or a target) needs a property that allows it to be uniquely identified, so that we can determine under which key in the enclosing attribute set it is declared.
In the example above, every target can be uniquely identified by an attribute that serves as the connection string (e.g. hostname). With a lookup table (mapping connection strings to target keys) we can determine the corresponding key in the infrastructure section.
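A minimal sketch (not Disnix's actual implementation) of how such a lookup table could be constructed with functions from the Nixpkgs library is shown below. With the resulting table, a target attribute set can be traced back to the key under which it is declared by looking up its connection string:

{lib}:

infrastructure:

# Map every target's connection string (the hostname in this example) to the
# key under which that target is declared in the infrastructure attribute set
lib.listToAttrs (lib.mapAttrsToList (key: target: {
  name = target.properties.hostname;
  value = key;
}) infrastructure)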
For the services, there is no unique property, so we have to introduce one: name -- the name attribute should correspond to the service attribute set's key.
(As a sidenote: introducing unique identification properties is not strictly required -- whenever we encounter a parameter that should be treated as a reference, we could also scan the declaring attribute set for an object that has an identical structure, but this makes things more complicated and slower).
With all relevant objects turned into references, we can apply all our normalization rules, such as translating high-level properties to low-level properties and augmenting unspecified properties with reasonable default values, without requiring us to update all object "references" as well.
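The following simplified sketch (not actual Disnix code) illustrates why this helps: because the services only store the keys of their targets, augmenting a target machine in the infrastructure section no longer leaves any stale copies behind in the services section:

let
  pkgs = import <nixpkgs> {};

  referenizedArchitecture = {
    services.testServiceA = {
      name = "testServiceA";
      targets = [ { target = "test1"; container = "process"; } ];
      type = "process";
    };
    infrastructure.test1 = {
      properties.hostname = "test1";
      containers.process = {};
    };
  };
in
# Augmenting the target machine only requires touching the infrastructure
# section; the service still refers to the same key: "test1"
pkgs.lib.recursiveUpdate referenizedArchitecture {
  infrastructure.test1.properties = {
    system = "x86_64-linux";
    clientInterface = "disnix-ssh-client";
    targetProperty = "hostname";
  };
}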
Consuming inter-dependency parameters in Disnix expressions
Eventually, we need to "dereference" these objects as well. Every service in the services model gets built and configured from source code and its relevant build inputs, including the service's inter-dependencies:
{stdenv, fetchurl}:
{testServiceA}:

stdenv.mkDerivation {
  name = "testServiceB";
  src = fetchurl {
    url = "https://.../testServiceB.tar.gz";
    sha256 = "0242a...";
  };
  preConfigure = ''
    cat > config.json <<EOF
    targetServiceURL=http://${testServiceA.target.hostname}
    EOF

    cat > docs.txt <<EOF
    The name of the first dependency is: ${testServiceA.name}
    The system target architecture is: ${testServiceA.target.system}
    EOF
  '';
  configurePhase = "./configure --prefix=$out";
  buildPhase = "make";
  installPhase = "make install";
}
The Disnix expression shown above is used to build and configure testServiceB from source code and all its build inputs. It follows a similar convention to ordinary Nix packages, in which a function header defines the build inputs and the body invokes a function that builds and configures the package:
- The outer function header defines the local build and runtime dependencies of the service, also called: intra-dependencies in Disnix. In the example above, we use stdenv to compose an environment with standard build utilities and fetchurl to download a tarball from an external site.
- The inner function header defines all the dependencies on services that may be deployed to remote machines in the network. These are called: inter-dependencies in Disnix. In this particular example, we have a dependency on the service: testServiceA that is defined in the services section of the deployment architecture model.
- In the body, we invoke the stdenv.mkDerivation function to build the service from source code and configure it in such a way that it can connect to testServiceA, by creating a config.json file that contains the connection properties. We also generate a docs.txt file that displays some configuration properties.
The testServiceA parameter refers to the normalized properties of testServiceA in the services section of the normalized deployment architecture model, in which we have "dereferenced" the properties that were previously turned into references.
We also make a few small modifications to the properties of an inter-dependency (a rough sketch of the resulting parameter follows the list below):
- Since it is so common that a service will be deployed to a single target machine only, there is also a target property that refers to the first element in the targets list.
- The containers property of a target machine will not be accessible -- since you can map a service to one single container only, you can only use the container attribute set that provides access to the properties of the container where the service is mapped to.
- We can also refer to transitive inter-dependencies via the connectsTo attribute, which provides access to the properties of all visible inter-dependencies in the same format.
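Roughly speaking (a simplified sketch based on the conventions described above, not the exact attribute set that Disnix generates), the testServiceA parameter that gets passed to the above Disnix expression could be thought of as follows:

{
  name = "testServiceA";
  type = "process";
  # convenience attribute referring to the first element of the targets list;
  # it exposes the target's connection properties and normalized deployment
  # settings, together with the settings of the container the service is mapped to
  target = {
    hostname = "test1";
    system = "x86_64-linux";
    container = {};
  };
  targets = [ /* one entry per target that the inter-dependency is deployed to */ ];
  connectsTo = { /* transitive inter-dependencies, exposed in the same format */ };
}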
Translation to a deployment model
A normalized deployment architecture model, in which all object references have been translated into reference specifications, can also be more easily translated to a deployment model (the XML representation of the deployment model is called a manifest file in Disnix terminology), an executable model that Disnix can use to derive the deployment activities that need to be executed:
{ services = { "282ppq..." = { name = "testServiceA"; ... }; "q1283l..." = { name = "testServiceB"; ... }; }; serviceMappings = [ { service = "282ppq..."; container = "process"; target = "test1"; } { service = "q1283l..."; container = "process"; target = "test1"; } ]; infrastructure = { test1 = { properties = { hostname = "test1"; system = "x86_64-linux"; clientInterface = "disnix-ssh-client"; targetProperty = "hostname"; }; containers.process = {}; }; }; }
In the above partial deployment model, the serviceMappings list contains mappings of services in the services section to target machines in the infrastructure section and containers declared as sub-attributes of the target machine.
We can make a straightforward translation from the normalized deployment architecture model. The only things that need to be computed are the hash keys of the services, which are derived from the hashes of the Nix store paths of the package builds and some essential properties, such as the service's type and inter-dependencies.
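Purely as an illustration of the idea (this is not the algorithm that Disnix actually uses), such a key could be derived along the following lines:

service:

# Derive a key from the properties that determine how the service is deployed:
# the Nix store path of its package build, its type and its inter-dependencies
builtins.hashString "sha256" (builtins.toJSON {
  pkg = service.pkg.outPath;
  type = service.type;
  dependsOn = service.dependsOn;
})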
Discussion
In this blog post, I have described how I have reimplemented the normalization strategy of the deployment architecture model in Disnix, in which I turn sub objects that should represent references into reference specifications, then apply all the required normalizations, and finally dereference the objects when needed, so that most of the normalized properties can be used by the Disnix expressions that build and configure the services.
Aside from the fact that "referenizing" objects as a first step makes the normalization infrastructure IMO simpler, we can also expose more configuration properties to services -- in Disnix expressions, it is now also possible to refer to transitive inter-dependencies and the package builds of inter-dependencies. The latter is useful to ensure that, if two services with an inter-dependency relationship are deployed to the same machine, the local process management system (e.g. sysvinit scripts, systemd, and many others) can also activate them in the right order after a reboot.
Not having references and dynamic binding is a general limitation of the Nix expression language, and coping with it is not a problem exclusive to Disnix. For example, the Nixpkgs collection originally used to declare one big top-level attribute set with function invocations, such as:
rec {
  stdenv = import ./stdenv { ... };
  zlib = import ./development/libraries/zlib {
    inherit stdenv;
  };
  file = import ./tools/misc/file {
    inherit stdenv zlib;
  };
}
The partial Nix expression shown above is a simplified representation of the top-level attribute set in Nixpkgs, containing three packages: stdenv is a standard environment providing basic build utilities, zlib is the zlib library used for compression and decompression, and file is the file tool, which can identify a file's type.
We can attempt to override the package set with a different version of zlib:
let
  pkgs = import ./top-level.nix;
in
pkgs // {
  zlib = import ./development/libraries/zlib-optimised {
    inherit (pkgs) stdenv;
  };
}
In the above expression, we override the attribute zlib with an optimised variant. The outcome is comparable to what we have seen with the Disnix deployment architecture models -- the following command will build our optimised zlib package:
$ nix-build override.nix -A zlib
but building file (that has a dependency on zlib) still links against the old version (because it was statically bound to the original zlib declaration):
$ nix-build override.nix -A file
One way to cope with this limitation, and to bind dependencies dynamically, is to write the top-level expression as follows:
self:

{
  stdenv = import ./stdenv { ... };
  zlib = import ./development/libraries/zlib {
    inherit (self) stdenv;
  };
  file = import ./tools/misc/file {
    inherit (self) stdenv zlib;
  };
}
In the above expression, we have wrapped the attribute set that composes the packages in a function that takes a parameter: self, and we have changed all the function arguments to bind to self.
To be able to build a package, we need another Nix expression that provides a representation of self:
let self = import ./composition1.nix self; in self
We can use the above expression to build any of the packages declared in the previous expression:
$ nix-build pkgs1.nix -A file
The construction used in the expression shown above was very mind-blowing to me at first -- it uses a concept called fixed points. In this expression, the variable that holds the end result is provided as a parameter to the composition expression that declares the function. This expression (sort of) simulates the behaviour of a recursive attribute set (rec { ... }).
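To make the fixed point idea a bit more tangible, the following self-contained example shows that a fixed point over an ordinary attribute set produces the same result as a recursive attribute set:

let
  # A recursive attribute set, in which b refers to its sibling a
  recursiveSet = rec {
    a = 1;
    b = a + 1;
  };

  # The same result, simulated with an ordinary attribute set and a fixed point
  f = self: {
    a = 1;
    b = self.a + 1;
  };
  fixedPoint = f fixedPoint;
in
recursiveSet == fixedPoint # evaluates to true: both are { a = 1; b = 2; }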
We can declare a second composition expression (using the same convention as the first) that defines our overrides:
self:

{
  zlib = import ./development/libraries/zlib-optimised {
    inherit (self) stdenv;
  };
}
We can "glue" both composition expressions together as follows, making sure that the composition expression shown above overrides the packages in the first composition expression:
let
  composition1 = import ./composition1.nix self;
  composition2 = import ./composition2.nix self;

  self = composition1 // composition2;
in
self
When we run the following command (pkgs2.nix refers to the Nix expression shown above):
$ nix-build pkgs2.nix -A file
The Nix package manager builds the file package using the optimized zlib library.
Nixpkgs goes much further than the simple dynamic binding strategy that I just described -- Nixpkgs provides a feature called overlays in which you can declare sets of packages as layers that use dynamic binding and can override packages when necessary. It also allows you to refer to all intermediate evaluation stages.
Furthermore, packages in the Nixpkgs set are also overridable -- you can take an existing function invocation and change its parameters, as well as the parameters to the derivation { } function invocation that builds a package.
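For example, the following minimal sketch shows what an overlay that replaces zlib with a differently configured variant could look like (the configure flag is made up purely for illustration):

import <nixpkgs> {
  overlays = [
    # self refers to the final package set, super to the previous stage
    (self: super: {
      zlib = super.zlib.overrideAttrs (old: {
        # hypothetical flag, purely for illustration
        configureFlags = (old.configureFlags or []) ++ [ "--enable-some-optimisation" ];
      });
    })
  ];
}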
Since the ability to dynamically bind parameters is a highly useful feature, there have been quite a few discussions in the Nix community about the usefulness of the Nix expression language. The language is a domain-specific language for package/configuration management, but it lacks some of the features that we commonly use (such as dynamic binding). As a result, we heavily rely on custom developed abstractions.
In Disnix, I have decided to not adopt the dynamic binding strategy of Nixpkgs (and the example shown earlier). Instead, I am using a more restricted form of references -- the Disnix models can be considered lightweight embedded DSLs inside an external DSL (the Nix expression language).
The general dynamic binding strategy offers users a lot of freedom -- they can decide not to use the self parameter and implement custom tricks and abstractions, or incorrectly bind a parameter by mistake.
In Disnix, I do not want a target to refer to anything else but a target declared in the infrastructure model. The same thing applies to inter-dependencies -- they can only refer to services in the services model, targets in the infrastructure model, and containers deployed to a target machine.
Related work
I got quite a bit of inspiration from reading Russell O'Connor's blog post about creating fake dynamic bindings in Nix. In this blog post, he demonstrates how recursive attribute sets (rec { }) can be simulated with ordinary attribute sets and fixed points, and how this concept can be expanded to simulate "virtual attribute sets" with "fake" dynamic bindings.
He also makes a proposal to extend the Nix expression language with a virtual attribute set (virtual { }) in which dynamic bindings become a first class concept.