Wednesday, April 10, 2013

Setting up a Hydra build cluster for continuous integration and testing (part 2)

In the previous blog post, I described Hydra -- a Nix-based continuous integration server -- and gave an incomplete tour of its features.

In order to be able to use Hydra, we have to set up a build cluster consisting of one or more build machines. In this blog post, I will describe how I have set up a Hydra cluster of three machines.

Prerequisites


To set up a cluster, we need two kinds of machines:

  • We need a build coordinator machine with the Nix package manager installed, running the three Hydra components: the evaluator, the queue runner and the server. Hydra can be installed on any Linux distribution, but it's more convenient to use NixOS, as it provides all the configuration steps as a NixOS module. Other distributions require more manual installation and configuration steps.
  • We need one or more build slaves, or we have to use the coordinator machine as a build slave. Various types of build slaves can be used, such as machines running Linux, Mac OS X, FreeBSD and Cygwin.

I have set up three machines, consisting of a Linux coordinator, a Linux build slave and a Mac OS X build slave, which I will describe in the next sections.

Installing a NixOS build slave


To set up a build slave (regardless of the operating system that we want to use), we have to install two system services -- the Nix package manager and the OpenSSH server, so that the machine can be remotely accessed from the build coordinator.

Setting up a Linux build slave running NixOS is straightforward. I have used the NixOS Live CD to install NixOS. After booting from the Live CD, I first had to configure my hard drive partitions:
$ fdisk /dev/sda
I have created a swap partition (/dev/sda1) and a root partition (/dev/sda2). Then I had to initialize the filesystems:
$ mkswap -L nixosswap /dev/sda1
$ mke2fs -j -L nixos /dev/sda2
Then I had to mount the root partition on /mnt:
$ mount LABEL=nixos /mnt
And I have created a NixOS configuration.nix file and stored it in /mnt/etc/nixos/configuration.nix. My NixOS configuration looks roughly as follows:
{pkgs, ...}:

{
  boot.initrd.kernelModules = [ "uhci_hcd" "ehci_hcd" "ata_piix" ];

  nix.maxJobs = 2;
    
  boot.loader.grub.enable = true;
  boot.loader.grub.version = 2;
  boot.loader.grub.device = "/dev/sda";

  networking.hostName = "i686linux";

  fileSystems."/".device = "/dev/disk/by-label/nixos";

  swapDevices =
    [ { device = "/dev/disk/by-label/nixosswap"; } ];

  services.openssh.enable = true;
}

By running the following instruction, NixOS gets installed, taking care of downloading/installing all required packages and composing the entire system configuration:
$ nixos-install
After the installation has succeeded, we can reboot the machine and boot into our fresh NixOS installation. The new installation has a root user account without any password. Therefore, it's smart to change the root password to something that's slightly more difficult to guess:
$ passwd
And then the installation is done :-). However, apart from the steps that I have described, it may also be convenient to add a non-privileged user account and install some administration tools.

Upgrading the NixOS installation can be done by running:
$ nixos-rebuild --upgrade switch
The above command-line instruction fetches the latest channel expressions and manifests containing the most recent releases of the packages, then rebuilds the entire system configuration and finally activates it.

As a sidenote: we can also use ordinary Linux distributions as build slaves, but this requires more manual installation and configuration, especially if you want to use Nix's more advanced features, such as multi-user builds. Moreover, since NixOS is almost a "pure system", it reduces the chances of side effects, which is a bit harder to guarantee with conventional Linux distributions.

Installing a Mac OS X build slave


Mac OS X was already pre-installed on our Mac machine, so I only had to set up a user account and configure some basic settings.

On the Mac OS X machine, I had to install the Nix package manager manually. To do this, I have obtained the Nix bootstrap binaries for x86_64-darwin (the system architecture identifier for a 64-bit Mac OS X machine) and installed them by running the following commands in a terminal:
$ sudo -i
# cd /
# tar xfvj /Users/sander/Downloads/nix-1.5.1-x86_64-darwin.tar.bz2
# chown -R sander /nix
# exit
$ nix-finish-install
By running the above command-line instructions, the /nix directory has been set up, containing a Nix store with the Nix package manager and all its dependencies. The last command, nix-finish-install, takes care of initializing the Nix database. We run this command as an ordinary user, since we don't want to use the superuser for Nix installations.

If we add a Nix package to the user's profile, we also want it to be in the user's PATH, so that we can start a program without specifying its full path. I have appended the following line to both ~/.profile and ~/.bashrc:
$ cat >> ~/.profile <<EOF
source $HOME/.nix-profile/etc/profile.d/nix.sh
EOF
By adding the above code fragment to the user's shell profile, the Nix profile is appended to PATH allowing you to conveniently launch packages without specifying their full path including their hash-codes.

You may wonder why this line needs to be added to both .profile and .bashrc. The former allows you to start packages when a login shell is used, e.g. when launching a terminal from the Mac OS X desktop. The latter is needed for non-login shells. If the Hydra coordinator remotely executes a command-line instruction through SSH, then the shell is a non-login shell. If we don't add this line to .bashrc, then we're unable to run the Nix package manager, because it's not in the PATH.
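Appending the same line to ~/.bashrc works in exactly the same way:
$ cat >> ~/.bashrc <<EOF
source $HOME/.nix-profile/etc/profile.d/nix.sh
EOF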

After performing the above steps, we can run a simple sanity check to see if the Nix package manager works as expected. The following instructions add a Nixpkgs channel and fetch the latest expressions and manifests:
$ nix-channel --add http://nixos.org/channels/nixpkgs-unstable
$ nix-channel --update
After updating the Nix channel, we should be able to install a Nix package into our profile, such as GNU Hello:
$ nix-env -i hello
And we should be able to run it from the command-line as the user's profile is supposed to be in our PATH:
$ hello
Hello, world!
After installing the Nix package manager, there may be some other steps worth performing. In order to build iOS apps or Mac OS X applications, Apple's Xcode needs to be installed, which must be done through Apple's App Store. Unfortunately, we cannot use Nix for this purpose. I have given some instructions in a previous blog post about building iOS apps with the Nix package manager.

It may also be desired to build OpenGL applications for Mac OS X. To make this possible you need to manually install XQuartz first.

Finally, to be able to use the Mac OS X machine as a build slave, we need to configure two other things. First, we must enable the SSH server so that the build machine can be remotely invoked from the coordinator machine. We need to open Mac OS X's system preferences for this, which can be found by clicking on the Apple logo and picking 'System preferences':


The System preferences screen looks as follows:


By picking the 'Sharing' icon, we can configure various services that make the machine remotely accessible:

As can be observed from the above screenshot, we can enable remote SSH access by enabling the 'Remote login' option. Furthermore, we must set the hostname to something that we can remember.

Another issue is that we need to turn some power management settings off, because otherwise the Mac machine will go into standby after a while and cannot be used to perform builds. Power management settings can be adapted by picking 'Energy saver' from the System preferences screen, which will show you the following:

I have set the 'Computer sleep' time to 'Never' and I've disabled putting the hard disks to sleep.

Setting up the NixOS build coordinator machine


Then comes the most complicated part -- setting up the build coordinator machine. First, I performed a basic NixOS installation, whose installation procedure is exactly the same as that of the NixOS build slave described earlier. After performing the basic installation, I have adapted its configuration to turn it into a build coordinator.

Since Hydra is not part of the standard NixOS distribution, we have to obtain it ourselves from Git and store the code in a directory on the filesystem (such as the /root folder):
$ git clone https://github.com/NixOS/hydra.git
Then I have extended the machine's configuration, by adding a number of settings to the attribute set body of /etc/nixos/configuration.nix:

  • To be able to use Hydra's NixOS configuration properties, we must include the Hydra NixOS module:
    require = [ /root/hydra/hydra-module.nix ];
    
  • We must enable the Hydra server and configure some of its mandatory and optional properties:
    services.hydra = {
      enable = true;
      package = (import /root/hydra/release.nix {}).build {
        system = pkgs.stdenv.system;
      };
      logo = ./logo.png;
      dbi = "dbi:Pg:dbname=hydra;host=localhost;user=hydra;";
      hydraURL = "http://nixos";
      notificationSender = "yes@itsme.com";
    };
    

    In the above code fragment, the package attribute refers to the actual Hydra package, the logo is an optional parameter that can be used to show a logo in the web front-end's header, dbi is a Perl DBI database connection string, configured to make a localhost connection to a PostgreSQL database named hydra using the hydra user, hydraURL contains the URL of the web front-end, and notificationSender contains the administrator's e-mail address.
  • In order to be able to delegate builds to build slaves for scalability and portability, we have to enable Nix's distributed builds feature:
    nix.distributedBuilds = true;
    nix.buildMachines = [
      { hostName = "i686linux";
        maxJobs = 2;
        sshKey = "/root/.ssh/id_buildfarm";
        sshUser = "root";
        system = "i686-linux";
      }
      
      { hostName = "macosx";
        maxJobs = 2;
        sshKey = "/root/.ssh/id_buildfarm";
        sshUser = "sander";
        system = "x86_64-darwin";
      }
    ];
    
    The above code fragment allows us to delegate 32-bit Linux builds to the NixOS build slave and 64-bit Mac OS X builds to the Mac OS X machine.

  • Hydra needs to store its data, such as projects, jobsets and builds, in a database. For production use it's recommended to use PostgreSQL, which can be enabled by adding the following line to the configuration:
    services.postgresql.enable = true;
    
  • The Hydra server runs its own small webserver on TCP port 3000. In production environments, it's better to add a proxy in front of it. We can do this by adding the following Apache HTTP server configuration settings:
    services.httpd = {
      enable = true;
      adminAddr = "yes@itsme.com";
          
      extraConfig = ''
        <Proxy *>
        Order deny,allow
        Allow from all
        </Proxy>
            
        ProxyRequests     Off
        ProxyPreserveHost On
        ProxyPass         /    http://localhost:3000/ retry=5 disablereuse=on
        ProxyPassReverse  /    http://localhost:3000/
      '';
    };
    
  • To allow e-mail notifications to be sent, we must configure a default mail server. For example, the following does direct delivery:
    networking.defaultMailServer = {
      directDelivery = true;
      hostName = "nixos";
      domain = "nixos.local";
    };
    
  • As you may know from earlier blog posts, Nix always stores versions of components next to each other, and components never get overwritten or removed automatically. At some point we may run out of disk space. Therefore, it's a good idea to enable garbage collection:
    nix.gc = {
      automatic = true;
      dates = "15 03 * * *";
    };
    
    services.cron = {
      enable = true;
          
      systemCronJobs =
        let
          gcRemote = { machine, gbFree ? 4, df ? "df" }:
            "15 03 * * *  root  ssh -x -i /root/.ssh/id_buildfarm ${machine} " +
            ''nix-store --gc --max-freed '$((${toString gbFree} * 1024**3 - 1024 * ''+
            ''$(${df} -P -k /nix/store | tail -n 1 | awk "{ print \$4 }")))' ''+
            ''> "/var/log/gc-${machine}.log" 2>&1'';
        in
        [ (gcRemote { machine = "root@i686linux"; gbFree = 50; })
          (gcRemote { machine = "sander@macosx"; gbFree = 50; })
        ];
    };
    

    The nix.gc configuration attribute generates a cron job that runs the Nix garbage collector at 3:15 AM every night. The services.cron configuration also remotely connects to the build slave machines and runs the garbage collector there when the free disk space drops below a configured threshold (gbFree gigabytes).

  • It may also be worth enabling some advanced features of Nix. For example, in our situation we have many large components that are very similar to each other, consuming a lot of disk space. It may be helpful to enable hard-link sharing, so that identical files are stored only once.

    Moreover, in our current configuration we also download substitutes from the NixOS project's Hydra instance, so that we don't have to build the complete Nixpkgs collection ourselves. It may also be desirable to disable this and take full control.

    Another interesting option is to enable chroot builds, reducing the chances of side effects even more:
    nix.extraOptions = ''
      auto-optimise-store = true
      build-use-substitutes = false
    '';
    nix.useChroot = true;
    
    The nix.conf manual page has more information about these extra options.


After adapting the coordinator's configuration.nix, we must activate it by running:
$ nixos-rebuild switch
The above command-line instruction downloads/installs all the required packages and generates all configuration files, such as those for the webserver and the cron jobs.

After rebuilding, we don't have a working Hydra instance yet. We still have to set up its storage by creating a PostgreSQL database and a Hydra user. To do this, we must perform the following instructions as the root user:
# createuser -S -D -R -P hydra
# createdb -O hydra hydra
By running the hydra-init job, we can set up Hydra's database schema or migrate it to a new version:
# start hydra-init
Then we must create a configuration file that allows the unprivileged Hydra user to connect to the database:
# su hydra
$ echo "localhost:*:hydra:hydra:password" > ~/.pgpass
$ chmod 600 ~/.pgpass
The .pgpass file contains the hostname, database, username and password; the latter must be replaced by the user's real password, of course.

We also need to set up a user, as Hydra's user database is completely empty. The following command creates an administration user named 'root' with password 'foobar':

$ hydra-create-user root --password foobar --role admin
And finally we can activate the three Hydra processes, which allows us to use Hydra and access the web front-end:
$ exit
# start hydra-{evaluator,queue-runner,server}

Setting up connections


We now have a running Hydra instance, but there is still one detail missing. In order to allow the coordinator to connect to the build slaves, we need SSH keys without passphrases, allowing us to connect automatically. Generating an SSH keypair can be done as follows:
$ ssh-keygen -t rsa
The above command asks you a couple of questions. Keep in mind that you should not specify a passphrase.
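Alternatively, we can generate the keypair non-interactively. Something along these lines should do the trick (the -N "" option sets an empty passphrase, and -f stores the key under the name that the coordinator configuration shown earlier expects):
$ ssh-keygen -t rsa -N "" -f /root/.ssh/id_buildfarm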

Assuming that we have called the file id_buildfarm, we now have two files: a private key called id_buildfarm and a public key called id_buildfarm.pub. We must copy the public key to all build slaves and run the following instruction on each slave machine, under the user that performs the builds (which is root on the Linux machine and sander on the Mac OS X machine):
$ cat id_buildfarm.pub >> ~/.ssh/authorized_keys
The above command adds the public key to the list of authorized keys, allowing the coordinator to connect to the machine with the corresponding private key.

After installing the public keys, we can try connecting to the build slaves from the coordinator through the private key, by running the following command as root user:
$ ssh -i ~/.ssh/id_buildfarm root@i686linux
If we run the above command, we should be able to connect to the machine without being asked for any credentials. Moreover, the first time that you connect to a machine, its host key is added to the known_hosts list, which is necessary because otherwise we won't be able to connect non-interactively.
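Once these connections work, we can also check whether the distributed builds feature actually delegates work, by building a trivial derivation for one of the slave platforms on the coordinator. The derivation below is merely an illustrative test; assuming the build machine configuration shown earlier, it should be forwarded to the Mac OS X slave:
$ nix-build -E 'derivation {
    name = "remote-test";     # hypothetical test derivation
    system = "x86_64-darwin"; # must match one of the configured build slaves
    builder = "/bin/sh";
    args = [ "-c" "echo hello > $out" ];
  }'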

Another issue that I have encountered with the Mac OS X machine is that it may stall connections after no input has been received from the coordinator for a while. To remedy this, I added the following lines to the SSH configuration of the root user on the coordinator machine:
$ cat > /root/.ssh/config <<EOF
ServerAliveCountMax 3
ServerAliveInterval 60
EOF

Conclusion


In this blog post, I have described the steps that I have performed to set up a cluster of three Hydra machines consisting of two Linux machines and one Mac OS X machine. To get an impression of how Hydra can be used, I recommend reading my previous blog post.

In the past, I have also set up some more "exotic" build slaves, such as the three BSDs: FreeBSD, OpenBSD and NetBSD, and an OpenSolaris machine. To set up these platforms, we can roughly repeat the procedure that I have followed for the Mac OS X build slave. First install the host OS itself, then the Nix package manager (there are bootstrap binaries available for several platforms, or you have to do a source install), and then set up SSH.

Friday, April 5, 2013

Setting up a Hydra build cluster for continuous integration and testing (part 1)

Recently, I have set up a continuous integration facility at work, because we think that there is a need for one.

While developing software systems, there are all kinds of changes that may (potentially) break something. For example, committing a broken piece of source code to a master repository, or upgrading one of its dependencies, such as a library, to an incompatible version.

All these potentially breaking changes may result in a product that behaves incorrectly and that may not be delivered in time, because there is usually a very large time window in which it's not known whether the system still behaves correctly. In such cases, it is quite an effort to get all bugs fixed (because the effort of fixing every bug is cumulative) and to predict how long it will take to deliver a new version.

A solution for this is continuous integration (CI) or daily builds -- a practice in which all components of a software system are integrated at least once every day (preferably more often) and automatically verified by an automated build solution that reports the impact of these changes as quickly as possible. There are many automated integration solutions available, such as Buildbot, Jenkins, Tinderbox, CruiseControl, and Hydra.

Hydra: the Nix-based continuous integration server


As some of you may know, I have written many articles about Nix and its applications on this blog. One of the applications that I have mentioned a few times, but didn't cover in detail, is Hydra, a Nix-based continuous integration facility.

For people that know me, it's obvious that I have picked Hydra, but apart from the fact that I'm familiar with it, there are a few other reasons to pick Hydra over other continuous integration solutions:

  • In Hydra, the build environments are managed, which is typically not the case with other CI solutions. In other CI solutions, dependencies happen to be already there and it's up to the system administrator and the tools that are invoked to ensure that they are present and correct. Since Hydra builds on top of Nix, it uses the Nix package manager to install all its required dependencies automatically and on-demand, such as compilers, linkers etc. Moreover, because of Nix's unique facilities, all dependencies are ensured to be present and package installations do not affect other packages, as they are stored safely in isolation from each other.

    Apart from building components, this is also extremely useful for system integration tests. The Nix package manager ensures that all the required prerequisites of a test are present and that they exactly conform to the versions that we need.

    Similarly, because in Nix every dependency is known, we can also safely garbage collect components that are no longer in use without having to worry that the components that are still in use are removed.
  • Hydra has strong support for specifying and managing variability. For us it's important to be able to build a certain package for many kinds of platforms and various versions of the same platform. We don't want to accidentally use a wrong version.

    Since Hydra uses Nix to perform builds, dependencies can only be found when they are specified. For example, if we insist on using an older version of the GCC compiler, then Nix ensures that only the older compiler version can be found. This can be done for any build-time dependency, such as the linker, the Java runtime environment, libraries etc.

    With other CI solutions it's much harder to ensure that an already installed version of e.g. the GCC compiler does not conflict with a different version, as it's harder to safely install two versions next to each other and to ensure that the right version is used. For example, if the default compiler in the PATH is GCC 4.7, how can we be absolutely sure that we use an older version without invoking any of the newer compiler's components?
  • Hydra also has good facilities to perform builds and tests on multiple operating systems. When the Nix package manager is requested to build something, it always takes the requested architecture identifier into account. If the Nix package manager is unable to perform a build for the requested architecture on a given machine, it can delegate the build to another machine capable of building it, giving the caller the impression that the build is performed locally.
  • Another interesting feature is scalability. Since every build is performed by the Nix package manager, which invokes functions that are referentially transparent (under normal circumstances) due to its underlying purely functional nature, we can easily divide the builds of a package and its dependencies among machines in a network.

Hydra components


Hydra is composed of several components. Obviously, Nix is one of them and does the majority of the work -- it's used to take job definitions specified in the Nix expression language and to execute them in a reliable and reproducible manner (which is analogous to building packages through Nix). Moreover, it can delegate job executions to machines in a network.

Besides Nix, Hydra consists of three other major components:

  • The evaluator regularly checks out the configured source code repositories (e.g. managed by a VCS, such as Git or Subversion) and evaluates the Nix expressions that define jobsets. Each derivation in a Nix expression (in many cases a direct or indirect invocation of stdenv.mkDerivation) gets appended to the build queue. Every derivation corresponds to a job.
  • The queue runner uses the Nix package manager to build the derivations that the evaluator has queued.
  • The server component is a web application that provides end-users with an interface which they can use to configure projects and jobsets and to inspect the status of the builds.

Usage: Projects


The following screenshot shows the entry page of the Hydra web front-end showing an overview of projects. On the highest level, every job that Hydra executes belongs to a certain project:

Administrators are free to choose any name and description of a project. Projects can be created, edited and deleted through the Hydra web interface. In this example, I have defined four projects: Disnix (a tool part of the Nix project which I have developed), the Nix android test case, the Nix xcode test case, and KitchenSink -- a showcase example for the Titanium SDK, a cross-platform mobile development kit.

Clicking on a project will show you its jobsets.

Usage: Jobsets


Every project contains zero or more jobsets. A jobset contains one or more jobs that execute something, typically a build or a test case. The following screenshot shows you the jobsets that I have defined for the nix-androidenvtests project:


As we can see, we have defined only one jobset, which regularly checks out the master Git repository of the example case and builds it. Jobsets can be defined to build from any source code repository and can have any name. Quite frequently, I use jobsets to define sub projects or to make a distinction between branches of the same repository, e.g. it may be desirable to execute the same jobs on a specific experimental branch of a source code repository.

In the Disnix project, for example, I define jobsets for every sub project:


Besides a listing of jobsets, the screen also shows their statuses. The green icons indicate how many jobs have succeeded, the red icons show the number of jobs that have failed, and the grey icons show the number of jobs that are queued and still have to be executed.

To create jobsets, we have to perform two steps. First, we must define a jobset Nix expression and store it in a source code repository, such as a Git repository or on the local filesystem. The nix-androidenvtests-master jobset expression looks as follows:

{nixpkgs, system}:

let
  pkgs = import nixpkgs { inherit system; };
in
rec {
  myfirstapp_debug = import ./myfirstapp {
    inherit (pkgs) androidenv;
    release = false;
  };
  
  myfirstapp_release = import ./myfirstapp {
    inherit (pkgs) androidenv;
    release = true;
  };
  
  emulate_myfirstapp_debug = import ./emulate-myfirstapp {
    inherit (pkgs) androidenv;
    myfirstapp = myfirstapp_debug;
  };
  
  emulate_myfirstapp_release = import ./emulate-myfirstapp {
    inherit (pkgs) androidenv;
    myfirstapp = myfirstapp_release;
  };
}

The above Nix expression defines the nix-androidenvtests-master jobset, shown in the first jobsets screenshot. A jobset Nix expression is a function returning an attribute set, in which each attribute defines a job. Every job can be either a direct invocation of a function that builds/executes something, or a function that takes the dependencies of the given job and returns a function invocation that builds/executes something.

Functions in a jobset expression are used to define its variation points. For example, in our above expression we have only two of them -- the nixpkgs parameter refers to a checkout of the Nixpkgs repository, which contains a collection of common packages, such as compilers, linkers, libraries, end-user software etc. The system parameter refers to the system architecture that we want to build for, which can be (for example) 32-bit Linux, 64-bit Linux, 64-bit Mac OS X and so on.

Nix expressions defining packages and Nix expressions defining jobsets are quite similar. Both define a function (or functions) that builds something from given function arguments. Neither can be used to build something directly; we must also compose the builds by calling these functions with the right arguments. Function arguments specify (for example) which versions of packages we want to use and for what kind of system architectures we want to build. For ordinary Nix packages, this composition is done in the all-packages.nix expression. For jobsets, we use the Hydra web interface to configure them.

In the following screenshot, we configure the nix-androidenvtests jobset (done by picking the 'Create jobset' option from the Projects menu) and its parameters through the Hydra web interface:

As can be observed from the screenshot, we configure the nix-androidenvtests jobset as follows:

  • The identifier is a unique name identifying the jobset, and a description can be anything we want.
  • The Nix expression field defines which jobset Nix expression file we must use and in which input it is stored. The Nix expression that we have just shown is stored inside the Nix android tests GitHub repository. Its relative path is deployment/default.nix. The input that refers to the Git checkout is named nixandroidenvtests, which is declared later in the build inputs section.
  • We can configure e-mail notifications so that you will be informed if a certain build succeeds or fails.
  • We can also specify how many successful builds we want to keep. Older builds or failed builds will be automatically garbage collected once in a while.
  • In the remaining section of the screen, we can configure the jobset's build inputs, which basically provide parameters to the functions that we have seen in the jobset Nix expression.

    The first build input is a Git checkout of the master branch of the Android testcase. As we have seen earlier, this build input provides the Nix jobset expression. The second parameter provides a checkout of the Nixpkgs repository. In our case it's configured to take the last checkout of the master branch, but we can also configure it to take e.g. a specific branch. The third parameter specifies for which platforms we want to build. We have specified three values: 32-bit Linux (i686-linux), 64-bit Linux (x86_64-linux), and 64-bit Mac OS X (x86_64-darwin). If multiple values are specified for a specific input, Hydra will evaluate the Cartesian product of all inputs, meaning that (in this case) we build the same jobs for 32-bit Linux, 64-bit Linux and 64-bit Mac OS X simultaneously.

The Android testcase is a relatively simple project with simple parameters. Hydra has many more available options. The following screenshot shows the jobset configuration of disnix-trunk:


In the above configuration, apart from strings and VCS checkouts, we use a number of other interesting parameter types:

  • The disnix jobset contains a job named tarball that checks out the source code and packages it in a compressed tarball, which is useful for source releases. To build the binary package, we use the tarball instead of the checkout directly.

    We can reuse the output of an existing Hydra job (named tarball) by setting a build input type to: 'Build output'. The build output is allowed to be of any platform type, which is not a problem as the tarball should always be the same.
  • We can also refer to a build of a different jobset, which we have done for disnix_activation_scripts, which is a dependency of Disnix. Since we have to run that package, it must be built for the same system architecture, which can be enforced by setting the type to: 'Build output (same system)'.

The queue


If we have defined a jobset and it's enabled, then the evaluator should regularly check it, and its corresponding jobs should appear in the queue view (which can be accessed from the menu by picking Status -> Queue):

Inspecting job statuses


To inspect the build results, we can pick a jobset and open the 'Job status' tab. The following image shows the results of the nix-androidenvtests-master jobset:


As can be observed from the screenshot, we have executed its jobs on three platforms and all the builds seem to have succeeded.

Obtaining build products


In addition to building jobs and running testcases, it is also desirable to obtain the build artifacts produced by Hydra so that we can run or inspect them locally. There are various ways to do this. The most obvious way is to click on a build result, which will redirect you to its status page.

The following page shows the result of the emulate_myfirstapp_release job, which produces a script that starts the Android emulator running the release version of the demo app:

We can take the URL to which the one-click install link points and run the following instruction (Nix is required to be present on that machine):

$ nix-install-package --url http://nixos/build/169/nix/pkg/MyFirstApp-x86_64-linux.nixpkg

The package gets downloaded and imported into the Nix store of the machine and can be accessed under the same Nix store path as the status page shows (which is /nix/store/z8zn57...-MyFirstApp). By running the following command-line instruction:

$ /nix/store/z8zn57...-MyFirstApp/bin/run-test-emulator
We can automatically launch an emulator instance running the recently built app.

For mobile apps (that are typically packaged inside an APK bundle for Android or an IPA bundle for iOS), or documents (such as PDF files) it may be a bit inconvenient to fetch and import the remote builds into the Nix store of the local machine to view or to test them. Therefore, it is also possible to declare build products inside a build job.

For example, I have adapted the buildApp {} functions for Android and iOS apps, shown in my earlier blog posts, to declare the resulting bundle as a build product, by appending the following instructions to the build procedure:

mkdir -p $out/nix-support
echo "file binary-dist \"$(echo $out/*.apk)\"" \
    > $out/nix-support/hydra-build-products

The above shell commands add a nix-support/hydra-build-products file to the resulting package, which is a file that Hydra can parse. The first two columns define the type and subtype of the build product, and the third column specifies the path to the build product.

As a result, building an App with Hydra produces the following status page:

As can be observed, the page provides a convenient download link that allows us to download the APK file. Another interesting benefit is that I can use my Android phone to browse to this result page and install the app directly, saving me a lot of effort. ;)

The most powerful mechanism to obtain builds is probably the Nix channel mechanism. Every project and jobset has a channel, whose info page can be accessed by selecting the 'Channel' option from the project or jobset menu. The following screenshot shows the Nix channel page for the nix-androidenvtests-master jobset:


The above page displays some instructions that users with the Nix package manager installed must follow. By adding the Nix channel and updating it, the user receives a collection of Nix expressions and a manifest with a list of binaries. By installing a package that's defined in the channel, the Nix package gets downloaded automatically and added to the user's Nix profile.
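For example, on a machine that has the Nix package manager installed, subscribing to the jobset's channel and installing one of its packages boils down to something like the following (the channel URL and the package name below are merely illustrations -- the real URL is shown on the channel page above):
$ nix-channel --add http://nixos/jobset/nix-androidenvtests/nix-androidenvtests-master/channel/latest # illustrative URL
$ nix-channel --update
$ nix-env -i myfirstapp # package name depends on the job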

Experience


We have been using Hydra for a short time now at Conference Compass, primarily for producing mobile apps for Android and iPhone (which is obvious to people that know me). It's already showing us very promising results. To make this possible, I'm using the Nix functions that I have described in earlier blog posts.

I have also worked with the Hydra developers to make some small changes that make my life more convenient. I have contributed a patch that allows build products with spaces in their filenames (as this is common for apps), and I have adapted/fixed the included NixOS configuration module so that Hydra can be conveniently installed on NixOS using a reusable module. I'd like to thank the Hydra developers: Eelco Dolstra, Rob Vermaas, Ludovic Courtès, and Shea Levy for their support.

Conclusion


In this blog post, I have given an incomplete tour of Hydra's features with some examples I care about. Apart from the features that I have described, Hydra has a few more, such as creating views and releases, and various management facilities. The main reason for me to write this blog post is to keep the explanation that I have given to my colleagues as a reference.

This blog post describes Hydra from a user/developer perspective, but I also had to deploy a cluster of build machines, which is also interesting to report about. In the next blog post, I will describe what I did to set up a Hydra cluster.

References


For people that are eager to try Hydra: Hydra is part of the Nix project and available as free and open-source software under the GPLv3 license.

There are also a number of publications available about Nix buildfarms. The earliest prototype is described in Eelco Dolstra's PhD thesis. Furthermore, on Eelco's publications page a few papers about the Nix buildfarm can be found.

Another thing that may be interesting to read is an old blog post of mine titled: 'Using NixOS for declarative deployment and testing', describing how we can execute distributed system integration tests using NixOS. Hydra can be used to call NixOS' system integration test facilities. In fact, we already do this quite extensively to test the NixOS distribution itself. Moreover, Disnix also has a very comprehensive distributed integration test suite.

Sunday, March 10, 2013

Some interesting improvements to NiJS

After my tiny JavaScript depression, I have made some interesting improvements to NiJS which I'd like to describe in this blog post.

Defining non-native types in JavaScript


As explained in my previous blog post about NiJS, we can "translate" most JavaScript language objects to equivalent/similar Nix expression language constructs. However, the Nix expression language has certain value types, such as files and URLs, that JavaScript does not have. In the old NiJS implementation, objects of these types can be artificially created by adding a _type object member that refers to a string containing url or file.

For various reasons, I have never really liked that approach very much. Therefore, I have adapted NiJS to use prototypes to achieve the same goal, as I have now discovered how to properly use them in JavaScript (which is not very logical, if you ask me). To create a file, we must create an object that is an instance of the NixFile prototype. URLs can be created by instantiating NixURL, and recursive attribute sets by instantiating NixRecursiveAttrSet.

By adapting the GNU Hello NiJS package from the previous NiJS blog post using prototypes, we end up with the following CommonJS module:

var nijs = require('nijs');

exports.pkg = function(args) {
  return args.stdenv().mkDerivation ({
    name : "hello-2.8",
    
    src : args.fetchurl({
      url : new nijs.NixURL("mirror://gnu/hello/hello-2.8.tar.gz"),
      sha256 : "0wqd8sjmxfskrflaxywc7gqw7sfawrfvdxd9skxawzfgyy0pzdz6"
    }),
  
    doCheck : true,

    meta : {
      description : "A program that produces a familiar, friendly greeting",
      homepage : new nijs.NixURL("http://www.gnu.org/software/hello/manual"),
      license : "GPLv3+"
    }
  });
};

In my opinion, this looks a bit better and more logical.

Using CommonJS modules in embedded JavaScript functions


In the previous NiJS blog post, I have also shown that we can wrap JavaScript functions in a function proxy and call them from Nix expressions as if they were Nix functions.

One of the things that is a bit inconvenient is that these functions must be self-contained -- they cannot refer to anything outside the function's scope and we have to provide everything the function needs ourselves.

For me it's also useful to use third-party libraries, such as the Underscore library. Therefore, I have extended the nijsFunProxy with two extra optional parameters. The modules parameter can be used to refer to external Node.js packages (provided by Nixpkgs), which are added to the buildInputs of the proxy. The requires parameter can be used to generate require() invocations that import the CommonJS modules that we want to use.

The following example expression invokes a JavaScript function that utilises and imports the Underscore library to convert an array of integers to an array of strings:

{stdenv, nodejs, underscore, nijsFunProxy}:

let
  underscoreTestFun = numbers: nijsFunProxy {
    function = ''
      function underscoreTestFun(numbers) {
        var words = [ "one", "two", "three", "four", "five" ];
        var result = [];

        _.each(numbers, function(elem) {
          result.push(words[elem - 1]);
        });

        return result;
      }
    '';
    args = [ numbers ];
    modules = [ underscore ];
    requires = [
      { var = "_"; module = "underscore"; }
    ];
  };
in
stdenv.mkDerivation {
  name = "underscoreTest";

  buildCommand = ''
    echo ${toString (underscoreTestFun [ 5 4 3 2 1 ])} > $out
  '';
}

In the above expression, the requires parameter to the proxy generates the following line before the function definition, allowing us to properly use the functions provided by the Underscore library:

var _ = require('underscore');

Calling asynchronous JavaScript functions from Nix expressions


The nijsFunProxy is interesting, but most of the functions in the Node.js API and in external Node.js libraries are executed asynchronously, meaning that they return immediately and invoke a callback function when the work is done. We cannot use these functions properly from a proxy that is designed to support synchronous functions.

I have extended the nijsFunProxy to take an async parameter, which defaults to false. When the async parameter is set to true, the proxy does not take the return value of the function, but waits until a callback function (nijsCallbacks.onSuccess) is called with a JavaScript object as parameter, serving as the equivalent of a return value. This can be used to invoke asynchronous JavaScript functions from a Nix expression:

{stdenv, nijsFunProxy}:

let
  timerTest = message: nijsFunProxy {
    function = ''
      function timerTest(message) {
        setTimeout(function() {
          nijsCallbacks.onSuccess(message);
        }, 3000);
      }
    '';
    args = [ message ];
    async = true;
  };
in
stdenv.mkDerivation {
  name = "timerTest";
  
  buildCommand = ''
    echo ${timerTest "Hello world! The timer test works!"} > $out
  '';
}

The above Nix expression shows a simple example, in which we set a timer that fires after three seconds. The timer callback calls the nijsCallbacks.onSuccess() function to provide a return value containing the message that the user has given as a parameter to the Nix function invocation.

Writing inline JavaScript code in Nix expressions


NiJS makes it possible to use JavaScript instead of the Nix expression language to describe package definitions and their compositions, although I have no reasons to recommend the former over the latter.

However, the examples that I have shown so far in the blog posts use generic build procedures that basically execute the standard GNU Autotools build procedure: ./configure; make; make install. We may also have to implement custom build steps for packages. Usually this is done by specifying custom build steps in Bash shell code embedded in strings.

Not everyone likes to write shell scripts and embed them in strings. Instead, it may also be desirable to use JavaScript for the same purpose. I have made this possible by creating a nijsInlineProxy function, which generates a string with shell code that runs Node.js to execute a piece of JavaScript code within the same build process.

The following Nix expression uses the nijsInlineProxy to implement the buildCommand in JavaScript instead of shell code:

{stdenv, nijsInlineProxy}:

stdenv.mkDerivation {
  name = "createFileWithMessage";
  buildCommand = nijsInlineProxy {
    requires = [
      { var = "fs"; module = "fs"; }
      { var = "path"; module = "path"; }
    ];
    code = ''
      fs.mkdirSync(process.env['out']);
      var message = "Hello world written through inline JavaScript!";
      fs.writeFileSync(path.join(process.env['out'], "message.txt"), message);
    '';
  };
}

The above Nix expression creates the Nix store output directory and writes a message.txt file into it.

As with ordinary Nix expressions, we can refer to the parameters passed to stdenv.mkDerivation as well as the output folder by using environment variables. Inline JavaScript code has the same limitations as embedded JavaScript functions, such as the fact that we can't refer to global variables.

Writing inline JavaScript code in NiJS package modules


If we create a NiJS package module, we normally also have to use shell code embedded in strings to implement custom build steps. Instead, we can use inline JavaScript code by creating an object that is an instance of the NixInlineJS prototype. The following code fragment is the NiJS equivalent of the previous Nix expression:

var nijs = require('nijs');

exports.pkg = function(args) {
  return args.stdenv().mkDerivation ({
    name : "createFileWithMessageTest",
    buildCommand : new nijs.NixInlineJS({
      requires : [
        { "var" : "fs", "module" : "fs" },
        { "var" : "path", "module" : "path" }
      ],
      code : function() {
        fs.mkdirSync(process.env['out']);
        var message = "Hello world written through inline JavaScript!";
        fs.writeFileSync(path.join(process.env['out'], "message.txt"), message);
      }
    })
  });
};

The constructor of the NixInlineJS prototype can take two types of parameters. It can take a string containing JavaScript code or a JavaScript function (that takes no parameters). The latter case has the advantage that its syntax can be checked by an interpreter/compiler and that we can use syntax highlighting in editors.

By using inline JavaScript code in NiJS, we can create Nix packages by only using JavaScript. Isn't that awesome? :P

Conclusion


In this blog post, I have described some interesting improvements to NiJS, such as the fact that we can create Nix packages by only using JavaScript. NiJS seems to be fairly complete in terms of features now.

If you want to try it, the source can be obtained from the NiJS GitHub page, or NiJS can be installed from Nixpkgs or via NPM.

Another thing that's on my mind now is whether I can do the same for a different programming language. Maybe when I'm bored or have a use case for it, I'll give it a try.

Thursday, February 28, 2013

Yet another blog post about Object Oriented Programming and JavaScript

As mentioned in my previous blog post, I'm doing some extensive JavaScript programming lately. It almost looks like I have become a religious JavaScript fanatic these days, but I can assure you that I'm not. :-)

Moreover, I used to be a teaching assistant for TU Delft's concepts of programming languages course a few years ago. In that course, we taught several programming languages that are conceptually different from the first programming language students have to learn, which is Java. Java was mainly used to give students some basic programming knowledge and to teach them the basics of structured programming and object oriented programming with classes.

One of the programming languages covered in the 'concepts of programming languages' course was JavaScript. We wanted to show students that it's also possible to do object-oriented programming in a language without classes, but with prototypes. Prototypes can be used to simulate classes and class inheritance.

When I was still a student, I had to do a similar exercise, in which I had to implement an example case with prototypes to simulate the behaviour of classes. It was an easy exercise, the explanation that was given to me looked easy, and I did the exercise quite well.

Today, I have to embarrassingly admit that there are a few bits (a.k.a. nasty details) that I did not completely understand, and that was driving me nuts. These are the bits that almost nobody tells you and that are omitted in most explanations that you will find on the web or get from teachers. Because the "trick" that allows you to simulate class inheritance by means of prototypes is often already given, we don't really think about what truly happens. One day you may have to think about all the details, and then you will probably run into the same frustrating moments as I did, because the way prototypes are used in JavaScript is not logical.

Furthermore, because this subject is not as trivial as people may think, there are dozens of articles and blog posts on the web in which authors are trying to clarify the concepts. However, I have seen that a lot of these explanations are very poor, do not cover all the relevant details (and are sometimes even confusing) and implement solutions that are suboptimal. Many blog posts also copy the same "mistakes" from each other.

Therefore, I'm writing yet another blog post in which I will explain what I think about this and what I did to solve this problem. Another benefit is that this blog article is going to prevent me from having to keep telling others the same stuff over and over again. Moreover, considering my past career, I feel that it's my duty to do this.

Object Oriented programming


The title of this blog post contains 'Object Oriented' (which we can abbreviate to OO). But what is OO anyway? Interestingly enough, there is no single (formal) definition of OO and not everyone shares the same view on this.

A possible (somewhat idealized) explanation is that OO programs can be considered a collection of objects that interact with each other. Objects encapsulate state (or data) and behaviour. Furthermore, they often have an analogy to objects that exist in the real world, such as a car, a desk, a person, or a shape, but this is not always the case.

For example, a person can have a first name and last name (data) and can walk and talk (behaviour). Shapes could have the following properties (state): a color, and a width and height or a radius (depending on whether the shape is a rectangle or a circle). From these properties, we can calculate the area and the perimeter (behaviour).

When designing OO programs, we can often derive the kinds of objects that we need from the nouns in a specification that describes what a program should do, and the way they behave and interact from its verbs.

For example, consider the following specification:
We have four shapes. The first one is a red colored rectangle that has a width of 2 and a height of 4. The second shape is a green square having a width and height of 2. The third shape is a blue circle with a radius of 3. The fourth shape is a yellow colored circle with a radius of 4. Calculate the areas and perimeters of the shapes.

This specification may result in a design that looks as follows:



The figure above shows a UML object diagram containing four rectangular boxes. Each of these boxes represents an object. The top section of a box contains the name of the object, the middle section contains its properties (called attributes in UML) and state, and the bottom section contains its behaviour (called operations in UML).

To implement an OO program we can use an OO programming language, such as C++, Java, C#, JavaScript, Objective C, Smalltalk and many others, which is what most developers often do. However, it's not required to use an OO programming language to be Object Oriented. You can also keep track of the representation of the objects yourself. In C for example (which is not an OO language), you can use structs and pointers to functions to achieve roughly the same thing, although the compiler is not able to statically check whether you're doing the right thing.

Class based Object Oriented programming


In our example case, we have designed only four objects, but in larger programs we may have many circles, rectangles and squares. As we may observe from the object diagram shown earlier, our objects have much in common. We have designed multiple circles (we have two of them) and they have exactly the same attributes (a color and a radius) and behaviour. The only difference between the circle objects is their state, e.g. they have a different color and radius value, but the way we calculate the area and perimeter remains the same.

From that observation, we could say that every circle object belongs to the same class. The same thing holds for the rectangles and squares. As it's very common to have multiple objects that have the same properties and behaviour, most OO programming languages require developers to define classes that capture these common properties. Objects in class based OO languages are (almost) never created directly, but by instantiation of a particular class.

For example, consider the following code implemented in the Java programming language that defines a Person class (every person has a first and a last name, from which a full name can be generated):

public class Person
{
    public String firstName, lastName;
    
    public Person(String firstName, String lastName)
    {
        this.firstName = firstName;
        this.lastName = lastName;
    }
    
    public String generateFullName()
    {
        return firstName + " " + lastName;
    }
}

Basically, the above class definition consists of three blocks. The upper block defines the attributes (first and last name), the middle block defines a constructor that is used to create a person object -- every person is supposed to have a first and a last name. The bottom block is an operation (called a method in Java) that generates the person's full name from its first and last name.

By using the new operator in Java in combination with the constructor, we can create objects that are instances of a Person class, for example:
Person sander = new Person("Sander", "van der Burg");
Person john = new Person("John", "Doe");
And we can use the generateFullName() method to display the full names of the persons. The way this is done is common for all persons:

System.out.println(sander.generateFullName()); /* Sander van der Burg */
System.out.println(john.generateFullName()); /* John Doe */

Instantiating classes has a number of advantages over directly creating objects with their state and behaviour:

  • The class definition ensures that every object instance has its required properties and behaviour.
  • The class definition often serves the role of a contract. We cannot give an object a property or method that is not defined in its class. Furthermore, the constructor function forces users to provide a first and last name. As a result, we cannot create a person object without a first and last name.
  • As objects are always instances of a class, we can easily determine whether an object belongs to a particular class. In Java this can be done using the instanceof operator:
    sander instanceof Person; /* true */
    sander instanceof Rectangle; /* false */
    
  • The behaviour, i.e. the methods, can be shared among all object instances, as they are the same for every object. For example, we don't have to implement the function that generates the full name for every person object that we create.

Returning to our previous example with the shapes: apart from defining classes that capture commonalities of every object instance (i.e. we could define classes for Rectangles, Squares and Circles), we can also observe that these shape classes have commonalities with each other. For example, every shape (regardless of whether it's a rectangle, square or circle) has a color, and for each shape we can calculate an area and a perimeter (although the way that's done differs per type).

We can capture common properties and behaviour of classes in so-called super classes, allowing us to treat all shapes the same way. By adapting the earlier design of the shapes using classes and super classes, we could end up with a new design that looks something like this:



In the above figure a UML class diagram is shown:
  • The class diagram looks similar to the previous UML object diagram. The main difference is that the rectangular boxes now represent classes (not objects). Their sections still have the same meaning.
  • The arrows denote extends (or inheritance) relationships between classes. Inheriting from a super class (parent) creates a sub class (child) that extends the parent's attributes and behaviour.
  • On top, we can see the Shape class capturing common properties (color) and behaviour (calculate area and perimeter operations) of all shapes.
  • A Rectangle is a Shape extended to have a width and height. A Circle is a Shape having a radius. Moreover, both use different formulas to calculate their areas and perimeters. Therefore, the calculateArea() and calculatePerimeter() operations from the Shape class are overridden. In fact, they are abstract in the Shape class (because there is no general way to do it for all shapes), forcing us to do so.
  • The Square class is a child class of Rectangle, because we can define a square as a rectangle with equal width and height.

When we invoke a method of an object, first its class is consulted. If it's not provided by the class and the class has a parent class, then the parent class is consulted, then its parent's parent and so on. For example, if we run the following Java code fragment:

Square square = new Square(2);
System.out.println(square.calculateArea()); /* 4 */

The calculateArea() method call is delegated to the calculateArea() method provided by the Rectangle class (since the Square class does not provide one), which calculates the square object's area.

The instanceof operator in Java also takes inheritance into account. For example, the following statements all yield true:

square instanceof Rectangle; /* true, Square class inherits from Rectangle */
square instanceof Shape; /* true, Square class indirectly inherits from Shape */

However, the following statement yields false (since the square object is not an instance of Circle or any of its sub classes):

square instanceof Circle; /* false */

Prototype based Object Oriented programming


Apart from its length, the attractive part of OO programming as described in this blog post so far is that OO programs often (ideally?) draw a good analogy to what happens in the real world. Class based OO languages enable reuse and sharing. Moreover, they offer some means of enforcing that objects are constructed the right way, i.e. that they have their required properties and intended behaviour.

Most books and articles explaining OO programming immediately use the term classes. This is probably due to the fact that most commonly used OO languages are class based. I have intentionally omitted the term classes for a while. Moreover, not everyone thinks that classes are needed in OO programming.

Some people don't like classes -- because they often serve as contracts, they can sometimes also be too strict. To deviate from a contract, either inheritance must be used to extend (or restrict?) a class, or wrapper classes must be created around it (e.g. through the adapter design pattern). This can result in many layers of glue code, significantly complicating programs and making them unnecessarily big, which is not uncommon for large systems implemented in Java.

There is also a different (and perhaps a much simpler) way to look at OO programming. In OO languages such as JavaScript (and Self, which greatly influenced JavaScript) there are no classes. Instead, we create objects, their properties and behaviour directly.

In JavaScript, most language constructs are significantly simpler than in Java:

  • Objects are associative arrays in which each member refers to other objects.
  • Arrays are objects with numerical indexes.
  • Functions are also objects and can be assigned to variables and object members. This is how behaviour can be implemented in JavaScript. Functions also have attributes and methods. For example, toString() returns the function's implementation code and the length property returns the number of parameters the function requires.

"Simpler" languages such as JavaScript have a number of benefits. It's easier to learn as fewer concepts have to be understood/remembered and easier to implement by language implementers. However, as we create objects, their state and behaviour directly, you may wonder how we can share common properties and behaviour among objects or how we can determine whether an object belongs to a particular class? There is a different mechanism to achieve such goals: delegation to prototypes.

In prototype based OO languages, such as Self and JavaScript, every object has a prototype. Prototypes are also objects (having their own prototype). The only exception in JavaScript is the null object, whose prototype is a null reference. When requesting an attribute or invoking a method of an object, first the object itself is consulted. If the object does not implement it, its prototype is consulted, then the prototype's prototype and so on.
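To make this lookup concrete, here is a minimal sketch. It assumes an ES5 runtime that provides Object.create() (a function discussed near the end of this post); the personPrototype object and its members are made up for the example:

/* Common behaviour lives in a separate object: */
var personPrototype = {
    generateFullName : function() {
        return this.firstName + " " + this.lastName;
    }
};

/* Create an object whose prototype is personPrototype: */
var sander = Object.create(personPrototype);
sander.firstName = "Sander";
sander.lastName = "van der Burg";

/* sander itself has no generateFullName member, so the invocation is
   delegated to its prototype: */
sander.generateFullName(); /* "Sander van der Burg" */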

Although prototype based OO languages do not provide classes, we can simulate class behaviour (including inheritance) through prototypes. For example, we can simulate the two person objects that are instances of a Person class (as shown earlier) as follows:



The above figure shows a UML object diagram (not a class diagram, since we don't have classes ;) ) in which we simulate class instantiation:
  • As explained earlier in this blog post, objects belonging to a particular class have common behaviour, but their state differs. Therefore, the attributes and their state have to be defined and stored in every object instance (as can be observed in the object diagram: every person has its own first and last name properties).
  • We can capture common behaviour among persons in a common object that serves as the prototype of every person. This object plays the equivalent role of a class. Moreover, since each person instance refers to exactly the same prototype object, we also get sharing and reuse.
  • When we invoke a method on a person object, such as generateFullName(), the invocation is delegated to the Person prototype object, which gives us the person's full name. This offers exactly the same behaviour as we have in a class based OO language.

By using multiple layers of indirection through prototypes we can simulate class inheritance. For example, to define a super class of a class, we set the prototype's prototype to an object capturing the behaviour of the super class. The following UML object diagram shows how we can simulate our earlier example with shapes:



As can be observed from the picture: if we invoke the calculateArea() method on the square object (shown at the bottom), the invocation is delegated to the square prototype's prototype (which is an object representing the Rectangle class). That method calculates the area of the square for us.

We can also use prototypes to determine whether a particular object is an instance of a (simulated) class. In JavaScript, the instanceof operator does this by walking the object's prototype chain and checking whether the simulated class object appears in it.
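As a small sketch (using the Person constructor function that is defined in the next section; Rectangle stands for any unrelated constructor):

sander instanceof Person;    /* true: Person.prototype appears in sander's prototype chain */
sander instanceof Rectangle; /* false: Rectangle.prototype does not */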

Simulating classes in JavaScript


So far I think the concepts of simulating classes and class inheritance through prototypes are clear. In short, there are three things we must remember:

  • A class can be simulated by creating a singleton object capturing its behaviour (and state that is common to all object instances).
  • Instantiation of a class can be simulated by creating an object having a prototype that refers to the simulated class object and by calling the constructor that sets its state.
  • Class inheritance can be simulated by setting a simulated class object's prototype to the simulated parent class object.

The remaining thing I have to explain is how to implement our examples in JavaScript. And this is where the pain/misery starts. The main reason for my frustration is that every object in JavaScript has a prototype, but we cannot (officially) see or touch them directly.

You may wonder why I have used the word 'officially'. In fact, there is a way to see or touch prototypes in Mozilla-based browsers, through the hidden __proto__ object member, but this is a non-standard feature: it does not officially exist, should not be used and does not work in many other JavaScript implementations. So in practice, there is no way to access an object's prototype directly.

So how do we 'properly' work with prototypes in JavaScript? We must use the new operator, which looks very much like Java's new operator, but don't let it fool you! In Java, new is called in combination with the constructor defined in a class to create an object instance. Since we don't have classes in JavaScript, nor language constructs that allow us to define constructors, it achieves its goal in a different way:

function Person(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
}

var sander = new Person("Sander", "van der Burg");
var john = new Person("John", "Doe");

In the above JavaScript code fragment, we define the constructor of the Person class as an (ordinary) function that sets a person's first and last name. We have to use the new operator in combination with this function to create a person object.

What does JavaScript's new operator do? It creates an empty object, sets this to the object it has just created and then calls the constructor function that we have provided with the given parameters. The result of the new invocation is an object having a first and last name property, and a prototype containing a constructor property that refers to our constructor function.
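As an illustration, the following hypothetical construct() helper roughly imitates what new does. It is only a sketch: it relies on the ES5 Object.create() function (mentioned at the end of this post) and ignores a few corner cases of the real new operator (such as constructors that return an object themselves):

function construct(constructor /* , arg1, arg2, ... */) {
    /* create an empty object whose prototype is constructor.prototype --
       the step that plain pre-ES5 JavaScript cannot express directly */
    var obj = Object.create(constructor.prototype);

    /* run the constructor with `this` bound to the new object */
    constructor.apply(obj, Array.prototype.slice.call(arguments, 1));

    return obj;
}

/* roughly similar to: new Person("Sander", "van der Burg") */
var sander = construct(Person, "Sander", "van der Burg");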

So how do we create a person object that has the Person class object as its prototype, so that we can share its behaviour and determine to which class an object belongs? In JavaScript, we can set the prototype of an object to be constructed as follows:

function Person(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
}

Person.prototype.generateFullName = function() {
    return this.firstName + " " + this.lastName;
};

var sander = new Person("Sander", "van der Burg");
var john = new Person("John", "Doe");

/* Shows: Sander van der Burg */
document.write(sander.generateFullName() + "<br>\n");

/* Shows: John Doe */
document.write(john.generateFullName() + "<br>\n");

To me the above code sample looks a bit weird. In the code block after the function definition, we adapt the prototype object member of the constructor function? What the heck is this and how does this work?

In fact, as I have explained earlier, functions are also objects in JavaScript and we can also assign properties to them. The prototype property is actually not the prototype of the function (as I have said: prototypes are invisible in JavaScript). In our case, it's just an ordinary object member.

However, the prototype member of a constructor function is used by the new operator when it creates objects. If we call new in combination with a function that has a prototype property, then the resulting object's real prototype refers to that prototype object. We can use this to let an object instance's prototype refer to a class object. Moreover, the resulting prototype always refers to the same prototype object (namely Person.prototype), which allows us to share common class properties and behaviour. Got it? :P

If you didn't get it (for which I can't blame you): the result of the new invocations in our last code fragment yields exactly the object diagram that I have shown in the previous section, containing person objects that refer to a shared Person prototype.
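If your runtime provides the ES5 Object.getPrototypeOf() function (covered near the end of this post), you can verify this sharing yourself; a small sketch:

Object.getPrototypeOf(sander) === Object.getPrototypeOf(john); /* true: both delegate to the same object */
Object.getPrototypeOf(sander) === Person.prototype;            /* true: that object is Person.prototype */
sander.hasOwnProperty("firstName");        /* true: state is stored per instance */
sander.hasOwnProperty("generateFullName"); /* false: behaviour comes from the prototype */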

Simulating class inheritance in JavaScript


Now that we "know" how to create objects that are instances of a class in JavaScript, there is an even bigger pain: how to simulate class inheritance in JavaScript? This was something that really depressed me.

As explained earlier, to create a class that has a super class, we must set the prototype of a class object to point to the super class object. However, since we cannot access or change an object's prototype directly, this is a bit annoying to do. Let's say that we want to create an object that is an instance of a Rectangle (which inherits from Shape) and calculate its area. A solution that I have seen in quite a few articles on the web is:

/* Shape constructor */
function Shape(color) {
    this.color = color;
}

/* Shape behaviour (does nothing) */
Shape.prototype.calculateArea = function() {
    return null;
};

Shape.prototype.calculatePerimeter = function() {
    return null;
};

/* Rectangle constructor */
function Rectangle(color, width, height) {
    Shape.call(this, color); /* Call the superclass constructor */
    this.width = width;
    this.height = height;
}

/* Rectangle inherits from Shape */
Rectangle.prototype = new Shape();
Rectangle.prototype.constructor = Rectangle;

/* Rectangle behaviour */
Rectangle.prototype.calculateArea = function() {
    return this.width * this.height;
};

Rectangle.prototype.calculatePerimeter = function() {
    return 2 * this.width + 2 * this.height;
};

/* Create a rectangle instance and calculate its area */
var rectangle = new Rectangle("red", 2, 4);
document.write(rectangle.calculateArea() + "<br>\n");

The "trick" these articles describe to simulate class inheritance is to instantiate the parent class (without any parameters), set that object as the child class' prototype object and then add the child class' properties to it.

The above solution works in most cases, but it's not very elegant and a bit inefficient. Indeed, the resulting object has a prototype that refers to the parent's prototype, but we get a number of undesired side effects too. By running the base class' constructor we do some superfluous work -- it assigns undefined properties to a newly created object that we don't need. Because of this, the Rectangle prototype now looks like this:



It also stores these undefined class properties in the Rectangle class object, which is unnecessary. We only need an object that has a prototype referring to the parent class object, nothing more. Furthermore, many people build checks into their constructors that throw exceptions if certain parameters are unspecified, which would make this trick fail.

A better solution would be to create an empty object whose prototype refers to the parent class object, which we can then extend with our child class properties. To do that, we can use a dummy constructor function that just returns an empty object:

function F() {};

Then we set the prototype property of the dummy function to the Shape constructor function's prototype object member (the object representing the Shape class):

F.prototype = Shape.prototype;

Then we call the new operator in combination with F (our dummy constructor function). We'll get an empty object having the Shape class object as its prototype. We can use this object as a basis for the prototype that defines the Rectangle class:

Rectangle.prototype = new F();

Then we must fix the Rectangle class object's constructor property to point to the Rectangle constructor, because it currently refers to the parent's constructor function (inherited through the prototype we have just set):

Rectangle.prototype.constructor = Rectangle;

Finally, we can add our own class methods and properties to the Rectangle prototype, such as calculateArea() and calculatePerimeter(). Still got it? :P

Since the earlier procedure is so weird and complicated (I don't blame you if you don't get it :P), we can also encapsulate the weirdness in a function called inherit() that will do this for any class:

function inherit(parent, child) {
    function F() {}; 
    F.prototype = parent.prototype; 
    child.prototype = new F();
    child.prototype.constructor = child;
}
The above function takes a parent constructor function and a child constructor function as parameters. It points the child constructor function's prototype property to an empty object whose prototype points to the super class object. After calling this function, we can extend the subclass prototype object with the class members of the child class. By using the inherit() function, we can rewrite our earlier code fragment as follows:

/* Shape constructor */
function Shape(color) {
    this.color = color;
}

/* Shape behaviour (does nothing) */
Shape.prototype.calculateArea = function() {
    return null;
};

Shape.prototype.calculatePerimeter = function() {
    return null;
};

/* Rectangle constructor */
function Rectangle(color, width, height) {
    Shape.call(this, color); /* Call the superclass constructor */
    this.width = width;
    this.height = height;
}

/* Rectangle inherits from Shape */
inherit(Shape, Rectangle);

/* Rectangle behaviour */
Rectangle.prototype.calculateArea = function() {
    return this.width * this.height;
};

Rectangle.prototype.calculatePerimeter = function() {
    return 2 * this.width + 2 * this.height;
};

/* Create a rectangle instance and calculate its area */
var rectangle = new Rectangle("red", 2, 4);
document.write(rectangle.calculateArea() + "<br>\n");

The above example is the best solution I can recommend to properly implement simulated class inheritance.
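For completeness, here is a sketch of how the Square class from the earlier Java example could be simulated on top of the Rectangle class above, reusing the same inherit() function (the constructor signature is my own choice):

/* Square constructor: a square is a rectangle with equal width and height */
function Square(color, width) {
    Rectangle.call(this, color, width, width); /* call the superclass constructor */
}

/* Square inherits from Rectangle */
inherit(Rectangle, Square);

var square = new Square("blue", 2);
document.write(square.calculateArea() + "<br>\n");        /* 4, provided by Rectangle.prototype */
document.write((square instanceof Rectangle) + "<br>\n"); /* true */
document.write((square instanceof Shape) + "<br>\n");     /* true */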

Discussion


In this lengthy blog post I have explained two ways of doing object oriented programming: with classes and with prototypes. Both approaches make sense to me. Each of them has advantages and disadvantages. It's up to developers to make a choice.

However, what bothers me is the way prototypes are implemented in JavaScript and how they should be used. From my explanation, you will probably conclude that it completely sucks and makes things extremely complicated. This is probably the main reason why there is so much stuff on this subject on the web.

Some people argue that classes should be added to JavaScript. For me personally, there is nothing wrong with using prototypes. The only problem is that JavaScript does not allow people to use them properly. Instead, JavaScript exposes itself as a class based OO language, while it's prototype based. The Self language (which influenced JavaScript) for instance, does not hide its true nature.

Another minor annoyance is that if you want to properly simulate class inheritance in JavaScript, the best solution is probably to steal my inherit() function described here. You can probably find dozens of similar functions in many other places on the web.

You may wonder why JavaScript sucks when it comes to OO programming. JavaScript was originally developed by Netscape as part of their web browser, in a rough period known as the browser wars, in which there was heavy competition between Netscape and Microsoft.

At some point Netscape bundled the Java platform with its web browser, allowing developers to embed Java Applets -- basically full-blown computer programs -- in web pages. They also wanted to offer a lightweight alternative for less technical users, one that had to look like Java and had to be implemented in a short time span.

They didn't want to design and implement a new language completely from scratch. Instead, they took an existing language (probably Self) and adapted it to have curly braces and Java keywords, to make it look a bit more like Java. Although Java and JavaScript have some syntactic similarities, they are in fact fundamentally different languages. JavaScript hides its true nature, such as the fact that it's prototype based. Nowadays, its popularity has significantly increased and we use it for many other purposes in addition to the web browser.

Fortunately, the latest ECMAScript standard and recent JavaScript implementations ease the pain a little. New implementations have the Object.create() method, allowing you to directly create an object with a given prototype. There is also an Object.getPrototypeOf() function, which gives you read-only access to the prototype of any object. However, to use these functions you need a modern browser or JavaScript runtime (which a lot of people don't have). It will probably take several years before everybody has updated their browsers to versions that support these.
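For example, on an ES5 runtime the inherit() function shown earlier could (hypothetically) be reduced to something like this:

function inherit(parent, child) {
    /* create an empty object whose prototype is the parent class object */
    child.prototype = Object.create(parent.prototype);
    /* fix the constructor property, just like before */
    child.prototype.constructor = child;
}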

References


In the beginning of this blog post, I gave a somewhat idealized explanation of Object Oriented programming. There is no uniform definition of Object Oriented programming and its definition still seems to be in motion. On the web I found an interesting blog post written by William Cook, titled 'A Proposal for Simplified, Modern Definitions of "Object" and "Object Oriented"', which may be interesting to read.

Moreover, this is not the only blog post about programming languages concepts that I wrote. A few months ago I also wrote an unconventional blog post about (purely) functional programming languages. It's a bit unorthodox, because I draw an analogy to package management.

Finally, Self is a relatively unknown language to the majority of developers in the field. Although it started a long time ago and is considered an ancient language, to me it looks very simple and powerful. It's a good lesson for people who want to design and implement an OO language. I can recommend everybody to have a look at the following video lecture from Stanford University about Self. Moreover, Self is still maintained and can be obtained from the Self language website.

Friday, January 25, 2013

NiJS: An internal DSL for Nix in JavaScript

Lately, I have been doing some JavaScript programming in environments such as Node.js.

For some reason, I need to perform deployment activities from JavaScript programs. To do that, I could (re)implement the deployment mechanisms that I need from scratch, such as a feature that builds packages and a feature that fetches dependencies from external repositories.

What happens in practice is that people do exactly this -- they create programming language specific package managers, implementing features that are already well supported by generic tools. Nowadays, almost every modern programming language environment has one, such as the Node.js Package Manager, CPAN, HackageDB, the Eclipse plugin manager etc.

Each language-specific package manager implements deployment activities in its own way. Some of these package managers have good deployment properties, others have annoying drawbacks. Some of them can easily integrate or co-exist with the host system's package manager, others cannot. Most importantly, they don't offer the non-functional properties I care about, such as reliable and reproducible deployment.

For me, Nix offers the stuff I want. Therefore, I want to integrate it with my programs implemented in a general purpose programming language.

A common integration solution is to generate Nix expressions through string manipulation and to invoke nix-build (or nix-env) to build them. For every deployment operation or configuration step, a developer is required to generate an expression that builds or configures something (by string manipulation, so it remains unparsed and unchecked) and pass that to Nix. This is inelegant, laborious, tedious and error-prone, and results in more code that needs to be maintained.
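To make this concrete, the following sketch shows what such a string-based integration could look like in Node.js (the package and hash are taken from the GNU Hello example shown later; the file name and invocation details are illustrative):

var fs = require('fs');
var exec = require('child_process').exec;

/* The Nix expression is composed as an unparsed, unchecked string: */
var expr =
    'with import <nixpkgs> {};\n' +
    'stdenv.mkDerivation {\n' +
    '  name = "hello-2.8";\n' +
    '  src = fetchurl {\n' +
    '    url = mirror://gnu/hello/hello-2.8.tar.gz;\n' +
    '    sha256 = "0wqd8sjmxfskrflaxywc7gqw7sfawrfvdxd9skxawzfgyy0pzdz6";\n' +
    '  };\n' +
    '}\n';

/* ... written to a file and handed over to the nix-build tool: */
fs.writeFileSync("hello.nix", expr);

exec("nix-build hello.nix", function(error, stdout, stderr) {
    if(error) {
        console.error(stderr);
    } else {
        process.stdout.write(stdout); /* the resulting Nix store path */
    }
});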

As a solution for this, I have created NiJS: An internal DSL for Nix in JavaScript.

Calling Nix functions from JavaScript


As explained earlier, manually generating Nix expressions as strings has various drawbacks. In earlier blog posts, I have explained that in Nix (which is a package manager borrowing concepts from purely functional programming languages) every build action is modeled as a function, and the expressions that we typically want to evaluate (or generate) are function invocations.

To me, it looks like a better and more elegant idea to be able to call these Nix functions through function calls from the implementation language, rather than generating these function invocations as strings. This is how the idea of NiJS was born.

For example, it would be nice to have the following JavaScript function invocation that calls the stdenv.mkDerivation function of Nixpkgs, which is used to build a particular package from source code and its given build-time dependencies:
stdenv().mkDerivation ({
  name : "hello-2.8",
    
  src : args.fetchurl({
    url : "mirror://gnu/hello/hello-2.8.tar.gz",
    sha256 : "0wqd8sjmxfskrflaxywc7gqw7sfawrfvdxd9skxawzfgyy0pzdz6"
  }),
  
  doCheck : true,

  meta : {
    description : "A program that produces a familiar, friendly greeting",
    homepage : {
      _type : "url",
      value : "http://www.gnu.org/software/hello/manual"
    },
    license : "GPLv3+"
  }
});
The above JavaScript example defines how to build GNU Hello -- a trivial example package -- from source code. We can easily translate that function call to the following Nix expression containing a function call to stdenv.mkDerivation:

let
  pkgs = import <nixpkgs> {};
in
pkgs.stdenv.mkDerivation {
  name = "hello-2.8";
    
  src = pkgs.fetchurl {
    url = "mirror://gnu/hello/hello-2.8.tar.gz";
    sha256 = "0wqd8sjmxfskrflaxywc7gqw7sfawrfvdxd9skxawzfgyy0pzdz6";
  };
  
  doCheck = true;

  meta = {
    description = "A program that produces a familiar, friendly greeting";
    homepage = http://www.gnu.org/software/hello/manual;
    license = "GPLv3+";
  };
}
The above expression can be passed to nix-build to actually build GNU Hello.

To actually make the JavaScript function call work, I had to define the mkDerivation() JavaScript function as follows:
var mkDerivation = function(args) {
    return {
      _type : "nix",
      value : "pkgs.stdenv.mkDerivation "+nijs.jsToNix(args)
    };
}
The function takes an object as a parameter and returns an object with the _type property set to "nix" (more details on this later), and a value property that is a string containing the generated Nix expression.

As you may notice from the code example, only a little string manipulation is done. The Nix expression is generated almost automatically. We compose the Nix expression that needs to be evaluated by putting the name of the Nix function that we want to call in a string and appending the output of a jsToNix() invocation applied to args, the argument of the JavaScript function.

The jsToNix() function is a very powerful one -- it takes objects from the JavaScript language and translates them to Nix language objects in a generic and straightforward manner. For example, it translates a JavaScript object into a Nix attribute set (with the same properties), a JavaScript array into a Nix list, and so on.
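A hypothetical illustration of the kind of mapping jsToNix() performs (the exact formatting of the generated string may differ):

var nijs = require('nijs');

nijs.jsToNix({
    name : "example",
    enable : true,
    buildInputs : [ "gcc", "make" ]
});

/* yields a string resembling:

   {
     name = "example";
     enable = true;
     buildInputs = [ "gcc" "make" ];
   }
*/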

The resulting object (with the nix type) can be passed to the callNixBuild() function, which adds the import <nixpkgs> {}; statement to the beginning of the expression and invokes nix-build to evaluate it, which, as a side effect, builds GNU Hello.

Defining packages in NiJS


We have just shown how to invoke a Nix function from JavaScript, by creating a trivial proxy that translates the JavaScript function call to a string with a Nix function call.

I have intentionally used stdenv.mkDerivation as an example. While I could have picked another example, such as writeTextFile that writes a string to a text file, I did not do this.

The stdenv.mkDerivation function is a very important function in Nix -- it's directly and indirectly called from almost every Nix expression building a package. By creating a proxy to this function from JavaScript, we have created a new range of possibilities -- we can also use this proxy to write package build recipes and their compositions inside JavaScript instead of the Nix expression language.

As explained in earlier blog posts, a Nix package description is a file that defines a function taking its required build-inputs as function arguments. The body of the function describes how to build the package from source code and its build-time dependencies.

In JavaScript, we can also do something like this. We can define each package as a separate CommonJS module that exports a pkg property. The pkg property refers to a function declaration, in which we describe how to build a package from source code and its dependencies provided as function arguments:

exports.pkg = function(args) {
  return args.stdenv().mkDerivation ({
    name : "hello-2.8",
    
    src : args.fetchurl({
      url : "mirror://gnu/hello/hello-2.8.tar.gz",
      sha256 : "0wqd8sjmxfskrflaxywc7gqw7sfawrfvdxd9skxawzfgyy0pzdz6"
    }),
  
    doCheck : true,

    meta : {
      description : "A program that produces a familiar, friendly greeting",
      homepage : {
        _type : "url",
        value : "http://www.gnu.org/software/hello/manual"
      },
      license : "GPLv3+"
    }
  });
};

The result of the function call is an object containing our generated Nix expression as a string.

As explained in earlier blog posts about Nix, we cannot use these function declarations to build a package directly; we have to compose them by calling each function with its arguments. These function arguments provide a particular version of each dependency. In NiJS, we can compose packages in a CommonJS module that looks like this:

var pkgs = {

  stdenv : function() {
    return require('./pkgs/stdenv.js').pkg;
  },

  fetchurl : function(args) {
    return require('./pkgs/fetchurl.js').pkg(args);
  },

  hello : function() {
    return require('./pkgs/hello.js').pkg({
      stdenv : pkgs.stdenv,
      fetchurl : pkgs.fetchurl
    });
  },
  
  zlib : function() {
    return require('./pkgs/zlib.js').pkg;
  },
  
  file : function() {
    return require('./pkgs/file.js').pkg({
      stdenv : pkgs.stdenv,
      fetchurl : pkgs.fetchurl,
      zlib : pkgs.zlib
    });
  },
  
  ...
};

exports.pkgs = pkgs;

The above module defines a pkgs property that contains an object in which each member refers to a function. Each function imports a CommonJS module that builds a package, such as the GNU Hello example that we have shown earlier, and passes its required dependencies as function arguments. The dependencies are defined in the same composition object, such as stdenv and fetchurl.

Apart from the language, the major difference between this composition module and the top-level composition expression in Nix is that we have added an extra function indirection for each object member. In Nix, the composition expression is an attribute set with function calls. Because Nix is a lazy language, these function calls only get evaluated when they are needed.

JavaScript is an eager language and will evaluate all function invocations when the composition object is generated. To prevent this, we have wrapped these invocations in functions that need to be called explicitly. This also explains why we refer to stdenv (and any other package) through a function call in the GNU Hello example.
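The following sketch, with a made-up expensiveRecipe() function, illustrates the difference that this extra indirection makes:

function expensiveRecipe() {
    console.log("recipe evaluated");
    return "recipe";
}

/* Evaluated immediately, while the composition object is being constructed: */
var eagerPkgs = { hello : expensiveRecipe() };

/* Evaluated only when somebody asks for the package: */
var lazyPkgs = { hello : function() { return expensiveRecipe(); } };

lazyPkgs.hello(); /* the recipe is evaluated here, on demand */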

By importing the composition module shown earlier and passing the result of applying one of its function members to callNixBuild(), a package such as GNU Hello can be built:
var nijs = require('nijs');
var pkgs = require('pkgs.js').pkgs;

nijs.callNixBuild({
  nixObject : pkgs.hello(),
  onSuccess : function(result) {
    process.stdout.write(result + "\n");
  },
  onFailure : function(code) {
    process.exit(code);
  }
});

The above fragment asynchronously builds GNU Hello and writes its resulting Nix store path to the standard output (done by the onSuccess() callback function). As building individual packages from a composition specification is a common use case, I have created a command-line utility called nijs-build that automates this, which is convenient for testing:

$ nijs-build pkgs.js -A hello
/nix/store/xkbqlb0w5snmrxqi6ysixfszx1wc7mqd-hello-2.8

The above command-line instruction builds GNU Hello defined in our composition JavaScript specification.

Translating JavaScript objects to Nix expression language objects


We have seen that the jsToNix() function performs most of the "magic". Most of the mappings from JavaScript to Nix are straightforward, mostly relying on JavaScript's typeof operator:

  • Boolean and number values can be converted verbatim.
  • Strings and XML objects can be taken almost verbatim, but quotes must be placed around them and they must be properly escaped.
  • Objects in JavaScript are a bit trickier, as arrays are also objects. We must use the Array.isArray() method to check for this. We can recursively convert objects into either Nix lists or Nix attribute sets.
  • null values can't be determined by type. We must check for the null reference explicitly.
  • Objects that have an undefined type cause an exception to be thrown.
Nix provides a number of types that have no JavaScript equivalent. Therefore, we had to create them artificially (see the sketch after the following list):

  • Recursive attribute sets can be generated by adding the: _recursive = true member to an object.
  • URLs can be defined by creating an object with the: _type = "url" member and a value member containing the URL in a string.
  • Files can be defined by creating an object with the: _type = "file" member and a value member containing the file path. File names with spaces are also allowed and require a special trick in the Nix expression language to make them work properly. Another tricky part is that file names can be absolute or relative. In order to make the latter case work, we have to know the path of the CommonJS module that is referencing the file. Fortunately, the module.filename property inside a CommonJS module tells us exactly that. By passing the module parameter to the file object, we can make this work.
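Below is a sketch of how these artificial types could appear inside a package description. The attribute values are made up and the member names follow the conventions described in the list above; consult the NiJS documentation for the exact details:

exports.pkg = function(args) {
  return args.stdenv().mkDerivation({
    name : "example-1.0",

    src : {
      _type : "file",
      value : "./source dir with spaces",
      module : module /* lets NiJS resolve the relative path, as explained above */
    },

    meta : {
      homepage : {
        _type : "url",
        value : "http://example.org"
      }
    }
  });
};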

We also need to distinguish parts that have already been converted. The proxies, such as the one to stdenv.mkDerivation, are just pieces of Nix expression code that must be used without conversion. For these, I have introduced objects with the nix type, shown earlier, which are placed into the generated expression verbatim.

Then there is still one open question. JavaScript also has objects that have the function type. What to do with these? Should we disallow them, or is there a way in which we can allow them to be used in our internal DSL?

Converting JavaScript functions to Nix expressions


I'm not aware of any Nix expression language construct that is semantically equivalent (or similar) to a JavaScript function. The Nix expression language has functions, but these are evaluated lazily and can only use the primops that the Nix expression language provides. Therefore, we cannot really "compile" JavaScript functions to Nix functions.

However, I do know a way to call arbitrary processes from Nix expressions (through a derivation) and to expose them as functions that return Nix language objects, by converting the process' output to Nix expressions that are imported. I have used this trick earlier in our SEAMS paper to integrate deployment planning algorithms. I can also use the same trick to allow JavaScript functions to be called from Nix expressions:

{stdenv, nodejs}:
{function, args}:

let
  nixToJS = ...
in
import (stdenv.mkDerivation {
  name = "function-proxy";
  buildInputs = [ nodejs ];
  buildCommand = ''
    (
    cat <<EOF
    var nijs = require('${./nijs.js}');
    var fun = ${function};
    
    var args = [
      ${stdenv.lib.concatMapStrings (arg: nixToJS arg+",\n") args}
    ];
    
    var result = fun.apply(this, args);
    process.stdout.write(nijs.jsToNix(result));
    EOF
    ) | node > $out
  '';
})
The above code fragment shows the definition of the nijsFunProxy function, which can be used to create a proxy around a JavaScript function and allows it to be called as if it were a Nix function.

The proxy takes a JavaScript function definition in a string and a list of function arguments that can be of any Nix expression language type. Then the parameters are converted to JavaScript objects (through our inverse nixToJS() function) and a JavaScript file is generated that performs the function invocation with the converted parameters. Finally, the resulting JavaScript object returned by the function is converted to a Nix expression and written to the Nix store. By importing the generated Nix expression we can return an equivalent Nix language object.

In JavaScript, every function definition is in fact an object whose prototype chain includes the Function prototype. The length property tells us the number of parameters the function declares, the toString() method dumps a string representation of the function definition, and apply() can be used to evaluate the function object with an array of arguments.
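A small sketch of these three facilities in isolation:

function sumTest(a, b) {
    return a + b;
}

sumTest.length;                /* 2: the number of declared parameters */
sumTest.toString();            /* a string containing the function definition (formatting may vary per implementation) */
sumTest.apply(this, [ 1, 2 ]); /* 3: invoke the function with an array of arguments */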

By using these properties, we can generate a Nix function wrapper with an invocation to nijsFunProxy that looks like this:

fun =
  arg0: arg1:

  nijsFunProxy {
    function = ''
      function sumTest(a, b) {
        return a + b;
      }
    '';
    args = [ arg0 arg1 ];
  };

By creating such wrappers that call nijsFunProxy, we can "translate" JavaScript functions to Nix and call them from Nix expressions. However, there are a number of caveats:

  • We cannot use variables outside the scope of the function, e.g. global variables.
  • We must always return something. If nothing is returned, we will have an undefined object, which cannot be converted. Nix 'void functions' don't exist.
  • Functions with a variable number of positional arguments are not supported, as Nix functions don't support this.

It may be a bit inconvenient to use self-contained JavaScript functions, as we cannot access anything outside the function's scope. Fortunately, the proxy can also include specified CommonJS modules, which provide standard functionality. See the package's documentation for more details on this.

Calling JavaScript functions from Nix expressions


Of course, the nijsFunProxy can also be used directly from ordinary Nix expressions, if desired. The following expression uses a JavaScript function that adds two integers, and writes the result of the addition to a file in the Nix store:
{stdenv, nijsFunProxy}:

let
  sum = a: b: nijsFunProxy {
    function = ''
      function sum(a, b) {
        return a + b;
      }
    '';
    args = [ a b ];
  };
in
stdenv.mkDerivation {
  name = "sum";
  
  buildCommand = ''
    echo ${toString (sum 1 2)} > $out
  '';
}

Conclusion


In this blog post, I have described NiJS: an internal DSL for Nix in JavaScript. It offers the following features:

  • A way to easily generate function invocations to Nix functions from JavaScript.
  • A translation function that maps JavaScript language objects to Nix expression language objects.
  • A way to define and compose Nix packages in JavaScript.
  • A way to invoke JavaScript functions from Nix expressions.

I'd also like to point out that NiJS is not supposed to be a Nix alternative. It's rather a convenient means to use Nix from a general purpose language (in this case: JavaScript). Some of the additional use cases are implications of having such a proxy and were interesting to experiment with. :-)

Finally, I think NiJS may also give people who are unfamiliar with the Nix expression language the feeling that they don't have to learn it, because packages can be created directly from JavaScript in an environment that they are used to. I think this is a false sense of security. Although we can build packages from JavaScript, under the hood Nix expressions are still being generated, and these may yield errors. Errors originating from NiJS objects are much harder to debug. In such cases it's still required to know what happens under the hood.

Moreover, I find JavaScript and the asynchronous programming model of Node.js ugly and very error prone, as the language does not restrict you from doing all kinds of harmful things. But that's a different discussion. People may also find this blog post about Node.js interesting to read.

Related work


Maybe this internal DSL approach on top of an external DSL approach sounds crazy, but it's not really unique to NiJS, nor is NiJS the only internal DSL approach for Nix:

  • GNU Guix is an internal DSL for Nix in Scheme (through GNU Guile). Guix is a more sophisticated approach with the intention of deploying a complete GNU distribution. It also generates lower-level Nix store derivation files, instead of Nix expressions. NiJS has a much simpler implementation, fewer use-cases and a different goal.
  • ORM systems such as Hibernate can also be considered an internal DSL (Java) on top of an external DSL (SQL). They allow developers to treat records in database tables as (collections of) objects in a general purpose programming language. They offer various advantages, such as less boilerplate code, but also disadvantages, such as the object-relational mismatch, which can result in issues such as performance penalties. NiJS also has issues due to mismatches between the source and target language, such as harder debugging.

Availability


NiJS can be obtained from the NiJS GitHub page and used under the MIT license. The package also includes a number of example packages in JavaScript and a number of example JavaScript function invocations from Nix expressions.