As explained in many of my earlier blog posts, one of the Nix process management framework's major objectives is to facilitate high-level deployment specifications of running processes that can be translated to configurations for all kinds of process managers and deployment solutions.
The backends that I have implemented so far were picked for the following reasons:
- Support for multiple operating systems. The most common process management service was chosen for each operating system: on Linux, sysvinit (because it used to be the most common solution) and systemd (because it is used by many conventional Linux distributions today), bsdrc on FreeBSD, launchd for macOS, and cygrunsrv for Cygwin.
- Supporting unprivileged user deployments. To supervise processes without requiring a service that runs as PID 1, and that also works for unprivileged users, supervisord is very convenient because it was specifically designed for this purpose.
- Docker was selected because it is a very popular solution for managing services, and process management is one of its sub-responsibilities.
- Universal process management. Disnix was selected because it can be used as a primitive process management solution that works on any operating system supported by the Nix package manager. Moreover, the Disnix services model is a superset of the processes model used by the process management framework.
Not long after writing my blog post about the process manager-agnostic abstraction layer, somebody opened an issue on GitHub with the suggestion to also support s6-rc. Although I was already aware that more process/service management solutions exist, s6-rc was a solution that I did not know about.
Recently, I have implemented the suggested s6-rc backend. Although deploying s6-rc services now works quite conveniently, getting to know s6-rc and its companion tools was somewhat challenging for me.
In this blog post, I will elaborate on my learning experiences and explain how the s6-rc backend was implemented.
The s6 tool suite
s6-rc is a software project published on skarnet.org and part of a bigger tool ecosystem. s6-rc is a companion tool of s6: skarnet.org's small & secure supervision software suite.
On Linux and many other UNIX-like systems, the initialization process (typically /sbin/init) is a highly critical program:
- It is the first program loaded by the kernel and responsible for setting the remainder of the boot procedure in motion. This procedure is responsible for mounting additional file systems, loading device drivers, and starting essential system services, such as SSH and logging services.
- The PID 1 process supervises all processes that were directly loaded by it, as well as indirect child processes that get orphaned -- when this happens, they get automatically adopted by the process that runs as PID 1.
As explained in an earlier blog post, traditional UNIX services that daemonize on their own deliberately orphan themselves so that they remain running in the background.
- When a child process terminates, the parent process must take notice, or the terminated process will stay behind as a zombie process.
Because the PID 1 process is the common ancestor of all other processes, it is required to automatically reap all zombie processes that become its children.
- The PID 1 process runs with root privileges and, as a result, has full access to the system. When the security of the PID 1 process gets compromised, the entire system is at risk.
- If the PID 1 process crashes, the kernel panics (and, as a result, the entire system goes down).
There are many kinds of programs that you can use as a system's PID 1. For example, you can directly use a shell, such as bash, but it is far more common to use an init system, such as sysvinit or systemd.
According to the author of s6, an init system is made out of four parts:
- /sbin/init: the first userspace program that is run by the kernel at boot time (not counting an initramfs).
- pid 1: the program that will run as process 1 for most of the lifetime of the machine. This is not necessarily the same executable as /sbin/init, because /sbin/init can exec into something else.
- a process supervisor.
- a service manager.
In the s6 tool ecosystem, most of these parts are implemented by separate tools:
- The first userspace program: s6-linux-init takes care of the coordination of the initialization process. It does a variety of one-time boot things: for example, it traps the ctrl-alt-del keyboard combination, it starts the shutdown daemon (that is responsible for eventually shutting down the system), and runs the initial boot script (rc.init).
(As a sidenote: this is almost true -- the /sbin/init process is a wrapper script that "execs" into s6-linux-init with the appropriate parameters).
- When the initialization is done, s6-linux-init execs into a process called s6-svscan provided by the s6 toolset. s6-svscan's task is to supervise an entire process supervision tree, which I will explain later.
- Starting and stopping services is done by a separate service manager started from the rc.init script. s6-rc is the most prominent option (and the one that we will use in this blog post), but other tools can be used as well.
Many conventional init systems implement most (or sometimes all) of these aspects in a single executable.
In particular, the s6 author is highly critical of systemd, the init system that is widely used by many conventional Linux distributions today -- he has dedicated an entire page to his criticisms of it.
The author of s6 advocates a number of design principles for his tool ecosystem (that systemd violates in many ways):
- The Unix philosophy: do one job and do it well.
- Doing less instead of more (preventing feature creep).
- Keeping tight quality control over every tool by opening up repository access only to small teams (or rather, a single person).
- Integration support: he is against the bazaar approach at the project level, but in favor of the bazaar approach at the ecosystem level, in which everybody can write their own tools that integrate with existing tools.
The concepts implemented by the s6 tool suite were not completely "invented" from scratch. daemontools is what the author considers the ancestor of s6 (if you look at its web page, you will notice that the concept of a "supervision tree" was pioneered there and that some of the tools listed resemble tools in the s6 tool suite), and runit is its cousin (a project that is also heavily inspired by daemontools).
A basic usage scenario of s6 and s6-rc
Although it is possible to use Linux distributions in which the init system, supervisor and service manager are all provided by skarnet tools, a subset of s6 and s6-rc can also be used on any Linux distribution and on other supported operating systems, such as the BSDs.
Root privileges are not required to experiment with these tools.
For example, with the following command we can use the Nix package manager to deploy the s6 supervision toolset in a development shell session:
$ nix-shell -p s6
In this development shell session, we can start the s6-svscan service as follows:
$ mkdir -p $HOME/var/run/service
$ s6-svscan $HOME/var/run/service
s6-svscan is a service that supervises an entire process supervision tree, including processes that may accidentally become a child of it, such as orphaned processes.
The directory parameter is a scan directory that maintains the configurations of the processes that are currently supervised. So far, no supervised processes have been deployed yet.
We can actually deploy services by using the s6-rc toolset.
For example, I can easily configure my trivial example system used in previous blog posts, consisting of one or more web application processes (with an embedded HTTP server) that return static HTML pages, and an Nginx reverse proxy that forwards requests to one of the web application processes based on the virtual host header.
Contrary to the other process management solutions that I have investigated earlier, s6-rc does not have an elaborate configuration language. It deliberately does not implement a parser (for good reasons, as explained by the author: a parser introduces extra complexity and is an extra source of bugs).
Instead, you have to create directories with text files, in which each file represents a configuration property.
With the following command, I can spawn a development shell with all the required utilities to work with s6-rc:
$ nix-shell -p s6 s6-rc execline
The following shell commands create an s6-rc service configuration directory and a configuration for a single webapp process instance:
$ mkdir -p sv/webapp
$ cd sv/webapp
$ echo "longrun" > type
$ cat > run <<EOF
#!$(type -p execlineb) -P
envfile $HOME/envfile
exec $HOME/webapp/bin/webapp
EOF
The above shell script creates a configuration directory for a service named: webapp with the following properties:
- It creates a service with type: longrun. A long run service deploys a process that runs in the foreground that will get supervised by s6.
- The run file refers to an executable that starts the service. For s6-rc services it is common practice to implement wrapper scripts using execline: a non-interactive scripting language.
The execline script shown above loads an environment variable config file with the following content: PORT=5000. This environment variable is used to configure the TCP port number to which the service should bind. The script then "execs" into a new process that runs the webapp process.
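As an aside, the environment file referenced above can be created with a one-liner such as the following (using the same path and PORT value mentioned above):

$ echo "PORT=5000" > $HOME/envfile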
(As a sidenote: although it is a common habit to use execline for writing wrapper scripts, this is not a hard requirement -- any executable implemented in any language can be used. For example, we could also write the above run wrapper script as a bash script).
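For illustration, a minimal bash equivalent of the above run script could look as follows (just a sketch, assuming the same envfile and webapp locations as above):

#!/bin/bash -e

# Export all variables defined in the environment file (e.g. PORT=5000)
set -a
. $HOME/envfile
set +a

# Replace the shell with the webapp process, so that s6 supervises the webapp directly
exec $HOME/webapp/bin/webapp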
We can also configure the Nginx reverse proxy service in a similar way:
$ mkdir -p ../nginx
$ cd ../nginx
$ echo "longrun" > type
$ echo "webapp" > dependencies
$ cat > run <<EOF
#!$(type -p execlineb) -P
foreground { mkdir -p $HOME/var/nginx/logs $HOME/var/cache/nginx }
exec $(type -p nginx) "-p" "$HOME/var/nginx" "-c" "$HOME/nginx/nginx.conf" "-g" "daemon off;"
EOF
The above shell script creates a configuration directory for a service named: nginx with the following properties:
- It again creates a service of type: longrun because Nginx should be started as a foreground process.
- It declares the webapp service (that we have configured earlier) a dependency, ensuring that webapp is started before nginx. This dependency relationship is important to prevent Nginx from forwarding requests to a service that is not (yet) available.
- The run script first creates all mandatory state directories and finally execs into the Nginx process, using a configuration file that refers to the above state directories and turning daemon mode off so that Nginx runs in the foreground.
In addition to configuring the above services, we also want to deploy the system as a whole. This can be done by creating bundles that encapsulate collections of services:
$ mkdir -p ../default
$ cd ../default
$ echo "bundle" > type
$ cat > contents <<EOF
webapp
nginx
EOF
The above shell instructions create a bundle named: default referring to both the webapp and nginx reverse proxy service that we have configured earlier.
Our s6-rc configuration directory structure looks as follows:
$ find ./sv
./sv
./sv/default
./sv/default/contents
./sv/default/type
./sv/nginx
./sv/nginx/dependencies
./sv/nginx/run
./sv/nginx/type
./sv/webapp
./sv/webapp/run
./sv/webapp/type
If we want to deploy the service directory structure shown above, we first need to compile it into a configuration database. This can be done with the following command:
$ mkdir -p $HOME/etc/s6/rc
$ s6-rc-compile $HOME/etc/s6/rc/compiled-1 $HOME/sv
The above command creates a compiled database in: $HOME/etc/s6/rc/compiled-1 from the service configuration directory: $HOME/sv.
With the following command we can initialize the s6-rc system with our compiled configuration database:
$ s6-rc-init -c $HOME/etc/s6/rc/compiled-1 -l $HOME/var/run/s6-rc \
    $HOME/var/run/service
The above command generates a "live directory" in: $HOME/var/run/s6-rc containing the state of s6-rc.
With the following command, we can start all services in the default bundle:
$ s6-rc -l $HOME/var/run/s6-rc -u change default
The above command deploys a running system with the following process tree:
As can be seen in the diagram above, the entire process tree is supervised by s6-svscan (the program that we started first). Every longrun service deployed by s6-rc is supervised by a process named: s6-supervise.
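We can also verify this on the command line with s6-svstat, assuming that s6-rc exposes each longrun service under its own name in the scan directory that we created earlier (the output below is merely illustrative):

$ s6-svstat $HOME/var/run/service/webapp
up (pid 12345) 60 seconds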
Managing service logging
Another important property of s6 and s6-rc is the way logging is handled. By default, all output that the supervised processes produce on the standard output and standard error is captured by s6-svscan and written to a single log stream (in our case, it will be redirected to the terminal).
When it is desired to capture the output of a service into its own dedicated log file, you need to configure the service in such a way that it writes all relevant information to a pipe. A companion logging service is required to capture the data that is sent over the pipe.
The following command-line instructions modify the webapp service (that we have created earlier) to let it send its output to another service:
$ cd sv
$ mv webapp webapp-srv
$ cd webapp-srv
$ echo "webapp-log" > producer-for
$ cat > run <<EOF
#!$(type -p execlineb) -P
envfile $HOME/envfile
fdmove -c 2 1
exec $HOME/webapp/bin/webapp
EOF
In the script above, we have changed the webapp service configuration as follows:
- We rename the service from: webapp to webapp-srv. Using a -srv suffix is a convention commonly used for s6-rc services that also have a log companion service.
- With the producer-for property, we specify that the webapp-srv is a service that produces output for another service named: webapp-log. We will configure this service later.
- We create a new run script that adds the following command: fdmove -c 2 1.
The purpose of this added instruction is to redirect all output that is sent to the standard error (file descriptor: 2) to the standard output (file descriptor: 1). This redirection makes it possible for all data to be captured by the log companion service.
We can configure the log companion service: webapp-log with the following command-line instructions:
$ mkdir ../webapp-log
$ cd ../webapp-log
$ echo "longrun" > type
$ echo "webapp-srv" > consumer-for
$ echo "webapp" > pipeline-name
$ echo 3 > notification-fd
$ cat > run <<EOF
#!$(type -p execlineb) -P
foreground { mkdir -p $HOME/var/log/s6-log/webapp }
exec -c s6-log -d3 $HOME/var/log/s6-log/webapp
EOF
The service configuration created above does the following:
- We create a service named: webapp-log that is a long running service.
- We declare the service to be a consumer for the webapp-srv (earlier, we have already declared the companion service: webapp-srv to be a producer for this logging service).
- We configure a pipeline name: webapp, causing s6-rc to automatically generate a bundle with the name: webapp that has all involved services as its contents.
This generated bundle allows us to always manage the service and its logging companion as a single deployment unit.
- The s6-log service supports readiness notifications. File descriptor: 3 is configured to receive that notification.
- The run script creates the log directory in which the output should be stored and starts the s6-log service to capture the output and store the data in the corresponding log directory.
The -d3 parameter instructs s6-log to send a readiness notification over file descriptor 3.
After modifying the configuration files in such a way that each longrun service has a logging companion, we need to compile a new database that provides s6-rc with our new configuration:
$ s6-rc-compile $HOME/etc/s6/rc/compiled-2 $HOME/sv
The above command creates a database with a new filename in: $HOME/etc/s6/rc/compiled-2. We are required to give it a new name -- the old configuration database (compiled-1) must be retained to make the upgrade process work.
With the following command, we can upgrade our running configuration:
$ s6-rc-update -l $HOME/var/run/s6-rc $HOME/etc/s6/rc/compiled-2
The result is the following process supervision tree:
As you may observe by looking at the diagram above, every service has a companion s6-log service that is responsible for capturing and storing its output.
The log files of the services can be found in $HOME/var/log/s6-log/webapp and $HOME/var/log/s6-log/nginx.
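Each s6-log instance writes its output to an automatically rotated file named: current inside its log directory, so we can, for example, follow the webapp's log output as follows:

$ tail -f $HOME/var/log/s6-log/webapp/current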
One shot services
In addition to longrun services that are useful for managing system services, a boot process also involves other tasks that need to be automated, such as mounting file systems.
These kinds of tasks can be automated with oneshot services, which execute an up script on startup and, optionally, a down script on shutdown.
The following service configuration can be used to mount the kernel's /proc filesystem:
$ mkdir -p ../mount-proc
$ cd ../mount-proc
$ echo "oneshot" > type
$ cat > up <<EOF
#!$(type -p execlineb) -P
foreground { mount -t proc proc /proc }
EOF
Chain loading
The execline scripts shown in this blog post resemble shell scripts in many ways. One particular aspect that sets execline scripts apart from shell scripts is that all commands make intensive use of a concept called chain loading.
Every instruction in an execline script executes a task, may imperatively modify the environment (e.g. by changing environment variables or the current working directory), and then "execs" into a new chain loading task.
The last parameter of each command-line instruction refers to the command-line instruction that it needs to "exec" into -- typically this command-line instruction is put on the next line.
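To illustrate the idea, consider the following execline script (a sketch in which the shebang path and the final program are just illustrative choices):

#!/usr/bin/execlineb -P
# export adds the PORT variable to the environment, then execs into the remainder
export PORT 5000
# cd changes the current working directory, then execs into the remainder
cd /tmp
# fdmove -c 2 1 makes the standard error a copy of the standard output, then execs further
fdmove -c 2 1
# the final program replaces the process image; no script interpreter remains in memory
/usr/bin/env

Running this script simply prints the environment (including PORT=5000) of the process that the chain eventually execs into.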
The execline package, as well as many packages in the s6 ecosystem, contain many programs that support chain loading.
It is also possible to implement custom chain loaders that follow the same protocol.
Developing s6-rc function abstractions for the Nix process management framework
In the Nix process management framework, I have added function abstractions for each s6-rc service type: longrun, oneshot and bundle.
For example, with the following Nix expression we can generate an s6-rc longrun configuration for the webapp process:
{createLongRunService, writeTextFile, execline, webapp}:

let
  envFile = writeTextFile {
    name = "envfile";
    text = ''
      PORT=5000
    '';
  };
in
createLongRunService {
  name = "webapp";

  run = writeTextFile {
    name = "run";
    executable = true;
    text = ''
      #!${execline}/bin/execlineb -P
      envfile ${envFile}
      fdmove -c 2 1
      exec ${webapp}/bin/webapp
    '';
  };

  autoGenerateLogService = true;
}
Evaluating the Nix expression above does the following:
- It generates a service directory that corresponds to the: name parameter with a longrun type property file.
- It generates a run execline script, that uses a generated envFile for configuring the service's port number, redirects the standard error to the standard output and starts the webapp process (that runs in the foreground).
- The autoGenerateLogService parameter is a concept I introduced myself, to conveniently configure a companion log service, because this is a very common operation -- I cannot think of any scenario in which you do not want to have a dedicated log file for a long running service.
Enabling this option causes the service to automatically become a producer for the log companion service (having the same name with a -log suffix) and automatically configures a logging companion service that consumes from it.
In addition to constructing long run services from Nix expressions, there are also abstraction functions to create oneshots (createOneShotService) and bundles (createServiceBundle).
The function that generates a log companion service can also be directly invoked with: createLogServiceForLongRunService, if desired.
Generating an s6-rc service configuration from a process manager-agnostic configuration
The following Nix expression is a process manager-agnostic configuration for the webapp service, that can be translated to a configuration for any supported process manager in the Nix process management framework:
{createManagedProcess, tmpDir}:
{port, instanceSuffix ? "", instanceName ? "webapp${instanceSuffix}"}:

let
  webapp = import ../../webapp;
in
createManagedProcess {
  name = instanceName;
  description = "Simple web application";
  inherit instanceName;

  process = "${webapp}/bin/webapp";
  daemonArgs = [ "-D" ];

  environment = {
    PORT = port;
  };

  overrides = {
    sysvinit = {
      runlevels = [ 3 4 5 ];
    };
  };
}
The Nix expression above specifies the following high-level configuration concepts:
- The name and description attributes are just metadata. The description property is ignored by the s6-rc generator, because s6-rc has no equivalent configuration property for capturing a description.
- A process manager-agnostic configuration can specify both how the service can be started as a foreground process and how it can be started as a process that daemonizes itself.
In the above example, the process attribute specifies that the same executable needs to be invoked for both a foregroundProcess and a daemon. The daemonArgs parameter specifies the command-line arguments that need to be propagated to the executable to let it daemonize itself.
s6-rc has a preference for managing foreground processes, because these can be managed more reliably. When a foregroundProcess executable can be inferred, the generator will automatically compose a longrun service, making it possible for s6 to supervise it.
If only a daemon can be inferred, the generator will compose a oneshot service that starts the daemon with the up script and, on shutdown, terminates the daemon by dereferencing the PID file in the down script.
- The environment attribute set parameter is automatically translated to an envfile that the generated run script consumes.
- Similar to the sysvinit backend, it is also possible to override the generated arguments for the s6-rc backend, if desired.
As already explained in the blog post that covers the framework's concepts, the Nix expression above needs to be complemented with a constructors expression that composes the common parameters of every process configuration and a processes model that constructs process instances that need to be deployed.
The following processes model can be used to deploy a webapp process and an nginx reverse proxy instance that connects to it:
{ pkgs ? import <nixpkgs> { inherit system; }
, system ? builtins.currentSystem
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, cacheDir ? "${stateDir}/cache"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager
}:

let
  constructors = import ./constructors.nix {
    inherit pkgs stateDir runtimeDir logDir tmpDir;
    inherit forceDisableUserChange processManager;
  };
in
rec {
  webapp = rec {
    port = 5000;
    dnsName = "webapp.local";

    pkg = constructors.webapp {
      inherit port;
    };
  };

  nginx = rec {
    port = 8080;

    pkg = constructors.nginxReverseProxyHostBased {
      webapps = [ webapp ];
      inherit port;
    } {};
  };
}
With the following command-line instruction, we can automatically create a scan directory and start s6-svscan:
$ nixproc-s6-svscan --state-dir $HOME/var
The --state-dir parameter causes the scan directory to be created in the user's home directory, making unprivileged deployments possible.
With the following command, we can deploy the entire system, that will get supervised by the s6-svscan service that we just started:
$ nixproc-s6-rc-switch --state-dir $HOME/var \
    --force-disable-user-change processes.nix
The --force-disable-user-change parameter prevents the deployment system from creating users and groups and changing user privileges, allowing the deployment as an unprivileged user to succeed.
The result is a running system that allows us to connect to the webapp service via the Nginx reverse proxy:
$ curl -H 'Host: webapp.local' http://localhost:8080
<!DOCTYPE html>
<html>
  <head>
    <title>Simple test webapp</title>
  </head>
  <body>
    Simple test webapp listening on port: 5000
  </body>
</html>
Constructing multi-process Docker images supervised by s6
Another feature of the Nix process management framework is constructing multi-process Docker images in which multiple process instances are supervised by a process manager of choice.
s6 can also be used as a supervisor in a container. To accomplish this, we can use s6-linux-init as an entry point.
The following attribute generates a skeleton configuration directory:
let
  skelDir = pkgs.stdenv.mkDerivation {
    name = "s6-skel-dir";
    buildCommand = ''
      mkdir -p $out
      cd $out

      cat > rc.init <<EOF
      #! ${pkgs.stdenv.shell} -e
      rl="\$1"
      shift

      # Stage 1
      s6-rc-init -c /etc/s6/rc/compiled /run/service

      # Stage 2
      s6-rc -v2 -up change default
      EOF

      chmod 755 rc.init

      cat > rc.shutdown <<EOF
      #! ${pkgs.stdenv.shell} -e
      exec s6-rc -v2 -bDa change
      EOF

      chmod 755 rc.shutdown

      cat > rc.shutdown.final <<EOF
      #! ${pkgs.stdenv.shell} -e
      # Empty
      EOF

      chmod 755 rc.shutdown.final
    '';
  };
The skeleton directory generated by the above sub expression contains three configuration files:
- rc.init is the script that the init system starts, right after starting the supervisor: s6-svscan. It is responsible for initializing the s6-rc system and starting all services in the default bundle.
- The rc.shutdown script is executed on shutdown and uses s6-rc to stop all previously started services.
- rc.shutdown.final runs at the very end of the shutdown procedure, after all processes have been killed and all file systems have been unmounted. In the above expression, it does nothing.
In the initialization process of the image (the runAsRoot parameter of dockerTools.buildImage), we need to execute a number of dynamic initialization steps.
First, we must initialize s6-linux-init to read its configuration files from /etc/s6/current using the skeleton directory (that we have configured in the sub expression shown earlier) as its initial contents (the -f parameter) and run the init system in container mode (the -C parameter):
mkdir -p /etc/s6
s6-linux-init-maker -c /etc/s6/current -p /bin -m 0022 -f ${skelDir} -N -C -B /etc/s6/current
mv /etc/s6/current/bin/* /bin
rmdir etc/s6/current/bin
s6-linux-init-maker generates a /bin/init script, which we can use as the container's entry point.
I want the logging services to run as an unprivileged user (s6-log), requiring me to create the user and corresponding group first:
groupadd -g 2 s6-log
useradd -u 2 -d /dev/null -g s6-log s6-log
We must also compile a database from the s6-rc configuration files, by running the following command-line instructions:
mkdir -p /etc/s6/rc
s6-rc-compile /etc/s6/rc/compiled ${profile}/etc/s6/sv
As can be seen in the rc.init script that we have generated earlier, the compiled database: /etc/s6/rc/compiled is propagated to s6-rc-init as a command-line parameter.
With the following Nix expression, we can build an s6-rc managed multi-process Docker image that deploys all the process instances in the processes model that we have written earlier:
let
  pkgs = import <nixpkgs> {};

  createMultiProcessImage = import ../../nixproc/create-multi-process-image/create-multi-process-image-universal.nix {
    inherit pkgs system;
    inherit (pkgs) dockerTools stdenv;
  };
in
createMultiProcessImage {
  name = "multiprocess";
  tag = "test";
  exprFile = ./processes.nix;
  stateDir = "/var";
  processManager = "s6-rc";
}
With the following command, we can build the image:
$ nix-build
and load the image into Docker with the following command:
$ docker load -i result
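As a quick test, we can start a container from the image and connect to the Nginx reverse proxy (the image name and tag follow from the Nix expression shown earlier; the port mapping assumes that Nginx still listens on port 8080, as configured in the processes model):

$ docker run -d --name multiprocess -p 8080:8080 multiprocess:test
$ curl -H 'Host: webapp.local' http://localhost:8080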
Discussion
With the addition of the s6-rc backend in the Nix process management framework, we have a modern alternative to systemd at our disposal.
We can easily let services be managed by s6-rc using the same agnostic high-level deployment configurations that can also be used to target other process management backends, including systemd.
What I particularly like about the s6 tool ecosystem (and this also applies to some extent to its ancestor, daemontools, and cousin project, runit) is the idea of constructing the entire system's initialization process and its sub concerns (process supervision, logging and service management) from separate tools, each having a clear, fixed scope.
This kind of design reminds me of microkernels -- in a microkernel design, the kernel is basically split into multiple collaborating processes each having their own responsibilities (e.g. file systems, drivers).
The microkernel is the only process that has full access to the system and typically only has very few responsibilities (e.g. memory management, task scheduling, interrupt handling).
When a process crashes, such as a driver, this failure should not tear the entire system down. Systems can even recover from problems, by restarting crashed processes.
Furthermore, these non-kernel processes typically have very few privileges. If a process' security gets compromised (such as a leaky driver), the system as a whole will not be affected.
Aside from a number of functional differences compared to systemd, there are also some non-functional differences.
systemd can only be used on Linux with glibc as the system's libc; s6 can also be used on other operating systems (e.g. the BSDs) and with different libc implementations, such as musl.
Moreover, the supervisor service (s6-svscan) can also be used as a user-level supervisor that does not need to run as PID 1. Although systemd supports user sessions (allowing service deployments by unprivileged users), it still requires systemd to be the init system running as the system's PID 1.
Improvement suggestions
Although the s6 ecosystem provides useful tools and has all kinds of powerful features, I also have a number of improvement suggestions. They are mostly usability related:
- I have noticed that the command-line tools have very brief help pages -- they only enumerate the available options, but they do not provide any additional information explaining what these options do.
I have also noticed that there are no official manpages, but there is a third-party initiative that seems to provide them.
The "official" source of reference is the set of HTML pages. For me personally, it is not always convenient to access HTML pages on limited machines with no Internet connection and/or only terminal access.
- Although each individual tool is well documented (albeit in HTML), I had quite a few difficulties figuring out how to use them together -- because every tool has a very specific purpose, you typically need to combine them in interesting ways to do something meaningful.
For example, I could not find any clear documentation on skarnet describing typical combined usage scenarios, such as how to use s6-rc on a conventional Linux distribution that already has a different service management solution.
Fortunately, I discovered a Linux distribution that turned out to be immensely helpful: Artix Linux. Artix Linux provides s6 as one of its supported process management solutions. I ended up installing Artix Linux in a virtual machine and reading their documentation.
This kind of unclarity seems to be somewhat analogous to common criticisms of microkernels: one of Linus Torvalds' criticisms is that in microkernel designs the pieces are simplified, but the coordination of the entire system is more difficult.
- Updating existing service configurations is difficult and cumbersome. Each time I want to change something (e.g. adding a new service), I need to compile a new database, make sure that the newly compiled database co-exists with the previous database, and then run s6-rc-update.
It is very easy to make mistakes. For example, I ended up overwriting the previous database several times. When this happens, the upgrade process gets stuck.
systemd, on the other hand, allows you to put a new service configuration file in the configuration directory, such as: /etc/systemd/system. We can conveniently reload the configuration with a single command-line instruction:
$ systemctl daemon-reload
I believe that the updating process can still be somewhat simplified in s6-rc. Fortunately, I have managed to hide that complexity in the nixproc-s6-rc-deploy tool.
- It was also difficult to find out all the available configuration properties for s6-rc services -- I ended up looking at the examples and studying the documentation pages for s6-rc-compile, s6-supervise and service directories.
I think that it could be very helpful to write a dedicated documentation page that describes all configurable properties of s6-rc services.
- I believe it is also very common that, for each longrun service (with a -srv suffix), you want a companion logging service (with a -log suffix).
As a matter of fact, I can hardly think of a situation in which you do not want this. Maybe it helps to introduce a convenience property that automatically facilitates the generation of log companion services.
Availability
The s6-rc backend described in this blog post is part of the current development version of the Nix process management framework, that is still under heavy development.
The framework can be obtained from my GitHub page.