Monday, January 6, 2020

Writing a well-behaving daemon in the C programming language

Slightly over one month ago, I wrote a blog post about a new experimental Nix-based process management framework that I have been developing. For this framework, I need to experiment with processes that run in the foreground (i.e. they block the shell of the user that invokes them for as long as they are running), and daemons -- processes that run in the background and are not directly controlled by the user.

Daemons are (still) common practice in the UNIX world (although this is changing nowadays with the advent of process managers, such as systemd and launchd) for making system services available to end users, such as web servers, the secure shell, and FTP.

To make experimentation more convenient, I wanted to write a very simple service that can run both in the foreground and as a daemon. Initially, I thought writing a daemon would be straightforward, but this turned out to be much more difficult than I initially anticipated.

I have learned that daemonizing a process is quite simple, but writing a well-behaving daemon is quite complicated. I have been studying a number of sources on how to properly write one, and none of them provided me with all the information that I needed. As a result, I have decided to do some investigation myself and write a blog post about my findings.

The basics

As I have stated earlier, the basics of writing a daemon in the C programming language are simple. For example, I can write a very trivial service whose only purpose is to print Hello! on the terminal every second until it receives a terminate or interrupt signal:

#include <stdio.h>
#include <unistd.h>
#include <signal.h>

#define TRUE 1
#define FALSE 0

volatile int terminate = FALSE;

static void handle_termination(int signum)
{
    terminate = TRUE;
}

static void init_service(void)
{
    signal(SIGINT, handle_termination);
    signal(SIGTERM, handle_termination);
}

static void run_main_loop(void)
{
    while(!terminate)
    {
        fprintf(stderr, "Hello!\n");
        sleep(1);
    }
}

The following trivial main method allows us to let the service run in "foreground mode":

int main()
{
    init_service();
    run_main_loop();
    return 0;
}

The above main method initializes the service (that configures the signal handlers) and invokes the main loop (as defined in the previous code example). The main loop keeps running until it receives a terminate (SIGTERM) or interrupt (SIGINT) signal that unblocks the main loop.

When we run the above program in a shell session, we should observe:

$ ./simpleservice

We will see that the service prints Hello! every second until it gets terminated. Moreover, we will notice that the shell is blocked from receiving user input until we terminate the process. Furthermore, if we terminate the shell (for example by sending it a TERM signal from another shell session), the service gets terminated as well.

We can easily change the main method, shown earlier, to turn our trivial service (that runs in foreground mode) into a daemon:

#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

int main()
{
    pid_t pid = fork();

    if(pid == -1)
    {
        fprintf(stderr, "Can't fork daemon process!\n");
        return 1;
    }
    else if(pid == 0)
        run_main_loop(); /* Child process: run the main loop in the background */

    return 0;
}

The above code forks a child process, the child process executes the main loop, and the parent process terminates immediately.

When running the above program on the terminal, we should see that the ./simpleservice command returns almost immediately and a daemon process keeps running in the background. Stopping our shell session (e.g. with the exit command or killing it by sending a TERM signal to it), does not cause the daemon process to be stopped.

This behaviour can be easily explained: the shell only waits for the completion of the process that it invokes (the parent process), and that process terminates directly after forking the child process, so the shell no longer blocks indefinitely.

The daemon process keeps running (even if we end our shell session), because it gets orphaned from the parent and adopted by the process that runs at PID 1 -- the init system.

Writing a well-behaving daemon

The above code fragments probably look very trivial. Is this really sufficient to create a daemon? You can already guess that the answer is: no.

To learn more about properly writing a daemon, I studied various sources. The first source I consulted was the Linux Daemon HOWTO, but that document turned out to be a bit outdated (to be precise: it was last updated in 2004). This document basically shows how to implement a very minimalistic version of a well-behaving daemon. It does much more than just forking a child process, for reasons that I will explain later in this blog post.

After some additional searching, I stumbled on systemd's recommendations for writing a traditional SysV daemon (this information can also be found by opening the following manual page: man 7 daemon). systemd's daemon manual page specifies even more steps. Contrary to the Linux Daemon HOWTO, it does not provide any code examples.

Although the HOWTO implements more requirements than just a simple fork, it still looked quite simple. Implementing all systemd recommendations, however, turned out to be much more complicated than I expected.

It also made me wonder: why is all this extra work needed? None of the sources that I studied so far explain why all these additional steps need to be implemented.

After some thinking, I believe I understand why: a well-behaving daemon needs to be fully detached from user control, be controllable from an external program, and act safely and predictably.

In the following sections I will explain what I believe is the rationale for each step described in the systemd daemon manual page. Moreover, I will describe the means that I used to implement each requirement:

Closing all file descriptors, except the standard ones: stdin, stdout, stderr

Closing all but the standard file descriptors is good practice, because the daemon process inherits all open files from the calling process (e.g. the shell session from which the daemon is invoked).

Leaving additional file descriptors open may cause them to remain open for an indefinite amount of time, making it impossible to cleanly unmount the partition where the corresponding files are stored. Moreover, it keeps file descriptors unnecessarily allocated.

The daemon manual page describes two strategies to implement closing these non-standard file descriptors. On Linux, it is possible to iterate over the entries of the /proc/self/fd directory. A portable, but less efficient, way is to iterate from file descriptor 3 up to the limit returned by getrlimit for RLIMIT_NOFILE.

I ended up implementing this step with the following function:

#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>

static int close_non_standard_file_descriptors(void)
{
    unsigned int i;
    struct rlimit rlim;

    if(getrlimit(RLIMIT_NOFILE, &rlim) == -1)
        return FALSE;

    for(i = 3; i < rlim.rlim_cur; i++)
        close(i);

    return TRUE;
}
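For completeness, the Linux-specific strategy from the manual page can be sketched as follows. This is an illustration of my own (the function name close_non_standard_file_descriptors_linux is an assumption, not part of my original implementation): it scans the /proc/self/fd directory and closes every descriptor except the standard ones and the descriptor used for scanning the directory itself:

```c
#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>
#include <unistd.h>
#include <fcntl.h>

/* Linux-specific sketch: close all open file descriptors listed in
 * /proc/self/fd, except stdin (0), stdout (1), stderr (2) and the
 * descriptor that is used to read the directory itself.
 */
static int close_non_standard_file_descriptors_linux(void)
{
    DIR *dir = opendir("/proc/self/fd");
    struct dirent *entry;

    if(dir == NULL)
        return 0;

    while((entry = readdir(dir)) != NULL)
    {
        int fd = atoi(entry->d_name); /* "." and ".." yield 0 and are skipped */

        if(fd > 2 && fd != dirfd(dir))
            close(fd);
    }

    closedir(dir);
    return 1;
}
```

In contrast to the getrlimit approach, this variant only touches descriptors that are actually open, which is why the manual page considers it more efficient.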

Resetting all signal handlers to their defaults

Similar to file descriptors, the daemon process also inherits the signal handler configuration of the caller process. If signal handlers have been altered, then the daemon process may behave in a non-standard and unpredictable way.

For example, the TERM signal handler could have been overridden so that the daemon no longer cleanly shuts down when it receives a TERM signal. As a countermeasure, the signal handlers must be reset to their default behaviour.

The systemd daemon manual page suggests iterating over all signals up to the limit of _NSIG and resetting each of them to SIG_DFL.

I did some investigation, and it seems that this method is not standardized by e.g. POSIX -- _NSIG is a constant that glibc defines, and there is no guarantee that other libc implementations provide the same constant.

I ended up implementing the following function:

#include <signal.h>

static int reset_signal_handlers_to_default(void)
{
#if defined _NSIG
    unsigned int i;

    for(i = 1; i < _NSIG; i++)
    {
        if(i != SIGKILL && i != SIGSTOP)
            signal(i, SIG_DFL);
    }
#endif
    return TRUE;
}

The above implementation iterates from the first signal number up to the highest signal number. It skips SIGKILL and SIGSTOP because their handlers cannot be overridden.

Unfortunately, this implementation will not work with libc implementations that lack the _NSIG constant. I am really curious whether somebody could suggest a standards-compliant way to reset all signal handlers.

Resetting the signal mask

It is also possible to completely block certain signals by adjusting the signal mask. The signal mask is also inherited by the daemon from the calling process. To make a daemon act predictably -- e.g. it should do a proper shutdown when it receives the TERM signal -- it is a good idea to reset the signal mask to the default configuration.

I ended up implementing this requirement with the following function:

#include <signal.h>

static int clear_signal_mask(void)
{
    sigset_t set;

    return((sigemptyset(&set) == 0)
      && (sigprocmask(SIG_SETMASK, &set, NULL) == 0));
}

Sanitizing the environment block

Another property that a daemon process inherits from the caller are the environment variables. Some environment variables might negatively affect the behaviour of the daemon. Furthermore, environment variables may also contain privacy-sensitive information that could get exposed if the security of a daemon gets compromised.

As a countermeasure, it is good practice to sanitize the environment block, for example by removing all environment variables with clearenv() or by using a whitelisting approach.

For my trivial example case, I did not need to sanitize the environment block because no environment variables are used.
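For daemons that do need it, a whitelist-based sanitation could be sketched as follows. Note that clearenv() is a glibc extension (it is not part of POSIX), and the function name and the whitelist entries shown here are only an illustration of mine:

```c
#define _GNU_SOURCE /* exposes clearenv() in glibc */
#include <stdlib.h>
#include <string.h>

/* Sketch: drop the entire environment block, then restore only a
 * small whitelist of harmless variables.
 */
static int sanitize_environment(void)
{
    const char *whitelist[] = { "PATH", "TZ" };
    const unsigned int num_entries = sizeof(whitelist) / sizeof(whitelist[0]);
    char *saved[sizeof(whitelist) / sizeof(whitelist[0])];
    unsigned int i;

    /* Save the values of the whitelisted variables */
    for(i = 0; i < num_entries; i++)
    {
        char *value = getenv(whitelist[i]);
        saved[i] = (value == NULL) ? NULL : strdup(value);
    }

    if(clearenv() != 0) /* Drop everything, including sensitive variables */
        return 0;

    /* Restore the whitelisted variables */
    for(i = 0; i < num_entries; i++)
    {
        if(saved[i] != NULL)
        {
            setenv(whitelist[i], saved[i], 1);
            free(saved[i]);
        }
    }

    return 1;
}
```

On libc implementations that lack clearenv(), a similar effect can be achieved by iterating over the environ array and calling unsetenv() for every non-whitelisted variable.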

Forking a background process

After closing all non-standard file descriptors, and resetting the signal handlers to their default behaviour, we can fork a background process. The primary reason to fork a background process, as explained earlier, is to get it orphaned from the parent so that it gets adopted by PID 1, the init system, and stays in the background.

We must actually fork twice, as I will explain later. First, I fork a child process that I call the helper process. The helper process does some more housekeeping work and forks another child process that becomes our daemon process.

Detaching from the terminal

The child process is still attached to the terminal of the caller process, and can still read input from the terminal and send output to the terminal. To completely detach it from the terminal (and any user interaction), we must adjust the session ID:

if(setsid() == -1)
{
    /* Do some error handling */
}

and then we must fork again, so that the daemon can never re-acquire a terminal again. The second fork will create the real daemon process. The helper process should terminate so that the newly created daemon process gets adopted by the init system (that runs on PID 1):

if(fork_daemon_process(pipefd[1], pid_file, data,
  initialize_daemon, run_main_loop) == -1)
{
    /* Do some error handling */
}

/*
 * Exit the helper process,
 * so that the daemon process gets adopted by PID 1
 */
exit(0);

Connecting /dev/null to standard input, output and error in the daemon process

Since we have detached from the terminal, we should connect /dev/null to the standard file descriptors in the daemon process, because these file descriptors are still connected to the terminal from which we have detached.

I implemented this requirement with the following function:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

#define NULL_DEV_FILE "/dev/null"

static int attach_standard_file_descriptors_to_null(void)
{
    int null_fd_read, null_fd_write;

    return(((null_fd_read = open(NULL_DEV_FILE, O_RDONLY)) != -1)
      && (dup2(null_fd_read, STDIN_FILENO) != -1)
      && ((null_fd_write = open(NULL_DEV_FILE, O_WRONLY)) != -1)
      && (dup2(null_fd_write, STDOUT_FILENO) != -1)
      && (dup2(null_fd_write, STDERR_FILENO) != -1));
}

Resetting the umask to 0 in the daemon process

The umask (a setting that globally alters file permissions of newly created files) may have been adjusted by the calling process, causing directories and files created by the daemon to have unpredictable file permissions.

As a countermeasure, we should reset the umask to 0 with the following function call:

umask(0);

Changing current working directory to / in the daemon process

The daemon process also inherits the current working directory of the caller process. It may happen that the current working directory refers to an external drive or partition. As a result, that partition can no longer be cleanly unmounted while the daemon is running.

To prevent this from happening, we should change the current working directory to the root directory, because that is the only partition that is guaranteed to stay mounted while the system is running:

if(chdir("/") == -1)
{
    /* Do some error handling */
}

Creating a PID file in the daemon process

Because a program that daemonizes forks another process, and terminates immediately, there is no way for the caller (e.g. the shell) to know what the process ID (PID) of the daemon process is. The caller can only know the PID of the parent process, that terminates right after setting up the daemon.

A common practice for exposing the PID of the daemon process is to write a PID file: a text file containing its process ID (PID). A PID file can be used to reliably terminate the service when it is no longer needed.

According to the systemd recommendations, a PID file must be created in a race-free fashion, e.g. when a daemon has already been started, a second instance should not be able to create another PID file with the same name.

I ended up implementing this requirement as follows:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

static int create_pid_file(const char *pid_file)
{
    pid_t my_pid = getpid();
    char my_pid_str[12];
    int fd;

    sprintf(my_pid_str, "%d", my_pid);

    if((fd = open(pid_file, O_CREAT | O_EXCL | O_WRONLY, S_IRUSR | S_IWUSR)) == -1)
        return FALSE;

    if(write(fd, my_pid_str, strlen(my_pid_str)) == -1)
    {
        close(fd);
        return FALSE;
    }

    close(fd);
    return TRUE;
}

In the above implementation, the O_EXCL flag ensures that the open() call fails when a PID file with the same name already exists, e.g. one belonging to another daemon instance. If a PID file happens to exist already, the initialization of the daemon fails.
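To illustrate why the PID file is useful: a stop script can read it back and signal the daemon. The following function is a hypothetical sketch of mine (terminate_daemon is not part of the daemonize infrastructure described in this post):

```c
#include <stdio.h>
#include <signal.h>
#include <sys/types.h>

/* Sketch: read the daemon's PID from its PID file and send it a
 * TERM signal, the way an init script would stop the service.
 */
static int terminate_daemon(const char *pid_file)
{
    FILE *file = fopen(pid_file, "r");
    int pid;

    if(file == NULL)
        return 0; /* No PID file: the daemon is probably not running */

    if(fscanf(file, "%d", &pid) != 1 || pid <= 0)
    {
        fclose(file);
        return 0; /* Corrupt PID file */
    }

    fclose(file);
    return kill((pid_t)pid, SIGTERM) == 0;
}
```

A more robust stop script would also remove the PID file afterwards and verify that the process has actually terminated.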

Dropping privileges in the daemon process, if applicable

Since daemons are typically long-running and are often started by the super user (root), they also pose a security risk. If a process is started as root, the daemon process retains root privileges, and it has full access to the entire filesystem if its security gets compromised.

For this reason, it is typically a good idea to drop privileges in the daemon process. There are a variety of restrictions you can impose, such as changing the ownership of the process to an unprivileged user:

if(setgid(100) == 0 && setuid(1000) == 0)
{
    /* Execute some code with restricted user permissions */
}
else
    fprintf(stderr, "Cannot change user permissions!\n");

In my trivial example case, I had no such requirement.
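Instead of hard-coding numeric IDs, a more realistic variant would resolve an unprivileged account by name. This is a sketch of mine (the function name drop_privileges is an assumption, not part of the original code):

```c
#include <pwd.h>
#include <unistd.h>
#include <sys/types.h>

/* Sketch: look up an unprivileged user with getpwnam() and switch to
 * its user and group ID. The group ID must be changed first: after
 * setuid() succeeds, we no longer have the privileges to call setgid().
 */
static int drop_privileges(const char *username)
{
    struct passwd *pwd = getpwnam(username);

    if(pwd == NULL)
        return 0; /* Unknown user */

    return setgid(pwd->pw_gid) == 0
      && setuid(pwd->pw_uid) == 0;
}
```

A daemon started as root could, for example, call drop_privileges("daemon") right after daemonizing, before accepting any input.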

Notifying the parent process when the initialization of the daemon is complete

Another practical problem you may run into with daemons is that you do not know for sure when they are ready to be used. Because the parent process terminates immediately and delegates most of the work, including the initialization steps, to the daemon process (that runs in the background), you may attempt to use the service before its initialization is done. For example, if the daemon sets up a network service, a client that connects right after starting the daemon may find that the network link does not work yet.

Furthermore, there is no way to know for sure how long it takes before all the daemon's services become available. This is particularly inconvenient for scripting.

For me personally, notification was the most complicated requirement to implement.

systemd's daemon manual page suggests using an unnamed pipe. I ended up with an implementation that looks as follows:

Before doing any forking, I create a pipe and pass the corresponding file descriptors to the utility function that creates the helper process, as described earlier:

int pipefd[2];

if(pipe(pipefd) == -1)
    return 1; /* Cannot create the pipe */

if(fork_helper_process(pipefd, pid_file, data,
  initialize_daemon, run_main_loop) == -1)
    return 1; /* Cannot fork the helper process */

/* In the parent: wait for a notification from the helper or daemon process */

The helper and daemon processes use the write end of the pipe to send notification messages. I ended up using it as follows:

static pid_t fork_helper_process(int pipefd[2],
  const char *pid_file,
  void *data,
  int (*initialize_daemon) (void *data),
  int (*run_main_loop) (void *data))
{
    pid_t pid = fork();

    if(pid == 0)
    {
        close(pipefd[0]); /* Close unneeded read-end */

        if(setsid() == -1)
        {
            notify_parent_process(pipefd[1], STATUS_CANNOT_SET_SID);
            exit(STATUS_CANNOT_SET_SID);
        }

        /* Fork again, so that the terminal can not be acquired again */
        if(fork_daemon_process(pipefd[1], pid_file, data,
          initialize_daemon, run_main_loop) == -1)
        {
            notify_parent_process(pipefd[1], STATUS_CANNOT_FORK_DAEMON_PROCESS);
            exit(STATUS_CANNOT_FORK_DAEMON_PROCESS);
        }

        exit(0); /* Exit the helper process, so that the daemon process gets adopted by PID 1 */
    }

    return pid;
}

If something fails, or the entire initialization process completes successfully, the helper and daemon processes invoke the notify_parent_process() function to send a message over the write end of the pipe to notify the parent. In case of an error, the helper or daemon process also terminates with the same exit status.

I implemented the notification function as follows:

static void notify_parent_process(int writefd, DaemonStatus message)
{
    char byte = (char)message;
    while(write(writefd, &byte, 1) == 0);
    close(writefd);
}

The above function simply sends a message (of only one byte in size) over the pipe and then closes the connection. The possible messages are encoded in the following enumeration:

typedef enum
{
    STATUS_INIT_SUCCESS                  = 0x0,
    STATUS_CANNOT_CHDIR                  = 0x2,
    /* ... more error statuses ... */
    STATUS_CANNOT_SET_SID                = 0xc
}
DaemonStatus;

The parent process will not terminate immediately, but waits for a notification message from the helper or daemon processes:

DaemonStatus exit_status;

close(pipefd[1]); /* Close unneeded write end */
exit_status = wait_for_notification_message(pipefd[0]);
return exit_status;

When the parent receives a notification message, it will simply propagate the value as an exit status (which is 0 if everything succeeds, and non-zero when the process fails somewhere). The non-zero exit status corresponds to a value in the enumeration (shown earlier) allowing us to trace the origins of the error.

The function that waits for the notification of the daemon process is implemented as follows:

#define BUFFER_SIZE 1024 /* value assumed */

static DaemonStatus wait_for_notification_message(int readfd)
{
    char buf[BUFFER_SIZE];
    ssize_t bytes_read = read(readfd, buf, 1);

    if(bytes_read == -1)
        return STATUS_CANNOT_READ_FROM_PIPE; /* an error status in the enumeration */
    else if(bytes_read == 0)
        return STATUS_UNKNOWN_DAEMON_ERROR; /* write end closed without a notification */
    else
        return buf[0];
}
The above function reads a single byte from the pipe (blocking as long as no data has been sent and the write end of the pipe has not been closed) and returns the byte that it has received.

Exiting the parent process after the daemon initialization is done

This requirement overlaps with the previous requirement and can be met by calling exit() after the notification message was sent (and/or the write end of the pipe was closed).

Discussion


I really did not expect that writing a well-behaving daemon (that follows systemd's recommendations) would be so difficult. I ended up writing 206 LOC to implement all the functionality listed above. Maybe I could reduce this amount a bit with some clever programming tricks, but my objective was to keep the code clear, decomposed into functions, and understandable.

There are solutions that alleviate the burden of creating a daemon. A prominent example would be BSD's daemon() function (that is also included with glibc). It is a single function call that can be used to automatically daemonize a process. Unfortunately, it does not seem to meet all requirements that systemd specifies.
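A minimal usage sketch of daemon(), under the assumption that a glibc-based system is used (glibc exposes the function via the _DEFAULT_SOURCE feature test macro); the wrapper name become_daemon is mine:

```c
#define _DEFAULT_SOURCE /* exposes BSD's daemon() in glibc */
#include <stdio.h>
#include <unistd.h>

/* Sketch: daemon(nochdir, noclose) forks, calls setsid() and, with
 * both arguments set to 0, changes the working directory to / and
 * redirects the standard file descriptors to /dev/null.
 */
static int become_daemon(void)
{
    return daemon(0, 0) == 0;
}
```

A program would call become_daemon() early in main() and then enter its main loop. Note that daemon() covers only a subset of the steps listed above: for example, it neither resets signal handlers, nor creates a PID file, nor notifies the invoking process when initialization is complete.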

I also looked at many Stack Overflow posts, and although they correctly cite systemd's daemon manual page with the requirements for a well-behaving daemon, none of the solutions that I could find fully meets all requirements -- in particular, I could not find any good examples that implement a protocol that notifies the parent process when the daemon process has been successfully initialized.

Because none of these Stack Overflow posts provide what I need, I decided not to use any of them as an example, but to start from scratch and puzzle all the relevant pieces together myself.

One aspect that still puzzles me is how to "properly" iterate over all signal handlers. The solution hinted at by systemd is non-standard, as it requires a glibc-specific constant. Some sources say that there is no standardized equivalent, so I am still curious whether there is a recipe that can reset all signal handlers to their default behaviour in a standards-compliant way.

In the introduction section, I mentioned that daemons are still common practice in UNIX-like systems, such as Linux, but that this is changing. IMO this is for a good reason -- services typically need to reimplement the same kind of functionality over and over again. Furthermore, I have noticed that not all daemons meet all requirements, so they could behave incorrectly. For example, there is no guarantee that a daemon correctly writes a PID file with the PID of the daemon process.

For these reasons, systemd's daemon manual page also describes "new style daemons", that are considerably easier to implement with less boilerplate code. Apple has similar recommendations for launchd.

With "new style daemons", processes just spawn in foreground mode, and the process manager (e.g. systemd, launchd or supervisord) takes care of all "housekeeping tasks" -- the process manager makes sure that it runs in the background, drops user privileges etc.

Furthermore, because the process manager directly invokes the daemon process (and as a result knows its PID), controlling a daemon is also less fragile -- the requirement that a PID file needs to be properly created is also dropped.

Availability


The daemonize infrastructure described in this blog post is used by the example webapp that can be found in my experimental Nix process framework repository.
