Tuesday, April 14, 2026

Building customized Linux distributions


Last year, I did a number of interesting retro-computing projects, such as running Linux and NetBSD on my Amiga 4000, and building a retro PC to run interesting applications from the late 90s, early 2000s. To run these kinds of applications, I deployed four kinds of classic PC operating system installations: Windows 98 SE, Windows XP, Slackware Linux 8.0 and MS-DOS 6.22.

It was fun to experiment with these classic operating systems. From a very young age, I have enjoyed exploring different kinds of operating systems, including their underlying concepts and capabilities.

One of the challenges that I ran into was producing a Slackware 8.0 Linux installation that covered all my needs. Already in 2001 (when this Linux distribution was still mainstream), I faced customization issues with all kinds of Linux distributions and found myself spending a considerable amount of time extending and tuning Linux-based systems.

I ran into the same kinds of challenges in my retro-computing experiments. As in 2001, the amount of customization work motivated me to construct a Linux installation from scratch that fully implements my desired configuration for late-90s, early-2000s experiences.

To make the work easier, I decided to dig up old knowledge and old solutions. I could re-invent the wheel, but I consider it far more efficient to revive my old personal Linux from Scratch-related projects, which I actively developed between 2001 and 2009.

The first step in that process is refurbishing my custom-developed automation tool.

In this blog post, I will explain the background of my customization work and show my automation tool.

Background


There are a number of reasons why no existing Linux distribution fully covered my needs, and why I became motivated to heavily customize Linux distributions and even build my own distribution from scratch.

The modularity of Linux systems


One important reason is that Linux systems are modular and assembled from a variety of components maintained by many different kinds of developers. Distributions sometimes make choices that I do not prefer.

Contrary to the other operating systems that I mentioned in the introduction, Linux is, technically speaking, not a full operating system, but an operating system kernel. The kernel manages hardware resources and provides an interface to services (such as process management, memory management and filesystems) that applications can use. The kernel does not do anything directly for the user -- Linux must be combined with various components to become a useful system.

Linux is often complemented with tools from the GNU project, whose purpose is to build a free (libre) UNIX-like operating system. For this reason, the GNU project argues that most Linux-based systems should be called GNU/Linux systems.

The reason why Linux is so strongly connected to the GNU project is that Linus Torvalds has, from its beginning in 1991, used components from the GNU project, such as the GNU C Compiler (GCC) and the GNU assembler, to compile the Linux kernel. Moreover, he complemented the first release of the Linux kernel with the GNU Bash shell to produce a usable system.

Furthermore, in 1991 a working kernel was one of the pieces still missing from the GNU project's completely free UNIX-like system (GNU's own kernel, GNU Hurd, was already in development before 1991, but not quite usable at the time Linux was first released).

In addition to the GNU project, many packages from other kinds of projects are used, such as the X Window System, and the KDE and GNOME desktops.

Various parties distribute pre-assembled Linux systems, called Linux distributions. Already since the 90s, there have been many kinds of Linux distributions, and there is no single accepted mainstream Linux distribution.

Although many conventional Linux distributions share many kinds of packages in addition to the Linux kernel (e.g. GNU packages, the X Window System and other kinds of utilities), there are also numerous differences between them. For example, distributions may differ in the selected versions/variants of packages, the package manager used, complementary tools, the default desktop, and the amount of customization for specific user groups.

Some Linux distributions are tailored towards the needs of (non-technical) end-users and have a substantial amount of end-user software packages included, including games and productivity software, which is really convenient.

There are also Linux distributions that use a completely different set of complementary components rather than the commonly used ones. A prominent example is Android, a commonly used mobile operating system.

Android uses the Linux kernel, but most of the remaining operating system components are custom developed and substantially different from those in conventional Linux distributions. For example, apart from the GNU build toolchain, the Android project does not use any GNU utilities, the GNU C library, or the X Window System.

The distribution of Linux applications


Already since the mid 90s, most Linux distributions have included package managers that handle the life-cycle of software packages, implementing activities such as installing, upgrading and removing packages.

Package managers make the installation of packages quite convenient -- with a single command, you can easily install packages from a distribution's package repository, including all required dependencies. Some distributions have nice graphical front-ends for their package managers.

Despite the fact that package managers are powerful and convenient, application deployment is not a fully solved problem for non-technical end-users. For example, it is often not straightforward to deploy external software packages that are not part of a Linux distribution.

I ran into the same kinds of problems already in 1999. Although Slackware is convenient and covers many of my needs, I wanted to extend my Slackware 8.0 installation with quite a few non-distribution packages: development tools, games, and random utilities. I also had to tweak various configuration aspects to make certain applications more convenient to use.

Very few third-party software developers provide pre-built binary packages for Linux distributions. Despite the fact that many Linux distributions offer similar kinds of packages (e.g. the Linux kernel, GNU tools and libraries), they are not fully binary compatible with each other: the versions of libraries in each distribution may be slightly different, and the locations where these dependencies can be found may differ as well. This is a problem that most third-party developers find inconvenient or impossible to solve.

There are some external projects that do offer pre-built binary packages (albeit for a limited set of Linux distributions only), but this is the exception rather than the rule.

It is a far more common practice that Linux application developers distribute their software in source code form only (e.g. a tarball with source code and build scripts). The responsibility of providing pre-built binary packages is delegated to the package maintainers of a Linux distribution or the users themselves.

It is a bit unfortunate that Slackware has never been a target for many third-party software developers, so the only way to go forward is to compile packages from source code myself.

The fact that you can compile software packages from source code is IMO a double-edged sword -- I consider it a gift (for technical users, such as myself) that you can study and modify software packages, but also a burden, in particular for non-technical end-users, because of the amount of technical detail that you may get exposed to.

Fortunately, many packages can be built from source code in a straightforward way, such as:

./configure
make
make install

The above procedure often suffices to deploy packages in source code form that use GNU Autotools to manage their build infrastructure, which is the most common solution for source packages -- the configure script checks the configuration of the system and provides configuration settings for the build procedure, make builds the project from source code, and make install installs the package on the target machine.

However, despite the fact that the procedure shown above is mostly straightforward, there are various kinds of problems that you may run into, such as:

  • Missing dependencies. Packages are rarely self-contained -- they rely on the presence of existing packages on the system. For example, you need a number of tools to build C/C++ projects (e.g. GNU binutils, GCC, GNU Make).

    To build an application that integrates with the GNOME desktop, you need the GTK+ library to be present.

    If a dependency is missing, the build procedure will fail. Some packages may also have dependencies on packages that are not present in the Linux distribution's repository. These dependencies must then be compiled from source code first.
  • There may be all kinds of incompatibilities causing a build to fail. For example, the project may use features of the C programming language that only a newer gcc compiler supports. Required libraries may be present, but in the wrong versions (for example, they may be too old).
  • Some additional configuration aspects need to be solved. For example, for my Slackware 8.0 installation I deployed a movie player, MPlayer, from scratch. The package does not include a .desktop file, making it impossible to conveniently launch the program from the KDE and GNOME application launchers. I had to create these configuration files myself.
  • The lifecycle of an external package is not managed by the distribution's package manager. If you run make install, the package gets installed, but at some point you may want to remove or upgrade it. Although some projects offer a make uninstall target, this is not guaranteed.

    Furthermore, if you remove the source code tree after compiling it, uninstalling can no longer be done automatically. There are ways to properly package something for your distribution's package manager, but these are often not trivial.
  • It may take quite a bit of time to build some projects. I still remember that compiling OpenOffice.org in 2002 took me roughly 6 hours.
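To illustrate the .desktop point above: a minimal desktop entry of the kind I had to write by hand for a manually installed MPlayer could look like this (a sketch; the Exec and Icon values are assumptions, and the exact format in the KDE 2/GNOME era differed slightly from today's XDG specification):

```
[Desktop Entry]
Type=Application
Name=MPlayer
Comment=Movie player
Exec=mplayer %F
Icon=mplayer
Categories=AudioVideo;Player;
```

Such a file is typically dropped into the applications directory that the desktop environment scans, after which the program shows up in the application launcher.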

Nowadays, Linux is quite successful on servers, mobile devices (through Android) and embedded devices, but on the desktop it is a different story, even today.

According to Linus Torvalds, the lack of a strong degree of lasting binary compatibility heavily contributed to the fact that Linux never reached a big audience on the desktop. It makes it difficult for software developers to ship software for Linux, and because relevant external applications are hard to obtain in binary form, it is difficult for users to deploy them.

To cope with this problem, the Linux kernel development community adopted a policy stating that they "do not break userspace". If applications rely on the kernel in a certain way and this has become an established practice, the kernel developers consider its breakage a kernel bug that needs to be fixed.

Although the Linux kernel has adopted a compatibility policy, other critical components did not. For example, the GNU C Library (a.k.a. glibc, the library collection that almost all packages directly or indirectly rely on) had incompatible API changes in the past. The same applies to the standard C++ library in GCC.

Despite the fact that compatibility is still a problem today, work has been done to improve the situation:

  • The Linux Standard Base (LSB) was an effort to provide a standardized subset of a system -- libraries, tools and a file system organization -- to ensure binary compatibility amongst Linux distributions. It designated the RPM package manager as the standard package management solution. Despite the effort, not many distributions were willing to adopt it (most likely due to some controversial choices, such as RPM), with the exception of some distributions targeting large companies, such as Red Hat Enterprise Linux. The effort was stopped in 2015.
  • Some distributions have much larger package repositories compared to 20 years ago, making it more likely for end-users to obtain the software they want. For example, Debian and Nixpkgs (the package set that NixOS uses) have a substantial amount of packages in their repositories (i.e. thousands) maintained by a large number of contributors.
  • There are universal deployment solutions available that work across multiple Linux distributions. The most prominent example of such a solution is IMO Docker, which gained widespread adoption. Unfortunately, Docker is mostly used for server software and development tools, not for end-user applications.

    For end-user applications, various universal solutions exist. Some prominent examples are Flatpak, Snappy, Nix, and Guix. In the past, I wrote blog posts comparing the properties of some of these solutions. Despite having significant userbases, none of these solutions has gained widespread acceptance yet.

Shared libraries


Another compelling reason for me to build a Linux system completely from scratch is the compatibility problems that you may run into when you need to update shared libraries.

Using shared libraries is not a feature unique to Linux (or other kinds of UNIX-like systems). Many operating systems have them -- I was already exposed to the idea of shared libraries in AmigaOS. Moreover, Windows also prominently supports shared libraries.

Using shared libraries has a number of benefits:

  • Reuse. Many applications provide the same kinds of functionality. For example, there are quite a few applications that need to work with image formats, such as JPEG, GIF and PNG. Rather than reimplementing this kind of functionality from scratch, developers can utilize shared libraries that provide encoding and decoding functionality, saving precious development time.

    The same idea applies to applications with graphical user interfaces -- developers typically use shared libraries for implementing the GUI elements, such as buttons, text fields and menus, to save development time and to be sure that the application integrates well with the desktop environment.
  • Resource optimization. Another advantage of using shared libraries is that they reduce the size of the executables. When using shared libraries, functionality is not integrated into the executables, but executables refer to external library files that are stored only once on disk. As a result, it helps to considerably reduce the total disk space consumption.

    In Linux, shared libraries are also only stored in RAM once. As long as the memory pages are not changed, this concept also reduces RAM consumption when multiple applications with common functionality run concurrently.
  • More upgrade flexibility. When a bug or security issue is found, users can replace a shared library with a newer version, fixing all dependent applications without having to modify the executables themselves.

In Linux and many other UNIX-like systems, the amount of reuse between applications is heavily optimized, almost to the maximum. In a typical Linux installation, there are quite a few shared library files.

Shared libraries also have drawbacks:

  • Applications are no longer self-contained. All required shared libraries must be present on the system before running the application. If any of them are missing, the application will most likely not work.
  • Compatibility problems after upgrading. Newer versions of shared libraries may not be backwards compatible with older versions. Sometimes backwards incompatibility is unintentional -- for example, an application may rely on certain odd behaviour of a shared library (e.g. a bug). Such an application can break if an error-corrected library is installed.

I have seen quite a few incompatibilities between versions of libraries. For example, libpng 1.6 is no longer backwards compatible with libpng 1.2. If the default version on your Linux distribution is 1.2, you cannot replace the existing libpng installation with version 1.6, or applications that require PNG functionality will break.

It is possible, for example, to install an incompatible version of libpng alongside an existing version, but doing this is not straightforward: you have to instruct the build scripts of packages to work with libraries residing in a different location than usual (normally, shared libraries are installed in /usr/lib, but you can install a different version under a different prefix, such as /opt/libpng-1.6.1/lib, and instruct build scripts to use that version). Coping with these unconventional installation setups is definitely not something you want to bother non-technical end-users with.
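A sketch of what such an unconventional setup looks like in practice: after installing the second libpng under its own prefix, you point a dependent package's build scripts at it via environment variables (the prefix path is illustrative; the variables are conventions honored by most Autotools-based configure scripts):

```shell
# Illustrative prefix for a second, incompatible libpng installation
PREFIX=/opt/libpng-1.6.1

# Point a dependent package's Autotools build at that prefix
export CPPFLAGS="-I$PREFIX/include"
export LDFLAGS="-L$PREFIX/lib"
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig"

# At run time, the dynamic linker must also find the library,
# e.g. via LD_LIBRARY_PATH or an rpath baked in at link time
export LD_LIBRARY_PATH="$PREFIX/lib"

echo "builds will now pick up libpng from $PREFIX"
```

Running ./configure in a package's source tree with these variables set would then build it against the alternative libpng instead of the one in /usr.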

Some libraries are really hard to configure to co-exist on a single system. For example, glibc (the standard C library) is a library package that almost every package on a Linux system directly or indirectly relies on (because most software is written in C or requires a component implemented in C).

Upgrading to a new minor version, e.g. from 2.2.4 to 2.2.5, is a compatible upgrade, but upgrading to a new major version (e.g. from 2.2 to 2.3) is not. As a result, it is impossible to simply replace the previous version. Installing two versions of glibc next to each other and configuring a set of applications to use the other version is technically possible, but extremely hard to do by conventional means.

About the modularity aspect of operating systems


Although Linux systems have, due to their origins, a number of drawbacks when it comes to application deployment (including upgrading), I do appreciate their modularity.

IMO modularity is a valuable aspect of an operating system that I believe has become a forgotten property. Nowadays, the most commonly used consumer operating systems (e.g. Windows, Android, macOS and iOS) have very weak modularity properties and their installations are quite substantial in size.

With modularity you can conveniently grow the features of your software installation, or shrink them if needed. If your operating system has a well-defined core that is not too large and flexible enough, you can even shrink the size of an installation so much so that it can run on a single floppy disk. This used to be a common property of a number of operating systems frequently used in the past.

Moreover, IMO modularity also significantly helps in understanding the architecture of an operating system. If it has a relatively simple core, knowing its essentials gives you a compass to base your knowledge on and to find your way. I consider this to be very valuable -- an operating system should be developed to serve its users, not the other way around.

AmigaOS


I was first exposed to the idea of a modular operating system when I was still actively using AmigaOS as my main operating system. I already knew that AmigaOS consisted of multiple parts such as:

  • The kernel is called Exec. It is a microkernel that is responsible for memory allocation, task management, library management, message passing and interrupt handling.
  • The sub system that handles process management, file systems and the command-line interface is called AmigaDOS.
  • The sub system that does graphical window management is called Intuition.
  • Workbench is the graphical desktop environment.

Most of the above operating system components reside in the Kickstart ROM. They are instantly present after powering up the machine.

The earliest Amiga models, the Amiga 1000 and Amiga 500 (the most popular and frequently used Amiga model), were primarily floppy-based systems -- these Amiga models did not include a hard drive by default.

As a result, it was a common practice to work with many kinds of bootable disks: when switching on the machine, it shows a splash screen requesting the user to insert a bootable floppy disk.

AmigaOS was flexible enough for people to create custom boot disks, and this feature was frequently used. Most of the available Amiga applications were distributed as bootable floppies. (As a sidenote: most commercial games used custom bootblocks, bypassing most of the facilities of AmigaOS.)

I have also spent a great amount of time creating customized boot disks and toying around with settings, including visual settings, such as adjusting the text, background and window colors and modifying the mouse cursor. I have also created boot disks for the games I developed with AMOS Professional.

Creating a minimal bootdisk in AmigaOS is very straightforward. On the command-line prompt, I can format a disk in the primary disk drive with the following command:

FORMAT DF0:

and make it bootable by installing a bootblock on it:

INSTALL DF0:

Running the above two commands suffices to have a minimal bootdisk. After bootup, it will show a window with a command-line interface.

A minimal bootdisk can then be enhanced with additional features, such as custom applications.

For example, if you want to boot into the Workbench, you need to copy two additional executables from the Workbench disk to the boot disk and store them in the DF0:C directory:

  • LoadWB loads the Workbench desktop session.
  • EndCLI ends the currently active command-line interface session.

If you want the Workbench to be loaded at boot time, you can create the following script, DF0:S/Startup-Sequence, to accomplish this:

LoadWB
EndCLI >NIL:

MS-DOS


Around the same time, I also learned that modularity was common on PCs using MS-DOS (or IBM PC-DOS). Although I did not have a PC myself, I had plenty of experience because of friends and relatives, and the fact that I could emulate a PC on my Amiga using the KCS PowerPC board.

In MS-DOS, you can format a floppy disk with the system flag to make it bootable:

FORMAT A: /S

Alternatively, you can transfer the system files from an existing installation (such as the hard-drive) to an already formatted floppy disk with:

SYS C:\ A:

The above commands suffice to create a boot disk showing a command-line prompt.

A bootable MS-DOS floppy disk contains three files:

  • COMMAND.COM is the command-line interpreter program.
  • IO.SYS contains the default MS-DOS device drivers and the DOS initialization program.
  • MSDOS.SYS contains the kernel and is responsible for file access and program management.

In the past, I used bootable MS-DOS floppy disks for a variety of reasons. For example, I used them to distribute some of my custom-made MS-DOS QBASIC programs.

I typically formatted a bootable floppy disk, copied QBASIC.EXE to it along with my programs, and added a menu to select a program to run (sometimes I used a menu manager program, sometimes a custom-developed menu using the CHOICE command-line tool).

For example, to automatically boot from a floppy disk into a QBASIC program, I can create the following AUTOEXEC.BAT file:

@ECHO OFF
QBASIC MYPROG.BAS

Windows 95 and later versions


A couple of years after Commodore's demise (late 1996), I made the transition to the PC platform permanently. By this time, Windows 95 became the main recommended operating system for home PCs instead of MS-DOS.

I was happy with all the new hardware capabilities that my new PC provided (a faster CPU, more RAM, more storage, better graphics, better sound, etc.) and software capabilities (such as pre-emptive multi-tasking, better filesystem support, etc.), but I was troubled by the resource costs of running Windows 95.

Windows 95 requires several hundred megabytes of storage. Back then, the total capacity of my hard drive was only 2 GiB. Furthermore, it had the option to enable and disable certain components, but even in its most basic form it was quite resource-demanding.

Also, it was as good as impossible to create small, customized installations anymore. It was also much harder to know which components Windows 95 consisted of and how to control them. The time that it took for Windows 95 to boot also disturbed me a bit.

Although Windows 95 still gave users the option to create bootable MS-DOS disks (because essentially Windows 95 was still a Windows + MS-DOS hybrid), the ability to create custom installations was pretty much lost. In later versions of Windows, that functionality became completely unavailable.

Linux


Fortunately, when I actively started to use Linux in 1999, I gradually learned that Linux has the modularity properties that I cared about. While studying the available Linux HOWTOs, I discovered the "From Power Up To Bash Prompt" HOWTO, which explains how the boot process of a typical Linux system works and which components are involved.

Building a minimal Linux-based system is relatively straightforward. As explained earlier, the component that is called Linux is only a kernel. Linux manages hardware resources and provides an interface for operating system services to applications.

When the Linux kernel is loaded and its initialization is done, it starts an external program called the init process. By default, Linux tries a number of executables, such as /sbin/init and /bin/init, but the init process can also be specified through the init= kernel parameter. From the perspective of the kernel, the init process can be any kind of program.

Because the init process is flexible, you can, for example, create a very minimal Linux-based system that directly starts the GNU Bash shell (by specifying /bin/bash as the init process). After the initialization of the kernel has completed, you will directly see a shell prompt, and you have super-user privileges immediately.
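With an era-appropriate LILO configuration, for example, such a minimal system can be selected at boot time (a sketch; the image path and root device are assumptions that depend on your machine):

```
# Hypothetical /etc/lilo.conf entry that boots straight into Bash
image = /boot/vmlinuz
  label = minimal
  root = /dev/hda1
  append = "init=/bin/bash"
  read-only
```

After running lilo to install the updated configuration, choosing the minimal label at the boot prompt makes the kernel start Bash as its init process.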

Alternatively, you can also, for example, use BusyBox, a toolset providing a collection of useful commands/programs embedded in a single executable.

In conventional Linux distributions, the Linux kernel is never directly loaded after powering up a machine -- a bootloader program is responsible for this. There are a variety of bootloaders available for a variety of platforms. Today, GRUB and systemd-boot are the most common solutions on the PC platform. In the past, LILO was frequently used.

(As a sidenote: older versions of the Linux kernel contained a boot sector of their own, making it possible to boot them directly from a floppy disk without a bootloader, but this feature was eventually removed.)

Conventional Linux distributions typically use an init program that has a number of responsibilities. In the past, a program called sysvinit was commonly used. Nowadays, it is often systemd.

sysvinit is a process supervisor and primitive service manager that loads a number of processes on startup (defined in a configuration file, /etc/inittab). Typically, this inittab starts an initialization script that sets the boot process in motion (such as loading system services) and a number of terminal session managers:

  • Terminal sessions are often managed by the agetty program. Typically a number of them (e.g. six) are started concurrently. With the Alt+F1-F6 key combinations you can switch between them.
  • agetty typically starts a login manager (/bin/login) showing a login prompt.
  • The login manager starts the login shell of the user if a login was successful (in most conventional Linux distributions, the default login shell of a user is /bin/bash).
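A heavily stripped-down /etc/inittab illustrating this structure might look as follows (a sketch; the initialization script path, runlevels and agetty arguments vary per distribution):

```
# Default runlevel
id:3:initdefault:
# System initialization script (path varies per distribution)
si::sysinit:/etc/rc.d/rc.S
# Respawning terminal sessions on the first two virtual consoles
1:12345:respawn:/sbin/agetty 38400 tty1 linux
2:12345:respawn:/sbin/agetty 38400 tty2 linux
```

Each line consists of an identifier, the runlevels in which it applies, an action (such as respawn, which restarts the process when it exits), and the command to run.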

Similar to AmigaOS and MS-DOS, Linux also used to be flexible enough to form a minimal system that can be used from a boot floppy.

(As a sidenote: that property was eventually lost because floppy drives have become obsolete and the Linux kernel has grown too large to fit on a floppy disk. However, as of today it is still possible to produce Linux-based systems that are relatively small. Tiny Core Linux is an example of a Linux distribution that can provide a FLTK/FLWM desktop that is roughly 16 MiB in size.)

Limitations of my Slackware 8.0 installation


I have managed to get quite a few applications running on my Slackware 8.0 installation that were not part of the distribution. Furthermore, I have applied various kinds of configuration changes to make my life more convenient, such as adding XDG desktop files so that applications can be started from the KDE program launcher.

Despite my customization efforts, my Slackware 8.0 installation is still not completely how I want it. For example, Slackware 8.0 includes the KDE 2.1 desktop. For my retro-computing project, I prefer to use the latest KDE version in the 2.x range: KDE 2.2.2. Upgrading to the next Slackware release (9.0) is not an option, because that version includes KDE 3.1.

Although it is technically possible to package KDE 2.2.2 for Slackware 8.0, I know that I would break some existing applications, because libraries would get replaced with incompatible versions, and I would require newer versions of some existing dependencies. I consider this configuration process too tedious and time consuming to do on my Slackware installation.

As a result, I have decided to build my own customized Linux distribution from scratch. In my custom distribution, I can pick all the variants of the packages that I want.

The Linux from Scratch book


Building a custom Linux distribution from scratch may sound very ambitious, but it is actually a well-documented process -- I did not have to re-invent the wheel.

In 2001, I heavily studied the available HOWTOs on the Linux Documentation Project page. In addition to the "From Power Up To Bash Prompt" HOWTO I also discovered the Linux from Scratch HOWTO.

The first HOWTO's purpose is to cover the components involved in the boot process. The scope of the Linux from Scratch (LFS) HOWTO goes far beyond the booting process -- its purpose is to construct a fairly complete bare-bones Linux system using variants of packages that are commonly used in mainstream Linux distributions.

In addition to assembling a fairly complete bare-bones system, the Linux from Scratch book imposes additional constraints:

  • Ensuring correctness. The purpose of the book is to create a Linux system with exactly the versions of the packages that you have selected, built in the way that you want (e.g. using your selected optional settings). These packages are typically different from your host system's packages. As a result, you cannot copy binaries from the host system to your target system.
  • Building all packages from source code. As explained earlier in this blog post, it is common for Linux packages to be distributed in source form as a universal packaging format. For your own custom distribution, you do not have a package manager at your disposal that offers pre-built binaries. As a consequence, building everything from source code is the only way forward.
  • Making sure that your assembled system is self-contained. When building your custom system, you do not want to retain a dependency on anything that resides on the host system from which the builds are done. Otherwise, your custom Linux system is no longer self-contained. Unfortunately, for most packages, following their regular build procedures results in dependencies on shared libraries of the host system. You must follow a strategy to break that dependency.
  • Bootstrapping the GNU build toolchain. You will face a chicken-or-the-egg problem with the GNU Compiler Collection (GCC) package, which includes a C compiler. The paradox is that GCC itself is written in C. As a result, you first need the host system's gcc compiler to compile it. However, you also want gcc (and the additional build tools) to be built in such a way that the resulting binaries are not influenced by how your host system's gcc compiles. To cope with this, GCC can bootstrap itself -- it compiles itself multiple times: first with the host system's compiler, then with the intermediate compiler that compiles the target compiler.
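The bootstrapping process above can be summarized as a rough outline (a sketch of the LFS approach; the exact steps and the /tools prefix vary per edition of the book):

```
pass 1: build binutils and a minimal gcc with the host compiler,
        installing them into a separate prefix (e.g. /tools)
pass 2: build glibc with the pass-1 tools, then rebuild binutils and
        gcc against that glibc -- the toolchain no longer depends on
        how the host system's compiler behaves
chroot: enter the new root, build the final glibc and toolchain, and
        compile every remaining package with them
```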

I also learned that the Linux from Scratch book has its own homepage that typically contains a more up-to-date version of the book.

In addition to the book (describing how to construct a basic system), there are a number of interesting sister projects, such as:

  • The Beyond Linux From Scratch (BLFS) book describes how to extend a bare bones LFS system with software packages to make users more productive, such as utilities, productivity software, desktop environments and server software.
  • The hints project contains various kinds of externally contributed documents describing how to extend or modify your LFS system.

Deploying Linux from Scratch installations


I have successfully deployed many kinds of LFS-based installations between 2001 and 2009.

If your goal is to learn about the composition of Linux systems, then I recommend faithfully following the book and manually typing in all the shell instructions. In my own experience, manually typing in the commands helps you process and understand the provided information.

However, if your goal is also to regularly use LFS-based installations as a basis for developing custom systems, then I would recommend automating the process to keep it manageable.

Currently, there are several kinds of solutions available that can help you. For example, there is a sub project called Automated Linux from Scratch (ALFS), in which various kinds of automated solutions were developed.

Currently, jhalfs is the preferred implementation for automating the Linux from Scratch book. However, the main purpose of this tool is to extract and execute the shell instructions from the book (the DocBook XML code, to be precise), not to facilitate custom installations.

Back in 2001, when I first started to experiment with the Linux from Scratch book, there were no automated solutions. As a consequence, I developed my own. My custom solution went through a number of iterations to become what it is now:

The big script


In the very beginning, I used to type in all commands manually. Quite quickly, I realized that this process is quite tedious if you have to repeat it frequently -- some packages take a long time to build, leaving you waiting at your computer for a long time. For example, gcc and glibc took over an hour to build on my computer in 2001.

Another drawback is that I sometimes made mistakes, e.g. typos or not faithfully following the versions of packages described in the book (e.g. glibc 2.3.1 instead of glibc 2.2.5). The consequence of these mistakes was that I typically had to reformat my LFS partition and start over again.

The first form of automation that I used was quite simple -- rather than typing the shell instructions into the console, I put them in a giant shell script. After typing in all the commands, I executed the script in one go.

(As a side note: to be precise, my solution consisted of two monolithic scripts. The first script constructed the bootstrap system. After completing the first script, I had to change the root directory to the LFS partition by using the chroot command. In the chroot environment, I ran the second script that executed the remaining steps.)

Although my technique was not particularly elegant, using a script was a big help in the construction of my own basic LFS system. Although I could not prevent mistakes, the advantage of using a script is that the process became repeatable. It was no longer required for me to sit behind my computer for hours. When I made a mistake, I could correct the problem in the shell script, discard my broken LFS installation and repeat the process.

Scripts and sequences


I told various kinds of people about my successful LFS experiences, including my cousin. He wanted to borrow my scripts to see if he could roll out LFS on his computer. He was not successful due to some differences between his machine and mine. Moreover, because my script was huge, he told me that it was quite hard to read and modify.

I took that feedback into account -- to improve readability and maintainability, I split my monolithic shell script into small parts.

The Linux from Scratch book is organized into many sections: for example, each package build and each configuration step (e.g. generating a configuration file) is described in its own section, giving the reader a pretty good overview of the components that a system consists of and the progress that was made.

I have decided to organize my refactored scripts in a similar way: each package and configuration step was stored in a separate file. A coordinating shell script (that I would eventually call a sequence script) was used to call these individual scripts in the required order.

In addition to better code quality, this new organization also gave me another benefit -- I could now also more easily automate the deployment of the additional packages that I wanted to use on top of it.

Previously, after successfully constructing a bare bones system with my monolithic shell script, I was still compiling additional packages by hand, such as the X Window System, various image decoding libraries and Window Maker -- a window manager providing a desktop experience similar to NeXTSTEP. Although their deployment procedures are not too complicated, it was still time consuming to repeat it.

With my new organization, I ended up automating the construction of all my packages, including the ones that were not part of the Linux from Scratch book.

As a result of having more convenient automation, I was also able to greatly expand my custom package collection -- I ended up packaging many utilities, networking software, the entire GNOME and KDE desktops and various end-user applications, such as XMMS, MPlayer and The GIMP.

Making the process interruptable and resumable


Although my refactorings made it possible to automate package builds more easily and get more software packages deployed, I also became wary of the growing time costs.

In the beginning, the time to deploy the base system was considerably longer than the custom packages collection, but that gradually changed. After adding desktops (such as GNOME and KDE) to my packages collection, deploying the custom package collection took significantly longer than the base system. When I added the Mozilla Application Suite, another five hours of build time was added.

As a consequence of these additions, the time it took to build the custom package sequence also became much less predictable than that of the base system.

I used time estimations to plan the right moments to leave my computer alone to automatically build my system. For example, when my system was still small, I knew that a build would take roughly four hours -- I was able to start the build before going to a side job, and when I got back, the construction of my system would be finished.

I got into the nasty habit of leaving my computer switched on at night to build packages. My computer was located in the back of my bedroom. This habit negatively affected my quality of sleep. :)

Then I came up with another feature addition to my build scripts: making the execution of a sequence of scripts interruptable and resumable.

To implement this functionality, I had to re-organize my scripts. I developed a tool that does the orchestration -- it processes a whitespace-separated string specifying the scripts to execute. Rather than having scripts that directly execute something, I used functions to encapsulate their executions.

The tool keeps track of the scripts that have been executed. I could set a breakpoint after the execution of a specific script in a sequence so that the process would stop. When the sequence is started again, it skips the scripts that were previously executed and resumes the process where it was previously interrupted. This interrupt feature was immensely helpful in making me sleep better.
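The core of this mechanism can be sketched as follows. Note that the function name and the state-file format below are my own illustration, not CBT's actual implementation:

```shell
#!/bin/sh -e
# Sketch of a resumable sequence runner: every completed script is recorded
# in a state file, so a later run skips the recorded scripts and resumes
# where the previous run was interrupted.

statefile=.sequence-state

runSequence()
{
    for script in $scripts
    do
        # Skip scripts that a previous run already completed
        if grep -qx "$script" "$statefile" 2>/dev/null
        then
            continue
        fi

        "./$script"                      # execute the step
        echo "$script" >> "$statefile"   # record its completion
    done
}
```

A breakpoint then simply amounts to stopping the loop after a given script; rerunning the sequence picks up where it left off.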

Optional execution of scripts


Some time later, I ran into another interesting use case. For a while, I only deployed my custom LFS installation to my desktop machine. Some time later, I wanted to use another, less powerful, PC as a router and file server. For this machine, I also wanted to use my own custom Linux distribution.

Because this second machine had a different kind of purpose, I only wanted to deploy a subset of packages to it. For example, it was not required to deploy any desktops, such as KDE or GNOME, to it.

To make smaller deployments possible, I added a new feature to my build tool: optional sequences, in which only selected scripts are executed. I also developed another tool that allows you to configure which scripts in a sequence should be executed.
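A sequence runner can support this with a small predicate. In the sketch below (the enabled-scripts selection file is hypothetical, not CBT's actual format), a script of an optional sequence only runs when the user has selected it:

```shell
#!/bin/sh -e
# Sketch of optional sequence execution. When optional="true", a script only
# runs if it appears in the user's selection file; otherwise every script in
# the sequence is considered mandatory and runs.

shouldRun()
{
    if [ "$optional" = "true" ]
    then
        grep -qx "$1" enabled-scripts 2>/dev/null
    else
        true
    fi
}

runSequence()
{
    for script in $scripts
    do
        if shouldRun "$script"
        then
            "./$script"
        fi
    done
}
```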

Adding simple package management


Earlier in my blog post, I have explained that one of the drawbacks of deploying source packages is that the life-cycle of the package is not managed by the host system's package manager.

After using my custom built Linux distribution for a while, I discovered the tgz.txt hint from the Linux from Scratch hints sub project that provided me with a very easy and practical solution for fully managing the life-cycle of a package: using checkinstall in combination with the Slackware package manager.

checkinstall is a tool that you typically run in combination with the make install command.

checkinstall internally uses another tool, called installwatch, that records file modifications while executing a process. installwatch preloads a custom library (via LD_PRELOAD) that intercepts a number of library calls that modify files.

checkinstall uses the files identified by installwatch during an installation process to automatically create a package for the host system's package manager (Slackware, RPM or Debian). Finally, the package is deployed by using the host system's package manager so that its life-cycle can be managed.
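The effect of installwatch can be illustrated in a simplified way without any library call interception: compare a filesystem snapshot taken before the installation command with one taken afterwards. This is only an approximation for illustration purposes -- installwatch itself hooks library calls via LD_PRELOAD and therefore also catches overwrites and deletions, which a snapshot diff of new files does not:

```shell
#!/bin/sh -e
# Simplified illustration of what installwatch achieves: determine which
# files an installation command creates, so they can be put into a package.

recordInstalledFiles()   # usage: recordInstalledFiles <destdir> <command...>
{
    destdir="$1"; shift
    find "$destdir" -type f | sort > /tmp/before.$$
    "$@"                                   # run the installation command
    find "$destdir" -type f | sort > /tmp/after.$$
    comm -13 /tmp/before.$$ /tmp/after.$$  # files only present afterwards
    rm -f /tmp/before.$$ /tmp/after.$$
}
```

The resulting file list is exactly the kind of input a tool like checkinstall needs to assemble a binary package.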

The Slackware package manager is not the most feature-rich or powerful package manager out there. For example, it has no notion of dependencies. Compared to the other two package managers that checkinstall supports (RPM and Debian) it is the most primitive from a feature perspective.

However, despite its limitations it does have an advantage for Linux from Scratch users -- the Slackware package manager only relies on a Bourne compatible shell and common shell utilities, such as tar. As a result, it can be used quite early in the construction process of a system and you do not need many additional software packages beyond the base system.

If you want to use RPM, you need many kinds of dependencies deployed first (such as Berkeley DB, zlib, GNU Privacy Guard etc.) before RPM itself can be built from scratch. If you want to use checkinstall and rpm together, you need to build all these packages twice.

To automatically create Slackware packages for all relevant components, I have developed an abstraction: in each script, I only need to specify the build and installation procedures. The installation procedure is automatically monitored by checkinstall so that a Slackware package is automatically created.

Abstractions


After adding a primitive package management solution, I did not make any substantial feature changes to my scripts and tools. I did improve the quality of the tools by developing additional abstractions to make the process more convenient. By this time, I had already learned much more about shell scripting.

I have noticed that some of the shell commands that I had to execute in my scripts were quite repetitive. For example, when compiling a package I always had to extract the tarball, enter the source code directory, apply patches and remove the build directory after completion. In the Linux from Scratch book, most of these details are also typically left out of the instructions.

Eventually, these details were abstracted away from my build scripts collection. By using these abstractions, for most conventional packages I only had to specify the name of the source tarball, some other metadata, the build procedure and the installation procedure. The remaining steps (such as extracting the tarball, entering the build directory and removing it afterwards) were automatically executed by the tool.
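The following sketch shows the kind of abstraction involved (a reconstruction for illustration; CBT's real deploySourcePackage function differs in its details). A package script only supplies the metadata and the two phases, while the housekeeping steps are shared:

```shell
#!/bin/sh -e
# Sketch of a deploySourcePackage-style abstraction. A package script defines
# $name, $version, $src, buildPhase() and installPhase(); the housekeeping --
# unpacking the tarball, entering the build directory, cleaning up -- is
# handled once, here.

deploySourcePackage()
{
    builddir="$name-$version"
    tar xzf "$src"            # unpack the source tarball
    cd "$builddir"            # enter the build directory
    buildPhase
    installPhase              # CBT would run this step through checkinstall
    cd ..
    rm -rf "$builddir"        # remove the build directory afterwards
}
```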

Deploying packages and customizations on conventional Linux distributions


Originally, my build tool was developed to deploy custom Linux distributions based on Linux from Scratch.

At some point during my studies, I had a side job where I was a part time software developer and system administrator. In this job, I had to configure a LAMP stack (Linux, Apache, MySQL, PHP) for hosting my custom developed web applications.

I have used various kinds of Linux distributions, such as Mandrake, Slackware, Fedora Core and Ubuntu. I already knew that it was a bad idea to deploy Linux distributions completely from scratch for production systems that also had to be maintained by my colleagues -- there is too much complexity involved in managing a custom built Linux distribution and it is too time consuming.

Although using a conventional distribution saved me time, it was unavoidable for me to still compile certain packages from source code. For example, my Slackware distribution included a copy of the Apache HTTP server, but it was an old version and it did not include a PHP installation with all the database extensions that I wanted. Moreover, the included MySQL package was also old. As a result, I had to compile these packages from source code, because there were no alternatives.

I have noticed that the process of building custom packages from source code on a conventional Linux distribution was almost identical to building custom packages on my Linux from Scratch installation.

To automate custom package builds in my side job, I have separated the build tool from the distribution so that it can also be used on top of conventional Linux distributions. I also made additional checkinstall configuration properties configurable, for example, to use checkinstall to produce an RPM or Debian package, rather than a Slackware package.

Naming my build tool


Back in 2001-2009, my build tool did not have its own name -- I named it after my custom Linux distribution, which itself had various kinds of names. I will not go into detail about that, but it may be an interesting discussion for a future blog post. :-)

Because the tool can also be used independently, I have decided to give it its own name: CBT (Conservative Build Tool).

Why is it called CBT? It is a very simple tool that is implemented for one kind of job only: building systems from scratch. It does not do full package management -- package deployment is a much more complicated problem than only building a system.

For example, upgrading already deployed systems is a much harder problem than deploying a system from scratch -- when upgrading a package you also need to take its dependencies into account and make sure that they do not break. Moreover, ensuring reproducible builds is also complicated and not something the tool helps you with. There are much better tools out there that take care of these additional deployment tasks.

CBT has no intention of changing to address these additional issues and becoming a more advanced tool. That is why it is called conservative -- it has an inertia to change.

A quick demonstration of its features


As explained earlier in this blog post, in my own build solution I came up with the idea to define build processes as sequences of scripts.

The following file shows an example of a script:

#!/bin/bash -e

source $cbtFunctionsDir/deploySourcePackage
source $cbtBaseDir/settings/testsequence

name=hello
version=2.1.1
group=Tools/Development
description="GNU Hello"
src="$name-$version.tar.gz"

showLongDescription()
{
    cat << "EOF"
The GNU Hello program produces a familiar, friendly greeting. GNU Hello
processes its arguments list to modify its behavior, supports greetings in many
languages, and so on.

The primary purpose of GNU Hello is to demonstrate how to write other programs
to do these things; it serves as a model for GNU coding standards and GNU
maintainer practices.
EOF
}

buildPhase()
{
    ./configure --prefix=/usr
    make
}

installPhase()
{
    make install
}

The purpose of the above script is to build an example package, GNU Hello, from source code and install it under the usual directory prefix: /usr.

The script defines the following properties:

  • The script includes a file: deploySourcePackage that provides a function abstraction that conveniently manages the build process, sparing me from specifying steps such as unpacking the tarball, entering the build directory and removing the build directory afterwards.
  • The name attribute is mandatory and specifies the name of the package.
  • The group attribute is mandatory and specifies the group where the script belongs to.
  • The description attribute specifies a description of the package that is used in the selection menu.
  • The src attribute refers to the source tarball that the build needs. The build abstraction function: deploySourcePackage will automatically unpack the tarball and enter the resulting build directory.
  • The showLongDescription function generates a long description of the package describing it in more detail. This long description is also added to the package's meta data.
  • The buildPhase specifies all command-line instructions that need to be executed to build the package from source code. The build infrastructure will also run these commands as an unprivileged user in a protected build directory to prevent a build script from modifying the host system.
  • The installPhase specifies all command-line instructions that need to be executed to install the package. These instructions are executed by checkinstall so that a Slackware package is automatically created and deployed.

The following file shows a partial example of a sequence (sequences/extras):

#!/bin/bash -e

optional="true"

scripts="\
Libraries/Multimedia/smpeg \
Libraries/Multimedia/SDL_mixer \
Libraries/Multimedia/SDL_image \
Libraries/Multimedia/SDL_net \
Applications/hello \
Applications/dia \
Applications/OOo \
Games/supertux \
Games/tuxracer \
Games/duke3d \
...
"

The above file (sequences/extras) is a shell script that sets two variables:

  • The optional variable specifies whether the scripts inside the sequence are optionally executed. By default all scripts are disabled and need to be enabled by the user. If this property is false, then all scripts are considered mandatory and are executed by default unless they are disabled by the user.
  • The scripts variable contains a whitespace separated list of paths to scripts that need to be executed. Applications/hello refers to the script shown in the previous example.

I can use the following command to display a TUI allowing me to select the scripts that I want to execute in a sequence:

$ cbt-cfg-seq sequences/extras

First the TUI will show me the available groups:


After selecting a group, I can select the scripts that I want to execute:


After the selection is complete, I can run the following command to execute my selected scripts in the sequence shown earlier:

$ cbt-run-seq sequences/extras

While running the above command, the first terminal shows me the scripts that are currently executed (I can switch to it with the Alt+F1 key combination). The second terminal (that can be displayed with the key combination: Alt+F2) shows me the actual build output.

If the execution of the sequence takes too long, I can set a breakpoint. For example, to set a breakpoint after the current script in execution I can run:

$ cbt-break-next sequences/extras

In addition to the currently executed script, I can also set a breakpoint after any script in the sequence. Running the following command shows a TUI allowing me to select where to set the breakpoint in the sequence:

$ cbt-break-seq sequences/extras

I can clear the breakpoint with the following command:

$ cbt-clear-brk sequences/extras

In 2001-2009 I was using my build tool to roll out up-to-date Linux from Scratch distributions. I only maintained a single collection of source packages. When I updated packages, I used to discard older versions.

However, in 2026, my goal is to use CBT to deploy various kinds of legacy distributions for retro-computing purposes, instead of up-to-date Linux from Scratch distributions. I now store all my source packages (typically multiple versions of them) in a single catalog.

Sometimes, it can be useful to query the required external files, such as tarballs and patches, that you need to compile packages in a sequence so that you can transfer these files to an installation medium such as a CD-ROM or USB flash drive.

For this purpose, I have developed a new tool that can help me query the required files:

$ cbt-seq-deps sequences/extras
d/dos2unix/dos2unix-3.1.tar.gz
d/unix2dos/unix2dos-2.3.tar.gz
h/hexedit/hexedit-1.2.3.src.tar.gz
S/SDL_mixer/SDL_mixer-1.2.5.tar.gz
S/SDL_image/SDL_image-1.2.4.tar.gz
S/SDL_net/SDL_net-1.2.5.tar.gz
C/ClanLib/ClanLib-0.6.5-1.tar.gz
C/ClanLib/ClanLib-0.6.5-1-glXGetProcAddressARB.patch
...

The above tool shows all the external files that are required to execute the scripts in the provided sequence. These files are derived from the files, src, srcs and patches variables of a script, or any variable that is specified in the dependencyVariables variable.
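Such a query can be implemented by sourcing every script of a sequence in a throw-away subshell and printing the file-declaring variables, without running any build phase. The sketch below is my own illustration and assumes that sourcing a script only sets variables:

```shell
#!/bin/sh -e
# Sketch of a dependency query: source each script in a subshell (so the
# variables it sets do not leak into later iterations) and print the
# variables that declare external files.

listSequenceDependencies()
{
    for script in $scripts
    do
        (
            . "./$script"
            for file in $src $srcs $patches
            do
                echo "$file"
            done
        )
    done
}
```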

Discussion


As a teenager, I often felt that whatever I created was the best thing in the world. Nowadays, I look at my own work more critically.

With CBT, it became doable for me to roll out my own customized systems based on Linux from Scratch and deploy source-based packages on top of existing Linux distributions.

It gave me the following benefits:

  • It automates the process, making it repeatable.
  • It makes the process interruptable and resumable. This feature helped me to sleep better and to use my computer for work when I needed it.
  • It invokes checkinstall so that the life-cycle of deployed packages can be managed by the host system's package manager.
  • It makes it possible to customize my system by letting me enable or disable individual scripts in a sequence.
  • It provides abstractions that perform common housekeeping tasks, saving me implementation time.

However, there are also many drawbacks. To name a few:

  • No notion of dependencies. Not knowing the exact dependencies of packages has a number of drawbacks. For example, you must order the scripts in a sequence manually in such a way that no dependency will be missed.

    Moreover, packages may have mandatory and optional dependencies. If a mandatory dependency is missing the build of a package will fail. CBT cannot ensure dependency completeness because dependencies are unknown.

    Furthermore, if you knew the exact dependencies of each package, you could also optimize system deployments by executing builds of packages that have no dependency on each other in parallel on multi-core/multi-processor machines.
  • No purity/reproducibility. A package's binary contents depend on the entire state of the system. This means that building a package today may give a different binary package than building the same package yesterday.

    In an earlier build, fewer packages may have been installed and certain configuration aspects may have been different. There are all kinds of artifacts on a system that may influence a build without you being aware of it. For example, a package may have an undeclared dependency on something that accidentally resides on the system.
  • Weak protection. Although the build phase runs as an unprivileged user, preventing the build script from touching anything outside the dedicated build directory, there is no protection against possible harm done in the install phase, which runs as the root user. While installing a package, files belonging to other packages (or to any of the users) may be removed or overwritten, affecting the operation of the system. In bad situations, you may have to redeploy your entire system from scratch again.

The above drawbacks only state weaknesses that are related to deploying a system from scratch. Deployment becomes even more complicated if you intend to upgrade an already deployed system:

  • Builds are very likely to be impure when you upgrade an existing system. As a result, you cannot be sure that an upgrade delivers the same result as deploying a system from scratch.
  • As mentioned earlier in this blog post, most packages are installed in global namespaces, such as /usr/bin and /usr/lib. If you upgrade packages, then binaries in these directories will be replaced. Moreover, if you upgrade to an incompatible newer version of a package that other packages rely on, you may break the system. Although it is possible to have incompatible packages co-exist on a system (e.g. by installing them in separate directories), doing this is not trivial.
  • If upgrades get interrupted in the middle, then you may end up having a system in an inconsistent state that is difficult to repair.

An interesting anecdote


Building customized Linux systems has been a valuable skill to me. In my early side jobs it was immensely helpful for enhancing Linux server installations, because I typically had to deploy software packages that were not part of a Linux distribution's package repository.

Late 2007, I was looking for an assignment for my Master's thesis. I met Eelco Visser, who would supervise both my Master's thesis and, later, my PhD thesis. One of the ongoing projects in his research group was related to software deployment. One of his PhD students, Eelco Dolstra, developed the Nix package manager. By then, a small community had already formed around it. Moreover, a prototype Linux distribution was built around Nix: NixOS.

Meeting Eelco Visser and Eelco Dolstra confirmed to me that package management was a complex subject. I was very surprised and amazed to see that package management (and software deployment in general) was an academic research topic.

Moreover, I learned that there is much more we can do in the area of package management to make package deployments reliable, reproducible and efficient. For example, I learned that Nix executes package builds much more reliably because all packages are stored in isolation in the Nix store and builds are performed entirely as an unprivileged user.

Because of my past experience, I was very motivated to do research in this area. So much so that, after the completion of my Master's thesis, I had hardly any doubts about becoming a PhD student doing research in distributed software deployment.

Moreover, the reason why I started my blog is to supplement my research with practical information.

Why not use Nix?


Since I am familiar with Nix, NixOS and other kinds of solutions in the Nix ecosystem, you may wonder why I am using my own primitive solution for this project rather than Nix.

This project is a retro-computing project targeting Linux deployments from early 2002. My tool predates Nix. Nix did not exist in 2002 yet -- its development started in 2003.

Furthermore, in 2003 Nix was only a research prototype. It took a few years before it became mature enough to be used from a practical point of view.

Using a modern Nix version is also not an option -- it uses modern C++ practices that cannot be used with the old GCC version that my custom Linux system includes. Moreover, backporting Nix is anything but trivial -- Nix is not a small code base and has many complex features.

Moreover, even if I could backport a modern Nix implementation, there are also substantial costs in using its features -- for example, it requires a substantial amount of RAM to evaluate complex package sets. My retro PC does not have enough RAM to do that.

Despite the limitations of CBT, a strong advantage is that it does not require many system resources or complex dependencies.

Conclusion


In this blog post, I have explained my motivation for building customized Linux distributions and described the most important ingredient that makes the process doable: my own (somewhat primitive) custom build tool.

I have been actively using this build tool between 2001-2009 to deploy customized Linux distributions based on Linux from Scratch. For many years, I have used my customized Linux distribution on most of my personal systems.

After not having touched the tool in years, I have revived it for my retro computing experiment with the purpose to build a custom Linux distribution from early 2002 for my retro PC. Moreover, I have also used CBT to automate package builds on my Slackware 8.0 installation.

Since I have a use case for it and I have invested a considerable amount of time and effort in it in the past, I have decided to release CBT on my GitHub page. Back in 2001-2009 I always had the intention to release the tool and the distribution as free and open-source software, but the younger version of me always procrastinated on one thing: writing decent end-user documentation. I have decided to fix that now.

Although I have not been deploying any modern Linux from Scratch installations in years and I have little motivation to do that now, I can still strongly recommend the Linux from Scratch book to anyone who wants to learn about the structure of Linux systems and how to deploy software from source code.

In my next blog post, I will show the legacy Linux distribution that I constructed for my retro PC.

Tuesday, December 30, 2025

15th annual blog reflection

Today it's my blog's 15th anniversary. As usual, this is a nice opportunity to reflect on the last year's writings.

Some reflection


In 2024, it was very quiet on my blog. I only wrote one blog post (about configuring my recently acquired Amiga 4000 machine) and I was not happy about my progress. This year, I have managed to improve my cadence by breaking some of my projects up into smaller chunks and trying not to multi-task too much.

Amiga development


In 2025, I resumed my Amiga-related fun projects. After configuring my second-hand Amiga 4000 to my liking, I decided to install and run Linux on it -- running Linux on an Amiga has always been a fascinating use case to me. Already in 2001, I gathered quite a bit of knowledge about Linux, its internals and its portability aspects.

Since 2001 I have been curious to see how it would work on an Amiga machine, rather than a standard x86-based PC. Unfortunately, until 2022 I only had access to an Amiga 500, which is incapable of running Linux -- an Amiga 500 contains the first-generation Motorola 68000 processor, which lacks a memory management unit (MMU), a hard requirement for running Linux. After obtaining my Amiga 4000 that has a 68040 CPU (with an MMU), I could finally see how it would work in practice.

It turns out that running Linux on an Amiga is quite a challenge. There is some information available on the Internet and a Linux distribution that (somewhat) supports it: Debian. Unfortunately, much of the relevant information that I need is scattered, sometimes outdated, and not always well-written. As a result, I have decided to write a blog post about my experiences, so that all information can be obtained from a single location.

I was only expecting a handful of people to find such a blog post interesting. At first, it did not attract that many visitors. A couple of months later, it appeared on various news web sites, such as Amiga News, Hacker News, Reddit, and OSnews. As a result, it not only reached the Amiga community, but also a broader development community.

Thanks to this wide exposure, my Linux/Amiga related-blog post is now one of my most frequently read blog posts. I am quite happy to see such a broad exposure -- although the Amiga is nowadays mostly an obsolete platform, the Linux kernel still supports it. I hope my blog post makes Linux on Amiga more useful to people who want to explore or improve it.

In 2022, I learned that, in addition to Linux, NetBSD (another UNIX-like operating system) also supports the Amiga and has substantially improved its Amiga support. After experimenting with Linux on my Amiga 4000, I decided to give NetBSD a try as well and report on my experiences. This blog post was covered on the same news sites and also attracted quite a few visitors.

Another interesting Amiga project that I worked on was mounting my KCS PowerPC board-emulated PC hard drive in AmigaOS and Linux so that I can conveniently exchange files (to clarify: my Amiga 500 contains an extension board that makes it possible to emulate a PC and run PC software).

Previously, I had to rely on floppy disks or a null modem cable to exchange files with my KCS PowerPC board-emulated PC installation, which is slow and inconvenient.

In 2025, I got quite frustrated by this limitation. In contrast to my emulated PC instance, it is easy to exchange data with my Linux and NetBSD installations from AmigaOS and Linux -- there are ext2 and Berkeley Fast File System modules for both operating systems.

Already in the 90s I was convinced that these problems could be solved, because AmigaOS is flexible enough to support many kinds of file systems through external AmigaDOS drivers. There are also several kinds of FAT file system DOS drivers for AmigaOS, but none of them work with my emulated PC drive. Unfortunately, in the 90s I had neither the knowledge nor the resources to fully solve the problem.

In 2025, I revisited the problem and wrote two blog posts covering my solutions: an AmigaOS Exec driver and a network block device (NBD) driver for UNIX-like systems, such as Linux.

Building a retro PC


After completing my Amiga projects, I became motivated to build a retro PC with late 90s hardware, supplemented with a couple of modern peripherals (such as a CF2IDE and a GoTek floppy emulator device) to make data exchange more convenient.

My motivation to build this retro PC was the discovery that the ability to run old games on modern computers is not as good as I thought it would be. Furthermore, I believe it is good to preserve some significant historical computer technology.

Conclusion


I am happy to have seen some improvements and that some of my writings have had an impact.

I have plenty of ideas for 2026, so stay tuned.

The last thing I would like to say is:


HAPPY NEW YEAR!!!!

Tuesday, October 21, 2025

Building a late 90s retro PC


Last year I went to the cinema and enjoyed watching Dune: Part Two, a movie adaptation of the first Dune novel written by Frank Herbert. I still have fond memories of reading the novel for my English literature classes at secondary school -- the book is very detailed and it took me quite a bit of effort to read it. Fortunately, I already knew many parts of the storyline thanks to a computer game: Dune 2000, which I very much enjoyed playing in the late 90s.

After seeing the movie I wanted to play Dune 2000 again. The game is quite old -- it was released in 1998 by Westwood Studios. Originally, I ran the game on my parents' PC, a Pentium 166 with 32 MiB RAM running the first edition of Windows 98.

Running the game on a modern PC turned out to be quite challenging. Although some hardware in PCs is still fairly backwards compatible with even the earliest PC models (for example, modern CPUs can still run the real mode instructions that were commonly used in the MS-DOS era in the 80s and early 90s), compatibility of a modern machine with an old PC software product is far from guaranteed.

On Windows 11, Dune 2000 does not work out of the box. Windows 11 requires a 64-bit CPU and runs in 64-bit mode, and 64-bit versions of Windows have dropped support for 16-bit Windows and MS-DOS applications. Although the Dune 2000 game is a 32-bit executable, the installer turns out to be 16-bit.

Fortunately, I found an alternative installer on the PC Gaming Wiki developed by the retro computing community allowing me to install and play Dune 2000 on Windows 11. Although this installer made it possible to play Dune 2000 for a while, I have noticed that Windows 11 updates can also break the playability of the game again.

I also tried a few other games from the late 90s and for many of them I learned that it is quite hard to make them run properly on Windows 11. As a result, I was not happy with the current state of affairs when it comes to backwards compatibility.

Some time later, because of the lack of good application compatibility in Windows 11, I got motivated to install Windows 98 Second Edition (SE) in PCem, a PC emulator allowing me to set up an emulated machine that was comparable to the first PC that I bought myself in the late 90s. A bare bones Windows 98 installation is somewhat impractical to use so I also started to collect useful drivers and utilities that I used at that time.

After my experiences setting up an old PC software configuration (Windows 98 SE) and working with retro computers (various Commodore models), I became motivated to build a retro PC comparable to the first PC I bought with my own money as a teenager in 1999.

Before 1999 I only owned obsolete computers, such as the Commodore 64 and Amiga 500, which explains why I have quite a bit of experience with them. I had to share my access to a modern PC with other family members. Saving the money to buy a modern computer of my own took me quite a bit of time and effort.

In this blog post, I will report about my experiences building my retro PC and describe my experiences using it.

Finding a good base machine


All the desktop PCs I have owned were custom built. Building your own PC from parts is an interesting process -- I always enjoyed looking at the specifications of various hardware components and seeing how they can be combined into a reasonably priced machine. In the Netherlands, there are quite a few shops where you can buy computer parts.

In contrast to modern PCs, finding the right parts for old machines is extra challenging, because most of them cannot be found in conventional shops. Fortunately, there are web sites such as eBay and Marktplaats (a Dutch e-commerce web site for trading second hand goods) that offer many kinds of used goods, including computer parts and peripherals.

In 1999 I used to have an Abit BE6 motherboard. Back then I was torn between this motherboard and an ASUS P2B. I picked the Abit BE6, because it had an Ultra ATA 66 controller that was supposed to be faster than a conventional IDE controller.

After doing a search on Marktplaats, I found a second hand custom built PC containing an ASUS P2B-F motherboard and a number of parts that are close to my desired system configuration:

  • Motherboard: ASUS P2B-F. This is a motherboard model I specifically looked for.
  • CPU: Intel Pentium III 450 MHz. This CPU is slightly slower than I used to have. My first PC had a 500 MHz model.
  • 192 MiB RAM. My first PC used to have 128 MiB of RAM. It is fine to have a little bit more.
  • Graphics card: ATI Rage 128. This card was comparable in features and performance to an NVIDIA RIVA TNT card, supporting true-color 3D graphics.
  • Sound card: SoundBlaster PCI 128. A basic sound card with decent DOS compatibility.
  • Network card: RealTek 8139
  • Floppy drive: a 1.44 MB 3.5-inch drive
  • CD-ROM drive: a Lite-On (52x speed)
  • Hard drive: an 8 GiB Maxtor N256 IDE drive

Finding additional parts


I made the following adjustments to the machine's composition so that it becomes closer to the machine I used to own in the late 90s:

  • I replaced the ATI Rage 128 video card with a Diamond Viper V770 (containing a NVIDIA RIVA TNT2 chipset) that I found on eBay. This is the same kind of video card I used to own in the late 90s and is more powerful than the ATI card.
  • My 90s machine used to contain a fancier sound card: a Sound Blaster Live! I replaced the Sound Blaster PCI 128 with a Sound Blaster Audigy 2 card that I removed from one of my previous computers. The Audigy 2 is an improved version of the Sound Blaster Live!
  • My 90s machine was also upgraded with a DVD-ROM player. As a result, I was able to watch DVD movies on it. On Marktplaats I found a very cheap LG DVD-ROM player that is also capable of burning writable CDs.
  • I used to have a Microsoft Sidewinder USB joypad to play games with. I found the exact same model on Marktplaats.

In addition to replacing traditional parts, I have also decided to buy some additional parts to make retro-computing more convenient:

  • A GoTek floppy emulator device. Similar to my old 8-bit Commodore and Amiga computers, I am running into the issue that floppies have become an inconvenient medium. Modern PCs no longer include a floppy drive by default. Although I still have an external USB floppy drive that I can use, it remains inconvenient, because floppy disks are slow and have limited storage capacity.

    Similar to my old Commodore computers, it is also possible to use a GoTek floppy emulator device as a substitute for a physical floppy drive in a PC. Rather than floppy disks, a GoTek allows me to use a USB memory stick with disk images. I can use the rotary selector to pick the disk image I want to use.

    My motherboard only has one floppy drive socket, but it is possible to connect both the physical floppy drive and the GoTek floppy emulator at the same time with a single cable with two connectors.

    I have made the GoTek floppy emulator the primary disk drive, but I can also still use the original floppy drive as a secondary disk drive if I want to. The fact that the traditional floppy drive can still be used is convenient for backing up content from old physical floppy disks.

    I have followed the instructions on this documentation page to configure my GoTek -- I created a directory named FF on my USB stick and added a configuration file named FF.CFG with the following settings:

    interface = ibmpc
    display-type = oled-128x64-rotate
    

    The above settings specify that the emulator exposes an IBM PC-style disk drive interface and rotate the display text so that it no longer appears upside down.
  • A CF2IDE device. Hard drives do not last forever. Similar to my Amiga 500's hard drive, the Maxtor hard drive in my retro PC is showing age-related problems. For example, it makes a weird continuous buzzing sound.

    I have decided to replace the hard drive with a CF2IDE device, similar to my Amiga 4000. This device allows me to use Compact Flash cards as a replacement for physical hard drives.

    Another advantage (similar to my Amiga's SD2SCSI and CF2IDE devices) is that I can conveniently switch memory cards at the back of the machine allowing me to easily work with many kinds of software installations.

Software configurations


As already explained, a CF2IDE device allows me to conveniently switch memory cards at the back of the machine. As a result, I have been able to produce a number of interesting software configurations that I can easily experiment with.

Windows 98 SE


The first configuration I produced is a working Windows 98 Second Edition (SE) installation. At the time I bought my own PC in 1999, this was the mainstream operating system commonly used on consumer PCs. I still had a copy lying around.

Back in 1999 there were two concurrent Windows product lines. The most commonly used versions were DOS/Windows hybrids, starting with Windows 95, followed by Windows 98, Windows 98 Second Edition, and Windows Millennium Edition (ME). I will call this the Windows 9x product-line.

Although these Windows versions appear to be "modern" graphical operating systems, they still carry much of Microsoft's MS-DOS legacy -- you could still clearly observe that they consist of a DOS and Windows part: first, the system boots into the text-based MS-DOS mode, optionally loading DOS device drivers and DOS programs, such as TSRs, and then the Windows desktop environment is started. By using the boot menu (that can be reached by pressing F8 on startup), it is also possible to boot into MS-DOS mode, if desired.

There was also the NT-product line, starting with Windows NT 3.1, which from a visual perspective looked very similar to Windows 3.1, but was developed from the ground up to be more powerful, portable and robust.

In mid 1999, Windows NT 4.0 was the latest version in the NT-product line, having a graphical shell that looked similar to Windows 95's.

Although the Windows NT product-line was more promising and gained some adoption for business use, it was not yet widely used by consumers, because it required much more system resources and its backwards compatibility with MS-DOS applications and games was not as good as that of the DOS/Windows hybrids. Moreover, Windows NT lacked popular multimedia features, such as hardware-accelerated Direct3D graphics.

At the end of 1999, Windows 2000 was released as the successor to Windows NT 4.0, improving various features including multimedia support.

I consider Windows 98 SE to be the best option in the Windows 9x product-line for playing games and running commonly used Windows software from the late 90s. 98 SE has more features than previous versions, but not the mistakes of its successor: Windows Millennium Edition.

Windows Millennium Edition added a few nice features, such as support for more modern hardware, but it removed the option to boot into MS-DOS mode, preventing me from playing a number of interesting MS-DOS games. Moreover, Windows Millennium Edition also had quite a few stability problems.

Installing Windows 98 SE and most of the software packages was generally straightforward. For example, I can install Windows 98 SE simply by booting from a Windows 98 boot disk and running SETUP.EXE from the CD-ROM.

Configuration challenges


There are a number of configuration challenges that I ran into while setting up a Windows 98 SE installation:

Intel chipset driver


By default, the Intel 440BX chipset is not detected by Windows 98 SE, so I had to install a driver for it. I installed this version that I found online.

Realtek network card driver


My network card was also not recognized -- I installed this driver that I found online.

Video card driver


Windows 98 SE does not automatically detect my graphics card. To use my card, I need to install an external driver provided by NVIDIA. There are a variety of driver versions available for my chipset (RIVA TNT2).

The last driver version that still supports Windows 98 is version 71.84, but I learned that it has all kinds of issues. For example, running the 3DMark 2000 demo does not seem to work at all.

After some searching I learned that it is better to use an older driver version for older NVIDIA cards. After experimenting with a variety of versions, I had pretty good results with version 28.32 -- all the 3D applications that I want to run on my computer seem to work with it.

Old MS-DOS tools


One of the objectives of my Windows 98 SE configuration is to run MS-DOS applications and games. As explained earlier, the Windows 9x product-line is basically a DOS/Windows hybrid. As a result, it has pretty good compatibility with many MS-DOS applications.

Windows 95, 98 and 98 SE include a number of MS-DOS utilities in the default installation, but compared to MS-DOS 6.22 (the last standalone MS-DOS release) the set of included utilities has been trimmed down considerably.

Fortunately, some of the missing MS-DOS tools can be found on the Windows installation disc, in the D:\TOOLS\OLDMSDOS directory. For example, when I was younger I typically wanted to use MS-DOS QBASIC, which was not included in the default installation of Windows 95 and 98. I could still get access to it by copying it from the old MS-DOS tools directory to C:\DOS and adding this directory to the PATH environment variable in AUTOEXEC.BAT:

PATH C:\WINDOWS;C:\WINDOWS\COMMAND;C:\DOS

I have also learned that with each new Windows version in the 9x product-line, the number of old MS-DOS facilities shrinks somewhat. I also still have a copy of Windows 95 OSR1 lying around that contains more classic MS-DOS utilities than the Windows 98 SE disc, including MEMMAKER, which turns out to be quite handy.

Instead of using the MS-DOS supplement utilities for Windows 98 SE, I prefer to use the utilities from the Windows 95 OSR1 CD-ROM.

Using USB memory sticks


Earlier in this blog post I have explained that floppy disks are quite an inconvenient medium to use nowadays. A modern means to exchange data physically is to use USB memory sticks.

My retro PC has two USB ports available, so I also wanted to use USB memory sticks for data exchange. Unfortunately, back in 1999 USB storage devices were not yet standardized -- each vendor provided their own driver. For modern USB storage devices, Windows 98 drivers are no longer provided. As a result, Windows 98 SE fails to detect USB memory sticks.

Fortunately, I learned that somebody has created a general USB storage driver package for Windows 98 that should work with any USB storage device. I used version 3.6, which is straightforward to install.

Although the driver works great, I learned that it was created by integrating a number of components from Windows Millennium Edition, the successor of Windows 98 SE.

The negative side effect of installing the driver is that some elements of my Windows installation have lost their locale settings -- I am using a Dutch version and some Windows Explorer elements have changed to English.

After some searching, I learned that I can extract the original versions of the affected files from the Windows 98 SE CD-ROM by running the following commands in an MS-DOS prompt (D: corresponds to the CD-ROM device):

C:
MD \TEMP
CD \TEMP

EXTRACT D:\WIN98\WIN98_25.CAB SYSDM.CPL
EXTRACT D:\WIN98\WIN98_41.CAB USER32.DLL
EXTRACT D:\WIN98\WIN98_43.CAB EXPLORER.EXE
EXTRACT D:\WIN98\WIN98_44.CAB SYSTRAY.EXE
EXTRACT D:\WIN98\WIN98_45.CAB USER.EXE

I can restore the files by booting the machine into MS-DOS mode (e.g. by pressing F8 on bootup or using the shutdown function) and running the following commands:

C:
CD \TEMP

MOVE SYSDM.CPL C:\WINDOWS\SYSTEM
MOVE USER32.DLL C:\WINDOWS\SYSTEM
MOVE EXPLORER.EXE C:\WINDOWS
MOVE SYSTRAY.EXE C:\WINDOWS\SYSTEM
MOVE USER.EXE C:\WINDOWS\SYSTEM

The only disadvantage of restoring the original files is that the systray icon (that can be used to inspect the status of the USB storage device) can no longer be used. However, I can still reliably unmount a USB drive by opening "My Computer", right-clicking on the device and selecting the "Eject" function.

Mouse driver for MS-DOS


In Windows mode, my PS/2 mouse works out of the box, because Windows includes a driver. Unfortunately, in MS-DOS mode I need to obtain a driver myself.

To get the mouse working I have downloaded CuteMouse -- it was originally developed for FreeDOS, but it also works decently in Windows 98's DOS mode. Its only disadvantage is that it requires a bit of conventional or upper memory, so I only load it when I need it.

CD-ROM driver for MS-DOS


Similar to my mouse, my DVD-ROM drive works out of the box in Windows mode, because Windows has a driver for it. In MS-DOS mode, I again need to install a driver myself.

I found this OAK CDROM driver on VOGONS that seems to be compatible.

By adding the following line to CONFIG.SYS I can load the driver:

DEVICE=C:\DRIVERS\VIDE-CDD.SYS /D:MSCD001

By adding the following line to the AUTOEXEC.BAT file I can mount the CD-ROM drive on startup:

MSCDEX /D:MSCD001

Sound card driver


Another challenge was to get the sound working properly. Although I still have the original driver CD-ROM that includes a convenient installer, I ran into some challenges.

At first, I just used the CD-ROM to install the driver by following the recommended steps. I did not install Creative MediaSource, because I do not need it -- it is an application that turns your Windows installation into an entertainment system. I prefer to use my own selection of applications to accomplish the same goals.

There are two driver variants on the CD-ROM: a Windows Driver Model (WDM) variant and a VxD variant. By default, the Audigy 2 installation disc installs the WDM driver on Windows 98. WDM is a newer driver model introduced with Windows 98, but it did not mature until Windows 2000.

I have noticed two major limitations of using the WDM driver in my Windows 98 SE setup:

  • When running some games, the sound goes silent after playing for a while.
  • When running MS-DOS games in an MS-DOS box in Windows, I only have Sound Blaster Pro compatibility. Moreover, I do not have any sound when I try to run MS-DOS applications in MS-DOS mode.

After some searching, I learned that the VxD driver is more robust on Windows 98 SE (and other Windows versions in the Windows 9x product-line).

I can switch to the VxD driver by starting the following program in the start menu: Programs -> Creative -> Utility -> Driver Utility Program. In the utility program, I can give the instruction to install the VxD driver.

Then the computer must be rebooted. At the next boot, Windows 98 SE sees some new hardware for which no driver can be found; this message should be ignored. Once the system has booted, the VxD drivers will be installed. After another reboot, the sound card is detected and can be used again.

I also knew that the first-generation card of this product-line, the Sound Blaster Live!, had Sound Blaster 16 compatibility. I learned that it is also possible to have MS-DOS Sound Blaster 16 compatibility with my Audigy 2 card, but it is not supported by official means -- there is an unofficial DOS driver pack that can be obtained from the VOGONS forums.

Enabling Sound Blaster 16 compatibility is quite tricky. First, I must make sure that there is a free IRQ channel. By default, IRQ 7 is taken by the LPT1: port. Disabling the port in the BIOS frees up the IRQ so that the sound card can use IRQ 7.

We can use IRQ 5 for Sound Blaster 16 emulation. I learned the hard way that it is best to change the BIOS setting before installing the sound driver -- doing so afterwards bricked my Windows installation: Windows refused to boot due to an IRQ conflict.

Then, I must open the registry editor (regedit.exe) and change the following setting:

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Creative Tech\Emu10kx\Emulation]
"EnableSB16Emulation"=dword:00000001

After changing the setting, I must reboot the machine. At first startup, Windows should detect a new device that is recognized as a "Creative SB16 Emulation" device. Although Windows manages to install a driver, it shows that there is a problem when I open the device manager. I should ignore this error for now.

Then I must install an unofficial Audigy 2 DOS pack from VOGONS. This package includes a number of files from the Sound Blaster Live! installation disc to enable Sound Blaster 16 emulation.

I must perform the following steps:

  • Unpack the dospack RAR file
  • Run the installer: AUDIGY DOS DRIVER\Setup.exe
  • Do NOT reboot
  • Copy AUDIGY12.EXE from the AUDIGY12 PATCH/ directory to C:\Program Files\Creative\DOSDrv
  • Edit AUTOEXEC.BAT and add the following line right after SBEINIT.COM:

        C:\PROGRA~1\CREATIVE\DOSDRV\AUDIGY12.EXE
        
  • Reboot the machine

Finally, after rebooting, there is still the broken "Creative SB16 Emulation" device. We must manually reinstall it to get a working device by executing the following steps:

  • Control Panel -> Add New Hardware
  • Select: "No, the device isn't in the list"
  • Then select: "No, I want to select the hardware from a list"
  • Type of hardware: "Sound, video and game controllers"
  • Manufacturer: Creative Technology, Ltd.
  • Model: Creative SB16 Emulation

After installing the device by following the above procedure, there is a second "Creative SB16 emulation" device that is not reported as broken.

As a side note: the previously installed device, also called "Creative SB16 Emulation", remains visible in the device manager and is still reported as broken -- ignore it. It is a weird situation, but the outcome is that we have working SB16 emulation.

Mounting CD-ROM images


In addition to floppy disks, my retro PC can also work with another kind of physical media: CD-ROM and DVD-ROM discs. For games, for which I have original copies, this is a fine medium.

However, I consider working with writable CD-ROM / DVD-ROM discs a bit inconvenient -- for example, I have downloaded a DirectX update ISO CD-ROM image that I need to access somehow. Writable CD-ROMs and DVD-ROMs are a bit impractical, because it takes time to produce them and they are not very reliable in the long run -- I had quite a few writable discs in the past that became inaccessible after a few years.

Fortunately, it is also possible to put CD-ROM and DVD-ROM images on a USB memory stick and get access to their content by using a Virtual CD-ROM / DVD-ROM drive.

I have installed Daemon Tools version 3.47 to make this possible -- it is not the latest version that works on Windows 98 SE, but this version is still small and lightweight, and works decently.

Optimizing memory for MS-DOS applications


Another challenge of running MS-DOS applications (that has not changed much since the MS-DOS days) is dealing with memory -- some MS-DOS games require a substantial amount of free conventional memory to run.

After installing all required MS-DOS drivers, e.g. the CD-ROM and sound card drivers, I can optimize the amount of free conventional memory by running MEMMAKER. I need to reboot the machine into MS-DOS mode (by pressing F8 on startup) and run the following command on the command-line prompt:

MEMMAKER

I have used the following settings:

  • Use Express or Custom Setup? Express Setup
  • Do you use any programs that need expanded memory (EMS)? No
  • Specify which drivers and TSRs to include in optimization? No
  • Scan the upper memory area aggressively? Yes
  • Optimize upper memory for use with Windows? No
  • Use monochrome region (B000-B7FF) for running programs? No
  • Keep current EMM386 memory exclusions and inclusions? No
  • Move Extended BIOS Data Area from conventional to upper memory? Yes

Finally, after MEMMAKER has made its changes, I typically check the CONFIG.SYS file to see if DOS is loaded as follows:

DOS=HIGH,UMB

By using these settings I have managed to increase the amount of free conventional memory from 530K to 616K in my default boot configuration.

Although using MEMMAKER increases the amount of free conventional memory, this configuration is not perfect. I have picked this configuration because it works with most of my applications.

Unfortunately, not all my MS-DOS applications work with this setup -- I have disabled expanded memory (EMS) emulation, but enabled access to upper memory blocks (UMBs), by loading EMM386 with the NOEMS parameter. A few of my MS-DOS applications require EMS memory, such as a DOS game called Zorro. On the other hand, I also have some MS-DOS applications that cannot work at all with an EMM driver, such as the PC port of Turrican 2.
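To sketch what this looks like, a MEMMAKER-optimized CONFIG.SYS with the NOEMS setup typically starts along these lines. Note that this is an illustrative sketch, not a copy of my actual configuration -- the driver paths and the exact lines MEMMAKER generates differ per system:

```
REM HIMEM.SYS provides extended (XMS) memory and the high memory area
DEVICE=C:\WINDOWS\HIMEM.SYS
REM NOEMS provides upper memory blocks without emulating EMS
DEVICE=C:\WINDOWS\EMM386.EXE NOEMS
REM Load DOS into the high memory area and enable UMB access
DOS=HIGH,UMB
REM MEMMAKER rewrites DEVICE= lines to DEVICEHIGH= where drivers fit
DEVICEHIGH=C:\DRIVERS\VIDE-CDD.SYS /D:MSCD001
```

Loading drivers into upper memory with DEVICEHIGH (and TSRs with LOADHIGH in AUTOEXEC.BAT) is what frees up the conventional memory below 640K.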

For the applications that do not work with my main setup, I can configure their PIF files in Windows in such a way that Windows boots into a different MS-DOS configuration, with or without an EMM driver enabled.

The only unfortunate side effect of booting into a configuration without an EMM driver is that I have no sound -- the Sound Blaster 16 emulation DOS TSR requires one.
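For completeness, a per-program MS-DOS mode configuration for an EMS-requiring game could look roughly like this -- a sketch under the assumption that EMM386's RAM switch is used to provide both EMS and UMBs (the actual per-program CONFIG.SYS is entered in the PIF's properties dialog, and the paths are illustrative):

```
REM Custom CONFIG.SYS for the MS-DOS mode PIF of an EMS-requiring game
DEVICE=C:\WINDOWS\HIMEM.SYS
REM RAM (instead of NOEMS) enables EMS emulation as well as UMBs
DEVICE=C:\WINDOWS\EMM386.EXE RAM
DOS=HIGH,UMB
```

The trade-off is that EMM386 reserves a 64K page frame in upper memory for EMS, leaving less room for loading drivers high.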

Experience


I am happy with my Windows 98 SE setup. I can play many of the classic Windows games that I liked in the late 90s, such as Dune 2000, Quake 2, Half-Life and Unreal:


I can also use my Sidewinder joypad to play games, such as Mega Man X3:


Moreover, having decent MS-DOS application compatibility also makes it possible to play games such as Duke Nukem 3D and Jazz Jackrabbit:


To play Jazz Jackrabbit, I have followed this suggestion on the Jazz 2 online forums -- my retro PC runs too fast, causing a division by zero error (Runtime error 200 at 0009:37F2). This error frequently occurs when running applications written in Turbo Pascal on fast computers, and Jazz Jackrabbit was also written in Turbo Pascal. By downloading the TPPATCH program and patching the FILE0001.EXE executable, the problem was solved.
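As a side note, the cause of this error can be modeled in a few lines. As I understand it, the Turbo Pascal CRT unit calibrates its Delay routine at startup by counting busy-loop iterations during one 55 ms timer tick and dividing the count with a 16-bit DIV instruction; on a fast CPU the quotient no longer fits in 16 bits, which raises the CPU's divide-error exception at program startup. A simplified model of that calibration (the function name and the iteration counts are illustrative, not taken from the actual CRT unit):

```python
def delay_calibration(loops_per_55ms: int) -> int:
    """Simplified model of the Turbo Pascal CRT unit's Delay calibration."""
    quotient = loops_per_55ms // 55  # busy-loop iterations per millisecond
    # The real code stores the result of a 16-bit DIV: a quotient above
    # 0xFFFF raises a divide-error, shown by DOS as "Runtime error 200".
    if quotient > 0xFFFF:
        raise ZeroDivisionError("Runtime error 200")
    return quotient

# A mid-90s CPU stays within range; a 450 MHz Pentium III does not.
slow = delay_calibration(2_000_000)   # slow machine: calibration succeeds
try:
    delay_calibration(50_000_000)     # fast machine: faults at startup
except ZeroDivisionError as error:
    print(error)
```

Patches such as TPPATCH modify the affected executable so that the calibration no longer overflows.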

Furthermore, I got many interesting applications working, such as WinAmp, Encarta 2005, Encore (that I used to typeset sheet music) and the classic Visual Basic 6.0 (that I used to program frequently with):


Slackware 8.0


The second configuration I have produced is Linux-based: a Slackware 8.0 installation.

Learning about Linux


I have an interesting history with Linux. From computer magazines and the documentation of DOS UAE (a port of the UAE Amiga emulator to MS-DOS), I already knew about Linux's existence and how to use some of its frequently used command-line utilities in early 1997.

At first I did not see it as something serious or valuable -- in 1997 I was not convinced that a student from the University of Helsinki could compete with Microsoft Windows and develop an operating system that could do certain things better.

In late 1999, my cousin demonstrated how Linux worked and gave me some background information. As a result, I got some hands-on experience with Linux. I tried various distributions, such as SuSE, RedHat and Corel Linux.

I also learned much more about Linux, free software, and how it is developed -- it was developed with the purpose of providing a free (as in freedom) version of UNIX, which was already a well established concept.

Linux is only an operating system kernel -- to produce a usable system, it is typically combined with packages from other free and open-source software projects (most notably the GNU project, but also many others, such as KDE, GNOME and the X Window System) to become a functional UNIX-like system. All these projects are developed in a community-driven fashion by all kinds of people all over the world.

There are many kinds of projects that produce usable Linux-based systems by combining various kinds of software packages. These systems are called Linux distributions.

From this background information and my hands-on experience, I recognized its potential.

My early installation experiences


Somewhere in 2000, I tried installing a Linux distribution on my own machine. I believe the first distribution that I tried to install was SuSE 6.2. The installation was a failure -- the installer failed to detect my hard drive, because it was attached to the Ultra ATA 66 controller, which was not recognized.

I left my failed Linux installation experience alone for a while, and some time later I gave RedHat Linux 7 a try -- initially, I ran into the same problem as with SuSE 6.2 (not recognizing my Ultra ATA 66 controller). Eventually, I detached the hard drive from the Ultra ATA 66 controller and attached it to the original Ultra ATA 33 controller, making it possible for me to get a working Linux experience on my own machine.

After a search on the Internet, I discovered that a driver for my Ultra ATA 66 controller was developed and released as a separate kernel patch. I had to download the Linux kernel source code, apply the patch to the source code tree and compile the kernel from source code.

I followed the Linux kernel documentation closely, but no matter what I tried, I ran into compilation errors. At some point, I learned that my version of RedHat Linux was using a weird version of GCC (version 2.96 to be precise) that was never officially released by the GNU project. Back then, the recommended compiler for the Linux kernel was GCC 2.95.2. Apparently, this unofficial 2.96 compiler had issues compiling the Linux kernel.

Then some more time passed and somebody suggested Slackware to me. This was the first Linux distribution I was happy with -- not because the distribution is perfect, but because it was not as heavily customized as the major distributions (preventing me from running into weird, poorly documented problems) and it was easy to make modifications to.

Its user experience was not better than that of SuSE or Red Hat. For example, Red Hat already had a graphical installer, while Slackware's was text based. Moreover, the KDE desktop experience in Slackware was a bit unpolished -- some desktop applications were not visible in the program launcher menu. I had to manually add them or start them from the command-line.

I was happy to see that compiling things from source code worked as documented. On Slackware 7.1, I managed to successfully compile a modified kernel with a driver for my Ultra ATA 66 controller, allowing me to use the full potential of the hardware in my computer.

Some time later in 2001, I installed the successor version: Slackware 8.0, which included KDE 2.1 and a driver for my Ultra ATA 66 controller. In RedHat Linux 7 I found the GNOME desktop appearance somewhat more appealing than KDE 1.1, but KDE 2.0 was a huge leap forward for me. It gave me a comparable desktop experience to Windows 98/2000. I have been using KDE as my primary Linux desktop environment ever since.

Interesting applications


The late 90s and early 00s were also interesting to me from a Linux application perspective -- at the time Windows 98 was launched, Microsoft was in the news quite a lot because of antitrust issues. For example, Microsoft had a huge competitive advantage by bundling Internet Explorer with Windows to compete with Netscape.

Some software vendors, most notably competitors of Microsoft, were actively looking at Linux as an alternative. As a consequence, a number of interesting commercial applications became available for Linux, such as Netscape, RealPlayer, and Adobe Acrobat Reader. As a "serious rival" to Microsoft Office, there was StarOffice (its code base was eventually open sourced and, as of today, is still actively developed as LibreOffice).

Moreover, in the gaming area many interesting things happened. Already in the mid 90s, some prestigious commercial games became available for Linux, most notably games from id Software.

There was a company called Loki Games that developed Linux versions of many popular Windows games, such as Quake 3 Arena, Unreal Tournament, Sim City 3000, and Soldier of Fortune. From a commercial perspective, Loki Games did not do well and went out of business in 2002.

Although its commercial business was a failure, its legacy lives on: the company has demonstrated that Linux is a viable platform for games. Moreover, it developed free and open-source technology to make developing games on Linux easier, most notably SDL and OpenAL. As of today, these libraries are still frequently used by many games and multimedia applications.

Configuration challenges


I have decided to produce an old Slackware 8.0 configuration, because it is the latest Slackware version I used on my first computer. It provides a decent KDE 2.1 desktop experience -- in my opinion, this version of KDE is comparable to the desktop experience of Windows 98 and 2000.

Another objective is to run interesting applications and games from the late 90s, early 2000s. Most notably, running some games from Loki Games was high on my list of objectives.

I ran into a number of configuration challenges that I will explain in the next sections.

Upgrading the Linux kernel


Slackware 8.0 uses relatively old versions of the Linux kernel, even by late 2001, early 2002 standards. By default, it recommends version 2.2.19, because it is considered more stable. Version 2.4.5 is provided as an alternative.

Linux 2.4.5 is missing quite a bit of functionality needed to optimally use my hardware. For example, it uses an old version of the Open Sound System (OSS) that does not include a driver for my Audigy 2 card. Moreover, USB support is also limited -- I cannot, for example, use my Microsoft SideWinder USB gamepad.

To improve the situation, I have upgraded the Linux kernel to version 2.4.23. I can still download the source tarballs of old 2.4 versions from kernel.org.

After downloading the tarball, I can unpack it as follows:

cd /usr/src
bzip2 -dc linux-2.4.23.tar.bz2 | tar xfv -

After unpacking the source tarball, it is recommended to clean the source code tree first:

cd linux-2.4.23
make mrproper

With the following command-line instructions, I can copy Slackware's Linux 2.4.5 kernel configuration into the kernel source code tree and adapt it so that it works with version 2.4.23:

cp /boot/config .config
yes "" | make oldconfig
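The reason this works is that make oldconfig asks a question for every option that is new since the imported configuration, and an empty answer (which is what yes "" supplies endlessly) accepts the default. The following sketch illustrates the mechanism with a hypothetical stand-in for the oldconfig prompts, not the real kernel build system:

```shell
# Stand-in for "make oldconfig" (hypothetical): read one answer per new
# option; an empty answer, as produced by yes "", selects the default.
fake_oldconfig() {
  for opt in NEW_OPTION_1 NEW_OPTION_2 NEW_OPTION_3; do
    read -r answer
    echo "$opt=${answer:-default}"
  done
}

yes "" | fake_oldconfig
```

Every option therefore ends up at its default value, which is usually what you want when upgrading within the same kernel series.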

Then I need to make some adjustments to the imported kernel configuration. One way to do this is by using the menu-based configuration tool, by running:

make menuconfig

One of the adjustments that I need to make is to enable USB Human Interface Device support so that I can use my joypad:

USB support ->
<M> USB Human Interface Device (full HID) support
[*]   HID input layer support
[*]   /dev/hiddev raw HID device support
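After saving the configuration, I like to double check that the options actually ended up in the .config file. The following sketch does that with grep; the symbol names (CONFIG_USB_HID, CONFIG_USB_HIDINPUT, CONFIG_USB_HIDDEV) are what I believe the 2.4 kernel series uses for these menu entries, and the demonstration runs against a sample file rather than a real kernel tree:

```shell
# Check that the expected HID options are set in a kernel .config file.
# The symbol names are assumed from the 2.4 kernel series.
check_hid_config() {
  for sym in CONFIG_USB_HID=m CONFIG_USB_HIDINPUT=y CONFIG_USB_HIDDEV=y; do
    grep -q "^$sym" "$1" || { echo "missing: $sym"; return 1; }
  done
  echo "HID options look good"
}

# Demonstrate against a sample config fragment:
cat > /tmp/sample-config <<'EOF'
CONFIG_USB_HID=m
CONFIG_USB_HIDINPUT=y
CONFIG_USB_HIDDEV=y
EOF
check_hid_config /tmp/sample-config
```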

After enabling the extra kernel settings, I can run the following instructions to build the kernel image and corresponding kernel modules:

make dep
make bzImage
make modules

I can install the kernel modules as follows:

make modules_install

I must also make some modifications so that I can boot the new kernel. I can copy the kernel image and related artifacts (the config and system map) to the boot directory as follows:

cp -v arch/i386/boot/bzImage /boot/vmlinuz-2.4.23
cp -v System.map /boot/System.map-2.4.23
cp -v .config /boot/config-2.4.23
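Forgetting one of these copy steps leads to a confusing boot setup, so a small sanity check before updating LILO does not hurt. This is only a sketch, demonstrated against a temporary directory instead of the real /boot:

```shell
# Verify that the kernel image, system map and config for a given version
# are all present in a boot directory before running lilo.
check_boot_files() {
  bootdir=$1; version=$2
  for f in "vmlinuz-$version" "System.map-$version" "config-$version"; do
    [ -f "$bootdir/$f" ] || { echo "missing: $f"; return 1; }
  done
  echo "all boot files present for $version"
}

# Demonstrate against a temporary directory instead of the real /boot:
mkdir -p /tmp/fakeboot
touch /tmp/fakeboot/vmlinuz-2.4.23 /tmp/fakeboot/System.map-2.4.23 \
      /tmp/fakeboot/config-2.4.23
check_boot_files /tmp/fakeboot 2.4.23
```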

To make it possible to boot into the new kernel configuration, I must add the following entry to the LILO bootloader's configuration (/etc/lilo.conf):

image = /boot/vmlinuz-2.4.23
  root = /dev/hda6
  label = Linux-2.4.23
  read-only

And instruct LILO to update the MBR, by running:

lilo

After rebooting the machine, I can boot my new kernel by selecting the entry: "Linux-2.4.23" in the LILO boot menu.

Sound support: installing ALSA


As explained earlier, one of the drivers that the Linux kernel is lacking is a sound card driver for my Audigy 2. It turns out that even the Open Sound System (OSS) in kernel version 2.4.23 does not include one.

In 2002, a new Linux sound subsystem was still in heavy development: the Advanced Linux Sound Architecture (ALSA). At first, it was released as a separate package; in Linux 2.6, ALSA replaced the Open Sound System (OSS). ALSA has a driver for my sound card.

I can complement my 2.4.23 kernel with ALSA and the corresponding driver for my card, by downloading the ALSA driver 0.9.8 tarball and running the following commands to build and install it:

./configure \
  --with-moddir=/lib/modules/2.4.23/kernel/drivers/sound \
  --with-kernel=/lib/modules/2.4.23/build \
  --with-sequencer=yes \
  --with-oss=yes \
  --with-isapnp=no \
  --with-cards=dummy,emu10k1
make
make install
./snddevices

In addition to the ALSA driver package, I also had to compile and install the ALSA library, ALSA OSS compatibility and ALSA Utilities packages. Installing them was straightforward by running the standard GNU Autotools build procedure: ./configure; make; make install.

Finally, after installing the packages, I need to do some configuration steps so that the kernel modules are loaded on startup and that the sound settings are restored. I can do that by adding the following lines to /etc/rc.d/rc.local:

# Load kernel sound modules
modprobe snd-emu10k1
modprobe snd-mixer-oss
modprobe snd-seq-oss
modprobe snd-pcm-oss
modprobe snd-seq-midi

# Restore volume settings
alsactl restore
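Since a typo in one of these module names would silently leave me without sound, a variant that reports failures can be useful. The sketch below stubs out modprobe so that the example runs anywhere; in the real rc.local the stub would be removed and the loop would invoke the actual modprobe:

```shell
# Stub stand-in for modprobe, so this sketch is self-contained; the real
# rc.local would call the actual modprobe binary instead.
modprobe() { case $1 in snd-*) return 0 ;; *) return 1 ;; esac }

# Load the sound modules and report any that fail to load.
for mod in snd-emu10k1 snd-mixer-oss snd-seq-oss snd-pcm-oss snd-seq-midi; do
  modprobe "$mod" || echo "warning: failed to load $mod"
done
echo "sound modules processed"
```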

By default, the sound is muted. We can adjust the volume by using alsamixer:

$ alsamixer

I typically set the volume levels of the master and PCM channels to 74.

I can save my mixer settings by running:

$ alsactl store

Installing the NVIDIA Linux driver for my graphics card


2D graphics work decently out of the box in Slackware 8.0. XFree86, the X Window System distribution commonly used on Linux systems at that time, includes a driver for NVIDIA cards named nv.

If you want hardware accelerated 3D graphics, you need to install an external driver package from NVIDIA. I downloaded version 53.28 of the NVIDIA Linux driver, which seems to work decently.

Installing it is straightforward. First, you need to give the installer file executable permissions:

chmod 755 NVIDIA-Linux-x86-1.0-5328-pkg1.run

then you can run the installer as follows:

./NVIDIA-Linux-x86-1.0-5328-pkg1.run

When the installer asks whether to download a precompiled kernel module, simply decline and let the installer compile one from source.

After the installation is complete, we must make sure that the kernel module loads on startup, by adding the following line to /etc/rc.d/rc.local:

# Load video driver module
modprobe nvidia

We must also update the XFree86 configuration to use the new NVIDIA driver. We should open its configuration file (/etc/X11/XF86Config) in a text editor and change the following line:

Driver    "nv"

into:

Driver    "nvidia"

and enable OpenGL integration by uncommenting the following line:

# Load "glx"

USB peripheral support


In order to use USB peripherals, we must load a number of kernel modules at startup. I have added a number of additional lines to /etc/rc.d/rc.local to do this.

Loading the following module enables support for my UHCI-based USB host controller:

modprobe usb-uhci

The following module enables USB storage support so that I can use my USB memory sticks:

modprobe usb-storage

Adding the following lines allows me to use my SideWinder USB joypad:

modprobe hid
modprobe joydev

To allow SDL applications to use my joypad I need to configure the SDL_JOYSTICK_DEVICE environment variable to refer to the joypad's device file. I have created the following file (/etc/profile.d/joystick.sh) to make sure that it happens at boot time:

SDL_JOYSTICK_DEVICE=/dev/input/js0
export SDL_JOYSTICK_DEVICE
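A slightly more defensive variant only sets the variable when the device node actually exists. The sketch below wraps the logic in a function taking the device path, so it can be demonstrated with an ordinary file instead of my real /dev/input/js0:

```shell
# Emit the profile.d export lines only when the given device node exists.
joystick_profile() {
  if [ -e "$1" ]; then
    echo "SDL_JOYSTICK_DEVICE=$1"
    echo "export SDL_JOYSTICK_DEVICE"
  fi
}

# Demonstrate with a file that exists and a path that does not:
touch /tmp/fake-js0
joystick_profile /tmp/fake-js0
joystick_profile /tmp/no-such-device
```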

APM support


Another subtle annoyance is that my system does not power itself off when I give the shutdown instruction. This problem can be fixed by loading the APM module at startup by adding the following line to /etc/rc.d/rc.local:

# Load APM module
modprobe apm

Configuring storage devices


My retro PC has support for quite a few storage devices: it has a GoTek floppy emulator, a traditional floppy drive, a DVD-ROM and two USB connectors that can work with all kinds of USB storage devices.

To use them on Slackware, I need to update my /etc/fstab so that I can easily mount them:

/dev/fd0     /mnt/floppy0    auto      rw,user,noauto
/dev/fd1     /mnt/floppy1    auto      rw,user,noauto
/dev/cdrom   /cdrom          iso9660   ro,user,noauto
/dev/sda     /mnt/usb        auto      rw,user,noauto

I must also create the missing mount point directories:

mkdir -p /mnt/floppy0 /mnt/floppy1 /mnt/usb
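Alternatively, the mount point directories can be derived from the fstab entries themselves instead of being listed by hand. The following sketch does that; it is demonstrated against a copy of the table above under a /tmp prefix, so it does not touch the real filesystem root:

```shell
# Create any missing mount points found in an fstab-style file; the second
# whitespace-separated field is the mount point. The optional prefix makes
# the sketch safe to demonstrate outside the real root filesystem.
create_mount_points() {
  fstab=$1; prefix=${2:-}
  awk '$2 ~ /^\// { print $2 }' "$fstab" | while read -r dir; do
    mkdir -p "$prefix$dir"
  done
}

# Demonstrate against a copy of the table above, under /tmp/fakeroot:
cat > /tmp/fstab-sample <<'EOF'
/dev/fd0     /mnt/floppy0    auto      rw,user,noauto
/dev/fd1     /mnt/floppy1    auto      rw,user,noauto
/dev/cdrom   /cdrom          iso9660   ro,user,noauto
/dev/sda     /mnt/usb        auto      rw,user,noauto
EOF
create_mount_points /tmp/fstab-sample /tmp/fakeroot
ls /tmp/fakeroot/mnt
```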

Experience


My Slackware installation works as expected. I can conveniently run the KDE 2.1 desktop and many applications, such as classic versions of the GIMP, Mozilla Suite, and StarOffice:


I can also play a number of interesting free/open-source games, such as Tux Racer and SuperTux 0.1.3:


What was challenging was that I had to compile all these games, including many of their dependencies (e.g. libxml2, SDL, smpeg, SDL_mixer, SDL_image etc.), from source code.

In the late 90s, early 00s, many software projects did not provide prebuilt packages for Linux distributions. Especially if you were using a non-mainstream Linux distribution, such as Slackware, it was common to have to build things yourself.

And of course, I was able to run a number of commercial games, such as those released by Loki Games: Unreal Tournament and Quake 3:


Windows XP


The third software configuration I looked into was Windows XP.

Windows XP was an interesting milestone for Microsoft. It was the first Windows version in the Windows NT-product line that was considered suitable for consumer use in addition to business use. With the release of Windows XP, the Windows 9x product-line came to an end. It became available on the market in late 2001.

A couple of months earlier, I started my computer science studies and I had quite a few classmates that were enthusiastic about it.

By then, I had already been using Linux as my primary operating system on my home computer for a while. I could still dual boot Windows Millennium Edition -- I had made the unfortunate decision to upgrade my 98 SE installation, which I regretted.

At first, I was not very motivated to try Windows XP. When Windows XP was released, I already knew that it had some huge advantages over the Windows versions in the Windows 9x product-line, thanks to its NT-heritage:

  • It is much more robust. For example, a misbehaving program in Windows 9x could easily render your entire system unstable even after terminating it. With Windows XP, the impact of misbehaving programs is more limited.
  • Other operating system components are more efficient and reliable. In Windows 98 SE, I frequently used a program called MemTurbo to optimize memory usage, because I often lost quite a bit of allocatable RAM to memory fragmentation. In Windows XP, memory management is much more efficient.
  • Its MS-DOS compatibility has improved over its predecessors in the Windows NT-product line. The Windows NT product-line also offers some degree of MS-DOS compatibility, but it is not as good as its Windows 9x counterparts. For example, in Windows 2000 and older, it was not possible to have any sound in MS-DOS applications. In Windows XP, Sound Blaster 2.0 emulation was added.

Although Windows XP had a number of improvements over the products in the Windows 9x product-line and previous editions of Windows in the NT-product line, I also noticed that it had a number of drawbacks for me:

  • It requires far more system resources. Windows 98 SE requires roughly 300 MiB of free disk space, but the requirements of Windows XP are much more substantial: you need at least 1.5 GiB of free disk space (even more if you want to upgrade to Service Packs 1 and 3).
  • Windows XP also requires much more RAM. Windows 98 SE already works with 16 MiB of RAM. In 2002, I read that the minimum requirement for Windows XP was 128 MiB, which was the total amount of RAM in my machine in early 2002 (by 1999 standards this was considered a substantial amount of RAM). Later, I learned that the minimum was lowered to 64 MiB, but I believe you need to tone down certain visual effects to have a usable system.
  • Although MS-DOS compatibility is better than its predecessors in the Windows NT product-line, it is considerably worse than the Windows 9x product-line counterparts.

    For example, some DOS games I used to play run considerably slower, such as DOOM. Moreover, in some MS-DOS applications the sound is not optimal (or non-working) and some applications will not work at all (for example, those that will not work with an EMM driver).
  • Windows XP requires product activation. In order to use it, you had to activate your installation over the Internet or make a phone call to Microsoft to request a new activation key by providing your serial number and hardware key.

Although I was not too happy with the drawbacks of Windows XP, I eventually decided to install it anyway (as a dual boot option next to my Linux installation), because I considered it to be a huge improvement over Windows Millennium Edition.

Because Windows XP is an important development milestone and it allowed me to use some newer applications that are not supported on Windows 98 SE, I have also decided to create a Windows XP configuration for my retro PC.

Configuration challenges


Compared to my Windows 98 SE installation, much more of my hardware was supported out of the box in my Windows XP installation, such as my network card and USB storage. Nonetheless, I ran into a number of configuration challenges.

Virtual memory / swap file issues


After successfully installing Windows XP, the first issue I ran into is that I was frequently seeing an error reporting that there is "not enough virtual memory". This error has quite an impact on the stability of the system and applications. Eventually, after some searching, I discovered that no pagefile.sys file was created on my C: drive, explaining the lack of virtual memory.

After doing a search on the Internet, I discovered the root cause -- Windows XP considers my CF2IDE device a removable disk drive. Although Windows XP allows itself to be installed on a removable device, it does not allow page files to be stored on such devices.

I found a driver package named diskmod that works around this problem. I can install this driver by right-clicking on the diskmod.inf file and selecting the option: Install. Then I can run the UFDasHDD.bat script to change all removable disk devices into hard drives.

After rebooting the system, my problem was solved -- my CF2IDE device is treated as an ordinary hard disk and the missing page file is created.

Graphics card driver


Windows XP includes a driver for my Diamond Viper V770 card (containing a RIVA TNT2 chipset) supporting 2D graphics out of the box.

To use 3D graphics, I need to install an external driver package from NVIDIA. As with the previous operating systems, I did not use the latest version -- installing version 67.66 gives me no problems running the 3D applications and games that I want.

Sound card driver


For my sound card, I followed almost the same procedure as in Windows 98 SE. The only difference is that in Windows XP it is recommended to use the WDM driver.

Furthermore, because Windows XP is based on Windows NT (that was developed from scratch, rather than using Microsoft's MS-DOS legacy), it makes no sense to install the DOS drivers.

Using CD-ROM and DVD-ROM images


Similar to my Windows 98 SE installation, I can use Daemon Tools to mount CD-ROM and DVD-ROM images. I am using the exact same version (3.47), because it is still small and lightweight.

Tools to fix DOS compatibility issues


Earlier, I explained that compatibility with MS-DOS is not optimal. There are some tools that can be used to fix certain kinds of compatibility problems.

To get sound support in some applications (e.g. Prince of Persia) or better sound card support (e.g. Sound Blaster 16), I have installed VDMSound. You can run the following TSR to get improved sound support:

DOSDRV

Although VDMSound may improve the sound experience for some DOS applications, I have also learned that its emulation performance is worse than the Sound Blaster 2.0 emulation that Windows XP provides.

Some games that use VESA graphics modes (most notably BUILD-engine games, such as Duke3D and Shadow Warrior) may refuse to start. By running NOLFB in advance, these games may work again.

Experience


Windows XP does not offer many advantages over Windows 98 SE for playing games -- most of my Windows games still work, but none of them perform significantly better. Furthermore, some DOS games no longer work at all.

I consider the biggest advantage of using Windows XP (beyond better stability) that I can run newer applications. Some of my prominent examples are Paint.NET version 2.72 (which I find interesting: it was a .NET application released as free and open-source software at a time that Microsoft was still very much opposed to the whole idea) and Sibelius 4.0:


MS-DOS 6.22 with Windows 3.1


Because using Compact Flash cards is so convenient to try out many kinds of configurations, I have also decided to create an MS-DOS 6.22 installation.

Although I have used MS-DOS 6.22 (and a number of older versions) on various kinds of machines, including other people's PCs as well as the emulated PC environment on my Amiga 500, I never installed it on my first PC because there was no reason to.

However, since Compact Flash cards are cheap and convenient to swap, there was nothing stopping me from setting one up now.

DOS configuration challenges


Doing a clean MS-DOS installation is straightforward by following the steps in the installer. In addition, I had to perform the same kinds of DOS-related configuration steps as for my Windows 98 SE installation:

  • Installing the CuteMouse driver.
  • Loading the CD-ROM driver and mounting the CD-ROM drive.
  • Optimizing free conventional memory with MEMMAKER.

There were a couple of additional challenges:

Creating partitions


MS-DOS 6.22 does not support FAT32 filesystem partitions. The best it can do is FAT16, which has a maximum storage limit of 2 GiB. The smallest Compact Flash card I could buy is 4 GiB. Fortunately, it is possible to create two partitions of 2 GiB each, still allowing me to use all of the card's storage capacity.
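The 2 GiB figure follows from the filesystem layout: FAT16 cluster numbers are 16-bit, and MS-DOS supports cluster sizes of up to 32 KiB, so a rough upper bound (in practice slightly lower, because a few cluster values are reserved) is:

```shell
# Back-of-the-envelope FAT16 size limit: at most 2^16 cluster numbers,
# each cluster being at most 32 KiB.
clusters=65536
cluster_size=$((32 * 1024))                   # bytes per cluster
echo "$((clusters * cluster_size)) bytes"     # 2147483648 bytes = 2 GiB
```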

Sound card support


My Sound Blaster Audigy 2 CD-ROM does not include an installer for MS-DOS. However, I discovered that somebody in the retro computing community has developed a DOS-specific configuration pack (by extracting the DOS portions from the Windows 9x installer) and released it on VOGONS.

Installing this DOS pack is easy -- I simply need to unpack it into the root directory of my C: partition.

I had to make a number of subtle changes. First, I had to edit the batch script that initializes my sound card (C:\AUDIGY2\LIVEINIT.BAT) and make the following modifications:

  • Change the paths from: c:\sblive to C:\AUDIGY2.
  • Uncomment the line that loads the audigy12.exe executable.
  • The SET BLASTER instruction must match my settings. For me it is: SET BLASTER=A220 I5 D1 H5 P330 T6.

I also had to update the configuration file (C:\AUDIGY2\CTSYN.INI) with the correct IRQ:

SBIRQ=5
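The IRQ in CTSYN.INI must match the I field of the SET BLASTER instruction. A small sketch to cross-check the two (the BLASTER value shown is the one from my setup):

```shell
# Extract the IRQ (the field starting with "I") from a BLASTER settings
# string, so it can be compared against SBIRQ in CTSYN.INI.
blaster="A220 I5 D1 H5 P330 T6"
irq=$(echo "$blaster" | tr ' ' '\n' | sed -n 's/^I\([0-9][0-9]*\)$/\1/p')
echo "SBIRQ=$irq"
```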

As explained earlier in this blog post, the Sound Blaster 16 emulation TSR requires an EMM driver to get access to the Upper Memory Block (UMB). If you have not yet configured it, you can add the following line to C:\CONFIG.SYS:

DEVICE=C:\DOS\EMM386.EXE NOEMS

To allow the sound card to be initialized on startup, I have added the following line to the early stages of AUTOEXEC.BAT:

CALL C:\AUDIGY2\LIVEINIT

If your AUTOEXEC.BAT loads TSRs, such as SMARTDRV.EXE or MSCDEX.EXE, then move them so that they are invoked after the initialization of the sound card. According to the documentation, these programs may conflict with the sound card initialization.

Windows 3.1 configuration challenges


In addition to MS-DOS 6.22, I have installed Windows 3.1, which was still a separate product from MS-DOS back in the early 90s. In fact, Windows 3.1 was not an operating system, but a graphical shell running on top of MS-DOS.

Installing it was straightforward, but there were a number of things I could do to improve the configuration.

Installing my graphics card driver


By default, Windows 3.1 uses a 640x480 resolution with 16 colors for displaying graphics. If you want to use better screen modes (such as higher resolutions and more colors), it is recommended to install a driver for your graphics card.

Fortunately, there is a Windows 3.1x driver for NVIDIA RIVA TNT2 video cards.

Installing it was straightforward: I unzipped the driver package into a temp directory (e.g. C:\TEMP\NV) and performed the following steps:

  • Main -> Windows Setup
  • Options -> Change System Settings...
  • Display: Other display (Requires disk from OEM)...
  • Pick: NVidia TNT (640x480x256, small font)

When it asks for the driver disk location, provide the location of the temp directory (C:\TEMP\NV). It may also ask you to insert a disk named: ".". In this case answering: C:\WINDOWS\SYSTEM did the job for me.

Sound support


Dealing with sound support in Windows 3.1 was the trickiest configuration aspect. Similar to MS-DOS, there is no Windows 3.1 driver for my sound card. I thought this would not be a big issue, because I have a DOS TSR that emulates a Sound Blaster 16.

Unfortunately, installing the Windows 3.1 Sound Blaster 16 driver did not work. I also tried the Sound Blaster 1.5 and 2.0 drivers, but none of them worked. As far as I can see, none of these drivers are compatible with my emulated SB16 device.

Moreover, I believe it is difficult to integrate with the emulated DOS driver, because Windows 3.1 runs in 386 enhanced mode by default, in which virtual memory is used. It may not be able to communicate with the emulated DOS device at all.

I gave up searching for a solution and decided to install the PC speaker driver.

I can install it by executing the following steps:

  • Main -> Control Panel -> Drivers
  • Click on: Add
  • Select: Unlisted or Updated driver
  • Insert PC speaker driver disk
  • Provide location: A:\
  • Select driver: Sound Driver for PC-speaker
  • Enable: Enable interrupts during playback

The PC speaker driver at least gives me some sound, but I cannot, for example, play any MIDI files.

Experience


I can run many DOS applications and games, such as QuickBASIC, Lotus 1-2-3, Historic and Duke Nukem 2:



Although many applications work, my sound experience is not optimal -- some applications, such as DOOM, crash if I enable sound effects (Sound Blaster or General MIDI music works fine, though). If you want an optimal pure MS-DOS experience, I recommend using an ISA sound card.

Windows 3.1 works -- I can run all kinds of applications and games, such as Minesweeper and Visual Basic 3.0:


Due to not having a proper sound card driver, I cannot use most multimedia applications, such as Windows Media Player, to play MIDI files.

Conclusion


In this blog post, I have described how I built a retro PC that allows me to run the same kinds of applications and games that I frequently used in the late 90s, early 00s.

I have produced four kinds of software configurations (running four different kinds of operating systems) to demonstrate how the applications that I find interesting can be used. In addition, it is a nice machine to run some future projects on.

Some readers that know me well would probably ask me why I have only produced one Linux configuration (Slackware 8.0) and why I have not tried any of the other Linux distributions that I have mentioned, such as SuSE and Red Hat.

Although I appreciated Linux, and Slackware initially worked out for me, I was never fully satisfied with any Linux distribution. Moreover, I did a substantial amount of customization work on all kinds of Linux systems. To make that kind of work doable, I also worked on a custom-developed automated solution. This is an interesting story for a future blog post.

Acknowledgments


I am very thankful for the efforts of all kinds of people in the retro computing community. Without their efforts, it would be very difficult to do all this work.