Saturday, December 30, 2023

13th annual blog reflection

Today, it is the 13th anniversary of my blog. As usual, this is a nice opportunity to reflect on last year's writings.

Similar to 2022, 2023 was not a very productive year compared to the years before -- I am still somewhat in a state of recovery because of the pressure that I was exposed to two years ago. Another reason is that a substantial amount of my spare time is spent on voluntary work for the musical society that I am a member of.

Web framework


I have been maintaining the website for the musical society for quite a few years and it is one of the last applications that still uses my custom web framework. Thanks to my voluntary work, I made a new feature addition to the layout framework: generating dynamic menus, such as mobile navigation/hamburger menus and dropdown menus, by using an HTML representation of a site map as a basis.

I never liked these kinds of menus very much, but they are quite commonly used. In particular, on mobile devices, a web application feels weird if it does not provide a mobile navigation menu.

The nice thing about using a site map as a basis is that web pages are still able to degrade gracefully -- when using a text-oriented browser or when JavaScript is disabled (JavaScript is mandatory to create an optimal mobile navigation menu), a web site remains usable.

Nix development


Last year, I explained that I had put my Nix development work on hold. This year, I still did not write any Nix-related blog posts, but I have picked up Nix development work again and I am much more active in the community.

I visited NixCon 2023 and had a great time. While I was at NixCon, I decided to pick up my work on the experimental process management framework from where I left off -- I started writing RFC 163, which explains its features so that they can be integrated into the main Nixpkgs/NixOS distribution.

Writing an RFC had already been on my TODO list for two years, and I always had the intention to integrate the good ideas of this framework into Nixpkgs/NixOS so that the community can benefit from them.

The RFC is still being discussed and we are investigating some of the raised questions and concerns.

Research papers


I have also been sorting files on my hard drive, something that I commonly do at the end of the year. The interesting thing is that I also ran into the research papers that I have collected over the last sixteen years.

Since reading papers and maintaining your knowledge are quite important for researchers, and not something that is easy to do, I wrote a blog post about my experiences.

Retro computing


Another area that I worked on is retro computing. I finally found the time to get all my old 8-bit Commodore machines (a Commodore 64 and 128) working the way they should. I made the necessary repairs and ordered new and replacement peripherals. I wrote a blog post that shows how I have been using these 8-bit Commodore machines.

Conclusion


Next year, I intend to focus more on Nix development. I already have plenty of ideas that I am working on, so stay tuned!

The last thing I would like to say is:


HAPPY NEW YEAR!!!

Thursday, December 28, 2023

Using my Commodore 64 and 128 in 2023


Two years ago, I wrote a blog post about using my Commodore Amiga 500 in 2021 after not having touched it in ten years. Although the computer was still mostly functional, some peripherals were broken.

To fix my problems, I brought it to the Home Computer Museum in Helmond for repairs.

Furthermore, I have ordered replacement peripherals so that the machine can be used more conveniently, such as a GoTek floppy emulator, which makes it possible to conveniently use disk images stored on a USB memory stick as a replacement for physical floppy disks.

I also briefly mentioned that I had been using my first computer, a Commodore 128, for a while. Moreover, I also have a functional Commodore 64, which used to be my third computer.

Although I have already been using these 8-bit machines on a more regular basis since 2022, I was not yet satisfied enough to write about them, because there were still some open issues, such as a broken joystick cable and the unavailability of the 1541 Ultimate II cartridge. The delivery of the cartridge took a while because it had to be redesigned/reproduced due to chip shortages.

A couple of weeks ago, the cartridge was finally delivered. During my Christmas holiday, I found the time to do some more experiments and write about these old 8-bit Commodore machines.

My personal history


The Commodore 128, which is still in my possession, originally belonged to my parents and was the first computer I was exposed to. Already as a six-year-old, I knew the essential BASIC commands to control it, such as requesting a disk's contents (e.g. LOAD"$",8: LIST), loading programs from tape and disk (e.g. LOAD, LOAD"*",8,1) and running programs (RUN).


One of my favorite games was the Commodore 64 version of Teenage Mutant Ninja Turtles developed by Ultra Software, as can be seen in the above screenshot.

I liked the game very much because I was a fan of the TV show, but it was also quite buggy and notoriously difficult. Some parts of the game were as good as impossible to finish. As a result, I was never able to complete the game, despite having played it for many hours.

Many relatives of mine used to have an 8-bit Commodore machine. A cousin and uncle used to own a Commodore 64C, and another uncle owned a Commodore 128. We used to exchange ideas and software quite a lot.


At first, I did not know that a Commodore 128 was a more capable machine than an ordinary Commodore 64. My parents used to call it a Commodore 64, and for quite some time I did not know any better.

The main reason behind the confusion is that a Commodore 128 is nearly 100% backwards compatible with a Commodore 64 -- it contains the same kinds of chips and it offers a so-called Commodore 64 mode.

You can switch to Commodore 64 mode by holding the Commodore logo key on bootup or by typing: GO64 on the command prompt. When a utility cartridge is inserted, the machine always boots in Commodore 64 mode. The picture above shows my Commodore 128 running in Commodore 64 mode.

At the time, we had a utility cartridge inserted into the cartridge socket that offered fast loading, preventing us from seeing the Commodore 128 mode altogether. Moreover, with the exception of the standard software that was bundled with the machine, we only had Commodore 64 software at our disposal.

In 1992, I wrote my first BASIC program. The program was very simple -- it changed the colors of the text, screen and screen border, asked somebody to provide their name and then printed a greeting.
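A minimal Commodore BASIC sketch in the same spirit (the exact colors and greeting text here are made up) could look like this:

10 REM CHANGE THE BORDER, SCREEN AND TEXT COLORS
20 POKE 53280,2 : REM BORDER COLOR = RED
30 POKE 53281,0 : REM SCREEN COLOR = BLACK
40 PRINT CHR$(5); : REM TEXT COLOR = WHITE
50 INPUT "WHAT IS YOUR NAME";N$
60 PRINT "HELLO, ";N$;"!"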


At some point, by accident, the utility cartridge was removed and I discovered the Commodore 128 mode, as can be seen in the picture above. I learned that the Commodore 128 ROM had a more advanced BASIC version that, for example, also allows you to play music with the PLAY command.


I also discovered the CP/M disk that was included with the machine and tried it a few times. It looked interesting (as can be seen in the picture above) but I had no applications for it, so I had no idea what to do with it. :)

I liked the Commodore 128 features very much, but not long after my discovery, my parents bought a Commodore Amiga 500 and gave the Commodore 128 to another uncle. All my relatives that used to have an 8-bit Commodore machine had already made the switch to the Amiga, and we were the last to make the transition.

Although switching to a next generation machine may sound exciting, I felt disappointed. In the last year that the Commodore 128 was still our main machine, I learned so much, and I did not like that I was no longer able to use the machine and learn more about my discoveries. Fortunately, I could still play with the old Commodore 128 once in a while when we visited the uncle that we gave the machine to.


Some time later, in late 1993, my parents gave me a Commodore 64 (the old-fashioned breadbin model) that they found at a garage sale (as shown in the picture above). This was the third computer model that I was exposed to and the first computer that was truly mine, because I did not have to share it with my parents and brother. This machine gave me my second 8-bit Commodore experience, and I used it for quite some time, until mid-1997.

Originally, the Commodore 64 did not come with any additional peripherals. It was just the computer with a cassette drive and no utility cartridges for fast loading. I had a cassette with some games and a fast loading program that was the first program on the tape. Nothing more. :)

I was given a few books and I picked up Commodore 64 programming again. In the following years, I learned much more about programming and the capabilities of the Commodore 64, such as for-loops, how to do I/O, how to render sprites and re-program characters.

I have also been playing around with audio, but my sound and music skills were far too limited to do anything that made sense. Moreover, I made quite a few interesting attempts to create games, but nothing truly usable came out of it. :)


In 1994, I bought a 1541 disk drive and several utility cartridges at a garage sale, such as the Final Cartridge (shown above). The Final Cartridge provides all kinds of useful features, such as fast loading and the ability to interrupt the machine and inspect its memory with a monitor.

Owning a disk drive also allowed me to make copies of the games that I used to play on my parents' Commodore 128.

Eventually, in 1998 I switched to the Commodore Amiga 500 as my personal machine, but I kept my Commodore 64. By 1998, Commodore had already been out of business for four years and had completely lost its relevance. My parents bought a PC in 1996. After using the Amiga for a while in the attic, the Amiga's display broke, rendering it unusable. In 1998, I discovered how to attach the Amiga to a TV.

In late 1999, I was finally able to buy my own PC. I kept the Amiga 500, because I still considered it a cool machine.

Several years later, the Commodore 128 was returned to me. My uncle no longer had any use for it and was considering throwing it away. Because I still remembered its unique features (compared to a regular Commodore 64), I decided to take it back.

Some facts


Why is the Commodore 64 such an interesting machine? Besides the fact that it was the first machine that I truly owned, it has also been listed in the Guinness World Records as the highest-selling single computer model of all time.

Moreover, it also has interesting capabilities, such as:

  • 64 KiB of RAM. This may not sound very impressive by today's standards (for example, my current desktop PC has 64 GiB of RAM, a million times as much :) ), but in 1982 this was a huge amount.
  • A 6510 CPU, which is a modified 6502 CPU that has an 8-bit I/O port added. On machines with a PAL display, it runs at a clock speed slightly under 1 MHz.
    Compared to modern CPUs, this may not sound impressive (a single core of my current CPU runs at 3.7 GHz :) ), but in the 80s the CPU was quite good -- it was cheap and very efficient.

    Despite the fact that there were competing CPUs at the time that ran at higher clock speeds, most 6502 instructions only take a few cycles, and the CPU fetches the next instruction from memory while the previous instruction is still executing. As a result, it could still keep up with most of its higher-clocked competitors.
  • A nice video chip: the VIC chip. It supports 16 preconfigured colors and various screen modes, including high resolution screen modes that can be used for addressing pixels. It also supports eight hardware managed sprites -- movable objects directly controlled by the video chip.
  • A nice sound chip: the SID chip. It offers three audio channels, four kinds of waveforms (triangle, sawtooth, square wave and white noise) and analog mixing. This may not sound impressive, but at the time, the fact that the three audio channels can be used to mix waveforms arbitrarily was a very powerful capability.
  • An operating system using a BASIC programming language interpreter (Commodore BASIC v2) as a shell. In the 70s and 80s, BASIC was a very popular programming language due to its simplicity.

Other interesting capabilities of the Commodore 64 were:

  • The RAM is shared between the CPU and other chips, such as the VIC and SID. As a result, the CPU is offloaded and only needs to do calculation work.
  • The CPU's clock speed is aligned with the video beam. The screen updates 50 times per second and the screen is rendered from top to bottom, left to right. Each screen block takes two CPU cycles to render.

    These properties make it possible to change the screen while it is being rendered (a technique called racing the beam). For example, while the screen is drawn, it is possible to adjust the colors in color RAM, multiplex sprites (by default you can only configure eight sprites), or change screen modes (e.g. from text to high-res). A crude illustration in BASIC is shown after this list.

    For example, the following screenshot of a computer game called Mayhem in Monsterland demonstrates what is possible by "racing the beam". In the intro screen (which uses multi-color bitmap mode), we can clearly see more colors per scanline than the three unique colors and a background color per 8x8 block that are normally possible in this screen mode:

  • And of course, the Commodore 64 has a huge collection of games and applications.
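To give a crude illustration of the racing-the-beam idea: real demos implement raster effects in cycle-exact machine code, and BASIC is far too slow for stable results, but the following hypothetical fragment polls the VIC's raster position register and gives the upper and lower parts of the border different colors (expect heavy flickering):

10 REM CRUDE RACING-THE-BEAM ILLUSTRATION
20 RL = PEEK(53266) : REM CURRENT RASTER LINE (LOW 8 BITS)
30 IF RL < 150 THEN POKE 53280,0 : GOTO 20 : REM UPPER PART: BLACK BORDER
40 POKE 53280,2 : GOTO 20 : REM LOWER PART: RED BORDER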

The Commodore 128 has similar kinds of chips (as a result, it is nearly 100% compatible with the Commodore 64).

It has the following changes and additions:

  • Double the amount of RAM: 128 KiB
  • A second video chip: the VDC chip, that can render 80-column text and higher resolution graphics, but no sprites. To render 80-column output, you need to connect an RGBI cable to a monitor that is capable of displaying 80-column graphics. The VDC chip does not work with a TV screen.
  • A CPU that is twice as fast: the 8502, which is entirely backwards compatible with the 6510. However, if you use the VIC chip for rendering graphics (40-column mode), the CPU still runs at half its speed, which is the same as an ordinary 6510. In 80-column mode or when the screen is disabled, it runs at twice the speed of a 6510.
  • A second CPU: the Zilog Z80. This CPU is used after booting the machine in CP/M mode from the CP/M boot disk.
  • An improved BASIC interpreter that supports many more features: Commodore BASIC v7.0

Using the Commodore machines


To conveniently use the Commodore machines in 2023, I made a couple of repairs and ordered new peripherals.

Power supplies



I bought new power supplies. I learned that it is not safe to use the original Commodore 64 power supply as it gets older -- it may damage your motherboard and chips:

In a nutshell, the voltage regulator on the 5 volt DC output tends to fail in such a way that it lets a voltage spike go straight to your C64 motherboard, frying the precious chips.

Fortunately, modern replacement power supplies exist. I bought one from c64lover.com that seems to work just fine.

I also bought a replacement power supply for the Commodore 128 from Keelog. The original Commodore 128 power supply is more robust than the Commodore 64 power supply, but I still want some extra safety.

Cassette drive



As I have already explained, my Commodore 64 breadbin model only included a cassette drive. I still have that cassette drive (as shown in the picture above), but after obtaining a 1541 disk drive, I never used it again.

Two years ago, I ordered an SD2IEC that included a bonus cassette with a game: Hi-Score. I wanted to try the game, but it turned out there was a problem with the cassette drive -- it seemed to spin irregularly.

After a brief investigation, I learned that the drive belt was in bad condition. I ordered a replacement belt from eBay. Installing it was easy, and the game works great:


Disk drives



I have two disk drives. As I have already explained, I have a 1541 drive that I bought at a garage sale for my Commodore 64. The pictures above show the exterior and interior of the disk drive.

The disk drive still works, but I had a few subtle problems with running modern demos that concurrently load data while the demo is playing. Such demos would sometimes fail (crash, or the sound would run out of sync with the rest of the demo) because of timing problems.

I have cleaned the disk drive head with some alcohol and that seemed to improve the situation.


I also have a 1571 disk drive that came with the Commodore 128. The 1571 is a more advanced disk drive that is largely backwards compatible with the 1541. The pictures above show the exterior and interior of the drive.

In addition to ordinary Commodore 64 disks, it can also read both sides of a floppy disk without flipping it and use disks that are formatted as double-sided. It can also operate in MFM mode to read CP/M floppy disks.

My 1571 disk drive still seems to work fine. The main reason I did not have to clean it is that I have not used it as much as the 1541 disk drive.

The 1541 and 1571 disk drives are interesting devices. They are computer systems themselves -- each of them contains two embedded machines each having their own 6502 CPUs. One sub system is responsible for managing filesystem operations and the communication with the main computer, while the other sub system is used for controlling the drive.

The 1541 disk drive contains 2 KiB of RAM and runs its own software from a ROM chip that provides the Commodore Disk Operating System.

Technically, using a disk drive on an 8-bit Commodore machine is the same as two computer systems communicating with each other over Commodore's proprietary serial interface (the IEC).

Monitor



I also have a monitor for the Commodore 128 machine: the Commodore 1901 that is capable of displaying graphics in 40 and 80 column modes. It has an RGBI socket for 80-column graphics and RCA sockets for 40-column graphics. I need to use a switch on the front panel to switch between the 40 and 80 column graphics modes. In the picture shown above, I have switched the monitor to 80-column mode.

The monitor still works fine, but in 2019 a capacitor burned out, causing smoke to come out of the monitor, which was a scary experience.

Fortunately, the monitor was not irreparably damaged and the Home Computer Museum managed to replace the broken capacitors. After it was returned to me, it seemed to work just fine again.

In 1992, besides "The Very First" (a programming tutorial) and CP/M, I did not have any software for the Commodore 128. In recent years, I have downloaded a few interesting applications for it, such as a Tetris game that works in 80-column mode:


Joysticks


As already shown in earlier pictures, I am using The Arcade joysticks produced by Suzo International. I have four of them.

Unfortunately, three of them did not work properly anymore because their cables were damaged. I have managed to have them repaired by using this cable from the Amiga shop as a replacement.

Mouse


The Commodore 128 also came with a 1351 mouse (a.k.a. tank mouse), but it was lost. I never used a mouse much, except for GEOS, a graphical operating system.

To get that GEOS experience back, I first bought an adapter device that allows you to use a PS/2 mouse as a 1351 mouse. Later, I found the same 1351 mouse model on eBay:


SD2IEC


I have also been looking into more convenient ways to use software that I have downloaded from the Internet. Transferring downloaded disk images from and to physical floppy disks is possible, but quite inconvenient.

The SD2IEC is a nice modern replacement for a 1541 disk drive -- it is cheap, it can be attached to the IEC port and all high-level disk commands seem to be compatible. Physically, it also looks quite nice:


As can be seen in the picture above, it looks very similar to an ordinary 1541 disk drive.

Unfortunately, low-level disk-drive operations are not supported -- some applications, such as demos, need to carry out low-level operations for fast loading.

Nonetheless, the SD2IEC is still a great solution, because there are plenty of applications and games that do not require any fast loading capabilities.

1541 Ultimate II cartridge


After happily using the SD2IEC for a while, I wanted better compatibility with the 1541 disk drive. For example, many modern demos are not compatible, and I do not have enough physical floppy disks to store these demos on.

As I have already explained, I ordered the 1541 Ultimate II cartridge this year, but it took a while before it could be delivered.

The 1541 Ultimate II cartridge is an impressive device -- it can be attached to the IEC port and the tape port (by using a tape adapter) and provides many interesting features, such as:

  • It can emulate two 1541 disk drives in a cycle-exact way
  • It offers two emulated SID chips
  • It can load cartridge images
  • It simulates disk drive sounds
  • You can attach up to two USB storage devices. You can load disk images and tape images, but you can also address files from the file system directly.
  • It has an Ethernet adapter.


The above two pictures demonstrate how the cartridge works and how it is attached to the Commodore 64.

I am very happy that I was able to run many modern demos developed by the demoscene community, such as Comaland by Oxyron and Next Level by Performers, on a real Commodore 64 without using physical floppy disks:


UPDATE June 2024: I learned that it is also possible to use the 1541 Ultimate II cartridge in CP/M mode on the Commodore 128. Previously, I had already noticed that it is possible to boot a CP/M disk image on a Commodore 128. However, in CP/M mode, the computer freezes when you press the menu button on the cartridge, making it impossible for me to switch disk images.

To cope with this problem, it also seems to be possible to control the cartridge with a special piece of software called CPMUTools. By enabling the command interface and the RAM extension (with 16 MiB of RAM), it is possible to mount disk images by using a CP/M program called UMOUNT and to integrate with the operating system configuration (such as the clock) with a tool called UCONFIG.

To conveniently use CP/M with the 1541 Ultimate cartridge, I enable a secondary simulated 1581 disk drive (drive B) to which I mount the CPMUTools disk. In CP/M, I can start the disk utility as follows:

B:
UMOUNT

which shows the following program that allows me to pick disk images:


By using CPMUTools, it has become possible for me to try out CP/M software on a real Commodore 128, such as Turbo Pascal, something that I have not been able to do in the last thirty years:


ZoomFloppy


Sometimes I also want to transfer data from my PC to physical floppy disks and vice versa.

For example, a couple of years ago, I wanted to make a backup of the programs that I wrote when I was young.

In 2014, I ordered a ZoomFloppy device to make this possible.

ZoomFloppy is a device that offers an IEC socket to which a Commodore disk drive can be connected. As I have already explained, Commodore disk drives are self-contained computers. As a result, connecting to the disk drive directly suffices.

The ZoomFloppy device can be connected to a PC through the USB port:


The above picture shows how my 1541 disk drive is connected to my ThinkPad laptop by using the ZoomFloppy device. If you look carefully at the screen, you can see that I have requested an overview of the contents of the disk that is currently in the drive.

I use OpenCBM on my Linux machine to carry out disk operations. Although graphical shells exist (for example, the OpenCBM project provides a graphical shell for Windows called cbm4wingui), I have been using command-line instructions. They may look scary, but I learned them quite quickly.

Here are some command-line instructions that I use frequently:

Request the status of the disk drive:

$ cbmctrl status 8
73,speeddos 2.7 1541,00,00

Format a floppy disk (with label: mydisk and id: aa):

$ cbmformat 8 "mydisk,aa"

Transfer a disk image's contents (mydisk.d64) to the floppy disk in the drive:

$ d64copy mydisk.d64 8

Make a D64 disk image (mydisk.d64) from the disk in the drive:

$ d64copy 8 mydisk.d64

Request and display the directory contents of a floppy:

$ cbmctrl dir 8

Transfer a file (myfile) from floppy disk to PC:

$ cbmcopy --read 8 myfile

Transfer a file (myfile) from PC to floppy disk:

$ cbmcopy --write 8 myfile

Conclusion


In this blog post, I have explained how I have been using my old 8-bit Commodore 64 and 128 machines in 2023. I made some repairs and ordered some replacement peripherals. With these new peripherals, I can conveniently run software that I have downloaded from the Internet.

Thursday, December 21, 2023

On reading research papers and maintaining knowledge

Ten years ago, I obtained my PhD degree and made my (somewhat gradual) transition from academia to industry. Despite the fact that I made this transition a long time ago, I still often get questions from people who are considering doing a PhD.

Most of the discussions that I typically have with such people are about writing -- I have already explained plenty about writing in the past, including a recommendation to start a blog so that writing becomes a habit. Having a blog allows you to break up your work into manageable pieces and build up an audience for your work.

Recently, I have been elaborately reorganizing files on my hard drive, a tedious task that I often do at the end of the year. This year, I have also been restructuring my private collection of research papers.

Reading research papers became a habit while working on my master's thesis and doing my PhD. Although I left academia a long time ago, I have retained the habit, even though the number of papers and articles that I read today is much lower than in my PhD days. I no longer need to study research works much, but I still absorb existing knowledge and put things into context whenever I intend to do something new, for example for my blog posts or software projects.

In 2020, during the first year of the COVID pandemic, my interest in research papers increased somewhat, because I had to revise some of the algorithm implementations in the Dynamic Disnix framework that were based on work done by other researchers. Fortunately, the ACM temporarily opened their entire digital library to the public for free, so I could get access to quite a few interesting papers without having to pay.

In addition to writing, reading in academic research is also very important, for the following reasons:

  • To expand your knowledge about your research domain.
  • To put your own work into context. If you want to publish a paper about your work, having a cool idea is not enough -- you have to explain what your research contributes: what the innovation is. As a result, you need to relate to earlier work and (optionally) to studies that motivate the relevance of your work. Furthermore, you cannot just take credit for work that has already been done by others. As a consequence, you need to very carefully investigate what is out there.
  • You may have to peer review papers for acceptance in conference proceedings and journals.

Reading papers is not an easy job -- it often takes me quite a bit of time and dedication to fully grasp a paper.

Moreover, when studying a research paper, you may also have to dive into related work (by examining a paper's references, and the references of these) to fully get an understanding. You may have to dive several levels deep to gain enough understanding, which is not a straightforward job.

In this blog post, I want to share my personal experiences with reading papers and maintaining knowledge.

My personal history with reading


I have an interesting relationship with reading. Already at young age, I used to study programming books to expand my programming knowledge.

With only limited education and knowledge, I was very practically minded -- I would relentlessly study books and magazines to figure out how to get things done, but as soon as I figured out enough to get something done I stopped reading, something that I consider a huge drawback of the younger version of me.

For example, I still vividly remember how I used to program 2D side scroller games for the Commodore Amiga 500 using the AMOS BASIC programming language. I figured out many basic concepts by reading books, magazines and the help pages, such as how to load IFF/ILBM pictures as backgrounds, load ProTracker modules as background music, using blitter objects for moving actors, using a double buffer to smoothly draw graphics, side scrolling, responding to joystick input etc.

Although I have managed to make somewhat playable games by figuring out these concepts, the games were often plagued by bugs and very slow performance. A big reason that contributed to these failures is because I stopped reading after mastering the basics.

For example, to improve performance, I should have disabled the autoback feature that automatically swaps the physical and logical screens on every drawing command and do the screen swap manually after all drawing instructions were completed. I knew that using a double screen buffer would take graphics glitches away, but I never bothered to study the concepts behind it.

As I grew older and entered middle school, I became more critical of myself. For example, I learned that it is essential to properly cite where you get your knowledge from rather than "pretending" that you are pulling something out of your own hat. :)

Fast forwarding to my studies at the university: reading papers from the academic literature became something that I had to commonly do. For example, I still remember the real-time systems and software architecture courses.

The end goal of the former course was to write your own research paper about a subject in the real-time systems domain. In this course, I learned, in addition to real-time system concepts, how academic research works: writing a research paper is not just about writing down a cool idea (with references that you used as an inspiration), but you also need to put your work into context -- a research paper is typically based on work already done by others, and your paper typically serves as an ingredient that can be picked up by other researchers.

In the latter course, I had to read quite a few papers in the software architecture domain, write summaries and discuss my findings with other students. Here, I learned that reading papers is anything but a trivial job:

  • Papers are often densely written. As a result, I get overwhelmed with information and it requires quite a bit of energy from my side to consume all of it.
  • The formatting of many papers is not always helpful. Papers are typically written for print, not for reading from a screen. Also, the formatting of papers is not always good for displaying code fragments or diagrams.
  • There is often quite a bit of unexplained jargon in a paper. To get a better understanding you need to dive deeper into the literature, such as also studying the references of the papers or books that are related to the subject.
  • Sometimes authors frequently use multi-syllable words.
  • It is also not uncommon for authors to use logic and formulas to formalize concepts and mathematically prove their contributions. Although formalization helps to do this, reading formulas is often a tough job for me -- there is typically a huge load of information and Greek symbols. These symbols, IMO, do not always make it easy to relate them to the concepts they represent.
  • Authors often tend to elaborately stress the caveats of their contributions, making things hard to read.

Despite having read many papers in the last 16 years and having gotten better at it, reading still remains a tough job for the above reasons.

In the final year of my master's, I had to do a literature survey before starting the work on my master's thesis. The first time I heard about this, I felt scared, because of my past experiences with reading papers.

Fortunately, my former supervisor, Eelco Visser, was very practically minded about the process -- he wanted us to first work on practical aspects of his research projects, such as WebDSL (a domain-specific language for developing web applications with a rich data model) and related tools, such as Stratego/XT and the Nix package manager.

After mastering the practical concepts of these projects, doing a literature survey felt much easier -- instinctively, while using these tools in practice, I became more interested in learning about the concepts behind them. Many of their underlying concepts were described in research papers published by my colleagues in the same research department. While studying these papers, I also got more motivated/interested in diving deeper into the academic literature by studying the papers' references and searching for related subjects in the digital libraries of the ACM, IEEE, USENIX, Springer, Elsevier etc.

During my PhD, reading research papers became even more important. In the first six months of my PhD, I had a very good start. I published a paper about an important aspect of my master's thesis: atomic upgrading of the static parts of a distributed system, and a paper about the overall objective of the research project that I was in. I have to admit that, despite having these papers instantly accepted, I still had the wrong mindset -- I was basically just "selling my cool ideas" and finding support in the academic literature, rather than critically studying what is out there.

For my third paper, which covers a new implementation of Disnix (the third major revision to be precise), I learned an important/hard lesson. The first version of the paper got badly rejected by the program committee, because of the "advertising cool ideas" mindset that I always used to have -- I failed to study the academic literature well enough to explain what the innovation of my paper is in comparison to other deployment solutions. As a consequence, I got some very hard criticisms from the reviewers.

Fortunately, they gave me good feedback. For example, I had to study papers from the Working Conference on Component Deployment. I have addressed their criticisms and the revised paper got accepted. I learned what I had to do in the future -- it is a requirement to also study the academic literature well enough to explain what your contribution is and demonstrate its relevance.

This rejection also changed my attitude how I deal with research papers. Previously, after my work for a paper was done, I would typically discard the artifacts that I no longer needed, including the papers that I used as a reference. After this rejection, I learned that I need to build my own personal knowledge base so that for future work, I could always relate to the things that I have read previously.

Reading research papers


I have already explained that, for various reasons, reading research papers is anything but an easy job. For some papers, in particular the ones in my former research domain (software deployment), reading got easier as I grew more familiar with the research domain.

Nonetheless, I still sometimes find reading papers challenging. For example, studying algorithmic papers is extremely hard IMO. In 2021, I had to revise my implementations of approximation solutions for the multi-way cut and graph coloring problems in the Dynamic Disnix framework. I had to re-read the corresponding papers. Because they were so hard to grasp, I wrote a blog post that explains how I practically applied them.

To fully grasp a paper, reading it a single time is often not enough. In particular, I had to read the algorithmic papers that I mentioned earlier many times.

Interestingly enough, I learned that reading papers is also a subject of study. A couple of years ago I discovered a paper titled: "How to Read a Paper" that explains a strategy for reading research papers using a three-pass approach:

  • First pass: bird's eye view. Study the title, abstract, introduction, headings, conclusions. A single pass is often already enough to decide whether a paper is relevant to read or not.
  • Second pass: study the paper in greater detail, but ignore details such as mathematical proofs.
  • Third pass: read everything in detail by attempting to virtually re-implement the paper.

After discovering this paper, I have also been using the three-pass approach. I have studied most of the papers in my collection in two passes, and some of them in detail in three passes.

Another thing that I discovered by accident is that, to extensively study literature, a continuous approach works better for me (e.g. reserving certain time slots in a week) than reserving longer periods of time that consist only of reading papers.

Also, regularly discussing papers with your colleagues helps. During my PhD days, I did not do it that often (we had no formal "process" for it) but there were several good sessions, such as a program committee simulation organized by Arie van Deursen, head of our research group.

In this simulation, we organized a program committee meeting of the ICSE conference in which the members of the department represented program committee members. We discussed submitted papers and voted for acceptance or rejection. Moreover, we also had to leave the room if there was a conflict of interest.

I also learned that Edsger Dijkstra, a famous Dutch computer scientist, organized the ETAC (Eindhoven Tuesday Afternoon Club) and ATAC (Austin Tuesday Afternoon Club) in which amongst other activities, reading and discussing research papers was a recurring activity.

Building up your personal knowledge base


As I have explained earlier, I used to throw away my downloaded papers when the work for a paper was done, but I changed that habit after that hard paper rejection.

There are many good reasons to keep and organize the papers that you have read, even if they do not seem to be directly relevant to your work:

  • As I have already explained, in addition to reading a single paper and writing your own research papers, you need to maintain your knowledge base so that you can put your own work into context.
  • It is not always easy to obtain papers. Many of them are behind a paywall. Without a subscription you cannot access them, so once you have obtained them it is better to think twice before you discard them. Fortunately, open access is becoming more common, but it still remains a challenge. Arie van Deursen has written a variety of blog posts about open access.
  • Although many papers are challenging to read, I have also started to appreciate certain research papers.

My own personal paper collection has evolved in an interesting way. In the beginning, I just used to put any paper that I obtained into a single folder called papers, until it grew so large that I had to start classifying them.

Initially, there was a one-level folder structure, consisting of categories such as: deployment, operating systems, programming languages, DSL engineering etc. At some point, the content of some of these folders grew large enough and I introduced a second level directory structure.

For example, the sub folder for my former research domain, software deployment (the process that consists of all activities to make a software system available for use), contains the largest number of papers. Currently, I have collected 168 deployment papers that I have divided into the following sub categories (a sketch of the resulting directory layout follows the list):

  • Deployment models. Papers whose main contribution is a means to model various deployment aspects of a system, such as the structure of a system and deployment activities.
  • Deployment planning. Papers whose main contribution consists of algorithms that decide on a suitable/optimal deployment architecture based on functional and non-functional requirements of a system.
  • Empirical studies. Papers containing empirical studies about deployment in practice.
  • Execution. Papers in which the main contribution is executing deployment activities. I have also sub categorized this folder into technology-specific solutions (e.g. a solution is specific to a programming language, such as Java or component technology, such as CORBA) and generic solutions.
  • Practice reports. Papers that report on the use of deployment technologies in practice.
  • Surveys. Papers that analyse the literature and draw conclusions from it.
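As a sketch of the directory layout described above (the exact folder names are hypothetical and merely illustrate the structure):

papers/
    deployment/
        models/
        planning/
        empirical-studies/
        execution/
            technology-specific/
            generic/
        practice-reports/
        surveys/
    operating-systems/
    programming-languages/
    dsl-engineering/
    ...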

A hierarchical directory structure is not perfect for organizing papers -- for many papers there is an overlap between multiple sub domains in the software engineering domain. For example, deployment may also be related to a certain component technology, in service of optimizing the architecture of a system, related to other configuration management activities (versioning, status accounting, monitoring etc.) or an ingredient in integration testing. If there is an overlap, I typically look at the strongest kind of contribution that the paper makes.

For example, in the deployment domain, Eelco Dolstra wrote a paper about maximal laziness, an important implementation aspect of the Nix expression language. The Nix package manager is a deployment solution, but the contribution of the paper is not deployment, but making the implementation of a purely functional DSL efficient. As a result, I have categorized the paper under DSL engineering rather than deployment.

The organization of my paper collection is always in motion. Sometimes I gain new insights causing me to adjust the classifications, or when a collection of papers for a sub domain grows, I may introduce a second-level classification.

Some practical tips to get familiar with a certain research subject


So what is my recommended way to get familiar with a certain research subject in the software engineering domain?

I would start by doing something practical first. In the software engineering research domain, the goal is often to develop or examine tools. Start by using these tools first and see if you can contribute to them from a practical point of view -- for example, by improving features, fixing bugs etc.

As soon as I have mastered the practical aspects, I typically already get motivated to dive into the underlying concepts by studying the papers that cover them. Then I apply the three-pass reading strategy and eventually study the references of the papers to get a better understanding.

After my orientation phase has finished, the next thing I would typically look at is the conferences/venues that are directly related to the subject. For software deployment, for example, there used to be only one subject-related conference: the Working Conference on Component Deployment (which unfortunately was no longer organized after 2005). It is typically a good idea to examine all the papers of the related conferences/venues, at least using a first-pass approach.

Then a potential next step is to search for "early defining papers" in that research area. In my experience, many research papers are improving on concepts pioneered by these papers, so it is IMO a good thing to know where it all started.

For example, in the software deployment domain the paper: "A Characterization Framework for Software Deployment Technologies" is such an early defining paper, covering a deployment solution called "The Software Dock". The paper comes with a definition for the term: "software deployment" that is considered the canonical definition in academic research.

Alternatively, the paper: "Software Deployment, Past, Present and Future" is a more recent yet defining paper covering newer deployment technologies and also offers its own definition of the term software deployment.

For unknown reasons, I always seem to like early defining papers in various software engineering domains. These are some of my recommendations of early defining papers in other software engineering domains:


After studying all these kinds of papers, your knowledge level should already be decent enough to find your way to study the remaining papers that are out there.

Literature surveys


In addition to research papers that need to put themselves into context, extensive literature surveys can also be quite valuable to the research community. During my PhD, I learned that it is also possible to publish a paper about a literature survey.

For example, some of my former colleagues did an extensive and systematic literature survey in the dynamic analysis domain. In addition to the results, the authors also explain their methodology, which consists of searching on keywords, looking for appropriate conferences and journals, and following the papers' references. From these results, they derived an attribute framework and classified all the papers into it.

I have kept the paper as a reference for myself because I like the methodology, even though I am not so interested in dynamic analysis or program comprehension from a research perspective.

Literature surveys also exist in my former research domain, such as a survey of deployment solutions for distributed systems.

Conclusions


In this blog post, I have shared my experiences with reading papers and maintaining knowledge. In research, this is quite important and you need to take it seriously.

Fortunately, during my PhD I have learned a lot. In summary, my recommendations are:

  • Archive your papers and build up a personal knowledge base
  • Start with something practical
  • Follow paper references
  • Study early defining papers
  • Find people to discuss with
  • Study continuously in small steps

Although I never did an extensive literature survey in the software deployment domain (it is not needed for submitting papers that contribute new techniques), I could probably even write a paper about the software deployment literature myself. The only problem is that I am not quite up to date with the work that has been published in the last few years, because I no longer have access to these digital libraries.

Moreover, I also need to find the time and energy to do it, if I really want to :)

Tuesday, July 4, 2023

Using a site map for generating dynamic menus in web applications

In the last few weeks, I have been playing around with a couple of old and obsolete web applications that I have developed in the past with my own web framework. Much of the functionality that these custom web applications offer is facilitated by my framework, but sometimes these web applications also contain significant chunks of custom code.

One of the more interesting features provided by custom code is folding menus (also known as dropdown and dropright menus etc.), which provide a similar experience to the Windows start menu. My guess is that because the start menu experience is so familiar to many users, it remains a frequently used feature in many web applications today.

When I still used to actively develop my web framework and many custom web applications (over ten years ago), implementing such a feature heavily relied on JavaScript code. For example, I used the onmouseover attribute on a hyperlink to invoke a JavaScript function that unfolds a panel and the onmouseout attribute to fold a panel again. The onmouseover event handler injects a menu section into the DOM using CSS absolute positioning to put it in the right position on the screen.
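A minimal sketch of that idea (with hypothetical element IDs, page names and a single hard-coded sub link; not the framework's actual code) could look like this:

<a href="page1.html"
   onmouseover="showSubMenu('submenu-page1', this)"
   onmouseout="hideSubMenu('submenu-page1')">Page 1</a>

<script>
function showSubMenu(id, link) {
    /* Create a panel with links to the sub pages and position
       it right under the hyperlink with absolute positioning */
    var panel = document.createElement("div");
    panel.id = id;
    panel.innerHTML = '<a href="page11.html">Subpage 1.1</a>';
    panel.style.position = "absolute";
    panel.style.left = link.offsetLeft + "px";
    panel.style.top = (link.offsetTop + link.offsetHeight) + "px";
    document.body.appendChild(panel);
}

function hideSubMenu(id) {
    /* Remove the panel again when the pointer leaves the link */
    var panel = document.getElementById(id);
    if (panel) {
        document.body.removeChild(panel);
    }
}
</script>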

I could not use the standard menu rendering functionality of my layout framework, because it deliberately does not rely on the usage of JavaScript. As a consequence, I had to write a custom menu renderer for a web application that requires dynamic menu functionality.

Despite the fact that folding menus are popular and I have implemented them as custom code, I never made it a feature of my layout framework for the following two reasons:

  • I want web applications built around my framework to be as declarative as possible -- this means that I want to concisely express as much as possible what I want to render (a paragraph, an image, a button etc. -- this is something HTML mostly does), rather than specifying in detail how to do it (in JavaScript code). As a result, the usage of JavaScript code in my framework is minimized and non-essential.

    All functionality of the web applications that I developed with my framework must be accessible without JavaScript as much as possible.
  • Another property that I appreciate of web technology is the ability to degrade gracefully: the most basic and primary purpose of web applications is to provide information as text.

    Because this property is so important, many non-textual elements, such as an image (img element), provide fallbacks (such as an alt attribute, shown in the small example after this list) that simply render alternative text when graphics capabilities are absent. As a result, it is possible to use more primitive browsers (such as text-oriented browsers) or alternative applications, such as a text-to-speech system, to consume the information.

    When essential functionality is only exposed as JavaScript code (which more primitive browsers cannot interpret), this property is lost.
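For example, the following image tag (the file name and alt text are made up) still conveys its meaning when graphics cannot be displayed, because the alternative text is rendered instead:

<img src="logo.png" alt="Logo of the example web application">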

Recently, I have discovered that there is a way to implement folding menus that does not rely on the usage of JavaScript.

Moreover, there is also another kind of dynamic menu that has become universally accepted -- the mobile navigation menu (or hamburger menu) making navigation convenient on smaller screens, such as mobile devices.

Because these two types of dynamic menus have become so common, I want to facilitate the implementation of such dynamic menus in my layout framework.

I have found an interesting way to make such use cases possible while retaining the ability to render text and degrade gracefully -- we can use an HTML representation of a site map consisting of a root hyperlink and a nested unordered list as a basis ingredient.

In this blog post, I will explain how implementing these use cases is possible.

The site map feature


As already explained, the basis for implementing these dynamic menus is a textual representation of a site map. Generating site maps is a feature that is already supported by the layout framework:


The above screenshot shows an example page that renders a site map of the entire example web application. In HTML, the site map portion has the following structure:

<a href="/examples/simple/index.php">Home</a>

<ul>
    <li>
        <a href="/examples/simple/index.php/home">Home</a>
    </li>
    <li>
        <a href="/examples/simple/index.php/page1">Page 1</a>
        <ul>
            <li>
                <a href="/examples/simple/index.php/page1/page11">Subpage 1.1</a>
            </li>
            <li>
                <a href="/examples/simple/index.php/page1/page12">Subpage 1.2</a>
            </li>
        </ul>
    </li>
    <li>
        <a href="/examples/simple/index.php/page2">Page 2</a>
        ...
    </li>
    ...
</ul>

The site map, shown in the screenshot and code fragment above, consists of three kinds of links:

  • On top, the root link is displayed that brings the user to the entry page of the web application.
  • The unordered list displays links to all the pages visible in the main menu section that are reachable from the entry page.
  • The nested unordered list displays links to all the pages visible in the sub menu section that are reachable from the selected sub page in the main menu.

With a few simple modifications to my layout framework, I can use a site map as an alternative type of menu section:

  • I have extended the site map generator with the ability to mark selected sub pages as active, similar to links in menu sections. By adding the active CSS class to a hyperlink, a link gets marked as active (see the example after this list).
  • I have introduced a SiteMapSection to the layout framework that can be used as a replacement for a MenuSection. A MenuSection displays reachable pages as hyperlinks from a selected page on one level in the page hierarchy, whereas a SiteMapSection renders the selected page as a root link and all its visible sub pages and transitive sub pages.
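For example, an active link in the generated site map simply carries that CSS class (the URL here is taken from the site map example shown earlier):

<a href="/examples/simple/index.php/page1" class="active">Page 1</a>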

With the following model of a layout:

$application = new Application(
    /* Title */
    "Site map menu website",

    /* CSS stylesheets */
    array("default.css"),

    /* Sections */
    array(
        "header" => new StaticSection("header.php"),
        "menu" => new SiteMapSection(0),
        "contents" => new ContentsSection(true)
    ),

    ...
);

We may render an application with pages that have the following look:


As can be seen in the above screenshot and code fragment, the application layout defines three kinds of sections: a header (a static section displaying a logo), a menu (displaying links to sub pages) and a contents section that displays the content based on the sub page that was selected by the user (in the menu or by opening a URL).

The menu section is displayed as a site map. This site map will be used as the basis for the implementation of the dynamic menus that I have described earlier in this blog post.

Implementing a folding menu


Turning a site map into a folding menu, by using only HTML and CSS, is a relatively straightforward process. To explain the concepts, I can use the following trivial HTML page as a template:


The above page only contains a root link and nested unordered list representing a site map.

In CSS, we can hide the root link and the nested unordered lists by default with the following rules:

/* This rule hides the root link */
body > a
{
    display: none;
}

/* This rule hides nested unordered lists */
ul li ul
{
    display: none;
}

resulting in the following page:


With the following rule, we can make a nested unordered list visible when a user hovers over the surrounding list item:

ul li:hover ul
{
    display: block;
}

Resulting in a web page that behaves as follows:


As can be seen, the unordered list that is placed under the Page 2 link became visible because the user hovers over the surrounding list item.

I can make the menu a bit more fancy if I want to. For example, I can remove the bullet points with the following CSS rule:

ul
{
    list-style-type: none;
    margin: 0;
    padding: 0;
}

I can add borders around the list items to make them appear as buttons:

ul li
{
    border-style: solid;
    border-width: 1px;
    padding: 0.5em;
}

I can horizontally align the buttons by adopting a flexbox layout using the row direction property:

ul
{
    display: flex;
    flex-direction: row;
}

I can position the sub menus right under the buttons of the main menu by using a combination of relative and absolute positioning:

ul li
{
    position: relative;
}

ul li ul
{
    position: absolute;
    top: 2.5em;
    left: 0;
}

Resulting in a menu with the following behaviour:


As can be seen, the trivial example application provides a usable folding menu thanks to the CSS rules that I have described.

In my example application bundled with the layout framework, I have applied all the rules shown above and combined them with the already existing CSS rules, resulting in a web application that behaves as follows:


Displaying a mobile navigation menu


As explained in the introduction, another type of dynamic menu that has been universally accepted is the mobile navigation menu (also known as a hamburger menu). Implementing such a menu, despite its popularity, is challenging IMHO.

Although there seem to be ways to implement such a menu without JavaScript (such as this example using a checkbox), the only proper way to do it IMO is still to use JavaScript. Some browsers have trouble with such HTML+CSS-only implementations, and they require the use of an HTML element (an input element) that is not designed for that purpose.

In my example web application, I have implemented a custom JavaScript module that dynamically transforms a site map (which may already be displayed as a folding menu) into a mobile navigation menu by performing the following steps (a simplified sketch follows the list):

  • We query the root link of the site map and transform it into a mobile navigation menu button by replacing the text of the root link with an icon image. Clicking on the menu button makes the navigation menu visible or invisible.
  • The first level sub menu becomes visible by adding the CSS class: navmenu_active to the unordered list.
  • The menu button becomes active by adding the CSS class: navmenu_icon_active to the image of the root link.
  • Nested menus can be folded or unfolded. The JavaScript code adds a fold icon to each list item that embeds a nested unordered list.
  • Clicking on the fold icon makes the nested unordered list visible or invisible.
  • A nested unordered list becomes visible by adding the CSS class: navsubmenu_active to the unordered list.
  • A fold button becomes active by adding the CSS class: navmenu_unfold_active to the fold icon image.
It was quite a challenge to implement this JavaScript module, but it does the trick. Moreover, the basis remains a simple HTML-rendered site map that can still be used in text-oriented browsers.
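
Roughly, such a transformation could be implemented along the following lines (a simplified sketch: only the CSS class names come from the list above; the selectors, icon paths and function name are assumptions chosen to match the trivial template shown earlier):

function createMobileNavigationMenu() {
    const rootLink = document.querySelector("body > a"); // root link of the site map
    const menu = document.querySelector("body > ul"); // first level sub menu

    // Turn the root link into a menu button by replacing its text with an icon image
    const icon = document.createElement("img");
    icon.src = "image/menu.svg";
    icon.alt = "Menu";
    rootLink.textContent = "";
    rootLink.appendChild(icon);

    // Clicking on the menu button makes the navigation menu visible or invisible
    rootLink.addEventListener("click", function(event) {
        event.preventDefault();
        menu.classList.toggle("navmenu_active");
        icon.classList.toggle("navmenu_icon_active");
    });

    // Add a fold icon to each list item that embeds a nested unordered list
    menu.querySelectorAll("li > ul").forEach(function(subMenu) {
        const foldIcon = document.createElement("img");
        foldIcon.src = "image/unfold.svg";
        foldIcon.alt = "Unfold";
        subMenu.parentNode.insertBefore(foldIcon, subMenu);

        // Clicking on the fold icon makes the nested unordered list visible or invisible
        foldIcon.addEventListener("click", function() {
            subMenu.classList.toggle("navsubmenu_active");
            foldIcon.classList.toggle("navmenu_unfold_active");
        });
    });
}

document.addEventListener("DOMContentLoaded", createMobileNavigationMenu);

Because the transformation is applied dynamically, the underlying markup remains a plain site map when JavaScript is disabled.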

The result of using this JavaScript module is the following navigation menu that has unfoldable sub menus:


Concluding remarks


In this blog post, I have explained a new feature addition to my layout framework: the SiteMapSection that can be used to render menu sections as site maps. Site maps can be used as a basis to implement dynamic menus, such as folding menus and mobile navigation menus.

The benefit of using a site map as the basic ingredient is that a web page still remains useful in its most primitive form: text. As a result, I retain two important requirements of my web framework: declarativity (because a nested unordered list concisely describes what I want) and the ability to degrade gracefully (because the page stays useful when it is rendered as text).

Developing folding/navigation menus in the way I described is not something new. There are plenty of examples on the web that show how such features can be developed, such as these W3Schools dropdown menu and mobile navigation menu examples.

Compared to many existing solutions, my approach is somewhat purist -- I do not abuse HTML elements (such as a checkbox), and I do not rely on helper elements (such as divs and spans) or helper CSS classes/ids. The only exception is to support dynamic features that are not part of HTML, such as "active links" and the folding/unfolding buttons of the mobile navigation menu.

Although it has become possible to use my framework to implement mobile navigation menus, I still find it sad that I have to rely on JavaScript code to do it properly.

Folding menus, despite their popularity, are nice, but basic one-level menus (that only display a collection of links/buttons to sub pages) are in my opinion fine too, and much simpler -- the same implementation is usable on desktops, mobile devices and text-oriented browsers.

With folding menus, I have to test multiple resolutions and devices to check whether they provide the right user experience. Moreover, folding menus are useless on mobile devices -- you cannot trigger a hover event without also generating a click event, making it impossible to unfold a sub menu and peek at what is inside.

When an optimal mobile device experience is also desired, you need to implement an alternative menu as well. This requirement makes the implementation of a web application significantly more complex.

Availability


The SiteMapSection has become a new feature of the Java, PHP and JavaScript implementations of my layout framework and can be obtained from my GitHub page.

In addition, I have added a sitemapmenu example web application that displays a site map section in multiple ways (a sketch of the corresponding media queries follows the list):

  • In text mode, it is just displayed as a (textual) site map.
  • In graphics mode, when the screen width is 1024 pixels or greater, it displays a horizontal folding menu.
  • In graphics mode, when the screen width is smaller than 1024 pixels and JavaScript is disabled, it displays a vertical folding menu.
  • In graphics mode, when the screen width is smaller than 1024 pixels and JavaScript is enabled, it displays a mobile navigation menu.
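
The screen width distinction could, for example, be expressed with CSS media queries along the following lines (a sketch: the selectors are assumptions chosen to match the trivial template shown earlier; only the 1024 pixel breakpoint comes from the list above):

/* Wide screens: display the site map as a horizontal folding menu */
@media (min-width: 1024px)
{
    body > ul
    {
        flex-direction: row;
    }
}

/* Narrow screens: display a vertical folding menu by default;
   when JavaScript is enabled, the site map is transformed into
   a mobile navigation menu instead */
@media (max-width: 1023px)
{
    body > ul
    {
        flex-direction: column;
    }
}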

Friday, December 30, 2022

Blog reflection over 2022

Today, it is my blog's anniversary. As usual, this is a nice opportunity to reflect over the last year.

Eelco Visser


The most shocking event of this year was the unfortunate passing of my former PhD supervisor: Eelco Visser. I still find it hard to believe that he is gone.

Although I left the university quite some time ago, the things I learned while I was employed there (such as having all these nice technical discussions with him) still have a profound impact on me today. Moreover, without his suggestion, this blog would probably not exist.

Because the original purpose of my blog was to augment my research with extra details and practical information, I wrote a blog post with some personal anecdotes about him.

COVID-19 pandemic


In my previous blog reflection, I explained that we were in the third wave of the COVID pandemic, caused by the even more contagious Omicron variant of the COVID-19 virus. Fortunately, it turned out that, despite being more contagious, this variant is less severe than the previous Delta variant.

Several weeks later, the situation got under control and things opened up again. The situation remained pretty stable afterwards. This year, it was possible for me to travel again and to go to physical concerts, which felt a bit weird after staying home for two whole years.

The COVID-19 virus is not gone, but the situation is under control in Western Europe and the United States. There have not been any lockdowns or serious capacity problems in the hospitals.

When the COVID-19 pandemic started, my employer: Mendix adopted a work-from-home-first culture. By default, people work from home and if they need to go to the office (for example, to collaborate) they need to make a desk reservation.

As of today, I am still working from home most of the time. I typically visit the office only once a week, and I use that time to collaborate with people. On the remaining days, I focus on development work as much as possible.

I have to admit that I like the quietness at home -- not everything can be done at home, but for programming tasks I need to think, and for thinking I need silence. Before the COVID-19 pandemic started, the office was typically very noisy, which sometimes made it difficult for me to focus.

Learning modern JavaScript features


I used to work intensively with JavaScript at my previous employer: Conference Compass, but since I joined Mendix I have mostly been using different kinds of technologies. During my CC days, I was still mostly writing old-fashioned (ES5) JavaScript code, and I still wanted to familiarise myself with modern ES6 features.

One of the challenging aspects of using JavaScript is asynchronous programming -- making sure that the main thread of your JavaScript application never blocks too long (so that it can handle multiple connections or input events) and keeping your code structured.

With old-fashioned ES5 JavaScript code, I had to rely on software abstractions to keep my code structured, but with the addition of Promises/A+ and the async/await concepts to the core of the JavaScript language, this can be done in a much cleaner way without using any custom software abstractions.
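
To illustrate the difference, the fragment below sketches the same task written with an ES5-style callback and with async/await (a made-up file copying example, not code from my framework):

const fs = require("fs");

// ES5 style: each asynchronous step is chained with a nested callback
function copyFileOldStyle(source, destination, callback) {
    fs.readFile(source, function(err, data) {
        if (err) {
            callback(err);
            return;
        }
        fs.writeFile(destination, data, callback);
    });
}

// Modern style: async/await keeps the code flat while it remains non-blocking
async function copyFile(source, destination) {
    const data = await fs.promises.readFile(source);
    await fs.promises.writeFile(destination, data);
}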

In 2014, I wrote a blog post about the problematic synchronous programming concepts in JavaScript and their equivalent asynchronous function abstractions. This year, I wrote a follow-up blog post about the ES6 concepts that I should use (rather than software abstractions).

To motivate myself to learn about ES6 concepts, I needed a practical use case -- I have ported the layout component of my web framework (for which a Java and a PHP version already exist) to JavaScript, using modern ES6 features such as async/await, classes and modules.

An interesting property of the JavaScript version is that it can be used both on the server-side (as a Node.js application) and client-side (directly in the browser by dynamically updating the DOM). The Java and PHP versions only work server-side.

Fun projects


In earlier blog reflections, I had also decided to spend more time on useless fun projects.

In the summer of 2021, when I decided not to do any traveling, I had lots of time left to tinker with all kinds of weird things. One of my hobby projects was to play around with my custom maps for Duke3D and Shadow Warrior that I created while I was still a teenager.

While playing with these maps, I noticed a number of interesting commonalities and differences between Duke3D and Shadow Warrior.

Although both games use the same game engine: the BUILD-engine, their game mechanics are completely different. As an exercise, I ported one of my Duke3D maps to Shadow Warrior and wrote a blog post about the process, including a description of some of their differences in game mechanics.

Although I did the majority of the work already back in 2021, I have found some remaining free time in 2022 to finally finish the project.

Web framework improvements


This year, I have also intensively worked on improving several aspects of my own web framework. My custom web framework is an old project that I started in 2004 and many parts of it have been rewritten several times.

I am not actively working on it anymore, but once in a while I still do some development work, because it is still in use by a couple of web sites, including the web site of my musical society.

One of my goals is to improve the user experience of the musical society web site on mobile devices, such as phones and tablets. This particular area had already been problematic for years. Despite promising all kinds of people that I would fix this, it took me several years to actually take that step. :-)

To improve the user experience for mobile devices, I wanted to convert the layout to a flexbox layout, for which I needed to extend my layout framework because it does not generate nested divs.

I have managed to improve my layout framework to support flexbox layouts. In addition, I have also made many additional improvements. I wrote a blog post with a summary of all my feature changes.

Nix-related work


In 2022, I also did Nix-related work, but I have not written any Nix-related blog posts this year. Moreover, 2022 was also the first time since the end of the pandemic that a physical NixCon was held -- unfortunately, I decided not to attend it.

The fact that I did not write any Nix-related blog posts is quite exceptional. Since 2010, the majority of my blog posts have been Nix-related or about software deployment challenges in general. So far, it has never happened that an entire year went by without any Nix-related blog posts. I think I need to explain a thing or two about what has happened.

This year, it was very difficult for me to find the energy to undertake any major Nix developments. Several things have contributed to that, but the biggest take-away is that I have to find the right balance.

The reason why I got so extremely out of balance is that I do most of my Nix-related work in my spare time. Moreover, my primary motivation to do Nix-related work is because of idealistic reasons -- I still genuinely believe that we can automate the deployment of complex systems in a much better way than the conventional tools that people currently use.

Some of the work for Nix and NixOS is relatively straightforward -- sometimes we need to package new software, sometimes a package or NixOS service needs to be updated, and sometimes broken features need to be fixed or improved. This process is often challenging, but still relatively straightforward.

There are also quite a few major challenges in the Nix project for which no trivial solutions exist. These are problem areas that cannot be solved with quick fixes and require fundamental redesigns. Solving these fundamental problems is quite challenging and typically requires me to dedicate a significant amount of my free time.

Unfortunately, because most of my work is done in my spare time and I cannot multi-task, I can only work on one major problem area at a time.

For example, I am quite happy with my last major development project: the Nix process management framework. It has all the features implemented that I want/need to consistently eat my own dogfood. It is IMHO a pretty decent solution for use cases where most conventional developers would normally use Docker/docker-compose.

Unfortunately, to reach all my objectives I had to pay a huge price -- I published the first implementation of the process management framework in 2019, and all my major objectives were reached in the middle of 2021. As a consequence, I spent nearly two years of my spare time working only on the implementation of this framework, without having the option to switch to something else. For the first six months, I remained motivated, but slowly I ran into motivational problems.

In this two-year period, lots of problems appeared in other projects I used to be involved in. I could not get these projects fixed, because they also ran into fundamental problems requiring major redesigns/revisions. This resulted in a number of problems with members of the Nix community.

As a result, I got the feeling that I had lost control. Moreover, doing any Nix-related work also gave (and to some extent still gives) me a lot of negative energy.

Next year, I intend to return and I will look into addressing my issues. I am thinking about the following steps:

  • Leaving the solution of some major problem areas to others. One such area is NPM package deployments with Nix. node2nix was probably a great tool in combination with older versions of NPM, but its design reached the boundaries of what is possible already years ago.

    As a result, node2nix does not support the new features of NPM and does not solve the package scalability issues in Nixpkgs. It is also not possible to properly support these use cases by implementing "quick fixes". To cope with these major challenges and keep the solution maintainable, a new design is needed.

    I have already explained my ideas on the Discourse mailing list and outlined what such a new design could look like. Fortunately, there are already some good initiatives started to address these challenges.
  • Building prototypes and integrating the ideas into Nixpkgs, rather than starting an independent project/tool that attracts a sub-community.

    I have implemented the Nix process management framework as a prototype with the idea to show how certain concepts work, rather than advertising the project as a new solution.

    My goal is to write an RFC to make sure that these ideas get integrated into the upstream Nixpkgs, so that it can be maintained by the community and everybody can benefit from it.

    The only thing I still need to do is write that RFC. This should probably be one of my top priorities next year.
  • Move certain things out of Nixpkgs. The Nixpkgs project is a huge project with several thousands of packages and services, making it quite a challenge to maintain and implement fundamental changes.

    One of the side effects of its scale is that the Nixpkgs issue tracker is as good as useless. There are thousands of open issues and it is impossible to properly track the status of individual aspects in the Nixpkgs repository.

    Thanks to Nix flakes, which unfortunately is still an experimental feature, we should be able to move certain non-essential things out of Nixpkgs and conveniently deploy them from external repositories. I have some things that I could move out of the Nixpkgs repository when flakes have become a mainstream feature.
  • Better communication about the context in which something is developed. When I was younger, I always used to advertise a new project as the next great thing that everybody should use -- these days, I am more conservative about the state of my projects and I typically try to warn people upfront that something is just a prototype and not yet ready for production use.

Blog posts


In my previous reflection blog posts, I always used to reflect on my overall top 10 of most popular blog posts. There are no serious changes compared to last year, so I will not elaborate on them. The fact that I have not been very active on my blog this year has probably contributed to that.

Concluding remarks


Next year I will look into addressing my issues with Nix development. I hope to return to my software deployment/Nix-related work next year!

The final thing I would like to say is:


HAPPY NEW YEAR!!!