Saturday, August 20, 2022

Porting a Duke3D map to Shadow Warrior


Almost six years ago, I wrote a blog post about Duke Nukem 3D, the underlying BUILD engine and my own total conversion that consists of 22 maps and a variety of interesting customizations.

Between 1997 and 2000, while I was still in middle school, I spent a considerable amount of time developing my own maps and customizations, such as modified monsters. In the process, I learned a great deal about the technical details of the BUILD engine.

In addition to Duke Nukem 3D, the BUILD engine is also used as the basis for many other games, such as TekWar, Witchaven, Blood, and Shadow Warrior.

In my earlier blog post, I also briefly mentioned that in addition to the 22 maps that I created for Duke Nukem 3D, I had also developed one map for Shadow Warrior.

Last year, during my summer holiday, which was still mostly about improvising my spare time because of the COVID-19 pandemic, I did many interesting retro-computing things, such as fixing my old computers. I also played a bit with some of my old BUILD engine game experiments, after many years of inactivity.

I discovered an interesting Shadow Warrior map that attempts to convert the E1L2 map from Duke Nukem 3D. Since both games use the BUILD engine with mostly the same features (Shadow Warrior uses a slightly more advanced version), this map inspired me to port one of my own Duke Nukem 3D maps as well, as an interesting deep dive to compare both games' internal concepts.

Although most of the BUILD engine and editor concepts are the same in both games, their game mechanics are totally different. As a consequence, the porting process turned out to be very challenging.

Another reason that it took me a while to complete the project is that I had to put it on hold on several occasions due to all kinds of obligations. Fortunately, I finally managed to finish it.

In this blog post, I will describe some of the things that both games have in common and the differences that I had to overcome in the porting process.

BUILD engine concepts


As explained in my previous blog post, the BUILD engine is considered a 2.5D engine rather than a true 3D engine, because it had to cope with all kinds of technical limitations of the home computers commonly used at that time.

In fact, most of the BUILD engine concepts are two-dimensional -- maps are made out of two-dimensional surfaces called sectors:


The above picture shows a two-dimensional top-down view of my ported Shadow Warrior map. Sectors are two-dimensional areas surrounded by walls -- the white lines denote solid walls and the red lines denote walls between adjacent sectors. Red walls are invisible in 3D mode.

The purple and cyan colored objects are sprites (objects that typically provide some form of interactivity with the player, such as monsters, weapons, items or switches). The "sticks" attached to the sprites indicate in which direction the sprites are facing. A purple sprite blocks the player, whereas cyan colored sprites allow the player to move through them.

You can switch between 2D and 3D mode in the editor by pressing the Enter key on the numeric keypad.

In 3D mode, each sector's ceiling and floor can be given its own height, and we can configure textures for the walls, floors and ceilings (by pointing at any of these objects and pressing the 'V' key), giving the player the illusion of walking around in a 3D world:


In the above screenshot, we can see the corresponding 3D view of the 2D grid shown earlier. It consists of an outdoor area, grass, a lane, and the interior of the building. Each of these areas is a separate 2D sector with its own custom floor and ceiling heights, and its own textures.

The BUILD engine has all kinds of limitations. Although a world may appear to be (somewhat) 3-dimensional, it is not possible to stack multiple sectors on top of each other and see them simultaneously in 3D mode, though there are some tricks to cope with that limitation.

(As a sidenote: Shadow Warrior has a hacky feature that makes it possible for a player to observe multiple rooms stacked on top of each other, by using specialized wall/ceiling textures, special purpose sprites and a certain positioning of the sectors themselves. Sectors in the map are still separated, but thanks to the hack they can be visualized in such a way that they appear to be stacked on top of each other).

Moreover, the BUILD engine also cannot change the perspective when a player looks up or down, although it is possible to give the player that illusion by stretching the walls. (As a sidenote: modern source ports of the BUILD engine have been adjusted to use Polymost, an OpenGL rendering extension, which actually makes it possible to provide a true 3D look).

Monsters, weapons, items, and most breakable/movable objects are sprites. Sprites are not really "true 3D" objects. Normally, sprites will always face the player from the same side, regardless of the position or the perspective of the player:

As can be seen, the guardian sprite always faces the player from the front, regardless of the angle of the camera.

Sprites can also be flattened and rotated, if desired. Then they will appear as a flat surface to the player:


For example, the wall posters in the screenshot above are flattened and rotated sprites.

Shadow Warrior uses a slightly upgraded BUILD engine that can provide a true 3D experience for certain objects (such as weapons, items, buttons and switches) by displaying them as voxels (3D pixels):


The BUILD engine that comes with Duke Nukem 3D lacks the ability to display voxels.

Porting my Duke Nukem 3D map to Shadow Warrior


The map format that Duke Nukem 3D and Shadow Warrior use is exactly the same. To be precise: they both use version 7 of the map format.
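
As a quick illustration, the shared header can be inspected with a few lines of Node.js. The sketch below is my own (it was not part of the porting process) and assumes the commonly documented BUILD map header layout: a little-endian 32-bit version number, followed by the player's start position, angle and start sector:

import { readFileSync } from "fs";

const buf = readFileSync("EXAMPLE.MAP"); // hypothetical map file name

const version = buf.readInt32LE(0);     // map format version: 7 for both games
const posx = buf.readInt32LE(4);        // player start position
const posy = buf.readInt32LE(8);
const posz = buf.readInt32LE(12);
const ang = buf.readInt16LE(16);        // player start angle
const cursectnum = buf.readInt16LE(18); // sector containing the start point

console.log(`version: ${version}, start: (${posx}, ${posy}, ${posz})`);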

At first, porting a map from one game to the other seemed relatively straightforward.

The first step in my porting process was to simply make a copy of the Duke Nukem 3D map and open it in the Shadow Warrior BUILD editor. What I immediately noticed is that all the textures and sprites looked weird. The walls and sprites still have the same texture indexes, but in the Shadow Warrior catalog these indexes refer to completely different textures:


Quite a bit of my time was spent fixing all the textures and sprites by looking for suitable replacements. I ended up replacing the textures for the rocks, sky, buildings, water, etc. I also had to replace the monsters, weapons, items and other dynamic objects, and overcome some limitations for the player in the map, such as the absence of a jet pack. The process was laborious, but straightforward.

For example, this is how I have fixed the beach area:


I have changed the interior of the office building as follows:


And the back garden as follows:


The nice thing about the garden area is that Shadow Warrior has a more diverse set of vegetation sprites. Duke Nukem 3D only has palm trees.

Game engine differences


The biggest challenge for me was porting the interactive parts of the game. As explained earlier, game mechanics are not implemented by the engine or the editor. BUILD engine games are separated into an engine part and a game part, of which only the former is generalized.

This diagram (that I borrowed from a Duke Nukem 3D code review article written by Fabien Sanglard) describes the high-level architecture of Duke Nukem 3D:


In the above diagram, the BUILD engine (on the right) is a general-purpose component developed by Ken Silverman (the author of the BUILD engine and editor) and shipped as a header and object code file to 3D Realms. 3D Realms combines the engine with the game artifacts on the left to construct a game executable (DUKE3D.EXE).

To configure game effects in the BUILD editor, you need to annotate objects (walls, sprites and sectors) with tags and add special purpose sprites to the map. To the editor these objects are just meta-data, but the game engine treats them as parameters to create special effects.

Every object in a map can be annotated with metadata properties called Lotags and Hitags, each storing a 16-bit numeric value (by using the Alt+T and Alt+H key combinations in 2D mode).

In Shadow Warrior, the tag system was extended even further -- in addition to Lotags and Hitags, objects can potentially have 15 numerical tags (TAG1 corresponds to the Hitag, and TAG2 to the Lotag) and 11 boolean tags (BOOL1-BOOL11). In 2D mode, these can be configured with the ' and ; keys in combination with a numeric key (0-9).

We can also use special purpose sprites that are visible in the editor, but hidden in the game:


In the above screenshot of my Shadow Warrior map, multiple special purpose sprites are visible: ST1 sprites, which can be used to control all kinds of effects, such as moving a door. ST1 sprites are visible in the editor, but not in the game.

Although both games use the same principles for configuring game effects, their game mechanics are completely different.

In the next sections, I will show all the relevant game effects in my Duke Nukem 3D map and explain how I translated them to Shadow Warrior.

Differences in conventions


As explained earlier, both games frequently use Lotags and Hitags to create effects.

In Duke Nukem 3D, a Lotag value typically determines the kind of effect, while a Hitag value is used as a match tag to group certain events together. For example, multiple doors can be triggered by the same switch by using the same match tag.

Shadow Warrior uses the opposite convention -- a Hitag value typically determines the effect, while a Lotag value is often used as a match tag.


Furthermore, in Duke Nukem 3D there are many kinds of special purpose sprites, as shown in the screenshot above. The S-symbol sprite is called a Sector Effector and determines the kind of effect that a sector has, the M-symbol is a MUSIC&SFX sprite used to configure a sound for a certain event, and a GPSPEED sprite determines the speed of an effect.

Shadow Warrior has fewer special purpose sprites. In almost all cases, we end up using the ST1 sprite (with index 2307) to configure an effect.

ST1 sprites typically combine multiple interactivity properties. For example, to make a sector a door that opens slowly, produces a sound effect and closes automatically, we need three Sector Effector sprites and one GPSPEED sprite in Duke Nukem 3D. In Shadow Warrior, the same is accomplished with only two ST1 sprites.

Because the upgraded BUILD engine in Shadow Warrior supports more than two numerical tags (as well as boolean tags), several kinds of functionality can be combined into a single sprite.

Co-op respawn points



To make it possible to play a multiplayer cooperative game, you need to add co-op respawn points to your map. In Duke Nukem 3D, this can be done by adding seven sprites with texture 1405 and setting the Lotag value of each sprite to 1. Furthermore, the player's starting point automatically acts as a co-op respawn point as well.

In Shadow Warrior, co-op respawn points can be configured by adding ST1 sprites with Hitag 48. You need eight of them, because the player's starting point is not a co-op start point. Each respawn point requires a unique Lotag value (a value between 0 and 7).

Duke match/Wang Bang respawn points


For the other multiplayer game mode, Duke match/Wang Bang, we also need respawn points. In both games the process is similar to their co-op counterparts -- in Duke Nukem 3D, you need to add seven sprites with texture 1405 and set their Lotag value to 0. Moreover, the player's starting point is also a Duke match respawn point.

In Shadow Warrior, we need to use ST1 sprites with a Hitag value of 42. You need eight of them, each with a unique Lotag value between 0 and 7 -- the player's starting point is not a Wang Bang respawn point.

Underwater areas


As explained earlier, the BUILD engine makes it possible to have overlapping sectors, but they cannot be observed simultaneously in 3D mode -- as such, it is not possible to natively provide a room over room experience, although there are some tricks to cope with that limitation.

In both games it is possible to dive into the water and swim in underwater areas, giving the player some form of a room over room experience. The trick is that the BUILD engine never renders both sectors at the same time. When you dive into the water or surface again, you get teleported from one sector to another sector in the map.


Although both games use a similar kind of teleportation concept for underwater areas, they are configured in a slightly different way.

In both games, the player needs the ability to sink into the water in the upper area. In Duke Nukem 3D, the player automatically sinks when the sector is given a Lotag value of 1. In Shadow Warrior, you need to add an ST1 sprite with a Hitag value of 0, and a Lotag value that determines how much the player will sink. 40 is typically a good value for water areas.

The underwater sector in Duke Nukem 3D needs a Lotag value of 2. In the game, the player will automatically swim when entering the sector and the colors will turn blue-ish.

We also need to determine from which position in a sector a player will teleport. Both the upper and the lower sector should have the same 2-dimensional shape. In Duke Nukem 3D, teleportation is specified by two Sector Effector sprites with a Lotag of 7. These sprites need to be at exactly the same position in the upper and lower sectors, and their Hitag values (match tags) need to be the same:


In the screenshot above, we see a 2D grid with two Sector Effector sprites having a Lotag of 7 (teleporter) and unique match tags (110 and 111). Both the upper and the underwater sector have exactly the same 2-dimensional shape.

In Shadow Warrior, teleportation is also controlled by sprites that should be in exactly the same position in the upper and lower sectors.

In the upper area, we need an ST1 sprite with a Hitag value of 7 and a unique Lotag value. In the underwater area, we need an ST1 sprite with a Hitag value of 8 and the same match Lotag. The latter ST1 sprite (with Hitag 8) automatically lets the player swim. If the player is in an underwater area where he cannot surface, the match Lotag value should be 0.

In Duke Nukem 3D the landscape will automatically look blue-ish in an underwater area. To make the landscape look blue-ish in Shadow Warrior, we need to adjust the palette of the walls, floors and ceilings from 0 to 9.


Garage doors


In my map, I commonly use garage/DOOM-style doors that move up when you touch them.


In Duke Nukem 3D, we can turn a sector into a garage door by giving it a Lotag value of 20 and lowering the ceiling in such a way that it touches the floor. By default, opening a door does not produce any sound, and the door will not close automatically.

We can adjust that behaviour by placing two special purpose sprites in the door sector:

  • By adding a MUSIC&SFX sprite we can play a sound. The Lotag value indicates the sound number. 166 is typically a good sound.
  • To automatically close the door after a certain time interval, we need to add a Sector Effector sprite with Lotag 10. The Hitag indicates the time interval. For many doors, 100 is a good value.


In the above screenshot, we can see what the garage door looks like if I slightly move the ceiling up (normally the ceiling should touch the floor). The door sector contains both a MUSIC&SFX sprite (to give the door a sound effect) and a Sector Effector sprite (to ensure that the door closes automatically).

In Shadow Warrior, we can accomplish the same thing by adding an ST1 sprite to the door sector with Hitag 92 (Vator). A vator is a multifunctional concept that can be used to move sectors up and down in all kinds of interesting ways.

An auto-closing garage door can be configured by giving the ST1 sprite the following tag and boolean values:

  • TAG2 (Lotag) is a match tag that should refer to a unique numeric value.
  • TAG3 specifies the type of vator. 0 indicates that it is operated manually or by a switch/trigger
  • TAG4 (angle) specifies the speed of the vator. 350 is a reasonable value.
  • TAG9 specifies the auto return time. 35 is a reasonable value.
  • BOOL1 specifies whether the door should be opened by default. Setting it to 1 (true) allows us to keep the door open in the editor, rather than moving the ceiling down so that it touches the floor.
  • BOOL3 specifies whether the door could crush the player. We set it to 1 to prevent this from happening.

By default, a vator moves a sector down on first use. To make the door move up, we must rotate the ST1 sprite twice in 3D mode (by pressing the F key twice).

We can configure a sound effect by placing another ST1 sprite near the door sector with a Hitag value of 134. We can use TAG4 (angle) to specify the sound number. 473 is a good value for many doors.
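
To summarize, in my own ad hoc notation (these are editor annotations, not code or a file format used by the game), the two ST1 sprites of an auto-closing garage door with a sound effect could be described as follows, using hypothetical example values:

const garageDoor = {
    vator: {        // ST1 sprite inside the door sector, rotated twice in 3D mode
        hitag: 92,  // effect: Vator
        tag2: 1,    // match tag (Lotag): any unique value works
        tag3: 0,    // operated manually or by a switch/trigger
        tag4: 350,  // speed (angle field)
        tag9: 35,   // auto return time
        bool1: 1,   // door is open by default in the editor
        bool3: 1    // do not crush the player
    },
    sound: {        // second ST1 sprite placed near the door sector
        hitag: 134, // effect: sound
        tag4: 473   // sound number (angle field)
    }
};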


In the above screenshot, we should see what a garage door looks like in Shadow Warrior. The rotated ST1 sprite defines the Vator, whereas the regular ST1 sprite provides the sound effect.

Lifts


Another prominent feature of my Duke Nukem 3D map is the lifts that allow the player to reach the tops and roofs of the buildings.


In Duke Nukem 3D, lift mechanics are a fairly simple concept -- we give a sector a Lotag value of 17, and the sector will automatically move up or down when the player presses the use key while standing in it. The Hitag of a MUSIC&SFX sprite determines the stop sound and the Lotag value the start sound.

In Shadow Warrior, there is no direct equivalent of the same lift concept, but we can create a switch-operated lift by using the Vator concept (the same ST1 sprite with Hitag 92 used for garage doors) with the following properties:

  • TAG2 (Lotag) should refer to a unique match tag value. The switches should use the exact same value.
  • TAG3 determines the type of vator. 1 is used to indicate that it can only be operated by switches.
  • TAG4 (Angle) determines the speed of the vator. 325 is a reasonable value.

We have to move the ST1 sprite to the same height where the lift should arrive after it has moved up.

Since it is not possible to respond to the use key while the player is standing in the sector, we have to add switches to control the lift. A possible switch is sprite number 575. Its Hitag should match the Lotag value of the ST1 sprite. The switch sprite should have a Lotag value of 206 to indicate that it controls a Vator.


The above screenshot shows the result of my porting effort -- switches have been added, and the MUSIC&SFX sprite was replaced by an equivalent ST1 sprite. The ST1 sprite that controls the movement is not visible, because it was moved up to the same height as the adjacent upper floor.

Swinging doors


In addition to garage doors, my level also contains a number of swinging doors.

In Duke Nukem 3D, a sector can be turned into a swinging door by giving it a Lotag of 23 and moving the floor up a bit. We also need to add a Sector Effector with Lotag 11 and a unique Hitag value that acts as the door's pivot.

As with garage doors, swinging doors will not produce any sound effects or close automatically by default, unless we add a MUSIC&SFX sprite and a Sector Effector sprite (with Lotag 10) to the door sector.


In Shadow Warrior, the rotating door concept is almost the same. We need to add an ST1 sprite with Hitag 144 and a unique Lotag value to the sector; this sprite acts as the door's pivot.

In addition, we need to add an ST1 sprite to the sector that configures a rotator:

  • TAG2/Lotag determines a unique match tag value that should be identical to that of the door's pivot ST1 sprite.
  • TAG3 determines the type of rotator. 0 indicates that it can be manually triggered or by a switch.
  • TAG5 determines the angle move amount. 512 specifies that the door should move 90 degrees to the right; -512 moves it 90 degrees to the left.
  • TAG7 specifies the angle increment. 50 is a good value.
  • TAG9 specifies the auto return time. 35 is a good value.

As with garage doors, we also need to add an ST1 sprite (with Hitag 134) to produce a sound. TAG4 (the angle) can be used to specify the sound number. 170 is a good value for rotating doors.


Secret places



My map also has a number of secret places (please do not tell anyone :-) ). In Duke Nukem 3D, any sector that has a Lotag value of 32767 is considered a secret place. In Shadow Warrior the idea is the same -- any sector with a Lotag of 217 is considered a secret place.

Puzzle switches


Some Duke Nukem 3D maps also have so-called puzzle switches requiring the player to find the correct on-and-off combination to unlock something. In my map they are scattered all over the level to unlock the final key. The E2L1 map in Duke Nukem 3D shows a better example:


We can use the Hitag value to determine whether the switch needs to be switched off (0) or on (1). We can use the Lotag as a match tag to group multiple switches.

In Shadow Warrior, each switch uses a Hitag as a match tag and a Lotag value to configure the switch type. Giving a switch a Lotag value of 213 makes it a combo switch. TAG3 can be set to 0 to indicate that the switch needs to be turned off, or to 1 to indicate that it needs to be turned on.

Skill settings


Both games have four skill levels. The idea is that the higher the skill level is, the more monsters you will have to face.

In Duke Nukem 3D you can specify the minimum skill level of a monster by giving the sprite a Lotag value that corresponds to the minimum skill level. For example, giving a monster a Lotag value of 2 means that it will only show up when the skill level is two or higher (skill level 2 corresponds to: Let's Rock). 0 (the default value) means that it will show up at any skill level:


In Shadow Warrior, each sprite has its own dedicated skill attribute that can be set by using the key combination ' + K. The skill level is displayed as one of the sprite's attributes.


In the above screenshot, the sprite on the left has an S:0 prefix, meaning that it will be visible at skill level 0 or higher. The sprite on the right (with prefix S:2) appears from skill level 2 onwards.

End switch


In both games, you typically complete a level by touching a so-called end switch. In Duke Nukem 3D an end switch can be created by using sprite 142 and giving it a Lotag of 65535. In Shadow Warrior the idea is the same -- we can create an end switch by using sprite 2470 and giving it a Lotag of 116.


Conclusion


In this blog post, I have described the process of porting a Duke Nukem 3D map to Shadow Warrior and explained some of the properties that both games have in common, as well as the differences between them.

Although this was a pretty useless project (the game is quite old, from the late 90s), I had a lot of fun doing it after not having touched this kind of technology for over a decade. I am quite happy with the result:


Despite the fact that this technology is old, I am still quite surprised to see how many maps and customizations are still being developed for these ancient games. I think this can be attributed to the fact that these engines and game mechanics are highly customizable, yet still relatively simple to use due to the technical limitations of the time in which they were developed.

Since I did most of my mapping/customization work many years before I started this blog, I thought that sharing my experiences could be useful for others who intend to look at these games and create their own customizations.

Wednesday, April 20, 2022

In memoriam: Eelco Visser (1966-2022)

On Tuesday 5 April 2022 I received the unfortunate news that my former master's and PhD thesis supervisor, Eelco Visser, had unexpectedly passed away.

Although I made my transition from academia to industry almost 10 years ago and I am no longer actively doing academic research (I published my last paper in 2014, almost two years after completing my PhD), I have always remained (somewhat) connected to the research world and the work carried out by my former supervisor, who started his own programming languages research group in 2013.

He was very instrumental in domain-specific programming language research, but also in software deployment research, an essential part of almost any software development process.

Without him and his ideas, his former PhD student Eelco Dolstra would probably never have started the work that resulted in the Nix package manager. As a consequence, my research on software deployment (resulting in Disnix and many other Nix-related tools and articles) and this blog would not exist either.

In this blog post, I will share some memories of my time working with Eelco Visser.

How it all started


The first time I met Eelco was in 2007 when I was still a MSc student. I just completed the first year of the TU Delft master's programme and I was looking for an assignment for my master's thesis.

Earlier that year, I was introduced to a concept called model-driven development (also ambiguously called model-driven engineering/architecture; the right terminology is open to interpretation) in a guest lecture by Jos Warmer in the software architecture course.

Modeling software systems and automatically generating code from those models (as much as possible) was one of the aspects that really fascinated me. Back then, I was already convinced that working at a higher abstraction level, with more "accessible" building blocks, could be quite useful to hide complexity, reduce the chance of errors and make developers more productive.

In my first conversation with Eelco, he asked me why I was looking for a model-driven development assignment and he asked me various questions about my past experience.

I told him about my experiences with Jos Warmer's lecture. Although he seemed to understand my enthusiasm, he also explained to me that his work was mostly about creating textual languages, not visual languages such as UML profiles, which are commonly used in MDA development processes.

He also specifically asked me about the compiler construction course (also part of the master's programme), which provides essential basic knowledge about textual languages.

The compiler construction course (as it was taught in 2007) was considered very complex by many students, in particular the practical assignment, in which you had to rewrite a parser from GNU Bison (an LALR(1) parser generator) to LLnextgen (an LL(1) parser generator) and extend the reference compiler with additional object-oriented programming features. Moreover, the compiler was implemented in C and relied on advanced concepts, such as function pointers and proper alignment of members in a struct.

I explained to Eelco that despite the negative image of the course because of its complexity, I actually liked it very much. Already at a young age I had the idea of developing my own programming language, but I had no idea how to do it. When I was exposed to all these tools and concepts, I finally learned about all the missing bits and pieces.

I also tried to convince him that I am always motivated to dive deep into technical details. As an example, I explained to him that one of my personal projects was creating customized Linux distributions by following the Linux from Scratch book. Manually following all the instructions in the book is time consuming and difficult to repeat, so to make deploying a customized Linux distribution doable, I developed my own automated solution.

After elaborating about my (somewhat crazy) personal project, he told me that there was an ongoing research project that I would probably like very much. A former PhD student of his, Eelco Dolstra, had developed the Nix package manager, and this package manager was used as the foundation for an entire Linux distribution: NixOS.

He gave me a printed copy of Eelco Dolstra's thesis and convinced me that I should give NixOS a try.

Research assignment


After reading Eelco Dolstra's PhD thesis and trying out NixOS (which was much more primitive in terms of features compared to today's version), Eelco Visser gave me my first research assignment.

When he joined Delft University of Technology in 2006 (a year before I met him) as an associate professor, he started working on a new project called WebDSL. Previously, most of his work was focused on the development of various kinds of meta-tools for creating domain-specific languages, such as:

  • SDF2 is a formalism used to write a lexical and context-free syntax. It has many interesting features, such as a module system and scannerless parsing, making it possible to embed a guest language in a host language (that may share the same keywords). SDF2 was originally developed by Eelco Visser for his PhD thesis for the ASF+SDF Meta-Environment.
  • ATerm library. A library that implements the annotated terms format to exchange structured data between tools. SDF2 uses it to encode parse/abstract syntax trees.
  • Stratego/XT. A language and toolset for program transformation.

WebDSL was a new step for him, because it is an application language (built with the above tools) rather than a meta language.

With WebDSL, in addition to just building an application language, he also had all kinds of interesting ideas about web application development and how to improve it, such as:

  • Reducing/eliminating boilerplate code. Originally WebDSL was implemented with JBoss and the Seam framework using Java as an implementation language, requiring you to write a lot of boilerplate code, such as getters/setters, deployment descriptors etc.

    WebDSL is declarative in the sense that you could more concisely describe what you want in a rich web application: a data model, and pages that should render content and data.
  • Improving static consistency checking. Java (the implementation language used for the web applications) is statically typed, but not every concern of a web application can be statically checked. For example, when interacting with a database, embedded SQL queries (in strings) are often not checked. In JSF templates, page references are not checked.

    With WebDSL, all these concerns are checked before the deployment of a web application.

By the time I joined, he had already assembled several PhD and master's students to work on a variety of aspects of WebDSL and the underlying tooling, such as Stratego/XT.

Obviously, in the development process of a WebDSL application, like any application, you will eventually face a deployment problem -- you need to perform activities to make the application available for use.

For solving deployment problems in our department, Nix was already quite intensively used. For example, we had a Nix-based continuous integration service called the Nix buildfarm (several years later, its implementation was re-branded into Hydra), that built all bleeding edge versions of WebDSL, Stratego/XT and all other relevant packages. The Nix package manager was used by all kinds of people in the department to install bleeding edge versions of these tools.

My research project was to automate the deployment of WebDSL applications using tooling from the Nix project. In my first few months, I packaged all the infrastructure components that a WebDSL application requires in NixOS (JBoss, MySQL and later Apache Tomcat). I changed WebDSL to use GNU Autotools as its build infrastructure (which was a common practice for all Stratego/XT related projects at that time), made subtle modifications to prevent unnecessary recompilations of WebDSL applications (such as making the root folder dynamically configurable), and wrote an abstraction function to automatically build WAR files.

Thanks to Eelco I ended up in a really friendly and collaborative atmosphere. I came in touch with his fellow PhD and master's students and we frequently had very good discussions and collaborations.

Eelco was also quite helpful in the early stages of my research. For example, whenever I was stuck with a challenge he was always quite helpful in discussing the underlying problem and bringing me in touch with people that could help me.

My master's thesis project


After completing the initial version of the WebDSL deployment tool, which got me familiarized with the basics of Nix and NixOS, I started working on my master's thesis, which was a collaboration project between Delft University of Technology and Philips Research.

Thanks to Eelco I came in contact with a former master's thesis student and postdoc of his, Merijn de Jonge, who was employed by Philips Research. He was an early contributor to the Nix project and collaborated on the first two research papers about Nix.

While working on my master's thesis I developed the first prototype version of Disnix.

During my master's thesis project, Eelco Dolstra, who was formerly a postdoc at Utrecht University, joined our research group in Delft. Eelco Visser made sure that I got all the help from Eelco Dolstra with all my technical questions about Nix.

Becoming a PhD student


My master's thesis project was a pilot for a bigger research project. Eelco Visser, Eelco Dolstra and Merijn de Jonge (with whom I was already working quite intensively for my master's thesis) were working on a research project proposal. When the proposal got accepted by NWO/Jacquard for funding, Eelco Visser was the first to inform me about the project and to ask me what I thought about it.

At that moment, I was quite surprised to even consider doing a PhD. A year before, I had attended somebody else's PhD defence (someone whom I really considered smart and talented) and thought that doing such a thing myself was way out of my grasp.

I also felt a bit like an impostor because I had interesting ideas about deployment, but I was still in the process of finishing up/proving some of my points.

Fortunately, thanks to Eelco my attitude completely changed in that year -- during my master's thesis project he convinced me that the work I was doing was relevant. What I also liked was the attitude in our group: actively building tools, having the time and space to explore things, and eating our own dogfood by solving relevant practical problems with them. Moreover, much of the work we did was also publicly available as free and open source software.

As a result, I easily grew accustomed to the research process and the group's atmosphere and it did not take long to make the decision to do a PhD.

My PhD


Although Eelco Visser only co-authored one of my published papers, he was heavily involved in many aspects of my PhD. There are way too many things to talk about, but there are some nice anecdotes that I really find worth sharing.

OOPSLA 2008


I still remember the first research conference that I attended: OOPSLA 2008. I had a very quick publication start, with a paper covering an important aspect of my master's thesis: the upgrade aspect of distributed systems. I had to present my work at HotSWUp, an event co-located with OOPSLA 2008.

(As a sidenote: because we had to put all our efforts into making the deadline, I had to postpone the completion of my master's thesis a bit, so it started to overlap with my PhD).

It was quite an interesting experience: in addition to the fact that it was my first conference, it was also my first time traveling to the United States and my first time stepping into an airplane.

The trip was basically a group outing -- I was joined by Eelco and many of his PhD students. In addition to my HotSWUp 2008 paper, we also had an OOPSLA paper (about the Dryad compiler), a WebDSL poster, and another paper about the implementation of WebDSL (titled: "When frameworks let you down") to present.

I was surprised to see how many people Eelco knew at the conference. He was also actively encouraging us to meet up with people, and bringing us in touch with people he knew who could be relevant to us.

We were having a good time together, but I also remember him saying that it is actually much better to visit a conference alone, rather than in a group. Being alone makes it much easier and more encouraging to meet new people. That lesson stuck, and at many future events I took being alone as an opportunity to meet up with others.

Working on practical things


Once in a while I had casual discussions with him about ongoing things in my daily work. For my second paper, I had to travel to ICSE 2009 in Vancouver, Canada all by myself (there were some colleagues traveling to co-located events, but they took different flights).

Despite the fact that I was doing research on Nix-related things, NixOS at that time was not yet the main operating system on my laptop, because it was missing features that I considered a must-have in a Linux distribution.

In the weeks before the planned travel date, I was intensively working on getting all the software packaged that I considered important. One major packaging effort was getting KDE 4.2 to work, because I was dissatisfied with only having the KDE 3.5 base package available in NixOS. VirtualBox was another package that I considered critical, so that I could still run a conventional Linux distribution and Microsoft Windows.

Nothing about this work could be considered scientific "research" that might result in a publishable paper. Nonetheless, Eelco recognized the value of making NixOS more usable and encouraged me to get all that software packaged. He even asked me: "Are you sure that you have packaged enough software in NixOS so that you can survive that week?"

Starting my blog


Another particularly helpful piece of advice that he gave me was that I should start a blog. Although I had a very good start of my PhD, with a paper accepted in my first month and another several months later, I slowly ran into numerous paper rejections, with reviews that were not helpful at all.

I talked to him about my frustrations and explained that software deployment research is a generally neglected subject. There is no research conference specifically about software deployment (there used to be a working conference on component deployment, but by the time I became a PhD student it was no longer organized), so we always had to "package" our ideas into subjects for different kinds of conferences.

He gave me the advice to start a blog to increase my interaction with the research community. As a matter of fact, many people in our research group, including Eelco, had their own blogs.

It took me some time to take that step. First, I had to "catch up" on my blog with relevant background materials. Eventually, it paid off -- I wrote a blog post titled: Software deployment complexity to emphasize software deployment as an important research subject, and thanks to Eelco's Twitter network I came in touch with all kinds of people.

Lifecycle management


For most of my publication work, I intensively worked with Eelco Dolstra. Eelco Visser left most of the practical supervision to him. The only published paper that we co-authored was: "Software Deployment in a Dynamic Cloud: From Device to Service Orientation in a Hospital Environment".

There was also a WebDSL-related subject that we intensively worked on for a while, that unfortunately never fully materialized.

Although I already had the static aspects of a WebDSL application deployment automated -- the infrastructure components (Apache Tomcat, MySQL) as well as a function to compile a Java Web application Archive (WAR) with the WebDSL compiler -- we also had to cope with the data that a WebDSL application stores: WebDSL data models can evolve, and when this happens, the data needs to be migrated from the old to the new table structure.

Sander Vermolen, a colleague of mine, worked on a solution to make automated data migrations of WebDSL possible.

At some point, we came up with the idea to make this all work together -- deployment automation and data migration from a high-level point of view, hiding unimportant implementation details. For lack of a better name, we called this solution: "lifecycle management".

Although the project seemed straightforward to me in the beginning, I (and probably all of us) heavily underestimated how complex it was to bring Nix's functional deployment properties to data management.

For example, Nix makes it possible to store multiple variants of the same package (e.g. old and new versions) simultaneously on a machine without conflicts, and makes it possible to cheaply switch between versions. Databases, on the other hand, are modified imperatively. We could manage multiple versions of a database by making snapshots, but doing this atomically and in a portable way is very expensive, in particular when databases are big.

Fortunately, the project was not a complete failure. I managed to publish a paper about a subset of the problem (automatic data migrations when databases move from one machine to another, and a snapshotting plugin system), but the entire solution was never fully implemented.

During my PhD defence he asked me a couple of questions about this subject, from which (of course!) I understood that it was a bummer that we never fully realized the vision that we initially came up with.

Retrospectively, we should have divided the problem into smaller chunks and solved each problem one by one, rather than working on the entire integration right from the start. The integrated solution would probably still consist of many trade-offs, but it would have been interesting to at least come up with one.

PhD thesis


When I was about to write my PhD thesis, I made the bold decision not to compose the chapters directly out of papers, but to write a coherent story using my papers as ingredients, similar to Eelco Dolstra's thesis. Although there are plenty of reasons not to do such a thing (e.g. it takes much more time for a reading committee to review such a thesis), he was actually quite supportive of that approach.

On the other hand, I was not completely surprised by it, considering the fact that his PhD thesis was several orders of magnitude bigger than mine (over 380 pages!).

Spoofax


After I completed my PhD and made my transition to industry, he and his research group relentlessly kept working on the solution ecosystem that I just described.

Already during my PhD, many improvements and additions were developed, resulting in the Spoofax language workbench: an Eclipse plugin in which all these technologies come together to make the construction of domain-specific languages as convenient as possible. For a (somewhat :-) ) brief history of the Spoofax language workbench, I recommend reading this blog post written by him.

Moreover, he also kept dogfooding his own practical problems. During my PhD, three serious applications were created with WebDSL: researchr (a social network for researchers sharing publications), Yellowgrass (an issue tracker) and WebLab (a system to facilitate programming exams). These applications are still maintained and used by the university today.

A couple of months after my PhD defence in 2013 (I had to wait several months for feedback and a date for my defence), he was awarded the prestigious Vici grant and became a full professor, starting his own programming language research group.

In 2014, when I had already been in industry for two years, I was invited to his inauguration ceremony and was given another demonstration of what Spoofax had become. I was really impressed by all the new meta-languages that were developed and what Spoofax looked like. For example, SDF2 evolved into SDF3, a new meta-language for defining name bindings (NaBL) was developed, etc.

Moreover, I liked his inauguration speech very much, in which he briefly demonstrated the complexities of computers and programming, and what value domain specific languages can provide.

Concluding remarks


In this blog post, I have written down some of my memories working with Eelco Visser. I did this in the spirit of my blog, whose original purpose was to augment my research papers with practical information and other research aspects that you normally never read about.

I am grateful for the five years that we worked together, that he gave me the opportunity to do a PhD with him, for all the support, the things he taught me, and the people he brought me in touch with. People that I still consider friends as of today.

My thoughts are with his family, friends, the research community and the entire programming languages group (students, PhD students, Postdocs, and other staff).

Monday, February 14, 2022

A layout framework experiment in JavaScript

It has been a while since I wrote a blog post about front-end web technology. The main reason is that I no longer do much front-end development, but once in a while I still tinker with it.

During my Christmas break, I wanted to expand my knowledge about modern JavaScript programming practices. To make the learning process more motivating, I dug up my old web layout framework project and ported it to JavaScript.

In this blog post, I will explain the rationale of the framework and describe the features of the JavaScript version.

Background


Several years ago, I elaborated on some of the challenges that I faced while creating layouts for web applications. Although front-end web technologies (HTML and CSS) were originally created for pages (not graphical user interfaces), most web applications nowadays are complex information systems that typically have to present collections of data to end users in a consistent manner.

Although some concepts of web technology are powerful and straightforward, a native way to isolate layout from a page's content and style is still virtually non-existent (with the exception of frames, which were deprecated a long time ago). As a consequence, it has become quite common to rely on custom abstractions and frameworks to organize layouts.

Many years ago, I also found myself repeating the same patterns to implement consistent layouts. To make my life easier, I developed my own layout framework that allows you to define a model of your application layout, capturing common layout properties, all available sub pages and their dynamic content.

A view function can render a requested sub page, using the path in the provided URL as a selector.

I created two implementations of the framework: one in Java and another in PHP. The Java version was the original implementation, but I ended up using the PHP version the most, because nearly all of the web applications I developed were hosted at shared web hosting providers that only offer PHP as a scripting language.

Something that I consider both an advantage and a disadvantage of my framework is that it generates pages on the server side. The advantage of this approach is that pages rendered by the framework will work in many browsers, even primitive text-oriented browsers that lack JavaScript support.

A disadvantage is that server-side scripting requires a more complex server installation. Although PHP is relatively simple to set up, a Java Servlet container install (such as Apache Tomcat) is typically more complex. For example, you typically want to put it behind a reverse proxy that serves static content more efficiently.

Furthermore, executing server-side code for each request is also significantly more expensive (in terms of processing power) than serving static files.

The interesting aspect of using JavaScript as an implementation language is that we can use the framework both on the client side (in the browser) and on the server side (with Node.js). The former makes it possible to host applications on web servers that only serve static content, making web hosting considerably easier and cheaper.

Writing an application model


As explained earlier, my layout framework separates the model from a view. An application layout model can be implemented in JavaScript as follows:

import { Application } from "js-sblayout/model/Application.mjs";

import { StaticSection } from "js-sblayout/model/section/StaticSection.mjs";
import { MenuSection } from "js-sblayout/model/section/MenuSection.mjs";
import { ContentsSection } from "js-sblayout/model/section/ContentsSection.mjs";

import { StaticContentPage } from "js-sblayout/model/page/StaticContentPage.mjs";
import { HiddenStaticContentPage } from "js-sblayout/model/page/HiddenStaticContentPage.mjs";
import { PageAlias } from "js-sblayout/model/page/PageAlias.mjs";

import { Contents } from "js-sblayout/model/page/content/Contents.mjs";

/* Create an application model */

export const application = new Application(
    /* Title */
    "My application",

    /* Styles */
    [ "default.css" ],

    /* Sections */
    {
        header: new StaticSection("header.html"),
        menu: new MenuSection(0),
        submenu: new MenuSection(1),
        contents: new ContentsSection(true)
    },

    /* Pages */
    new StaticContentPage("Home", new Contents("home.html"), {
        "404": new HiddenStaticContentPage("Page not found", new Contents("error/404.html")),

        home: new PageAlias("Home", ""),

        page1: new StaticContentPage("Page 1", new Contents("page1.html"), {
            page11: new StaticContentPage("Subpage 1.1", new Contents("page1/subpage11.html")),
            page12: new StaticContentPage("Subpage 1.2", new Contents("page1/subpage12.html")),
            page13: new StaticContentPage("Subpage 1.3", new Contents("page1/subpage13.html"))
        }),

        page2: new StaticContentPage("Page 2", new Contents("page2.html"), {
            page21: new StaticContentPage("Subpage 2.1", new Contents("page2/subpage21.html")),
            page22: new StaticContentPage("Subpage 2.2", new Contents("page2/subpage22.html")),
            page23: new StaticContentPage("Subpage 2.3", new Contents("page2/subpage23.html"))
        }),
    }),

    /* Favorite icon */
    "favicon.ico"
);

The above source code file (appmodel.mjs) defines an ECMAScript module exporting an application object. The application object defines the layout of a web application with the following properties:

  • The title of the web application is: "My application".
  • All pages use: default.css as a common stylesheet.
  • Every page consists of a number of sections that have a specific purpose:
    • A static section (header) provides content that is the same for every page.
    • The menu sections (menu, submenu) display links to sub pages that are part of the web application.
    • A content section (contents) displays variable content, such as text and images.
  • An application consists of multiple pages that display the same sections. Every page object refers to a file with static HTML code providing the content that needs to be displayed in the content section.
  • The last parameter refers to a favorite icon that is the same for every page.

Pages in the application model are organized in a tree-like data structure. The application constructor only accepts a single page parameter that refers to the entry page of the web application. The entry page can be reached by opening the web application from the root URL or by clicking on the logo displayed in the header section.

The entry page refers to two sub pages: page1 and page2. The menu section displays links to the sub pages that are reachable from the entry page.

Every sub page can also refer to its own sub pages. The submenu section will display links to the sub pages that are reachable from the selected sub page. For example, when page1 is selected, the submenu section will display links to: page11, page12 and page13.

In addition to the pages that are reachable from the menu sections, the application model also has a hidden error page and a home link that is an alias for the entry page. In many web applications, it is a common habit that in addition to clicking on the logo, a home button can also be used to redirect a user to the entry page.

Besides using the links in the menu sections, any sub page in the web application can be reached by using the URL as a selector. A common convention is to use the path components in the URL to determine which page and sub page need to be displayed.

For example, opening the following URL in a web browser:

http://localhost/page1/page12

brings the user to the second sub page of the first sub page.

When providing an invalid selector in the URL, such as http://localhost/page4, the framework automatically redirects the user to the 404 error page, because the page cannot be found.
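
To illustrate the idea, a lookup function could walk the page tree by using the path components as keys. The following sketch is hypothetical (it is not the framework's actual code, and the subPages property is an assumption on my part):

function lookupSubPage(entryPage, pathname) {
    const ids = pathname.split("/").filter(id => id !== "");

    let page = entryPage;

    for (const id of ids) {
        // descend into the page tree, one path component at a time
        page = (page.subPages !== undefined) ? page.subPages[id] : undefined;

        if (page === undefined) {
            return entryPage.subPages["404"]; // fall back to the hidden error page
        }
    }

    return page;
}

// For example: lookupSubPage(entryPage, "/page1/page12") returns Subpage 1.2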

Displaying sub pages in the application model


As explained earlier, to display any of the sub pages that the application model defines, we must invoke a view function.

A reasonable strategy (that should suit most needs) is to generate an HTML page with a title tag composed of the application's and the page's titles, globally include the application- and page-level stylesheets, and translate every section to a div that uses the section identifier as its id. The framework provides a view function that automatically performs this translation.
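
The following sketch roughly demonstrates that strategy. It is not the framework's actual view function, and the title, styles and sections properties are assumptions on my part:

function renderPage(application, page) {
    // include every application-level stylesheet
    const styles = application.styles
        .map(css => `<link rel="stylesheet" type="text/css" href="styles/${css}">`)
        .join("\n        ");

    // translate every section to a div with the section identifier as its id
    const sections = Object.keys(application.sections)
        .map(id => `<div id="${id}"></div>`) // the sections' contents are omitted for brevity
        .join("\n        ");

    return `<!DOCTYPE html>

<html>
    <head>
        <title>${application.title} - ${page.title}</title>
        ${styles}
    </head>

    <body>
        ${sections}
    </body>
</html>`;
}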

As a sidenote: for pages that require a more complex structure (for example, to construct a layout with more advanced visualizations), it is also possible to develop a custom view function.

We can create a custom stylesheet (default.css) to position the divs and give each section a unique color. By using such a stylesheet, the application model shown earlier may be presented as follows:


As can be seen in the screenshot above, the header section has a gray color and displays a logo, the menu section is blue, the submenu is red and the contents section is black.

The second sub page from the first sub page was selected (as can be seen in the URL as well as the selected buttons in the menu sections). The view functions that generate the menu sections automatically mark the selected sub pages as active.

With the Java and PHP versions (described in my previous blog post), it is common practice to generate all requested pages server-side. With the JavaScript port, we can also use the framework on the client side in addition to the server side.

Constructing an application that generates pages server-side


For creating web applications with Node.js, it is a common practice to create an application that runs its own web server.

(As a sidenote: for production environments it is typically recommended to put a more mature HTTP reverse proxy in front of the Node.js application, such as nginx. A reverse proxy is often more efficient for serving static content and has more features with regards to security etc.).

We can construct an application that runs a simple embedded HTTP server:

import { application } from "./appmodel.mjs";
import { displayRequestedPage } from "js-sblayout/view/server/index.mjs";
import { createTestServer } from "js-sblayout/testhttpserver.mjs";

const server = createTestServer(function(req, res, url) {
    displayRequestedPage(req, res, application, url.pathname);
});
server.listen(process.env.PORT || 8080);

The above Node.js application (app.mjs) performs the following steps:

  • It includes the application model shown in the code fragment in the previous section.
  • It constructs a simple test HTTP server that serves well-known static files by looking at common file extensions (e.g. images, stylesheets, JavaScript source files) and treats any other URL pattern as a dynamic request.
  • The embedded HTTP server listens to port 8080 unless a PORT environment variable with a different value was provided.
  • Dynamic URLs are handled by a callback function (last parameter). The callback invokes a view function from the framework that generates an HTML page with all properties and sections declared in the application layout model.

We can start the application as follows:

$ node app.mjs

and then use the web browser to open the root page:

http://localhost:8080

or any sub page of the application, such as the second sub page of the first sub page:

http://localhost:8080/page1/page12

Although Node.js includes a library and JavaScript interface to run an embedded HTTP server, it is very low-level. Its only purpose is to map HTTP requests (e.g. GET, POST, PUT, DELETE requests) to callback functions.

My framework contains an abstraction to construct a test HTTP server with a reasonable set of features for testing web applications built with the framework, including serving commonly used static files (such as images, stylesheets and JavaScript files).

For production deployments, there is much more to consider, which is beyond the scope of my HTTP server abstraction.

It is also possible to use the de-facto web server framework for Node.js: express in combination with the layout framework:

import { application } from "./appmodel.mjs";
import { displayRequestedPage } from "js-sblayout/view/server/index.mjs";
import express from "express";

const app = express();
const port = process.env.PORT || 8080;

// Configure static file directories
app.use("/styles", express.static("styles"));
app.use("/image", express.static("image"));

// Make it possible to parse form data
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// Map all URLs to the SB layout manager
app.get('*', (req, res) => {
    displayRequestedPage(req, res, application, req.url);
});

app.post('*', (req, res) => {
    displayRequestedPage(req, res, application, req.url);
});

// Configure listening port
app.listen(port, () => {
    console.log("Application listening on port " + port);
});

The above application uses express to construct an HTTP web server that listens on port 8080 by default.

In addition, express has been configured to serve static files from the styles and image folders, and maps all dynamic GET and POST requests to the displayRequestedPage view function of the layout framework.
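
Assuming that the express dependency has been installed (for example, with: npm install express), the application can be started in the same way:

$ node app.mjs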

Using the model client-side and dynamically updating the DOM


As already explained, using JavaScript as an implementation language also makes it possible to directly consume the application model in the browser and dynamically generate pages from it.

To make this possible, we only have to write a very minimal static HTML page:

<!DOCTYPE html>

<html>
    <head>
        <title>My page</title>
        <script type="module">
import { application } from "./appmodel.mjs";
import { initRequestedPage, updateRequestedPage } from "js-sblayout/view/client/index.mjs";

document.body.onload = function() {
    initRequestedPage(application);
};

document.body.onpopstate = function() {
    updateRequestedPage(application);
};
        </script>
    </head>

    <body>
    </body>
</html>

The above HTML page has the following properties:

  • It contains the bare minimum of HTML code to construct a page that is still valid HTML5.
  • We include the application model (shown earlier) that is identical to the application model that we have been using to generate pages server-side.
  • We configure two event handlers. When the page is loaded (onload), we initially render all required page elements in the DOM (including the sections that translate to divs). Whenever the URL's hash changes (onpopstate), for example because the user clicks a link or navigates through the browser history, we update the affected sections in the DOM.

To make the links in the menu sections work, we have to compose them in a slightly different way -- rather than using the path to derive the selected sub page, we have to use hashes instead.

For example, the second sub page of the first page can be reached by opening the following URL:

http://localhost/index.html#/page1/page12
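
Such a URL corresponds to the links that the menu view functions generate, which may look as follows (the markup shown here is illustrative):

<a href="#/page1/page12">Subpage 1.2</a>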

The popstate event triggers whenever the browser's history changes, and makes it possible for the user to use the back and forward navigation buttons.

Generating dynamic content


In the example application model shown earlier, all sections are made out of static HTML code fragments. Sometimes it may also be desired to generate the sections' content dynamically, for example, to respond to user input.

In addition to providing a string with static HTML code as a parameter, it is also possible to provide a function that generates the content of the section dynamically.

new StaticContentPage("Home", new Contents("home.html"), {
    ...
    hello: new StaticContentPage("Hello 10 times", new Contents(displayHello10Times))
})

In the above code fragment, we have added a new sub page to the entry page that refers to the function: displayHello10Times to dynamically generate content. The purpose of this function is to display the string: "Hello" 10 times:


When writing an application that generates pages server-side, we could implement this function as follows:

function displayHello10Times(req, res) {
    for(let i = 0; i < 10; i++) {
        res.write("<p>Hello!</p>\n");
    }
}

The above function follows a convention that is commonly used by applications built on Node.js' internal HTTP server:

  • The req parameter refers to the Node.js internal HTTP server's http.IncomingMessage object and can be used to retrieve HTTP headers and other request parameters.
  • The req.sbLayout property provides parameters that are specific to the layout framework.
  • The res parameter refers to the Node.js internal HTTP server's http.ServerResponse object and can be used to generate a response message.

It is also allowed to declare the function above async or let it return a Promise so that asynchronous APIs can be used.
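
For example, a sketch of an asynchronous variant that retrieves the greeting with an asynchronous API first (the file name: greeting.txt is a made up example):

import { readFile } from "fs/promises";

async function displayHelloFromFile(req, res) {
    // Read the greeting from a file using an asynchronous API
    const greeting = await readFile("greeting.txt", "utf8");

    for(let i = 0; i < 10; i++) {
        res.write("<p>" + greeting + "</p>\n");
    }
}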

When developing a client-side application (that dynamically updates the browser DOM), this function should have a different signature:

function displayHello10Times(div, params) {
    let response = "";

    for(let i = 0; i < 10; i++) {
        response += "<p>Hello!</p>\n";
    }

    div.innerHTML = response;
}

In the browser, a dynamic content generation function accepts two parameters:

  • div refers to an HTMLDivElement in the DOM that contains the content of the section.
  • params provides layout framework specific properties (identical to req.sbLayout in the server-side example).

Using a templating engine


Providing functions that generate dynamic content (by embedding HTML code in strings) may not always be the most intuitive approach. It is also possible to configure template handlers: the framework can invoke a template handler function for files with a certain extension.

In the following server-side example, we define a template handler for files with an .ejs extension to use the EJS templating engine:

import { application } from "./appmodel.mjs";
import { displayRequestedPage } from "js-sblayout/view/server/index.mjs";
import { createTestServer } from "js-sblayout/testhttpserver.mjs";

import * as ejs from "ejs";

// Template handler that renders an EJS template file and writes
// the resulting string to the HTTP response
function renderEJSTemplate(req, res, sectionFile) {
    return new Promise((resolve, reject) => {
        // Expose the req and res objects to the template
        ejs.renderFile(sectionFile, { req: req, res: res }, {}, function(err, str) {
            if(err) {
                reject(err);
            } else {
                res.write(str);
                resolve();
            }
        });
    });
}

const server = createTestServer(function(req, res, url) {
    displayRequestedPage(req, res, application, url.pathname, {
        ejs: renderEJSTemplate
    });
});
server.listen(process.env.PORT || 8080);

In the above code fragment, the renderEJSTemplate function opens an .ejs template file and uses the ejs.renderFile function to render it. The resulting string is written to the response that is sent to the user.

To use the template handlers, we invoke displayRequestedPage with an additional parameter that maps the ejs file extension to the template handler function.

In a client-side/browser application, we can define a template handler as follows:

<!DOCTYPE html>

<html>
    <head>
        <title>My page</title>
        <script type="text/javascript" src="ejs.js"></script>
        <script type="module">
import { application } from "./appmodel.mjs";
import { initRequestedPage, updateRequestedPage } from "js-sblayout/view/client/index.mjs";

const templateHandlers = {
    ejs: function(div, response) {
        return ejs.render(response, {});
    }
};

document.body.onload = function() {
    initRequestedPage(application, templateHandlers);
};

document.body.onpopstate = function() {
    updateRequestedPage(application, templateHandlers);
};
        </script>
    </head>

    <body>
    </body>
</html>

In the above code fragment, we define a templateHandlers object that gets propagated to the view function that initially renders the page (initRequestedPage) and dynamically updates the page (updateRequestedPage).

By adding the following sub page to the entry page, we can use an ejs template file to dynamically generate a page rather than a static HTML file or function:

new StaticContentPage("Home", new Contents("home.html"), {
    ...
    stats: new StaticContentPage("Stats", new Contents("stats.ejs"))
})

In a server-side application, we can use stats.ejs to display request variables:

<h2>Request parameters</h2>

<table>
    <tr>
        <th>HTTP version</th>
        <td><%= req.httpVersion %></td>
    </tr>
    <tr>
        <th>Method</th>
        <td><%= req.method %></td>
    </tr>
    <tr>
        <th>URL</th>
        <td><%= req.url %></td>
    </tr>
</table>

resulting in a page that may have the following look:


In a client-side application, we can use stats.ejs to display browser variables:

<h2>Some parameters</h2>

<table>
    <tr>
        <th>Location URL</th>
        <td><%= window.location.href %></td>
    </tr>
    <tr>
        <th>Browser languages</th>
        <td>
        <%
        navigator.languages.forEach(language => {
            %>
            <%= language %><br>
            <%
        });
        %>
        </td>
    </tr>
    <tr>
        <th>Browser code name</th>
        <td><%= navigator.appCodeName %></td>
    </tr>
</table>

displaying the following page:


Strict section and page key ordering


In all the examples shown previously, we have used an Object to define sections and sub pages. In JavaScript, the order of an object's keys is mostly, but not entirely, determined by insertion order -- integer-like keys are always enumerated first, in ascending numeric order, before keys that are arbitrary strings, regardless of the insertion order.

As a consequence, the order of the pages and sections may not be the same as the order in which the keys are declared.
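
For example, the following sketch demonstrates this enumeration behavior:

const sections = {
    header: "header.html",
    1: "footer.html",
    menu: "menu"
};

// Integer-like keys are enumerated first, in ascending order,
// followed by string keys in insertion order:
console.log(Object.keys(sections)); // [ "1", "header", "menu" ]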

When the object key ordering is a problem, it is also possible to use iterable objects, such as nested arrays of key/value pairs, to ensure strict key ordering:

import { Application } from "js-sblayout/model/Application.mjs";

import { StaticSection } from "js-sblayout/model/section/StaticSection.mjs";
import { MenuSection } from "js-sblayout/model/section/MenuSection.mjs";
import { ContentsSection } from "js-sblayout/model/section/ContentsSection.mjs";

import { StaticContentPage } from "js-sblayout/model/page/StaticContentPage.mjs";
import { HiddenStaticContentPage } from "js-sblayout/model/page/HiddenStaticContentPage.mjs";
import { PageAlias } from "js-sblayout/model/page/PageAlias.mjs";

import { Contents } from "js-sblayout/model/page/content/Contents.mjs";

/* Create an application model */

export const application = new Application(
    /* Title */
    "My application",

    /* Styles */
    [ "default.css" ],

    /* Sections */
    [
        [ "header", new StaticSection("header.html") ],
        [ "menu", new MenuSection(0) ],
        [ "submenu", new MenuSection(1) ],
        [ "contents", new ContentsSection(true) ],
        [ 1, new StaticSection("footer.html") ]
    ],

    /* Pages */
    new StaticContentPage("Home", new Contents("home.html"), [
        [ 404, new HiddenStaticContentPage("Page not found", new Contents("error/404.html")) ],

        [ "home", new PageAlias("Home", "") ],

        [ "page1", new StaticContentPage("Page 1", new Contents("page1.html"), [
            [ "page11", new StaticContentPage("Subpage 1.1", new Contents("page1/subpage11.html")) ],
            [ "page12", new StaticContentPage("Subpage 1.2", new Contents("page1/subpage12.html")) ],
            [ "page13", new StaticContentPage("Subpage 1.3", new Contents("page1/subpage13.html")) ]
        ])],

        [ "page2", new StaticContentPage("Page 2", new Contents("page2.html"), [
            [ "page21", new StaticContentPage("Subpage 2.1", new Contents("page2/subpage21.html")) ],
            [ "page22", new StaticContentPage("Subpage 2.2", new Contents("page2/subpage22.html")) ],
            [ "page23", new StaticContentPage("Subpage 2.3", new Contents("page2/subpage23.html")) ]
        ])],
        
        [ 0, new StaticContentPage("Last page", new Contents("lastpage.html")) ]
    ]),

    /* Favorite icon */
    "favicon.ico"
);

In the above example, we have rewritten the application model example to use strict key ordering. We have added a section with the numeric key: 1 and a sub page with the key: 0. Because we have defined nested arrays (instead of objects), this section and page come last (if we had used objects, they would have appeared first, which is undesired).

Internally, the Application and Page objects use a Map to ensure strict ordering.

More features


The framework has full feature parity with the PHP and Java implementations of the layout framework. In addition to the features described in the previous sections, it can also do the following:

  • Work with multiple content sections. In our examples, there is only one content section that changes when picking a menu item, but it is also possible to have multiple content sections.
  • Page specific stylesheets and JavaScript includes. Besides including CSS stylesheets and JavaScript files globally, they can also be included on page level.
  • Using path components as parameters. Instead of selecting a sub page, it is also possible to treat a path component as a parameter and dynamically generate a response.
  • Internationalized pages. Each sub page uses an ISO localization code and the framework picks the most suitable language in which the page should be displayed by default.
  • Security handlers. Every page can implement its own method that checks whether it should be accessible, according to a custom security policy.
  • Controllers. It is also possible to process GET or POST parameters before the page gets rendered, for example, to validate them.

Conclusion


In this blog post, I have described the features of the JavaScript port of my layout framework. In addition to rendering pages server-side, it can also be used directly in the web browser to dynamically update the DOM. For the latter aspect, it is not required to run any server-side scripting language, making application deployments considerably easier.

One of the things I liked about this experiment is that the layout model is sufficiently high-level so that it can be used in a variety of application domains. To make client-side rendering possible, I only had to develop another view function. The implementation of the model aspect is exactly the same for server-side and client-side rendering.

Moreover, the newer features of the JavaScript language (most notably ECMAScript modules) make it much easier to reuse code between Node.js and web browsers. Before ECMAScript modules were adopted by browser vendors, there was no module system in the browser at all (Node.js has CommonJS), forcing me to implement all kinds of tricks to make an implementation reusable between Node.js and browsers.

As explained in the introduction of this blog post, web front-end technologies do not have a separate layout concern. A possible solution to cope with this limitation is to generate pages server-side. With the JavaScript implementation, this is no longer required, because it can also be done directly in the browser.

However, this still does not fully solve my layout frustrations. For example, dynamically generated pages are poorly visible to search engines. Moreover, a dynamically rendered web application is useless to users who have JavaScript disabled, or who use a web browser that does not support JavaScript, such as text browsers.

Using JavaScript also breaks the declarative nature of web applications -- HTML and CSS allow you to describe the structure and style of a page without specifying how to render it. This has all kinds of advantages, such as the ability to degrade gracefully when certain features, such as graphics, cannot be used. With JavaScript, some of these properties are lost.

Still, I had already wanted to explore this idea for several years. During the COVID-19 pandemic, I read quite a few technical books, such as JavaScript: The Definitive Guide, and learned that new JavaScript language features, most notably ECMAScript modules, would make it possible to use exactly the same implementation of the model both server-side and client-side.

As explained in my blog reflection over 2021, I have been overly focused on a single goal for almost two years and it started to negatively affect my energy level. This project was a nice short distraction.

Future work


I have also been investigating whether I could use my framework to create offline web applications with a consistent layout. Unfortunately, this does not seem to be very straightforward.

It seems that browsers do not allow module imports from local files (file:// URLs) for security reasons. In theory, this restriction can be bypassed by bundling all the modules into a single JavaScript include with webpack.

However, it turns out that there is another problem -- it is also not possible to open files from the local drive for security reasons. There is a File System Access API in development, but it is not finished or broadly supported yet.

Some day, when these APIs have become more mature, I may revisit this problem and revise my framework to also make offline web applications possible.

Availability


The JavaScript port of my layout framework can be obtained from my GitHub page. To use this framework client-side, a modern web browser is required, such as Mozilla Firefox or Google Chrome.