Monday, March 28, 2011

First computer

Not so long ago, I bought a new PC. The video card in my previous PC had died, and I didn't think it was worth investing any money in a system that was almost 4 years old. In this blog post, however, I don't want to talk about my new computer, but about my first one, which still brings back good memories.


My first computer experience started on a Commodore 64 owned by my parents. Actually, it was the more advanced Commodore 128 (shown in a picture from Wikipedia above), which automatically booted into C64 mode due to a helper cartridge. The newer Commodore 128 was still fully compatible with the older Commodore 64.

The Commodore 64's system resources were very slim compared to systems these days: a MOS Technology 6510 processor running at ~1 MHz and 64 KiB of RAM (of which only about half was usable by BASIC programs). In Commodore 128 mode, you had double the amount of RAM (128 KiB) available and the more advanced 8502 processor running at ~2 MHz.

Usage


The operating system and shell of both the Commodore 64 and 128 were also very simple: just a BASIC interpreter, which you used for various tasks such as retrieving the contents of a disk and loading programs. Retrieving the contents of a disk was done by loading a special file named '$' into the BASIC program space in memory:

LOAD "$",8

And by listing the BASIC program, you could see the disk contents:

LIST
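
To give an impression, the listing then looked roughly like this (the disk and file names here are just made-up examples):

```text
0 "GAMES DISK     " GD 2A
12   "TURRICAN"        PRG
652 BLOCKS FREE.
```

The first line shows the disk name and ID, and each following line shows the size in disk blocks, the file name and the file type.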


Then you basically had to move your cursor to the line containing the program that you wanted to run, type LOAD to load it, and type RUN to start it. Maybe this process sounds scary, since we have fancy GUIs nowadays, but it wasn't so hard back then: I was able to do this when I was 6 years old.

Programming


My first experience with programming also started on this computer. For some reason, knowing that it was possible to create your own stuff (besides running somebody else's) fascinated me. A cousin of mine (who already had some programming experience on the Commodore) showed me the basics. Moreover, I also owned several C64 programming books (given to me by some relatives) which I used as a reference, although as a kid I was not always able to understand all the concepts.

The first Commodore 64 BASIC program I ever wrote looked basically like this:

10 INPUT "WHAT IS YOUR NAME";A$
20 PRINT "HELLO ";A$;"!"

It was just a very simple program that asked the user to type his name and responded with a friendly greeting. Of course, these two lines were a little boring, so I usually added two lines at the beginning that cleared the screen and changed the color of the text, and I used some POKEing to change the colors of the main screen and screen border to make the program look a little prettier. Since the Commodore 64 uses special characters for clearing the screen and changing the color of the text (which I can't list here), I have included a screenshot showing the output of the program and a listing of the source code.
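
The embellished version looked roughly like this (a reconstruction from memory, not the original listing; I use CHR$ codes here as a stand-in for the unlistable special characters):

```basic
10 POKE 53280,0 : POKE 53281,0 : REM BLACK BORDER AND BACKGROUND
20 PRINT CHR$(147);CHR$(5); : REM CLEAR SCREEN, SWITCH TO WHITE TEXT
30 INPUT "WHAT IS YOUR NAME";A$
40 PRINT "HELLO ";A$;"!"
```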


I wrote many more Commodore 64 BASIC programs, such as a variant of the first program to insult persons that I disliked, some simple games (guessing words and numbers, or dice games) and a lot of useless stuff in which I tried to do things with graphics and to turn them into games (but I lacked the skills/knowledge back then to do something really useful).

The challenging part of programming games was that you had to understand the hardware architecture of the Commodore 64 quite well. Although an internal BASIC interpreter was available, there were no high-level instructions for things such as creating sprites and sounds. Most of these things were achieved by using POKEs and PEEKs in BASIC to write and read values at the appropriate memory addresses (the Commodore 128 BASIC interpreter was more advanced and did have some high-level instructions, but I never used them). Moreover, BASIC was also much too slow for most games, so you had to write most things in assembly. But since writing stuff in assembly is not much more difficult than using POKEs and PEEKs, this was not the biggest challenge.
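
To give an impression of what that looked like, here is a minimal illustrative fragment (not taken from one of my old programs) that displays a sprite purely by POKEing VIC-II registers:

```basic
10 V=53248 : REM BASE ADDRESS OF THE VIC-II CHIP
20 FOR I=0 TO 62 : POKE 832+I,255 : NEXT : REM SOLID BLOCK SPRITE DATA AT 832
30 POKE 2040,13 : REM SPRITE 0 DATA POINTER (13*64 = 832)
40 POKE V+21,1 : REM ENABLE SPRITE 0
50 POKE V+39,7 : REM SPRITE 0 COLOR: YELLOW
60 POKE V,160 : POKE V+1,120 : REM SPRITE 0 X/Y POSITION
```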

Games


Although the Commodore 64 had only 16 predefined colors (which can't be changed), some games had good graphics and were exceptionally creative. Basically, the tiles of side-scrolling games consisted of programmable characters in multi-color mode. Each multi-color character could use 4 colors out of the predefined 16: each pixel could take one custom color (the same for the whole character), the background color, or one of 2 predefined colors (shared with all other characters). Assigning a color value to each individual pixel would have been too expensive. In order to make something look nice, you really had to think ahead and use some creative tricks. I still have to admit that I'm impressed to see how those games were developed.
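
As an illustration (again just example POKEs, not taken from a real game), multi-color text mode and its shared colors were configured like this:

```basic
10 POKE 53270,PEEK(53270) OR 16 : REM ENABLE MULTI-COLOR TEXT MODE
20 POKE 53282,2 : POKE 53283,6 : REM THE TWO SHARED COLORS (RED, BLUE)
30 REM SETTING BIT 3 IN COLOR RAM MAKES A CHARACTER MULTI-COLOR:
40 POKE 55296,8+7 : REM TOP-LEFT CHARACTER: MULTI-COLOR, CUSTOM COLOR 7
```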


One of the most advanced games I owned was Turrican, a side-scrolling shooter combining elements of various games such as Metroid. I have uploaded some screenshots, shown on the left. Moreover, I also remember this game very well because I had a hard time beating one of the bosses, which gave me nightmares when I was still a kid. The boss that I feared so much is shown in the last screenshot :-) . This game is often praised for its high technical achievements, showing things people did not believe to be possible on a Commodore 64. The sequel, Turrican 2, contained even more advanced features, although I did not own a copy of it.


Experience


Besides playing games, writing many useless Commodore 64 and 128 BASIC programs, and some "experiments" in Commodore 64 assembly, I also wrote several programs that actually were a bit interesting (to some degree). The "coolest program" I wrote was a demo in which I rapidly POKEd new values into the main screen and screen border color registers, with some delays in between, which created cool screen effects.
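
The essence of such a demo fits in a few lines of BASIC (an illustrative reconstruction, not the original):

```basic
10 FOR C=0 TO 15
20 POKE 53280,C : POKE 53281,15-C : REM CYCLE THE BORDER AND BACKGROUND COLORS
30 FOR D=1 TO 50 : NEXT : REM SHORT DELAY LOOP
40 NEXT : GOTO 10
```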

The most useful program I have created is probably the homework assistant, which helped me learn my English and French vocabulary. It could be used to fill in Dutch words and their translations and to test your skills. It also had the ability to save your vocabularies to disk. Although creating the homework assistant was fun, I rarely used it during my studies :-) (probably because learning words wasn't that difficult for me).
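
I no longer have the original source, but the core of the homework assistant looked roughly like this (a simplified reconstruction with made-up words; the real program also let you enter new words and save the lists to disk):

```basic
10 N=3 : DIM D$(N),E$(N) : REM DUTCH WORDS AND THEIR TRANSLATIONS
20 FOR I=1 TO N : READ D$(I),E$(I) : NEXT
30 FOR I=1 TO N
40 PRINT "TRANSLATE: ";D$(I); : INPUT A$
50 IF A$=E$(I) THEN PRINT "CORRECT!" : GOTO 70
60 PRINT "WRONG, IT IS ";E$(I)
70 NEXT
80 DATA HOND,DOG,KAT,CAT,HUIS,HOUSE
```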

Nowadays, computers are many times more powerful than my good old Commodore and have many more capabilities. Moreover, throughout the years many more (high-level) programming languages, frameworks, libraries, paradigms and other techniques have been developed to make things more convenient. We have a lot of cool stuff, such as high-resolution 3D graphics, but for some reason I don't get the impression that everything has become that many times "better". Funny...

However, I'm happy to say that the Commodore 128 of my parents is still in my possession and it still seems to work, although I rarely turn it on nowadays.

References


For the screenshots included in this blog post, I used VICE, a collection of Free/Open-Source emulators for various CBM models, including the C64 and C128.

Monday, March 7, 2011

Self-adaptive deployment with Disnix

In an earlier blog post I have described Disnix, a Nix-based distributed service deployment system. Disnix uses declarative specifications of the services, infrastructure and distribution to automatically, reliably and efficiently deploy a service-oriented system using the purely functional deployment properties of Nix.

Although Disnix offers useful deployment features, the networks in which service-oriented systems are deployed are often dynamic. Various events may occur, such as a machine crashing and disappearing from the network, a new machine with new capabilities being added, or a capability changing, e.g. an increase in memory. Such events may partially or completely break a system or render a deployment scenario suboptimal.

For these types of events, a redeployment may be required to fix the system or to change the deployment such that all desired non-functional constraints are satisfied, which takes some effort. The infrastructure model must be updated to reflect the new configuration of machines. Moreover, a new distribution model must be written, mapping services to the right machines in the network. Manually updating these models every time an event occurs is often a complex and time-consuming process.


We have developed a self-adaptive framework on top of Disnix (shown in the figure above) to deal with such events, automatically redeploying a system in such a way that the desired non-functional constraints remain satisfied after an event occurs.

At the top, an infrastructure generator is shown. This tool generates a Disnix infrastructure model using a discovery service, capturing the machines present in the network along with their capabilities and properties.

The generated infrastructure model, however, may not contain all properties required for deployment, such as authentication credentials and other privacy-sensitive information. The infrastructure augmenter adds these properties to the discovered infrastructure model.

Then the distribution generator is invoked with the services model and the generated infrastructure model. This tool generates a Disnix distribution model, using a policy described in a QoS model to dynamically map services to machines based on non-functional properties defined in the services and infrastructure models. Various algorithms can be invoked from the QoS model to generate a suitable mapping.

Finally, the generated infrastructure and distribution models, along with the services model, are passed to disnix-env, which performs the actual (re)deployment of the system as reliably and efficiently as possible.


The current implementation uses Avahi for discovering the machines in the network and their properties, but it is not limited to a particular protocol. One can easily replace this tool with a different lookup service using SSDP or a custom protocol.

The distribution generator takes a very generic and extensible approach. In a QoS model, a developer can pick from a range of distribution filters and combine them to achieve a desired result. New algorithms can easily be integrated into this architecture.

{ services, infrastructure
, initialDistribution, previousDistribution
, filters
}:

filters.minsetcover {
  inherit services infrastructure;
  targetProperty = "cost";
  
  distribution = filters.mapAttrOnList {
    inherit services infrastructure;
    serviceProperty = "type";
    targetPropertyList = "supportedTypes";
  };
}

The code fragment above shows an example of a QoS model. This model is a function taking several arguments: the services model (provided by the user), the generated infrastructure model, an initial distribution model (which maps all services to all targets), a previous distribution model (which can be used to reason about the upgrade effort) and a set of filter functions.

In the body of the expression, various filter functions are used. Each filter function takes a candidate distribution and returns a filtered distribution. These functions are composed to achieve a desirable mapping. In the example above, we first map all services having a specific type attribute to machines that support that particular type (i.e. whose supportedTypes attribute in the infrastructure model contains it in its list of possibilities). Using this filter function, we can e.g. map Apache Tomcat web applications to servers running Apache Tomcat and MySQL databases to servers running a MySQL DBMS. This is a very crucial feature, as we don't want to deploy a service to a machine that is not capable of running it.

In the outer function, we apply a minimum set cover approximation method to the result of the previous function. This function can be used to find the minimum-cost deployment, given that every machine has a fixed cost price, no matter how many services it will host.
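
Formally (using my own notation here, not anything taken from the Disnix models): if \(S\) is the set of services, \(M\) the set of machines, \(M_s \subseteq M\) the machines capable of running service \(s\), and \(x_m\) indicates whether machine \(m\) is used, the generator approximates:

\[
\min \sum_{m \in M} \mathit{cost}(m) \cdot x_m
\quad \text{s.t.} \quad
\sum_{m \in M_s} x_m \geq 1 \;\; \forall s \in S,
\qquad x_m \in \{0, 1\}
\]

This is the classical weighted set cover problem, which is NP-hard, hence the use of an approximation method.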


We have applied the dynamic Disnix framework to a number of use cases, such as several toy systems, ViewVC and an industrial case study from Philips research.

The dynamic Disnix framework is still under heavy development and is going to be part of Disnix 0.3. The dynamic framework supports a number of algorithms and division strategies. For more information, have a look at the paper 'A Self-Adaptive Deployment Framework for Service-Oriented Systems', available from my publications page. I will present this paper at SEAMS 2011 (co-located with ICSE 2011) in Waikiki, Honolulu, Hawaii. You can find the slides on my talks page once I have prepared them. See you in Hawaii!