The majority of the porting activities were quite straightforward, thanks to the SDL 1.2 to 2.0 migration guide. The changes that affected SDL_ILBM were the following:
- SDL 2.0 supports multiple windows, so we have to create (and discard) them in a slightly different way in the viewer app.
- There is a more advanced graphics rendering pipeline in SDL 2.0 that uses hardware acceleration (e.g. through OpenGL or Direct3D) where possible.
Properly supporting the latter aspect puzzled me a bit, because the migration guide does not clearly describe the best way to re-render 8-bit palettized surfaces on every frame.
SDL_ILBM generates this kind of surface for the majority of ILBM images by default (images using the HAM viewport mode are notable exceptions, because they typically contain more than 256 distinct colors). I need to update these surfaces every frame to support animated cycle ranges.
In this blog post, I will describe what I did to support this, because I couldn't find any hands-on information about it elsewhere and I think it might be helpful to others as well.
Rendering surfaces using SDL 1.2
In the old implementation using SDL 1.2, the viewer application basically did the following: It first parses an IFF file, then extracts the ILBM images from it and finally composes SDL_Surface instances from the ILBM images that are shown to the user. Then the application enters a main loop which responds to user's input and continuously updates what is being displayed in the window.
Expressing this in very simplified code, it looks as follows. First, we obtain an SDL_Surface that represents the ILBM image we want to display:
SDL_Surface *pictureSurface;
When using the default options, SDL_ILBM produces an 8-bit palettized surface, unless you try to open an image using the HAM viewport setting.
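If you want to verify what kind of surface you received, you can inspect its pixel format. This is just a small sanity check (assuming <stdio.h> is included), not part of the viewer:

if(pictureSurface->format->BitsPerPixel == 8 && pictureSurface->format->palette != NULL)
    fprintf(stderr, "8-bit palettized surface with %d colors\n", pictureSurface->format->palette->ncolors);
else
    fprintf(stderr, "True color surface: %d bits per pixel\n", pictureSurface->format->BitsPerPixel);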
Then we construct a window that displays the image. One of the options of the viewer is to make the dimensions of the window equal to the dimensions of the image:
SDL_Surface *windowSurface = SDL_SetVideoMode(pictureSurface->w, pictureSurface->h, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);
Besides constructing a window, SDL_SetVideoMode() also returns an SDL surface representing the graphics that are displayed in the window. The SDL_HWSURFACE flag tells SDL that the pixel data should reside in hardware memory (video RAM) instead of software memory (ordinary RAM), which is where the picture surface resides.
Eventually we reach the main loop of the program that responds to user events (e.g. keyboard presses), updates the pixels in the picture surface when cycling mode is turned on and flips the logical and physical screen buffers so that the changes become visible:
while(TRUE) {
    /* Process events */

    /* Modify the pixels of pictureSurface */

    /* Blit picture surface on the window */
    SDL_BlitSurface(pictureSurface, NULL, windowSurface, NULL);

    /* Update the screen */
    SDL_Flip(windowSurface);
}
After changing the palette and/or the pixels in the picture surface, we can simply use SDL_BlitSurface() to make the modified surface visible in the window surface. This function also converts the pixels into the format of the target surface automatically, which means that it should work for both 8-bit palettized surfaces as well as RGBA surfaces.
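For example, one step of palette cycling boils down to rotating the colors within a cycle range of the surface's palette. The following is only a simplified sketch with a made-up range of 16 entries starting at index 16; in SDL_ILBM the actual ranges come from the image's cycle range (CRNG) chunks:

/* Rotate palette entries 16..31 by one position (illustrative range) */
SDL_Color colors[16];
unsigned int i;

for(i = 0; i < 16; i++)
    colors[i] = pictureSurface->format->palette->colors[16 + (i + 1) % 16];

/* Apply the rotated colors to the 8-bit surface's palette (SDL 1.2 API) */
SDL_SetColors(pictureSurface, colors, 16, 16);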
Rendering surfaces using SDL 2.0
Since SDL 2.0 has a more advanced graphics rendering pipeline, there are more steps that we need to perform. Constructing the same window with the same dimensions in SDL 2.0 must be done as follows:
SDL_Window *sdlWindow = SDL_CreateWindow("ILBM picture", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, pictureSurface->w, pictureSurface->h, 0);
Besides a window, we must also construct a renderer instance that renders textures in the window:
SDL_Renderer *renderer = SDL_CreateRenderer(sdlWindow, -1, 0);
The renderer is capable of automatically scaling a texture to the window's dimensions if needed. The following instructions configure the renderer's dimensions and instruct it to use linear scaling:
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");
SDL_RenderSetLogicalSize(renderer, pictureSurface->w, pictureSurface->h);
In SDL 2.0, a texture is basically a pixel surface that resides in hardware memory, while an 'ordinary' surface resides in software memory. In SDL 1.2, both were SDL surfaces that merely differed in a flag.
The previous steps were quite easy. However, while I was trying to port the main loop to SDL 2.0 I was a bit puzzled. In order to show something in the window, we must ensure that the pixels are in hardware memory (i.e. in the texture). However, we do not have direct access to a texture's pixels. One of the solutions that the migration guide suggests is to convert a surface (that resides in software memory) to a texture:
SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);
The above function invocation creates a texture out of the surface and performs all necessary steps to do it, such as allocating memory for the texture and converting chunky pixels to RGBA pixels.
Although this function seems to do everything we need, it has two drawbacks. First, it allocates memory for a new texture on every call, which we have to free ourselves over and over again, even though the texture always has the same size. Second, the resulting texture is a static texture: its pixels can only be modified through SDL_UpdateTexture(), which is also a slow operation. It is therefore not recommended to use it to render every frame.
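To illustrate why this is wasteful: a main loop built directly on SDL_CreateTextureFromSurface() would have to allocate and destroy a texture of the same size on every iteration. This is only a sketch of the pattern being discouraged, not the approach I ended up using:

while(TRUE) {
    /* Process events */

    /* Modify the pixels of pictureSurface */

    /* Allocate a brand new texture and convert the surface's pixels into it */
    SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, pictureSurface);

    /* Render the texture and present the result */
    SDL_RenderCopy(renderer, texture, NULL, NULL);
    SDL_RenderPresent(renderer);

    /* Throw the texture away again, only to recreate it in the next iteration */
    SDL_DestroyTexture(texture);
}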
A faster alternative (according to the migration guide) is to use a streaming texture:
SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING, pictureSurface->w, pictureSurface->h);
However, we cannot construct textures that store their pixels in 8-bit chunky (palettized) format, so we have to convert them from the surface's format to the texture's format. After studying the SDL documentation a bit, I stumbled upon SDL_ConvertPixels(), but that did not seem to work directly with the picture surface, because the function receives only raw pixel bytes and format identifiers, has no access to the palette, and therefore cannot convert pixels in an indexed format.
I ended up implementing the main loop as follows:
/* Construct a surface that's in a format close to the texture */
SDL_Surface *windowSurface = SDL_CreateRGBSurface(0, pictureSurface->w, pictureSurface->h, 32, 0, 0, 0, 0);

void *pixels;
int pitch;

while(TRUE) {
    /* Process events */

    /* Modify the pixels of pictureSurface */

    /*
     * Blit 8-bit palette surface onto the window surface that's
     * closer to the texture's format
     */
    SDL_BlitSurface(pictureSurface, NULL, windowSurface, NULL);

    /* Modify the texture's pixels */
    SDL_LockTexture(texture, NULL, &pixels, &pitch);
    SDL_ConvertPixels(windowSurface->w, windowSurface->h,
        windowSurface->format->format, windowSurface->pixels, windowSurface->pitch,
        SDL_PIXELFORMAT_RGBA8888, pixels, pitch);
    SDL_UnlockTexture(texture);

    /* Make the modified texture visible by rendering it */
    SDL_RenderCopy(renderer, texture, NULL, NULL);

    /*
     * Update the screen with any rendering performed since the
     * previous call
     */
    SDL_RenderPresent(renderer);
}
I introduced another SDL surface (windowSurface) that uses a format closer to the texture's format (RGBA pixels) so that we can do the actual conversion with SDL_ConvertPixels(). After modifying the pixels in the 8-bit palettized surface (pictureSurface), we blit it to the window surface, which automatically converts the pixels to RGBA format. Then we use the window surface to convert the pixels to the texture's format, and finally we render the texture to make it visible to end users.
This seems to do the trick for me and this is the result:
Moreover, if I enable cycling mode the bird and bunny also seem to animate smoothly.
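Cycling itself amounts to updating the palette of pictureSurface at the top of the loop, before the blit. In SDL 2.0 this is done with SDL_SetPaletteColors() instead of SDL_SetColors(); a simplified sketch, using the same made-up cycle range as the SDL 1.2 example above, looks like this:

/* Rotate palette entries 16..31 by one position (illustrative range) */
SDL_Color colors[16];
unsigned int i;

for(i = 0; i < 16; i++)
    colors[i] = pictureSurface->format->palette->colors[16 + (i + 1) % 16];

SDL_SetPaletteColors(pictureSurface->format->palette, colors, 16, 16);

The subsequent SDL_BlitSurface() call picks up the updated palette, so the change automatically propagates to the streaming texture.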
Conclusion
In this blog post I have described the challenges I faced while porting SDL_ILBM from SDL 1.2 to SDL 2.0. To be able to modify the pixels and the palette of an 8-bit palettized surface every frame, I used a second surface with a format closer to the texture's format, allowing me to easily convert and transfer those pixels to a streaming texture.
This approach may also be useful for porting classic 8-bit graphics games to SDL 2.0.
Moreover, besides SDL_ILBM, I also ported SDL_8SVX to SDL 2.0, which did not require any modifications in the code. Both packages can be obtained from my GitHub page.
Comments

Very helpful!
Hey there, longtime lurker, first time commenter. I know this blog post is five years old but let me start by saying thank you for it, it was so comprehensive it helped me gain a much deeper understanding of SDL. But while playing around with this code in my own project, I've run into a problem when doing it to a 'sprite' rather than a window, i.e. it nukes the transparency. Are you aware of any way around this? I've gotten palette cycling to work, but it doesn't look great on a sprite with a huge block of color drawn behind it. Google and the SDL forums have provided no help so far. Any ideas? I've been at this problem for weeks!
Thank you. I am working on migrating a game that uses 8-bit palettized mode and was also confused by the apparent lack of such modes in the renderer. Nice to actually have a working example to start from.
Question: Why not use SDL_CreateRGBSurfaceWithFormat for windowSurface to set it to RGBA8888 from the outset? Wouldn't this allow you to skip all the ConvertPixels stuff?
Paige T -- You need to blit your sprites to the pictureSurface, not the windowSurface. If your source and destination surfaces are both 8-bit and the colorkey is set correctly you should not lose transparency in this blit.
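For illustration, a sketch of such a colorkeyed sprite blit; spriteSurface, the palette index used as the key, and the destination position are all made up for this example:

/* Palette index 0 acts as the transparent color of the 8-bit sprite */
SDL_SetColorKey(spriteSurface, SDL_TRUE, 0);

/* Blit the sprite onto the 8-bit picture surface; keyed pixels are skipped */
SDL_Rect destRect = { 10, 10, spriteSurface->w, spriteSurface->h };
SDL_BlitSurface(spriteSurface, NULL, pictureSurface, &destRect);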