RainSaver is a screensaver for Windows that I made, which warps the desktop as if it were a liquid surface with raindrops hitting it. Here’s a more in-depth look at how the tech behind it works. I used OpenGL for this project, so everything will be from that perspective, but I imagine the same techniques could also be applied in a Direct3D approach.
The first step is to grab an image of the screen to load into video memory as an OpenGL texture, which I won’t go into a whole lot of detail on, except to say that it involves a handy function called BitBlt. Next, I generate a “drop” texture. The drop texture is basically a normal map, where different colors indicate different normals. These colors are later used in the fragment shader to decide how much, and in which direction, to warp the final image. To actually generate it, I set the colors based on sine waves of the coordinates of the pixel relative to the center of the texture, and the alpha based on a sine wave of the distance from the center, setting negative alpha values to 0. It ends up looking a little like this:
// for each pixel in the 256x256 buffer
dist = sqrt((float)((y-128)*(y-128) + (x-128)*(x-128)));
dropBuffer[y][x][0] = 128+(x-128)*(sin(dist/2.0f)); // red: x component of the normal
dropBuffer[y][x][1] = 128+(y-128)*(sin(dist/2.0f)); // green: y component of the normal
dropBuffer[y][x][2] = 255;                          // blue
// alpha: remove inner ring(s)
if (dist < 90)
    dropBuffer[y][x][3] = 0;
else
    dropBuffer[y][x][3] = pow(MAX(0.0f, sin(dist/8.0f)), 12)*255.0f;
There’s also a wavy texture that is generated from cosine and sine waves of x and y coordinates, respectively, and smoothed noise for alpha values.
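A rough sketch of that wavy texture might look like the following. The buffer name, dimensions, and wave frequencies here are my own guesses rather than the actual RainSaver code, and I’ve substituted plain rand() noise for the smoothed noise:

```cpp
#include <cmath>
#include <cstdlib>

// waveBuffer holds RGBA bytes; red/green encode a gently waving normal,
// alpha is noise (the real version smooths it first)
unsigned char waveBuffer[256][256][4];

void generateWaveTexture()
{
    for (int y = 0; y < 256; y++)
    for (int x = 0; x < 256; x++)
    {
        waveBuffer[y][x][0] = (unsigned char)(128 + 127*cos(x/16.0f)); // red from cosine of x
        waveBuffer[y][x][1] = (unsigned char)(128 + 127*sin(y/16.0f)); // green from sine of y
        waveBuffer[y][x][2] = 255;
        waveBuffer[y][x][3] = rand() % 64; // stand-in for smoothed noise
    }
}
```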
New raindrops are randomly created every so often, and existing ones expand over time. All existing drops are drawn into a frame buffer object (FBO), along with the wavy texture over the whole screen. Finally, a full screen quad is drawn with multitexturing, using the desktop texture and the FBO distortion texture, and the fragment shader takes it from there, using the final colors in the distortion texture as normals to determine distortion and shading. The GLSL for the warping alters the texture coordinates, like so:
vec2 coords = gl_TexCoord[0].st;
coords.s = coords.s + (change.r*2.0 - 1.0)/40.0;
coords.t = coords.t + (change.g*2.0 - 1.0)/40.0;
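Put together, the whole fragment shader is only a few lines. The following is a minimal sketch of the idea rather than the actual RainSaver shader; the sampler names (desktop, distortion), the 1/40 distortion scale, and the shading factor are my stand-ins:

```glsl
uniform sampler2D desktop;     // the BitBlt'd screen grab
uniform sampler2D distortion;  // the FBO with drops and waves drawn in

void main()
{
    // treat the distortion texture's colors as a normal map
    vec4 change = texture2D(distortion, gl_TexCoord[0].st);

    // offset the lookup into the desktop texture along that normal
    vec2 coords = gl_TexCoord[0].st;
    coords.s = coords.s + (change.r*2.0 - 1.0)/40.0;
    coords.t = coords.t + (change.g*2.0 - 1.0)/40.0;

    // a touch of shading so the ripples catch the light
    float shade = (change.r*2.0 - 1.0)*0.2;
    gl_FragColor = texture2D(desktop, coords) + vec4(shade, shade, shade, 0.0);
}
```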
I’ve been talking a lot about patches from the Quake 3 BSP format for curved surfaces lately, so I thought I’d delve into some of the inner workings, including the math for generating them from control points. One resource that I have found to be very helpful in all things relating to the Q3BSP file format is Kekoa Proudfoot’s Unofficial Quake 3 Map Specs. Gold mine. The exact math for calculating vertices from the control points is not in there, but quadratic Bezier calculations can be Google’d.
Each patch has a set of subpatches, and each subpatch is represented by 3×3 control points. In the file, points are stored such that adjacent subpatches share a row of control points. Of course, it is also necessary to allocate enough memory for the number of vertices you want, which depends on your target level of detail. Once you have the subpatch and control points that you want, now comes the magical math. Looping over the grid of output vertices, evaluate the curve based on how far across the 3×3 control grid you are: first calculate three temporary vertices, each from a row of three control points, then calculate the final vertex from those three. The math for calculating a quadratic Bezier value from three others goes something like this:
out = a*(1.0-frac)*(1.0-frac) + b*2.0*frac*(1.0-frac) + c*frac*frac;
‘frac’ is a floating-point value from 0.0 to 1.0 representing how far along the curve to end up – for example, given 0.0, it would just return the value of a, and given 1.0, it would return the value of c. These calculations are repeated for each of the three components of a point, and the same math is used for positions, texture coordinates, normals, and colors.
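Tying it together, here’s a sketch of evaluating one 3×3 subpatch at a given (u, v). The struct and function names are my own, not from any particular Q3BSP loader:

```cpp
struct Vec3 { float x, y, z; };

// quadratic Bezier through a, b, c at parameter frac in [0, 1]
static float bezier(float a, float b, float c, float frac)
{
    return a*(1.0f-frac)*(1.0f-frac) + b*2.0f*frac*(1.0f-frac) + c*frac*frac;
}

// evaluate a 3x3 control grid at (u, v): first collapse each row of three
// control points into one temporary point, then collapse the three
// temporaries into the final vertex
Vec3 evalSubpatch(const Vec3 ctrl[3][3], float u, float v)
{
    Vec3 tmp[3];
    for (int row = 0; row < 3; row++)
    {
        tmp[row].x = bezier(ctrl[row][0].x, ctrl[row][1].x, ctrl[row][2].x, u);
        tmp[row].y = bezier(ctrl[row][0].y, ctrl[row][1].y, ctrl[row][2].y, u);
        tmp[row].z = bezier(ctrl[row][0].z, ctrl[row][1].z, ctrl[row][2].z, u);
    }
    Vec3 out;
    out.x = bezier(tmp[0].x, tmp[1].x, tmp[2].x, v);
    out.y = bezier(tmp[0].y, tmp[1].y, tmp[2].y, v);
    out.z = bezier(tmp[0].z, tmp[1].z, tmp[2].z, v);
    return out;
}
```

The same routine works for texture coordinates, normals, and colors; just run it per component.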
As an added bonus, I’ll say a little something about vertex buffer objects (VBOs) in OpenGL. Basically, VBOs let you keep vertex data in GPU memory, so you don’t have to send it from the CPU every time you want to render something, which often affords a nice increase in speed. I recently implemented these for world rendering in Scorch Marks. At least in OpenGL, they are wonderfully easy to work into an existing codebase: load the data into the buffers, change the targets for your vertex arrays, and you’re done. Of course, this could easily be tied into an implementation of Q3BSP patches.
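The pattern looks roughly like this, assuming a GL 1.5+ context is already set up; numVerts and verts here are placeholders for whatever vertex data you have:

```c
GLuint vbo;

/* one-time setup: copy the vertex data into a buffer on the GPU */
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, numVerts * 3 * sizeof(GLfloat), verts, GL_STATIC_DRAW);

/* each frame: point the vertex array at the buffer instead of client memory */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0); /* last arg is an offset into the VBO, not a pointer */
glDrawArrays(GL_TRIANGLES, 0, numVerts);
```

The only real gotcha is that last argument to glVertexPointer: with a VBO bound, it becomes a byte offset into the buffer rather than a client-memory address.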
Scorch Marks has been getting some more graphics upgrades, including more work with GLSL. It took some time to implement all of the facets that are required to make this work acceptably, but I’m fairly happy with the result so far. This time my efforts with GLSL focused on world and model rendering rather than post-processing. Observe:
Diffuse lighting (regular texture and light colors) and specular highlights (extra shine at certain angles) are in. At least for now, I’m only using specular maps, like I had originally planned. I find that specular maps are reasonably easy to implement on top of regular GLSL diffuse lighting, and they add a lot of detail. In any case, normal maps are a low priority, behind such important additions as gameplay. By the way, since this screenshot was taken, I have made huge progress on that level past the blackness to the right. More on that later on.
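For reference, per-pixel diffuse plus specular-mapped lighting in GLSL generally has this shape. This is a sketch of the technique, not Scorch Marks code; the uniform and varying names and the shininess exponent are my own:

```glsl
uniform sampler2D diffuseMap;
uniform sampler2D specularMap;  // grayscale: how shiny each texel is

varying vec3 normal, lightDir, halfVec;

void main()
{
    vec3 n = normalize(normal);

    // diffuse: texture color scaled by the light's angle to the surface
    float diff = max(dot(n, normalize(lightDir)), 0.0);
    vec4 color = texture2D(diffuseMap, gl_TexCoord[0].st) * gl_LightSource[0].diffuse * diff;

    // specular: extra shine at certain angles, gated by the specular map
    float spec = pow(max(dot(n, normalize(halfVec)), 0.0), 32.0);
    color += gl_LightSource[0].specular * spec * texture2D(specularMap, gl_TexCoord[0].st).r;

    gl_FragColor = color;
}
```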
Light blooms are an interesting beast. I’m sure you’ve seen them in a lot of games over the past few years: the brightest lights becoming even brighter, and blurring out from the source a little, to impart to the player an impression of glare, as can be witnessed by the natural eye. In addition to a number of styles of bloom, from a slight change to an enormous blur, there are also many methods for implementing it.
I messed around with a few implementation ideas for light blooms in Scorch Marks. I’m sure they could use some tweaking, but I got good results for both a GLSL path and a fixed-function OpenGL path. I’m still not sure I’ve achieved my end result, but I’m reasonably happy with it for now. The GLSL path, which gives far more control, is seen below. The bloom is turned up a bit for this shot, to display the effect more obviously.