Thursday, December 6, 2012

3D survival guide for starters #4, Light/AO/HeightMaps

Argh! A bit later than it should be. And I don't have a good excuse really. Busy as usual, and forgot the clock. Or something. Doesn't matter, let's rush towards the last "Starters Guide" post, and smack your ears with a few more texturing techniques: Ambient Occlusion mapping and two beautiful old ladies, miss HeightMap and miss LightMap. The good news is that these are easier (to understand) than normalMapping. So lower your guard and relax.
Tears. One of my first lightMap generator results, including support for normalMapping.


HeightMaps
=====================================================
This is one of those ancient, multi-purpose techniques that hasn't quite died yet. If I were a 3D-programming teacher, I would first assign my students to make a terrain using heightMaps. And slap them with a ruler. The HeightMap image contains... height, per pixel. White pixels indicate high regions, dark pixels stand for a low altitude. Basically the image below can be seen as a 3D image. The width and height stand for the X and Z axes, the pixel intensity for the Y (height) coordinate.

An old trick to render terrains was/is to create a grid of vertices. Like a lumberjack shirt. Throw this flat piece of lumberjack cloth over a heightMap, and it folds into a 3D terrain with hills and valleys. That's pretty much how games like Delta Force, Battlefield 1942 or Far Cry shaped their terrains. And aside from rendering, this image can be used to compute the height at any given point for physics & collision detection as well. Pretty handy if you don't want your player to fall through the floor. Another cool thing about heightMaps is that any idiot can easily alter the terrain. Just use a soft brush in a painting program, and draw some smooth bumps. Load the image in your game, and voila, The Breasts of Sheba. In older games it was often possible to screw up the terrain by drawing in their map data files.
On the eighth day, God created height.
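To make that concrete, here is a minimal C++ sketch (hypothetical names and layout, not code from any particular game) of how you could look up the terrain height at an arbitrary (x, z) position from a grayscale heightMap, with bilinear filtering between the four surrounding pixels:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical heightMap: 'pixels' holds one grayscale value (0..255) per texel.
struct HeightMap {
    int width, height;                  // image resolution, e.g. 1024 x 1024
    std::vector<unsigned char> pixels;  // row-major grayscale values
    float terrainSize;                  // world size covered, e.g. 4096 meters
    float maxAltitude;                  // world height of a pure white pixel
};

// Return the terrain height at world position (x, z), bilinearly filtered.
float SampleHeight(const HeightMap& map, float x, float z)
{
    // Convert world coordinates to (fractional) pixel coordinates.
    float u = std::clamp(x / map.terrainSize, 0.0f, 1.0f) * (map.width  - 1);
    float v = std::clamp(z / map.terrainSize, 0.0f, 1.0f) * (map.height - 1);

    int x0 = (int)u,  z0 = (int)v;
    int x1 = std::min(x0 + 1, map.width  - 1);
    int z1 = std::min(z0 + 1, map.height - 1);
    float fx = u - x0, fz = v - z0;     // blend weights between neighbors

    auto texel = [&](int px, int pz) {
        return map.pixels[pz * map.width + px] / 255.0f;   // 0..1
    };

    // Blend the four surrounding texels (bilinear interpolation).
    float top    = texel(x0, z0) * (1 - fx) + texel(x1, z0) * fx;
    float bottom = texel(x0, z1) * (1 - fx) + texel(x1, z1) * fx;
    float blended = top * (1 - fz) + bottom * fz;

    return blended * map.maxAltitude;   // scale to world units
}
```

The same lookup can feed both the renderer (to place the grid vertices) and the physics (to keep the player above the floor).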

Yet heightMaps aren't as popular anymore, as they have several issues. First, heightMaps provide only 1 height coordinate per grid point, so you can't make multi-story or overlapping geometry, such as a tunnel or a cliff hanging over your terrain. Second, your image resolution is limited as always. Let's say you draw a 1024 x 1024 image for a 4096 x 4096 meter (~16 km2) terrain. You can make your terrain grid as detailed as you want, but there is only one height value per 4096 / 1024 = 4 meters, so one value per 4 x 4 meter patch. Certainly not detailed enough to make a little crater or a man-made ditch. Games solved this problem a bit by chopping their terrains into smaller chunks, each with its own heightMap to allow more detail. More complex shapes such as steep cliffs, rocks, tunnels or man-made structures would be added on top as extra 3D models.

So with the ever-increasing detail and complexity, games switched to other methods. But don't just throw away your heightMaps yet! More advanced shader technologies made The Return of the HeightMap possible. To name a few:
- normalMap generators often take a heightMap as input to generate the normalMap
- Entropy
- Parallax Mapping
Entropy? Isn't that a tropical drink? Not really, but it is a cocktail for making your surfaces more realistic with rust, dirt, moss or other layers caused by nature and the environment. I can't really explain it in one word, but the way a surface looks often depends on environmental factors. If you take a look at your old brick shed in the garden, you may notice that moss likes to settle in darker, moist places. If it snows, snow will lie on the upward-facing parts of the bricks, not on the underside. Damaged parts are more common at the edges of a wall. The overall state and coloring of the wall may depend on how much sun and rain it caught through the years. All these features are related to geometric attributes: their location and orientation. By mixing textures and some Entropy logic, you can make your surfaces look more realistic by putting extra layers of rust, moss, worn paint, cracks, frost, or whatsoever in LOGICAL places.

If we want to blend in moss, we would like to know the darker parts of our geometry. And here, a heightMap can help a lot. For a brick wall or pavement, the lower parts (dark pixels) usually indicate the gaps between the stones. So start blending in moss at the "low heights" first. The same trick can be used for fluids. If it rains, you first make the lowest parts look wet & reflective. This allows you to gradually raise the water level without needing true 3D geometry. Pretty cool huh?
Like cheese grows between your toes first, this shader should blend in liquids at the lowest heights first.
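Just to illustrate the idea (this is not real shader code from any engine, and the names and constants are made up), such a blend could look like this: read the height, compare it against a rising "water level", and fade between the dry and the wet look.

```cpp
#include <algorithm>

// Smooth 0..1 transition between two edges (same idea as GLSL/HLSL smoothstep).
float Smoothstep(float edge0, float edge1, float x)
{
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

struct SurfaceColor { float r, g, b; };

// Blend a wet/mossy layer in at the low spots of the heightMap first.
// 'height'     : value read from the heightMap (0 = gap between bricks, 1 = top)
// 'waterLevel' : rises over time while it rains (0..1)
SurfaceColor BlendWetness(SurfaceColor dry, SurfaceColor wet,
                          float height, float waterLevel)
{
    // Fully wet below the water level, dry a bit above it, smooth in between.
    float wetness = 1.0f - Smoothstep(waterLevel, waterLevel + 0.1f, height);

    SurfaceColor result;
    result.r = dry.r * (1.0f - wetness) + wet.r * wetness;
    result.g = dry.g * (1.0f - wetness) + wet.g * wetness;
    result.b = dry.b * (1.0f - wetness) + wet.b * wetness;
    return result;
}
```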

Another technique you may have heard of is Parallax Mapping. I won't show you shader code or anything, but let me just tell you what it does. As you can read in the third post, NormalMapping is a way to "fake" more detail and relief on your otherwise flat polygon surfaces. NormalMapping gives this illusion by altering the incoming light on a surface depending on the surface normals. However, if you look from a steep camera angle, you will notice the walls are really still flat. Parallax Mapping does some boob implants to fake things a bit further.

With parallax mapping, the surface still remains flat. But by using a heightMap, pixels can be shifted over the surface, depending on the viewing angle. This is cleverly done in such a way that it seems as if the surface has depth. Enhanced techniques even occlude pixels that are behind each other, or cast shadows internally. All thanks to heightMaps, which help your shader trace whether the eye or a lamp can see/shine on a pixel or not.
The lower part shows how the shifting works. Where you would normally see pixel X on a flat surface, now pixel Y gets shifted in front of it, because it's intersecting the view-ray.
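For the curious, here is roughly what the basic single-step offset boils down to, written as a small C++ sketch rather than real shader code; the scale and bias constants are just typical guesses.

```cpp
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Basic parallax mapping: offset the texture coordinate along the (tangent-space)
// view direction, proportional to the height stored in the heightMap.
// 'viewDirTS' must be normalized and expressed in tangent space (z = away from the surface).
Vec2 ParallaxOffset(Vec2 uv, Vec3 viewDirTS, float height /* 0..1 from the heightMap */)
{
    const float scale = 0.04f;   // how deep the effect looks (tweak per material)
    const float bias  = -0.02f;  // recenters the offset around the mid height

    float offset = height * scale + bias;

    // Shift towards the viewer: "higher" pixels slide over their neighbors.
    Vec2 shifted;
    shifted.x = uv.x + viewDirTS.x * offset;
    shifted.y = uv.y + viewDirTS.y * offset;
    return shifted;
}
```

The enhanced variants mentioned above (steep parallax, parallax occlusion mapping) march several samples along the view ray instead of doing this single shift.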

As for implementation, it's just another image, using only 1 color channel. I usually stuff my height values into the normalMap. Although for expensive parallax mapping, it might be better in some cases to use a smaller separate image. Got to confirm that with the GPU performance gurus though.



LightMaps
=====================================================
Yes. LightMaps again. I just like to write about them for some reason, and if you're not quite sure what these are, you should at least know about their existence. In post #2, I wrote how lighting works. Eh, basic lighting in a game I mean. To refresh your mind:
- For each light, find out which polygons, objects or pixels it *may* affect
- For each light, apply it to those polygons, objects, pixels, whatever
- Nowadays, shaders are mainly used to apply a light per pixel (see the sketch after this list):
.....- Check the distance and direction between lamp & pixel
.....- Optionally check the directions between lamp, pixel & camera (for specular lighting)
.....- Optionally check with the help of shadowMaps if the pixel can be lit ("seen") by the lamp
.....- Depending on these factors, compute the light result, multiplied with the light color/intensity and the pixel's albedo/diffuse and specular properties.
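Here is a rough C++ sketch of those per-pixel steps for a single point light (in reality this lives in a pixel shader; the attenuation formula and parameter names are just illustrative assumptions):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  Add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  Scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float Dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Normalize(Vec3 a)      { return Scale(a, 1.0f / std::sqrt(Dot(a, a))); }

// Light one pixel with one point light: distance/direction, diffuse + specular.
Vec3 ShadePixel(Vec3 pixelPos, Vec3 pixelNormal, Vec3 cameraPos,
                Vec3 lampPos, Vec3 lampColor, float lampRange,
                Vec3 albedo, float specularPower,
                float shadowFactor /* 0..1, e.g. from a shadowMap */)
{
    // 1. Distance & direction between lamp and pixel.
    Vec3  toLamp   = Sub(lampPos, pixelPos);
    float distance = std::sqrt(Dot(toLamp, toLamp));
    Vec3  L        = Scale(toLamp, 1.0f / distance);
    float atten    = std::max(0.0f, 1.0f - distance / lampRange);   // simple falloff

    // 2. Diffuse term: surfaces facing the lamp catch more light.
    float diffuse = std::max(0.0f, Dot(pixelNormal, L));

    // 3. Specular term: also depends on the camera direction (half vector trick).
    Vec3  V = Normalize(Sub(cameraPos, pixelPos));
    Vec3  H = Normalize(Add(L, V));
    float specular = std::pow(std::max(0.0f, Dot(pixelNormal, H)), specularPower);

    // 4. Combine: diffuse modulated by the albedo, specular added on top,
    //    everything multiplied with light color, falloff and shadow factor.
    float k = atten * shadowFactor;
    return { (albedo.x * diffuse + specular) * lampColor.x * k,
             (albedo.y * diffuse + specular) * lampColor.y * k,
             (albedo.z * diffuse + specular) * lampColor.z * k };
}
```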

That doesn't sound like too much work, but back in the old days, computers used leprechauns to calculate and carry bits from the processor to the monitor. If you were lucky, your computer had a "turbo!" button to increase the clock speed from 1 Hertz to 2 Hertz, whipping the little guys to run faster with those bits. Ok ok. The point is, computers weren't fast enough to perform the tasks above. Maybe for a few lights, but certainly not for a complex (indoor) scene, let alone realtime shadows. Oh, and shaders didn't exist yet either btw. Although shaders aren't 100% necessary to compute lighting, they sure made life easier, and the results 100 times better. Really.

Platform games like Super Mario World didn't use "real" lighting. At most some additive sprites, or tricks to adjust the color palette to smoothly change the screen colors from bright to dark or something. Doom & Duke didn't give a crap about "real" lights either. How did they make a flat building cast a shadow? Just by manually darkening polygons. When true 3D games like Quake or Halflife came, things got different though. id Software decided it was about time to make things a bit more realistic. Back then, gamers and reviewers were mostly impressed by the polygon techniques that allowed more complex environment architecture, and replaced the flat monster sprites and guns with real 3D models. But probably most of us didn't notice the complexity behind its lighting (me neither, I was still playing with barbies then). The lightMap was born, and it is still an essential technique for game engines now.

As said, computing light at a larger scale in 1996 was a no-go zone, let alone dealing with shadows or "GI", Global Illumination. In fact, GI is still a pain in the ass 16 years later, but I'll get back to that later. Instead of blowing up the computer with an impossible task, they simply pre-computed the light in an "offline" process. In other words, while making their maps, a special tool would calculate the light falling on all (static) geometry. And since this happened offline, it didn't matter whether this calculation took 1 second or 1 day. Sorry boss, the computer is processing a lightMap, can't do anything now. Anyway, the outcome of this process was an image (or maybe multiple) that contained the lighting radiance. Typically 1 pixel of that image represents a piece of surface (wall, floor, ceiling, ...) of your level geometry. The actual color was the sum of all lights affecting that piece of surface. Notice that "light" is sort of abstract here. You can receive light from a lamp or the sun, but also indirect light that bounced off another wall, or scatters from the sky/atmosphere. The huge advantage of this approach was -and still is!- that you could make the lighting as advanced as you would like. Time was no factor.
A "LightMap Generator" basically does the following things (for you):
1- Make 1 or more relative big empty images (not too big back then though!)
2- Figure out how to map(unwrap) the static geometry of a level (static means non-moving, thus monsters, healthkits and decoration stuff excluded) onto those image(s).
3- Each wall, floor, stair, or whatsoever would gets its own unique little place on that image. Depending on the image resolution, size of the surface, and available space, a wall would get 10x3 pixels for example.
4- For each surface "patch" (pixel), check which light emitting things it can see. Lamps, sky, sun, emissive surfaces, radioactive goop. Anything that shines or bounces light.
5- Sum all the incoming light and average it. Store color in the image.
6- Eventually repeat step 4 and 5 a few times. Apply the lightMap from the previous iteration to simulate indirect lighting.
7- Store the image on disc
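To make steps 4 and 5 less abstract, here is a heavily simplified C++ sketch of the core baking loop. The Scene/TraceRay interface and all names are hypothetical; a real generator also handles the unwrapping, filtering, sky light and bounces.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3  { float x, y, z; };
struct Lamp  { Vec3 position; Vec3 color; float range; };
struct Patch { Vec3 worldPos; Vec3 normal; };   // one lightMap texel mapped to a surface point

struct Scene;   // opaque world representation, only used for visibility tests here

// Hypothetical visibility test: is there an unobstructed line between the two points?
bool TraceRay(const Scene& scene, Vec3 from, Vec3 to);

// Steps 4 & 5 of the baking process: gather direct light per texel and store it.
void BakeLightMap(const Scene& scene,
                  const std::vector<Lamp>& lamps,
                  const std::vector<Patch>& patches,   // one entry per lightMap pixel
                  std::vector<Vec3>& lightMapPixels)   // output radiance per pixel
{
    lightMapPixels.resize(patches.size());

    for (size_t i = 0; i < patches.size(); ++i)
    {
        Vec3 sum = {0, 0, 0};
        for (const Lamp& lamp : lamps)
        {
            // Skip lamps this patch cannot "see" (step 4).
            if (!TraceRay(scene, patches[i].worldPos, lamp.position))
                continue;

            Vec3  toLamp = { lamp.position.x - patches[i].worldPos.x,
                             lamp.position.y - patches[i].worldPos.y,
                             lamp.position.z - patches[i].worldPos.z };
            float dist   = std::sqrt(toLamp.x*toLamp.x + toLamp.y*toLamp.y + toLamp.z*toLamp.z);
            float ndotl  = std::max(0.0f, (patches[i].normal.x*toLamp.x +
                                           patches[i].normal.y*toLamp.y +
                                           patches[i].normal.z*toLamp.z) / dist);
            float atten  = std::max(0.0f, 1.0f - dist / lamp.range);

            // Sum the incoming light (step 5).
            sum.x += lamp.color.x * ndotl * atten;
            sum.y += lamp.color.y * ndotl * atten;
            sum.z += lamp.color.z * ndotl * atten;
        }
        lightMapPixels[i] = sum;   // later: write to the image and store it on disk
    }
}
```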

The hard parts are unwrapping the scene geometry efficiently onto a 2D canvas, and of course the lighting itself (step 4). I won't go into it, and I should note that many 3D programs or game engines have LightMap generators (or "light baking" tools), so you don't have to care about the internals. The point is that you understand the benefits of using a pre-generated image. Using a LightMap is as cheap as a crack-ho, while achieving realistic results (if the generator did a good job). In your game, all you have to do is (see the snippet after this list):
* Store secondary texture coordinates in your level geometry. Those coordinates were generated by the lightMap tool in step 2/3.
* Render your geometry as usual, using the primary texture coordinates to map a diffuseTexture on your walls, floors, ceilings, ...
* Read a pixel from the LightMap using your secondary texture coordinates. Multiply this color with your diffuseMap color. Done.
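In shader terms, that last step is literally one multiply. A minimal sketch (plain C++ standing in for pixel-shader code, with hypothetical texture-sampling helpers):

```cpp
struct Vec2 { float u, v; };
struct Vec3 { float r, g, b; };

struct Texture;   // opaque image handle

// Hypothetical helper standing in for a texture read in a pixel shader.
Vec3 SampleTexture(const Texture& tex, Vec2 uv);

// Per-pixel work when rendering lightMapped geometry: two texture reads, one multiply.
Vec3 ShadeLightMappedPixel(const Texture& diffuseMap, const Texture& lightMap,
                           Vec2 primaryUV,    // regular texture coordinates
                           Vec2 secondaryUV)  // generated by the lightMap baker
{
    Vec3 albedo   = SampleTexture(diffuseMap, primaryUV);
    Vec3 radiance = SampleTexture(lightMap,   secondaryUV);

    // All the expensive lighting was done offline; at runtime it's just this:
    return { albedo.r * radiance.r,
             albedo.g * radiance.g,
             albedo.b * radiance.b };
}
```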

It isn't all flowers and sunshine though. If you read about the capabilities of a modern game engine, they are probably ashamed to admit they still use LightMaps for some parts. Why is that? Well, the first and major problem is that you can't just rebuild your lightMap in case something changed in the scene. A wall collapsed, a door opened, the sun went down, you shot a lamp. Just some daily cases that would alter the light in your scene. If each change triggered a 2-minute lightMap rebuild, your game would obviously become unplayable. And there is more to complain about. I wrote the words "static geometry" several times. Ok, but how about the dynamic objects? Monsters, barrels, boxes, guns, vehicles? Simple, you can't use a lightMap on them. In games, these objects would fall back on a simplified "overall" light color used inside a room or certain area. Yet another reason to dislike lightMaps is that you can't naturally use normalMapping with them. Why? Because normalMapping needs to know from which direction the light came. We store an averaged incoming light flux in our lightMaps, but we don't know where it came from. Yet Valve (Halflife2) found a trick to sort of achieve normalMapping anyway: they made 3 lightMaps instead of 1. Each map contained light coming from a global direction, so the surface pixels could vary between those 3 maps based on their (per pixel) normal. That technique was called "Radiosity NormalMapping" btw.
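As a sketch of the idea only (this is not Valve's exact math; the basis directions and weighting below are the commonly cited ones and may differ from what your baker uses), blending those 3 lightMaps could look like this:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// The gist of "Radiosity NormalMapping": three lightMaps, each holding light arriving
// roughly from one fixed tangent-space direction, blended per pixel using the normalMap.
Vec3 BlendRadiosityLightMaps(const Vec3 lightMap[3],  // the 3 baked colors at this texel
                             Vec3 normalTS)           // per-pixel normal from the normalMap
{
    // Assumed tangent-space basis directions (your tool chain may use different ones).
    const Vec3 basis[3] = {
        { -0.40825f,  0.70711f, 0.57735f },
        { -0.40825f, -0.70711f, 0.57735f },
        {  0.81650f,  0.00000f, 0.57735f },
    };

    Vec3 result = {0, 0, 0};
    for (int i = 0; i < 3; ++i)
    {
        float w = std::max(0.0f, basis[i].x * normalTS.x +
                                 basis[i].y * normalTS.y +
                                 basis[i].z * normalTS.z);
        result.x += lightMap[i].x * w;
        result.y += lightMap[i].y * w;
        result.z += lightMap[i].z * w;
    }
    return result;   // pixels now react to the normalMap, even though the light was baked
}
```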

Wait, I have one more reason. The sharpness. As said, the generator needs to unwrap the entire scene onto a 2D image canvas. Even with a huge texture, your surfaces still won't get that many pixels. That explains blocky-edged shadows, or light leaks. Even in more modern engines like UDK3.
No sir, I don't like it.

Well, 16 years ago we could choose between either nothing or a lightMap with its shortcomings. People had never heard of normalMapping, and the blocky shadows weren't spotted between all the other blocky low-poly pixelated shit. In other words: good enough. But in the 21st century, LightMaps were pushed away. Faster hardware and shaders made it possible to compute (direct) lighting in realtime. So hybrids appeared. Partially lightMapped, partially realtime lights. Farcry is a good example of that. Doom3 even discarded lightMaps entirely...

...and got punished remorselessly for it. Yes, the competition was technically less advanced, but it still looked more photo-realistic. Thanks to their pre-baked cheap-ass lightMaps. Despite a lot of great things, Doom3 (and Quake4) didn't exactly look realistic due to their pitch-black areas. Everything not directly lit by a lamp simply appeared black. Why did id do that?! Well, let's bring some nuance. Pitch-black areas hardly appear here on Earth, because light always gets reflected by surfaces; that is "Global Illumination". But the id guys simply couldn't compute GI back then. Not in realtime. Hell, we still can't properly do that, though promising techniques are closing in. So, they just forgot about the non-lit areas. In the sci-fi Mars setting, it was sort of acceptable. But their way of rendering things certainly wasn't useful for more realistic "earth" or outdoor scenes. It explains the lineup using idTech4 (Doom3 engine): Doom3, Quake4, Prey. All sci-fi space games.
Due to its complexity, most games store GI in lightMaps. But Doom3 didn't use GI at all…

What they could have done, and should have done, is use a good old LightMap (which they practically invented themselves with Quake!) for the indirect and/or static lighting. Of course, that also has shortcomings, but hybrid engines like UDK (and I believe Crysis1 as well) combine the best of both, and that is still the way to go as we speak. Eventually, we will kill and bury LightMaps once and for all, as soon as we get a realtime GI technique that is fast, accurate, scalable and beautiful at the same time. Implementing realtime GI is sort of nerd sado-masochism. Telling people you've got realtime GI is cool, but the good old LightMap still looks better and is a billion times faster for sure. Although... Crassin's Voxel Cone Tracing comes close (but not without issues)...



Ambient Occlusion (AO) Maps
=====================================================
Now that you know everything about lightMaps, let me give you one more modern technique that spied on this old lightMap grandmother: "Ambient Occlusion", or "AO" maps. Just another texture that can be mapped on your world geometry like a lightMap, OR on a model like a gun or monster, in the same way you would map any other texture on it. AO tells how much your pixel is occluded from global, overall ambient light. It doesn't really check if & where there are light sources. It just checks how many surrounding surfaces are blocking *potentially* incoming light from any direction. It's pretty simple: put a pile of blocks on your table, and you will notice the inner blocks get darker once you place more blocks around them. Because the outer blocks occlude light coming from a lamp, floor, ceiling, sky, flames, or whatever source.
Since GI or ambient lighting is already a giant fake in most game engines, the unlit parts of a scene (thus not directly affected by any light) often look flat. As if someone peed a single color all over your surfaces. NormalMapping doesn't work here either, since we don't have information about incoming light fluxes (unless you do something like Valve's Radiosity NormalMapping). Ambient Occlusion eases the pain a bit by darkening corners and gaps further, giving a less "flat" look, making things a bit more realistic.

Yes, AO is one of those half-fake lighting tricks. What you see in reality is not based on "occlusion factors", but on how light photons manage to reach a particular surface, directly or by bouncing off other surfaces. AO skips the complex math and only approximates the incoming light for a given piece of surface. Doing it correctly requires letting the surface send out a bunch of rays, and checking if & where they collide. The more rays collide, and the shorter their travel distances, the more occlusion. This is stored as a single, simple value. Typically AO maps are grayscale, whitish images, with darker parts in the gaps, corners and folds of your model. Since a single color channel is enough, you can store the Ambient Occlusion factor inside another image, such as the DiffuseMap.
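A rough C++ sketch of such a ray-based AO bake for one surface point (the Scene/ray interface is hypothetical, and the hemisphere sampling is kept deliberately naive):

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };
struct Scene;   // opaque geometry, only used for ray queries

// Hypothetical ray query: distance to the nearest hit, or a negative value if nothing was hit.
float TraceRayDistance(const Scene& scene, Vec3 origin, Vec3 direction, float maxDistance);

// Compute the ambient occlusion factor (0 = fully occluded, 1 = fully open)
// for one surface point, by firing random rays over the hemisphere around its normal.
float ComputeAmbientOcclusion(const Scene& scene, Vec3 position, Vec3 normal,
                              int numRays, float maxDistance)
{
    float occlusion = 0.0f;

    for (int i = 0; i < numRays; ++i)
    {
        // Pick a random direction and flip it into the hemisphere above the surface.
        Vec3 dir = { (float)rand() / RAND_MAX * 2.0f - 1.0f,
                     (float)rand() / RAND_MAX * 2.0f - 1.0f,
                     (float)rand() / RAND_MAX * 2.0f - 1.0f };
        float len = std::sqrt(dir.x*dir.x + dir.y*dir.y + dir.z*dir.z);
        if (len < 1e-4f) { --i; continue; }                // reject degenerate samples
        dir = { dir.x / len, dir.y / len, dir.z / len };
        if (dir.x*normal.x + dir.y*normal.y + dir.z*normal.z < 0.0f)
            dir = { -dir.x, -dir.y, -dir.z };

        float hit = TraceRayDistance(scene, position, dir, maxDistance);
        if (hit >= 0.0f)
            occlusion += 1.0f - hit / maxDistance;         // nearer blockers occlude more
    }

    return 1.0f - occlusion / numRays;                     // average, then invert
}
```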


AO construction
There are 3 ways of computing AO. We can do it realtime with SSAO (Screen Space Ambient Occlusion). The idea is the same, although SSAO is even faker, as it relies on crude approximations by randomly sampling close neighbor pixels that may be occluding you. Testing correctly how much each pixel on your screen is occluded -also by objects further away- would be too expensive. SSAO has the advantage that it updates dynamically on moving or animated objects; you don't have to pre-process anything. The disadvantage is that it tends to create artifacts such as white or black "halos" around objects, and it only works on short distances. Bad implementations of SSAO look more like an Edge Detector effect rather than "AO".
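Purely to show the idea (this is nowhere near a production SSAO implementation; real ones work with view-space positions, normals and clever sampling patterns), a crude screen-space occlusion estimate could look like this:

```cpp
#include <cstdlib>
#include <vector>

// Extremely simplified SSAO-style estimate for one pixel: sample a few random nearby
// pixels from the depth buffer and count the ones noticeably closer to the camera.
float CrudeScreenSpaceAO(const std::vector<float>& depth,   // linear depth per pixel
                         int width, int height, int x, int y,
                         int numSamples, int radiusPixels, float depthThreshold)
{
    float center   = depth[y * width + x];
    int   occluded = 0;

    for (int i = 0; i < numSamples; ++i)
    {
        int sx = x + (rand() % (2 * radiusPixels + 1)) - radiusPixels;
        int sy = y + (rand() % (2 * radiusPixels + 1)) - radiusPixels;
        if (sx < 0 || sy < 0 || sx >= width || sy >= height)
            continue;

        // A neighbor clearly in front of us may be blocking ambient light.
        if (center - depth[sy * width + sx] > depthThreshold)
            ++occluded;
    }

    return 1.0f - (float)occluded / numSamples;   // 1 = open, 0 = buried
}
```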

Another way is to compute it mathe-magically. This works if you only have 1 or 2 dominant light sources (e.g. the sun). You can "guess" how much light a pixel catches by looking at its surface normal (facing downwards = catch light from the floor, upwards = catch light from the sky/sun) and some environment information. Fake, but dynamic & cheap.
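A tiny sketch of that guess-work, assuming a simple sky/ground ambient setup (names are illustrative):

```cpp
struct Vec3 { float x, y, z; };

// "Mathe-magical" ambient: guess how much ambient light a pixel catches from its normal alone.
// Upward-facing pixels get more sky color, downward-facing pixels more ground color.
Vec3 HemisphereAmbient(Vec3 normal, Vec3 skyColor, Vec3 groundColor)
{
    float up = normal.y * 0.5f + 0.5f;   // 1 = facing the sky, 0 = facing the floor
    return { groundColor.x + (skyColor.x - groundColor.x) * up,
             groundColor.y + (skyColor.y - groundColor.y) * up,
             groundColor.z + (skyColor.z - groundColor.z) * up };
}
```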

The third method, as described above, is baking AO into a texture, like you would bake light into a lightMap. Again, programs such as Maya or Max have tools to compute these images for you. Once made, you can apply AO on your world or objects at an extremely low cost. The downside, as with lightMaps, is that the AO doesn't adjust itself if the environment changes. Yet AO maps are a bit more flexible than lightMaps (so don't confuse them!). Occlusion factors are light independent. Directions or colors don't matter. So you can also bake a map for a model like a gun, and transport it through many rooms with different lighting setups. The AO in this gun image only contains its internal occlusion, like gaps, screws or concave parts being a bit darker. Also for static worlds, you can change the overall ambient color and multiply it with a pre-generated AO map. Not sure, but I can imagine a world like the Grand Theft Auto cities contains pre-computed AO-like maps (or maybe stored per vertex) that tell how much a piece of pavement gets occluded by surrounding buildings. Then multiply this occlusion factor with overall ambient color(s) that depend on the time of day & weather.
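Applying such a baked AO map at runtime is then dirt cheap; a minimal sketch (illustrative names, assuming the AO factor was already read from the map):

```cpp
struct Vec3 { float r, g, b; };

// Applying a baked AO map at runtime: multiply the (possibly time-of-day dependent)
// ambient color with the per-pixel occlusion factor and the surface albedo.
Vec3 AmbientTerm(Vec3 ambientColor, float aoFactor /* from the baked AO map */, Vec3 albedo)
{
    return { ambientColor.r * aoFactor * albedo.r,
             ambientColor.g * aoFactor * albedo.g,
             ambientColor.b * aoFactor * albedo.b };
}
```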



Oops, the post got a bit long again. Well, I didn't want to split it up and have yet another post. I want to write about Compute Shaders, so I had to finish this one. I hope it was readable and that you learned something!
