Sunday, October 23, 2016

The Boy Scouts



Back in the day, I wasn’t a very good pathfinder at the “Jagers” (our local cheap variant of the Scouts). Map reading? Fooling around (and, at a later age, drinking beer or rolling cigarettes), you mean. Usually we took the third turn left instead of right, got lost, and turned a four-hour walk into a six-hour one. But we had a lot of fun, sneaking through ditches and backyards, getting chased by cows or talking about boobs in the woods. And always bringing back a surprise for mom… mud-caked clothes.

I hope our kids will join the Scouts or the like one day, instead of wasting all their time on YouTube. But it seems the oldest girl is more into girl things, like flying on unicorns. We’ll see. Until then, my pathfinding skills are used on Tower22 A.I.

Pretty much every game with foes plus a world more complex than a simple side-scroller requires pathfinding. The word is self-explanatory, but I’d still like to point out there is a strict difference between deciding where to go and figuring out how to get there. Pathfinding techniques usually focus on the latter, although the decision part is maybe at least as important in making your foes smart, tactical or humanlike. Anyhow, pathfinding usually consists of path data and a (fast) algorithm that calculates the shortest or “best-quality” path between point A and B. Like configuring your car navigator.
Now the real question is: man or woman?



I won’t dive into algorithms such as A* (yes, A-star), a very common pathfinding algorithm. Let’s just chat about what I did last summer… last week, I mean: making a “Navigation Mesh” for the playable demo world.

Any pathfinding algorithm will need data to work with, like the Boy Scouts had their map (hm, map?). In a 3D game we have world geometry, but it’s too complex to use directly. World geometry is basically a bunch of polygons, which could be anything really. Road, pavement, but also walls, roofs, pipes, windows, wires, chimneys, lava pits, horse manure, and so on. Polygons that are too small, turned upside down, or made of nuclear acid aren’t suitable for walking. They just litter our algorithm with extra load. Neither are decorative polygons somewhere in a far background. What you would need is an extra “yesYouCanWalkOnIt!” property or something… letting the artist mark “floor” polygons you can walk on, in the editor.
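For illustration, a minimal sketch of what such a property could look like. NavTriangle and Vec3 are made-up names here, not the actual T22 editor types:

    struct Vec3 { float x, y, z; };

    // Hypothetical editor-side triangle: the artist toggles 'walkable'
    // in the map editor; the exporter keeps only flagged polygons.
    struct NavTriangle {
        Vec3 v0, v1, v2;
        bool walkable = false;   // the "yesYouCanWalkOnIt!" flag
    };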

That still stinks though. Games tend to tessellate their geometry into smaller chunks. A soccer field might be made of 100 x 50 quads, while a single rectangle might be enough for pathfinding purposes. Pathfinding algorithms are pretty heavy shit, in calculation terms. Fewer connections / cells / polygons / grid points = faster results. So you really want to filter out noise. When the Boy Scouts are reading their map (IF they are reading their map…), they don’t want to know the location of each manhole, lamppost, bench, trash bin, dog turd, or liquor store… I think.

What you want is a simplified version of the world: the (walkable) floor, shaped out of as few polygons as possible. Engines like Unreal offer tools to create those meshes for you… How does that work?
The T22 demo floor has about 21,000 triangles; the simplified mesh about 1,500. Notice the NavMesh doesn't have to follow the exact stairs. And also notice not every floor polygon is necessarily walkable: ceiling too low, obstacles placed, or the NPC just not having any business there.

NavigationMesh generator: Auto versus Handmade
First of all, they define a bounding box in which to create that navigation mesh. Anything outside that box is skipped right away. Perfect to get rid of backgrounds or stuff that might be walkable but isn’t always useful (like roofs on houses). Next, if we forget about Spiderman for a second, you could lend a hand by filtering out any polygon that is NOT facing upwards (thus potential ceilings / walls). Then maybe have a look at its material. If it says “boiling tar”, you probably want to skip it as well.
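Those three filters (bounding box, upward normal, material) are easy to sketch. Assuming Y is up, and with all names hypothetical:

    #include <cmath>
    #include <string>

    struct Vec3 { float x, y, z; };

    struct Triangle {
        Vec3 v0, v1, v2;
        std::string material;   // e.g. "concrete", "boilingTar"
    };

    static Vec3 sub(const Vec3& a, const Vec3& b) {
        return { a.x - b.x, a.y - b.y, a.z - b.z };
    }

    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    static bool insideBox(const Vec3& p, const Vec3& mn, const Vec3& mx) {
        return p.x >= mn.x && p.x <= mx.x &&
               p.y >= mn.y && p.y <= mx.y &&
               p.z >= mn.z && p.z <= mx.z;
    }

    // Keep a triangle only if it sits inside the artist-defined box,
    // faces (roughly) upwards, and doesn't carry a blacklisted material.
    bool isFloorCandidate(const Triangle& t, const Vec3& boxMin, const Vec3& boxMax) {
        // 1. Bounding box: skip backgrounds, rooftops, et cetera.
        Vec3 c = { (t.v0.x + t.v1.x + t.v2.x) / 3.0f,
                   (t.v0.y + t.v1.y + t.v2.y) / 3.0f,
                   (t.v0.z + t.v1.z + t.v2.z) / 3.0f };
        if (!insideBox(c, boxMin, boxMax)) return false;

        // 2. Facing upwards? normal.y / length below ~0.7 means steeper
        //    than roughly 45 degrees: a wall or ceiling, not a floor.
        Vec3 n = cross(sub(t.v1, t.v0), sub(t.v2, t.v0));
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len < 1e-6f || n.y / len < 0.7f) return false;

        // 3. Material veto: nobody walks on boiling tar.
        return t.material != "boilingTar";
    }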

That still doesn’t mean those left-over polygons are actually walkable though. If your monster weighs 400 kg, chances are he won’t fit through a normal door portal, unless he’s made out of jelly. Basically the floor polygons need to shrink a bit, away from the walls, and the height needs testing too. If your guy is 2 meters tall and can’t duck, any polygon with a low ceiling will definitely suck.

I’m not sure if those auto-generate algorithms even use the existing (floor) geometry like that though. I’m more thinking in terms of floodfilling: make a (3D) grid inside an artist-defined bounding box, then mark a piece of floor (“inject”) as walkable, and check all its neighbour cells (a rough code sketch follows the list):
·         Put a cylinder there. Dimensions are based on NPC radius / height parameters.
·         Check which surrounding polygons it intersects:
o   Does it intersect “floor” polygons (upwards facing)? Good.
o   Does it intersect non-“floor” polygons (ceiling, wall, obstacles)? Not good.
·         If the cell passed, repeat the check for its neighbours.
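I haven’t actually built this, but the flood fill itself would look roughly like the sketch below. The whole cylinder-versus-polygon test is hidden behind a cellIsWalkable callback (a made-up name; in practice you’d delegate that test to your collision code):

    #include <functional>
    #include <queue>
    #include <vector>

    struct Cell { int x, y, z; };

    // Breadth-first flood fill over a voxel grid. 'cellIsWalkable' is the
    // cylinder test from the list above: place an NPC-sized cylinder at
    // the cell, accept if it touches floor polygons only.
    std::vector<Cell> floodFillWalkable(
            Cell seed, int nx, int ny, int nz,
            const std::function<bool(const Cell&)>& cellIsWalkable)
    {
        auto index = [&](const Cell& c) { return (c.z * ny + c.y) * nx + c.x; };
        std::vector<bool> visited(static_cast<size_t>(nx) * ny * nz, false);
        std::vector<Cell> result;
        std::queue<Cell> open;

        open.push(seed);
        visited[index(seed)] = true;

        // 6-neighbourhood; the small up/down steps let the fill climb stairs.
        const int dx[6] = { 1, -1, 0,  0, 0,  0 };
        const int dy[6] = { 0,  0, 1, -1, 0,  0 };
        const int dz[6] = { 0,  0, 0,  0, 1, -1 };

        while (!open.empty()) {
            Cell c = open.front(); open.pop();
            if (!cellIsWalkable(c)) continue;   // cylinder hit a wall/ceiling
            result.push_back(c);

            // Only cells that passed spread the fill to their neighbours.
            for (int i = 0; i < 6; ++i) {
                Cell n = { c.x + dx[i], c.y + dy[i], c.z + dz[i] };
                if (n.x < 0 || n.y < 0 || n.z < 0 ||
                    n.x >= nx || n.y >= ny || n.z >= nz) continue;
                if (visited[index(n)]) continue;
                visited[index(n)] = true;
                open.push(n);
            }
        }
        return result;
    }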

That’s the easy part. Trickier is generating an actual, optimized mesh out of those cells, which basically represent a voxelized version of your map floor. And no, I haven’t figured out how to do that (yet).


Not being blessed with grandiose math skills, and seeing some other possible issues, I decided to go for a good old hand-made NavigationMesh instead. Issues? Well, what if we do have Spiderman? Tower22 dudes don’t necessarily stick to the floor. Hell, some may even cheat and fly through walls, or “warp” to certain spots. The second issue is that it’s a bit hard to define a bounding box and generate a mesh within that area. Tower22 has a roaming world; the NavigationMesh is much bigger than the actually loaded parts of the world. Unless you generate multiple smaller meshes and weld them together. But again, that introduces some tricky bits.

And there is something else. In previous NavigationMeshes (made manually for the older engine), I had some more information stored in the mesh, like the type of floor. Pedestrians can walk the highway for sure, but they should avoid it as much as possible. To put it more abstractly, polygons need some sort of QualityRating, relative to their target audience. Pigs prefer mud, cars asphalt, and Kanye West (Gay Fish) water.
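In code, that rating could be as simple as a per-agent cost multiplier on the path distance. All names and numbers below are invented for illustration:

    // Hypothetical per-agent surface preferences. The pathfinder multiplies
    // distance by a cost factor, so a pedestrian CAN cross the highway, but
    // only when there's no better alternative.
    enum Surface { SIDEWALK = 0, HIGHWAY, MUD, WATER, SURFACE_COUNT };

    struct AgentProfile {
        float surfaceCost[SURFACE_COUNT];   // 1 = ideal, higher = avoid
    };

    float edgeCost(float distance, Surface s, const AgentProfile& agent) {
        return distance * agent.surfaceCost[s];
    }

    // Example profiles: each agent type rates the same polygons differently.
    const AgentProfile pedestrian = { { 1.0f, 20.0f,  5.0f, 50.0f } };
    const AgentProfile pig        = { { 2.0f, 10.0f,  1.0f, 30.0f } };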



Battlefield
I can still hear you thinking: an auto-generated mesh could do that as well, right? Right, but here is yet another excuse: storing BattleData in the NavMesh. Now the irony is that Tower22 dudes don’t fire machine guns, toss grenades over walls, or run into cover. So I could do myself a favour and NOT bother with all of this. But it’s just too cool not to implement, and it actually gives some interesting A.I. capabilities to even dumb guys like my T22 scum.

As pointed out at the top, there is pathfinding and there is decision making. You can give the Boy Scouts a map, but without a clear goal, they still don’t know what to do. Get back home, go to the liquor store, find a bench and smoke weed, whatever. Practical, humanlike tasks. Sims with bladder problems will look for toilets. Soldiers under fire will look for cover. Or “assault spots” with sight on the target, preferably still as covered as possible. Tower22 monsters will… I don’t know what monsters do. Play billiards, read the newspaper, find Boy Scouts and eat them, I don’t know.

Anyhow, again, just a map doesn’t help, other than that you could have objects with a special tag, like “isFood, isMedkit, isToilet, isBed, …”. But to make your NPCs feel natural, they need some more advanced tasks, requiring knowledge of their surroundings. One common way is to provide (artist-placed) Info-Nodes. In Tower22, I can place nodes and tell what we can do there. Restroom, kitchen, workplace, et cetera. Then NPCs can search for such nodes based on their needs, a certain range, randomness, or a pre-programmed schedule.
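A simple need-based lookup could go like this; InfoNode and findNode are hypothetical names, not the actual T22 code:

    #include <cmath>
    #include <string>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Artist-placed Info-Node: a position plus what an NPC can do there.
    struct InfoNode {
        Vec3 position;
        std::string activity;   // "restroom", "kitchen", "workplace", ...
    };

    // Pick the nearest node matching the NPC's current need, within range.
    const InfoNode* findNode(const std::vector<InfoNode>& nodes,
                             const Vec3& npcPos, const std::string& need,
                             float maxRange)
    {
        const InfoNode* best = nullptr;
        float bestDist = maxRange;
        for (const InfoNode& n : nodes) {
            if (n.activity != need) continue;
            float dx = n.position.x - npcPos.x;
            float dy = n.position.y - npcPos.y;
            float dz = n.position.z - npcPos.z;
            float d = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (d < bestDist) { bestDist = d; best = &n; }
        }
        return best;   // nullptr if nothing suitable nearby
    }

In a fuller version you’d filter on groups, required skills or type as well, as the caption below describes.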
Rather than just randomly ghouling around, I made a bunch of "interesting locations" for my monstrous friends. Depending on the context, they can eventually search more specifically, filtering out nodes based on groups, required skills or type.



Nice & simple, but also predictable. If you pre-define cover/assault/sniper nodes, you will know where the bots will hide or go after a few rounds. Unless you place many, many nodes.

Now here is something the makers of Killzone did in their world, creating a somewhat more dynamic solution. A lot of nodes are placed, and each node measures global “sight” distances in radial segments. So you can quickly tell node X has proper sight to the north, and is covered from the east. By measuring at both gun and ballsac height, you can also tell whether your lower body is covered. So if we know the enemies are somewhere north, and you’re a fanatic, you’ll search for nodes with sight on the north, but coverage from other (potential threat) directions as much as possible.
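A sketch of what such a node could store and how you’d score it; the names and the 0.25 penalty weight are my own assumptions, not Killzone’s actual code:

    #include <cmath>

    const int kSlices = 8;   // one baked sight distance per radial slice

    // At bake time you'd raycast outwards at gun height per slice (and
    // optionally a second ring at hip height for lower-body cover).
    struct CoverNode {
        float sightDist[kSlices];
    };

    // Map a horizontal direction (x, z) to a slice index.
    int sliceFor(float dirX, float dirZ) {
        float angle = std::atan2(dirZ, dirX);                     // -pi..pi
        float t = (angle + 3.14159265f) / (2.0f * 3.14159265f);   // 0..1
        return static_cast<int>(t * kSlices) % kSlices;
    }

    // A good assault spot sees far towards the threat, but not far anywhere
    // else (a short sight distance means cover is nearby).
    float assaultScore(const CoverNode& n, int threatSlice) {
        float score = n.sightDist[threatSlice];   // want sight on the enemy
        for (int i = 0; i < kSlices; ++i)
            if (i != threatSlice)
                score -= 0.25f * n.sightDist[i];  // penalize exposure elsewhere
        return score;
    }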

Based on the node data, the yellow bad guy's distance is larger than what we can see from here, meaning we can't hit him, nor can he hit us. Be aware the data is somewhat global/inaccurate though (just 8 distances stored here). More slices provide better data, but also increase memory and pathfinding complexity.



So I figured, why not automatically generate such a node at each centre and/or corner of the NavigationMesh? That should offer a rich set of BattleData. And not just that, it also provides tactical pathfinding information (and now it gets interesting for T22!). If you are busy saving Private Ryan, you should know to get off the beaches ASAP, as well as to stay out of lines of fire as much as possible. While running an (A*) pathfinding algorithm, you can check your visibility at each node, and strike a balance between the shortest and the most covered route. If you plan to flank your enemy, you’ll need a path that is invisible to your primary target, and finally offers sight on its flank, at a certain preferred distance. If you’re a ninja, you may prefer the shady route. If you store “lightFlux” in your nodes, and also avoid open spaces as much as possible, you’ll be picking a path that is not necessarily the shortest, but gives you stealth advantages.
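Such "tactical" pathfinding can be as simple as swapping the plain distance cost in A* for a weighted one. A hedged sketch; visibleToEnemy would come from the radial sight data above, lightFlux from baked lighting, and all names and weights are invented here:

    struct TacticalWeights {
        float exposurePenalty;   // "virtual detour" metres per exposed step
        float lightPenalty;      // extra cost scaled by lightFlux at the node
    };

    struct NavNode {
        float lightFlux;         // baked light level, 0 = pitch black
        bool  visibleToEnemy;    // derived from the radial sight data
    };

    // Instead of pure distance, each step pays extra for being visible
    // to the enemy or standing in the light.
    float tacticalCost(float stepDistance, const NavNode& to,
                       const TacticalWeights& w)
    {
        float cost = stepDistance;
        if (to.visibleToEnemy) cost += w.exposurePenalty;
        cost += w.lightPenalty * to.lightFlux;   // ninjas prefer the shade
        return cost;
    }

    // A flanking ninja would heavily punish exposure and light:
    // TacticalWeights ninja = { 50.0f, 10.0f };

The resulting path is no longer the shortest, but it is the "cheapest" according to whatever this NPC type fears most.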
 
So… what’s the problem with auto-generated meshes then? The lack of tessellation. Yeah, first I said NavMeshes should have as few polygons as possible, which is still true. But at “tactical” spots, where light/cover/sight varies, you want additional polygons so you can quickly hop between a wall (cover) and a window (fire). T-junctions, windows, doors, spots with lamps: all interesting places that may need a few more polygons to get more accurate BattleData about your surroundings. Or if your NPC likes to hug walls, he can follow polygons at the corridor sides.
We placed two "Cover" polygons just around the corner. An (auto-)optimized NavMesh wouldn't have those.


So, summing that up, I decided to just manually make the NavMesh. Which sucks, but doesn’t take many hours either. It’s doable, unless you have “Just Cause”-sized levels. To help myself, I made an exporter in the Map Editor that filters out (potential) floor polygons only, giving me a background to work on. The NavMesh itself is just a 3D model, and I’ll be using the polygon names to tell what they are: Floor, Stairs, LeapOverWall, Water, GhostCheatingThroughWall, and so on. Then, when importing the mesh, additional BattleData can be generated. Its nodes would be placed on NavMesh polygon centres and corners, giving a pretty detailed data grid, but only dense at interesting points (see the sketch below). The Boy Scouts would be proud of me.
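The import step could look roughly like this; the types and the name-to-type lookup are made up for illustration, only the polygon names themselves come from the text above:

    #include <map>
    #include <string>
    #include <vector>

    struct Vec3 { float x, y, z; };

    enum class PolyType { Floor, Stairs, LeapOverWall, Water,
                          GhostCheatingThroughWall };

    // Translate the artist-given polygon name into a nav type.
    PolyType typeFromName(const std::string& name) {
        static const std::map<std::string, PolyType> lookup = {
            { "Floor",                    PolyType::Floor },
            { "Stairs",                   PolyType::Stairs },
            { "LeapOverWall",             PolyType::LeapOverWall },
            { "Water",                    PolyType::Water },
            { "GhostCheatingThroughWall", PolyType::GhostCheatingThroughWall },
        };
        auto it = lookup.find(name);
        return it == lookup.end() ? PolyType::Floor : it->second;
    }

    struct NavPoly {
        std::vector<Vec3> corners;
        PolyType type;
    };

    // One BattleData node per corner, plus one on the polygon centre, so
    // the node grid is only dense where the mesh itself is dense.
    std::vector<Vec3> nodePositions(const NavPoly& p) {
        std::vector<Vec3> out = p.corners;
        Vec3 c = { 0, 0, 0 };
        for (const Vec3& v : p.corners) { c.x += v.x; c.y += v.y; c.z += v.z; }
        float n = static_cast<float>(p.corners.size());
        out.push_back({ c.x / n, c.y / n, c.z / n });
        return out;
    }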
