Saturday, February 9, 2013

Planetary Annihilation Engine Architecture Update - Terrain Geometry

In this update I'm going to reveal some of the more technical details of the PA terrain system.  If you aren't technically inclined this may not be your cup of tea.

It's not often one gets to sit down and come up with a completely new system to solve a set of problems.  A lot of games use engines that are already built, or are sequels built on the same technology, so it's not that often in your career that you get to do a clean-sheet design.  In our case we already had some engine tech, but most of the major systems in PA are new.  Our engine is designed more as a set of libraries than as a framework, so it's possible to use it in different ways.  By my count this is about the sixth time in my 20-year career that I've embarked on a tech project this major.  Of course I have way more experience this time around, which makes it even more exciting.

The general process I like to follow in cases like this is simple.  First off we need to define our requirements.

What does this terrain system need to do?

  • Procedural Generation
  • Dynamic during gameplay which implies fast performance
  • Ability to match terrain in concept video by supporting hard edges
  • Originality to create a unique experience

Research


It's always been my goal for the terrain to be procedural.  There are just so many advantages, and it doesn't preclude creating custom maps anyway as long as you give people a doorway into doing that.  Games like Minecraft have, I think, proven that you can create interesting procedural worlds.  Because we want to be able to smash these guys up at runtime we also need the system to be relatively fast.  In addition, an original-looking system that can support some of the interesting conceptual stuff we have developed is important.  If you look at the concept video you can see some interesting hard edges, and it sure doesn't look like a traditional height-map-based terrain engine.



After you've figured out your requirements you need to start considering different approaches and their pros and cons.  This is the time for original thinking and trying to look at the problem from different perspectives.  During this phase I read a ton of papers on all kinds of new techniques trying to get a handle on what the state of the art is in terrain.  

Since the goal here is pretty much to create sphere-like planets, there is a kind of obvious first approach that a lot of engines take for planet geometry.  Basically they map a traditional terrain height map onto a sphere, often using an exploded cube to make the mapping simpler.  So you basically have 6 maps, one on each face of a cube.  You then inflate the cube into a sphere by normalizing all of the vertices and setting them at length r.  Then you simply change r at each vertex by adding the height map value at that vertex's face coordinate from your cube mapping.  By changing the height maps at runtime you can get dynamic terrain.
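The inflate-the-cube step is easy to sketch.  Here's a minimal illustration (not PA's actual code) of projecting a cube-surface vertex onto the sphere and displacing it by a height sample:

```python
import math

def inflate_cube_vertex(v, radius, height):
    """Project a vertex on the cube's surface onto a sphere of the given
    radius, then push it out radially by the height-map sample for that
    vertex.  In a real engine 'height' would come from one of the 6 face
    height maps via the cube-face (u, v) coordinate."""
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    r = radius + height
    return (v[0] / length * r, v[1] / length * r, v[2] / length * r)

# The cube corner (1, 1, 1) ends up on the sphere at distance radius + height.
p = inflate_cube_vertex((1.0, 1.0, 1.0), radius=100.0, height=2.5)
```

Animating the height values fed into a mapping like this is what gives the dynamic terrain.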

Unfortunately heightmap terrain has a lot of downsides.  For one, it's difficult to get more geometry detail in the places you want it for things like hard edges.  Overhangs aren't possible either.  I also find that most heightmap terrain looks very similar, with a kind of weird "smooth" look to it.  You can extend this algorithm with various hacks by stitching meshes into it and other things like that.  Some full 3D displacement techniques can be used as well.  Overall though, it's kind of a hacky data structure that I didn't think was the best solution to our problems.  It just doesn't give the kind of flexibility in terrain creation and detail that I'm looking for.

Another really obvious technique for the terrain would be to use voxels, probably with a marching cubes pass to turn them into a mesh at runtime.  Plenty of games use this technique and I definitely seriously considered it.  However, there are a few downsides.  Memory usage is one.  Another is that the terrain again has exactly uniform detail, making it hard to do edges like we want.  To get good detail would require a fairly high resolution voxel field, which wasn't going to scale well to some of the large planets we want to do.  Overall I think it's possible to make this work but it still didn't really make me happy.  Giving a voxel terrain interesting texture detail is also challenging because it tends to end up looking generic.
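The scaling problem is easy to see with some back-of-envelope numbers (the planet size and voxel density here are hypothetical, just to illustrate the cubic growth):

```python
def voxel_grid_bytes(planet_radius, voxels_per_unit, bytes_per_voxel=1):
    """Memory for a dense cubic voxel grid that bounds a planet of the
    given radius.  Storage grows with the cube of the resolution."""
    side = int(2 * planet_radius * voxels_per_unit)
    return side ** 3 * bytes_per_voxel

# A 500-unit-radius planet at 2 voxels per unit, one byte per voxel,
# already needs a 2000^3 grid: 8 billion bytes.
cost = voxel_grid_bytes(500, 2)
```

Doubling the planet radius multiplies the cost by eight, which is why a uniform grid fine enough for crisp edges blows up on large planets.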

Generating the Geometry


As I was going through this process I was regularly sitting down with Steve, our Art Director, to go over different ideas on direction.  One day at the whiteboard with him I realized that what I really wanted was something like a "geometry displacement map" where I could deform the terrain but get the exact shape and edges I wanted, instead of relying on manipulating an underlying height map that's at grid resolution.  As I thought more about this concept I realized something: what I really needed was some sort of Constructive Solid Geometry type solution.  By using CSG I can completely control the shape of the terrain.  I can dynamically add or remove pieces, etc.

So what is CSG?  The basic idea is to use boolean operators at the geometry level.  Interestingly enough this system has quite a bit of history to it.  Back in the 90's a lot of game engines including Quake used CSG to construct their levels.  At the time id came up with the name "brush" to represent an element that goes into one of these levels.  In Quake all of the brushes would be booleaned together to create a final mesh.  This final mesh then became the actual level mesh (see my earlier article about Thred my CSG based Quake editor from 1996).

So what are these boolean operators and how do they work?  At the simplest level we define an inside and an outside for every mesh.  This means the mesh must be closed and sealed (manifold) before you can use it in a CSG operation, for example a cube with all 6 faces.  If one of the faces were missing we would have a non-manifold mesh, which isn't appropriate for CSG operations (you can technically still perform them, but we want everything to be watertight in the game for reasons I'll get into later).  In PA the basic operations we support are union and difference, although I prefer to just call them add and subtract.
subtracting the blue ball
adding the blue ball

In the formulation of CSG that I'm using for our terrain we simply have a list, in order, of brushes.   A brush is either add or subtract.  We simply iterate through the list in order applying the operations to build our mesh.
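That ordered-list formulation is straightforward to express.  In this sketch the actual mesh booleans are passed in as functions, since real geometric boolean code is far more involved than anything that fits here:

```python
def apply_brushes(base_mesh, brushes, union, difference):
    """Fold an ordered list of (operation, mesh) brushes into the base
    mesh, applying each boolean in list order."""
    result = base_mesh
    for op, brush in brushes:
        if op == "add":
            result = union(result, brush)
        elif op == "subtract":
            result = difference(result, brush)
        else:
            raise ValueError("unknown brush operation: " + op)
    return result
```

With plain sets standing in for meshes, adding `{3}` to `{1, 2}` and then subtracting `{2}` yields `{1, 3}`: order matters, exactly as it does for real brushes.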

Now that we know we are using CSG operations we can start to formulate a plan to create a planet using this stuff.   So what's the process?

First off we create a base planet mesh.  In our case this is a sphere mesh that we create using code.  However, this base mesh could really be any shape you want and this mesh could potentially be modeled in a 3d package as there is a way to specify these directly.  This gives a nice back door to creating planets outside of my pipeline and getting them into the game for modders.  In addition you can have multiple base meshes if you want to.

Once we have the "base" mesh I distort it using simplex noise to generate some interesting height data.  In fact there are several phases here where I calculate a bunch of things like distance to water fields, height data etc.  This information is then used to generate the planetary biomes.  This process is controllable through the planet archetype spec which is externally supplied for each planet type.  Biomes are also fully externally configurable from a json text file.
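A minimal sketch of the displacement step: each unit-sphere vertex is pushed in or out along its normal by a noise sample.  The noise function here is a cheap hash-based stand-in, not real simplex noise:

```python
import math

def hash_noise(x, y, z):
    """Deterministic stand-in returning a value in [-1, 1).  PA uses
    simplex noise; this is just enough to demonstrate the displacement."""
    h = math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453
    return (h - math.floor(h)) * 2.0 - 1.0

def displace_vertex(v, base_radius, amplitude, frequency):
    """Move a unit-sphere vertex radially by a noise sample, producing
    height variation across the base mesh."""
    n = hash_noise(v[0] * frequency, v[1] * frequency, v[2] * frequency)
    r = base_radius + amplitude * n
    return (v[0] * r, v[1] * r, v[2] * r)
```

The resulting radii per vertex are exactly the kind of height data the later phases (water distance, biome assignment) can be computed from.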

Once we have the surface covered in different biomes we then use the biome specifications to start spreading brush geometry over the surface.  There are a bunch of properties which drive the placement of these.  Each one is defined as either an add or subtract operation.  So for example you can carve canyons using subtracts and add hills and mountains using adds.  Each of these brush pieces has its own set of material information that's used to render it.  This source of texture information allows us to get really interesting detail that comes from the artist and is mapped directly.  Being able to have the artist control this allows us to get a very specific look for any piece of the terrain.

There are a lot of interesting sub problems that start to crop up when you decide to map pre-made brushes to a sphere.  To combat the problem of the flat authored brush not mapping to the sphere we support several different types of projections.  These effectively wrap the brush geometry around the sphere and onto the terrain so that you can nicely control the final look.  There are several mapping options available but most of the time we project to the actual surface terrain at each vertex of the brush.  Height is handled such that the z=0 plane of the brush is mapped directly to the surface so you can control how far above or below ground that piece of the brush is.
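One way to picture the projection (a simplified sketch, with a perfect sphere standing in for sampling the actual terrain surface): the brush's x and y slide along a tangent frame at the anchor point, and z becomes a radial offset so the z=0 plane lands exactly on the surface.

```python
import math

def _normalize(v):
    l = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / l, v[1] / l, v[2] / l)

def project_brush_vertex(vertex, anchor, tangent, bitangent, surface_radius):
    """Wrap one flat brush vertex (x, y, z) onto the sphere.

    'anchor' is the unit direction of the brush's placement point;
    'tangent' and 'bitangent' span the surface plane there.  x/y move
    along that plane, the result is renormalized onto the sphere, and z
    offsets the vertex above or below the surface."""
    x, y, z = vertex
    p = tuple(anchor[i] * surface_radius + tangent[i] * x + bitangent[i] * y
              for i in range(3))
    d = _normalize(p)
    r = surface_radius + z
    return (d[0] * r, d[1] * r, d[2] * r)
```

A brush vertex at z=0 lands on the surface itself; z=5 floats 5 units above it, which is the "how far above or below ground" control described above.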

Once we have the surface geometry properly laid out we simply perform the CSG operations that give us the final mesh.  An additional detail is that we break the world into plates, each of which is processed separately.  Doing it this way means we can locally rebuild the CSG at runtime when we add a new brush without having to modify the entire planet.  Only the plates affected by a particular brush need to be recalculated.  So for example a giant explosion that cuts a hole in the ground may completely remove some plates and partially update others.  This process is fairly slow, but it's tractable to do it async from the rest of the sim and only swap it in once the heavy lifting has been done.
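The localized rebuild boils down to a broad-phase test: find the plates a new brush touches and re-run the CSG only for those.  A sketch using axis-aligned bounding boxes (the engine's actual partitioning details are its own):

```python
def plates_to_rebuild(brush_bounds, plate_bounds):
    """Return the indices of plates whose bounding boxes intersect the
    new brush's bounds.  Bounds are ((min_x, min_y, min_z),
    (max_x, max_y, max_z)) pairs.  Only these plates need their CSG
    re-run; the rest of the planet is untouched."""
    def overlaps(a, b):
        return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))
    return [i for i, plate in enumerate(plate_bounds)
            if overlaps(brush_bounds, plate)]
```

Rebuilding just those plates on a background thread and swapping the results in afterwards is what keeps the heavy geometry work out of the sim's way.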

Below are some examples of the technique in action.  This isn't real game art, just example stuff that tries to get across the idea.

The planet is broken up into plates that extend to the center so they are manifold.   Here we are only displaying half the plates so that we can see the internal geometry which will never show up in the actual game.

Example of add brushes on the surface

Example of subtract brushes on the surface

Features


Once we have the base geometry calculated we go through a similar process to place down all of the biome "features".  Features are basically meshes that are interactive and aren't part of the actual terrain geometry.  For example trees, rocks, icebergs etc. that can be reclaimed and destroyed by weapon fire.  These use a similar placement strategy to the brushes and the setup and control is pretty much the same code.

Probably the major difference, besides the fact that they don't modify the underlying terrain mesh, is that the engine is set up to handle vast numbers of these using instancing.  This is how we get things like giant forests that can catch on fire.  Effectively these use pretty much the same pipeline as units.
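The instancing win comes from batching: group placed features by mesh so each kind is submitted once with a list of transforms.  A rough sketch of the grouping step:

```python
from collections import defaultdict

def build_instance_batches(features):
    """Group placed features by mesh name so the renderer can issue one
    instanced draw per mesh, with the whole transform list attached.
    Each feature is a (mesh_name, transform) pair."""
    batches = defaultdict(list)
    for mesh_name, transform in features:
        batches[mesh_name].append(transform)
    return dict(batches)
```

Ten thousand trees become one draw with ten thousand transforms instead of ten thousand draws.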

Texturing

Now that we have all of this great technology to create meshes and modify them at runtime, we actually have to figure out how to render and texture them.  This is a fairly complex problem because the source meshes that go into building the terrain all have their own texturing, which can be fairly complex.  Simply merging the meshes together and sorting by material is still going to give us a huge number of render calls.  In addition it would be really nice to be able to do things like decals on the terrain for explosion marks, skirts around buildings, or whatever else we feel like.  In previous games I used some runtime compositing techniques which I was never happy with.  But now we have a solution: Virtual Texturing!

So what is virtual texturing?  Probably the best explanation is Sean Barrett's excellent sample implementation where he goes into great technical detail.  The simple explanation is that you create a unique parameterization of your mesh and map it into a 2D virtual space.  This gives every single piece of your mesh a unique set of pixels.  The GPU implementation will then request pieces of this texture space in page-sized chunks which you then have to fill in.  In our case we dynamically generate the pages using the original geometry and texture mapping of the individual brushes, as well as dynamic decals applied to the terrain.
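The page-addressing half of the scheme is simple to sketch.  Given a (u, v) lookup and a mip level, this computes which page is being requested (the page and virtual-texture sizes are made-up illustrative numbers, not PA's):

```python
def page_id(u, v, mip, page_size=128, virtual_size=65536):
    """Which page of the virtual texture a (u, v) lookup at a given mip
    level falls into.  The GPU feedback pass effectively produces a
    stream of ids like this; missing pages then get generated and
    filled into the cache texture."""
    pages_across = max(1, (virtual_size >> mip) // page_size)
    px = min(int(u * pages_across), pages_across - 1)
    py = min(int(v * pages_across), pages_across - 1)
    return (mip, px, py)
```

Keeping only the requested pages resident is what gives the fixed memory bound regardless of how big the virtual space is.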

Virtual texturing brings all kinds of goodness to the table.  One obvious bit is that since we now use a single shader for the terrain, we can render the entire thing in one render call without splitting it up.  It also puts a nice upper bound on how much video memory we use for the terrain.  It's also very scalable, both upwards and downwards.  If you have a video card with low memory you can turn down the texture detail and it will work, with the only side effect being blurrier terrain.  If you have a ton of texture RAM, crank the detail up (of course there is a limit!).  The amount of space we need in the cache texture varies with the resolution of the screen and the chosen texel density on the terrain.

The ultimate result of this is that we get unique pixels on every surface of the planet which is a very good thing when we want to blow the hell out of it.

Mods

You can create your own archetypes, brushes, features etc. So new terrain types are fairly easy to add to the engine.  I'll be doing more posts about modding later but rest assured adding in new types of terrain is a design goal of the engine.  The art team is using the same tools that the end users will have for putting these terrains together.

Navigation

Finally there is the actual gameplay component to this.  The general idea here is that the mesh that's output from the geometry generation process becomes something we call a Nav Mesh.  Now think back to the fact that the output of our process is manifold and you can see how we have a very nice closed surface to walk around on and do collision detection with.  The nav mesh is a true 3D data structure that allows multiple layers, caves, overhangs, and whatever other geometry we want to create as long as it's manifold.  When the terrain gets modified we simply replace nodes in the navigation mesh and notify the units that the terrain has changed.  We'll go into more details on this structure later, as it's worth an entire post of its own.  Note that since we already have a unique parameterization from the virtual texture, we can map a grid to this if necessary for doing things like flow fields.
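The replace-and-notify flow can be sketched as follows (the names here are illustrative, not the engine's actual API):

```python
class NavMesh:
    """Toy sketch of nav-mesh node replacement: when terrain changes,
    swap the affected nodes and tell any listeners (e.g. units that may
    need to repath) which node ids changed."""

    def __init__(self):
        self.nodes = {}       # node id -> polygon / adjacency data
        self.listeners = []   # callbacks invoked with the changed node ids

    def replace_nodes(self, new_nodes):
        changed = set(new_nodes)
        self.nodes.update(new_nodes)
        for notify in self.listeners:
            notify(changed)
```

Because only the changed node ids are broadcast, a unit can cheaply check whether its current path crosses any of them before bothering to repath.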

Conclusion

When we set out to design the terrain system we had some big challenges in front of us.  I think we've found a fairly good compromise, and at a minimum it's going to be different from other games.  This system meets our requirements and I hope it turns out as awesome to play with as I think it will.

For those wanting to discuss this I suggest you come over to uberent.com/forums where most of the active discussion about the game happens.  You can reach me on twitter @jmavor and obviously I read the comments here as well.

18 comments:

  1. this is amazing! A really brilliant solution. Can't wait to start ripping planets apart

    I've heard mention of major impacts releasing lava. Will you be doing a post about your solutions to ocean and lava flows, and their interactions, in the not-too-distant future?

  2. It is really cool being able to see some of the thought processes and details like you did here. I hope you do more in the future.

  3. Interesting read. But didn't RAGE use Virtual Texturing as well? And it had problems with loading/generating new textures fast enough. And in PA you could potentially instantaneously move the camera to a completely different location (e.g. another planet) - does your implementation hold up there?

    Anyway this sounds great. Looking forward to completely custom planets! (Of course, this would require the gravity and camera to be flexible enough as well - imagine a Moebius strip, for example.)

  4. This is fascinating.

    I especially appreciate the amount of detail and thorough treatment you've given this. Posts like this give me warm fuzzies about the people involved in development of PA, and reinforce how happy I am that I kickstarted it.

  5. Very interesting article.

    Do the "plates" allow you to easily split the task across multiple CPU-Cores?

  6. I salute you for managing to make an exposition on tech almost completely understandable, and furthermore interesting and entertaining to a code-allergic art-brain =)

    I will continue dreaming of a voxel RTS. Tears can be dried.

    Looking forward to more, Mr. Mavor.

  7. Great article. Thanks alot. :)

  8. This is great! Thank you very much for this article. Wish I could work on such cool gamedev projects.

  9. Thank you very much for the time and effort to create this article. I love reading about people doing what they love.

  10. Wow, thanks to everyone for the comments. It's really motivating to hear all of this feedback.

  11. I loved this article! Knowing the behind-the-scenes is great as a programmer, and perfect for thinking about how to mod the game =)

  12. Cool article, that's interesting

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

    2. As Carmack once said, "knowledge builds on knowledge". It's funny how writing any piece of engine tech can still have you thinking back to work you did decades earlier.

  13. Thank you for this detailed article. I really like the fact that you have chosen virtual texturing and CSG, these techs have great potential. I hope it will turn well without the huge issues of the Rage PC version with gigabytes of static data. Did you encounter any issue with drivers streaming the pages dynamically ?

  14. Thanks a lot for this very detailed article. I did not catch up everything but i trust your uber Coding power john :P

  15. Really interesting stuff! Initially I thought this would be way over my head as I've only really got experience modelling but you explained it so well I can actually relate to it.
    Question: when adding or subtracting objects, does that not add to vertex density? If so, does that mean that entire sections of a planet will be using a much higher density mesh and thus cause slowdown? Also, how does effectively knitting geometry to the surface of the planet allow for things like scaffolding or gantries? Does each object get assigned to a different planet vertex?
    These are probably more fundamental questions that I haven't grasped yet but its worth asking all the same.
    It seems sim city uses a similar method of attaching geometry to a surface judging from some of the glitches you can see in that game (although that's for a flat surface).

  16. You are a credit to all programmers. This kind of post is so valuable that I cannot thank you enough for sharing your insights. I look forward to seeing more!
