Monday, September 2, 2013

PAX 2013

The last day of PAX is here.  This is the first time it's been four days and it's just crazy exhausting.  I'll be hanging at our booth for most of the day today.  I just got back from Gamescom, so adding PAX on top of that makes me feel like I'm stuck in a time loop of game shows.  So swing by and say hi if you are attending.

Thursday, August 8, 2013

Slow blogging

Damn, I've been really lax in updating this blog lately.  Sorry about that, but building a game takes most of my time and energy at the moment.  Usually I have time to wax poetic about whatever stuff is on my mind.

Wednesday, July 10, 2013

More planet generation stuff

If you want more followup on some of the problems in planet generation Allen Chou has written a nice blog post about it.

Saturday, June 8, 2013

Annihilation alpha launched!

So we finished the Kickstarter campaign in September.  I spent October doing technical research for the engine and hiring more team members.  We really got rolling in November.  It's now June and we've finally released the alpha version of the game.  To me this feels fast.

There is still a lot of work left to do on the game but you can see the direction now.  Thanks again to everyone that has supported the project.  Without you there would be no game.

Friday, May 3, 2013

Pew! Pew! Pew!

Here is a link to our latest live stream showing some more of the game play.

Wednesday, April 24, 2013

Bigger planets?

Some players seem to really want massive planets in the game.  I think they have their place, but it's likely smaller planets will be more fun in most cases.  Regardless, here is a screenshot I took out of the planet builder this morning.

This planet has a radius of 3000m or so, which gives it a surface area of roughly 113,097,000 m².
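That figure falls straight out of the sphere surface area formula; a quick sketch to check the arithmetic:

```python
import math

def sphere_surface_area(radius_m: float) -> float:
    # Surface area of a sphere: A = 4 * pi * r^2
    return 4.0 * math.pi * radius_m ** 2

# A planet with a 3000 m radius:
print(sphere_surface_area(3000.0))  # about 1.131e8, i.e. ~113,097,000 m^2
```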

Decent sized planet

Friday, April 19, 2013

Pssst? Wanna see some live gameplay?

Here is the livestream we did today showing off PA for the first time in public.

Don't forget to check out our replay / casting system.  We are trying to really push the envelope.

Friday, March 22, 2013

Live stream with more tech details

I wanted to drop a link to our last live stream where we go into some more details on the planet generation and the navigation system.

Thursday, March 7, 2013

The Outland Games

We released our first iOS game today.

I'm really happy with how this turned out.  It's really a gorgeous little game with some seriously great personality.  It's also running a version of the PA engine...

Saturday, March 2, 2013

UberRay - Uber's real-time raytracing API

We had the opportunity a few years ago to do some cutting-edge implementation work on real-time raytracing.  The idea was to solve the practical problems related to making a commercially viable ray tracer for actual games.  The goal was to release a fully raytraced game as well as the UberRay API itself.  Think of it as OpenGL or DirectX, but for raytracing specifically.  This way other people could have raytraced versions of their games without worrying about the underlying implementation.

A shot from our renderer

Now, first off, I'm not saying that ray-tracing is the holy grail.  I've come to believe that it has a place in graphics, but that hybrid techniques are the more likely path we'll directly take.  I believe that, as always, we will hack together a variety of techniques for any particular use case.

That being said, I'm very concerned about the amount of artist time it takes to create content.  How does this relate?  When creating a modern engine it can be difficult to make everything work together in every case.  We have issues with sorting, issues with techniques that take full-screen passes and don't combine well, etc.  Most of the time we hack something together that works "good enough" and tell the artists to be careful about building certain types of things that break.  This is an inherent inefficiency, and it also means that artists have to understand the technical limitations of the tools.

What if we had an engine where every shader could play nicely together without worrying as much about composition?  In other words, we make the rendering regular so that it's more consistent and easier to work with.  Can we get there with rasterization?  Probably, but I do think raytracing may become the model just because it's regular.  Keep in mind that today performance is a top priority, but as computing power increases it may become less of an issue.
Funhouse mirrors!

So let's talk about some high-level architectural details.  First off, ray tracing is a scene-level algorithm.  The program needs to know about the current state of the entire scene at once.  Normal graphics APIs like OpenGL don't include scene-level functions; they are lower level.  You typically combine render calls together into a set of frame buffers and build up the image directly out of these calls.

With ray tracing it's more like:
- Update the scene graph with current information about the scene
- Build the acceleration data structures
- Trace rays from the camera into a specific buffer
- Do any post-processing we want, including things like AA and tone mapping

Whereas a rasterizer looks like:
- On the CPU, figure out which meshes we want to draw and in what order
- Make draw calls to render the meshes, culling them and potentially multi-passing them
- Lights could be done deferred or not; most of the rendering code is similar
- When we are finished, do a bunch of post-processing, again including resolving AA and tone mapping

The main difference between them is really the order in which you iterate the data.  The ray tracer iterates over the pixels and asks what covers them.  The rasterizer iterates over the set of meshes and triangles and calculates which pixels they overlap.
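The two iteration orders can be sketched side by side.  This is only an illustration of loop structure, not a real renderer; `cast_ray` and `cover_pixels` are hypothetical stand-ins for intersection and coverage code:

```python
def ray_trace(scene, width, height, cast_ray):
    # Ray tracer: outer loop over pixels, asking the scene what covers each one.
    image = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            image[y][x] = cast_ray(scene, x, y)  # nearest hit along this pixel's ray
    return image

def rasterize(scene, width, height, cover_pixels):
    # Rasterizer: outer loop over meshes, computing which pixels each one overlaps.
    image = [[None] * width for _ in range(height)]
    for mesh in scene:
        for (x, y, value) in cover_pixels(mesh, width, height):
            image[y][x] = value  # (depth testing omitted for brevity)
    return image
```

Both produce an image; they just walk the pixel/geometry data in opposite orders.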

Another angle

So it's obvious that UberRay has to be a scene-level graph.  I hesitate to just call it a scene graph because it's a very simple one.  For the most part it was simply a list of mesh instances, shaders, and lights.  However, as you'll see, it did start to get a bit more complex by supporting things like volumetric effects, proper depth of field, particle systems, mathematically defined surfaces, etc.

One of the largest areas we spent time on was the shader system.  William created an amazing interpreted shader language that could be used to create really complex modern-era shaders.  The language also had a code-gen back-end that could target SSE.  One of the base-level shader ops was simply to spawn another ray and return its result, so you could do things like reflections.  A material was actually made up of multiple shaders.  For example, you would have a shader that directly computes the frame buffer color, but also another shader that describes the transmission properties of the material.  Emission was another shader.  We could do real refraction and even chromatic aberration by tracing three rays separately.
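The "spawn another ray and blend its result" idea can be sketched like this.  This is not William's shader language or the UberRay API; `trace` and the `hit` record are hypothetical stand-ins, and the blend is a simple mirror-style mix chosen for illustration:

```python
def reflect(d, n):
    # Reflect direction d about unit normal n: r = d - 2*(d . n)*n
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def shade(hit, trace, depth=0, max_depth=3):
    # Base color from the material shader, plus a spawned secondary ray
    # whose shaded result is blended back in for reflective surfaces.
    color = hit["base_color"]
    if depth < max_depth and hit["reflectivity"] > 0.0:
        bounced = trace(hit["point"], reflect(hit["dir"], hit["normal"]))
        if bounced is not None:
            refl = shade(bounced, trace, depth + 1, max_depth)
            k = hit["reflectivity"]
            color = tuple((1 - k) * c + k * r for c, r in zip(color, refl))
    return color
```

Chromatic aberration in this model is just running a loop like this once per color channel with slightly different refraction directions.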

The light on his face is transmission from a white light going through a film.

Anyway, we never got to actually release the thing, so I wanted to talk about it a little bit and show some old screenshots.  Hopefully we can revive this project down the road when appropriate hardware is available.  I don't think it will be that long until someone does a raytraced game just to get a different look going.

At this point the code is just sitting there doing nothing and has been for a couple of years.  C'est la vie.

Saturday, February 9, 2013

Planetary Annihilation Engine Architecture Update - Terrain Geometry

In this update I'm going to reveal some of the more technical details of the PA terrain system.  If you aren't technically inclined this may not be your cup of tea.

It's not often one gets to sit down and come up with a completely new system to solve a set of problems.  A lot of games use engines that are already built or are sequels built on the same technology.  So it's not really that often in your career that you are able to do a clean sheet design.  In our case we already had some engine tech, but most of the major systems in PA are new.  Our engine is designed more as a set of libraries than as a framework, so it's possible to use it in different ways.  By my count this is about the sixth time in my 20-year career that I've embarked on a tech project this major.  Of course, I have way more experience this time around, which makes it even more exciting.

The general process I like to follow in cases like this is simple.  First off we need to define our requirements.

What does this terrain system need to do?

  • Procedural Generation
  • Dynamic during gameplay which implies fast performance
  • Ability to match terrain in concept video by supporting hard edges
  • Originality to create a unique experience


It's always been my goal for the terrain to be procedural.  There are just so many advantages, and it doesn't preclude creating custom maps anyway, as long as you give people a doorway into doing that.  I think games like Minecraft have proven you can create interesting procedural stuff.  Because we want to be able to smash these guys up at runtime, we also need the system to be relatively fast.  In addition, an original-looking system that can support some of the interesting conceptual stuff we have developed is important.  If you look at the concept video you can see some interesting hard edges, and it sure doesn't look like a traditional height-map-based terrain engine.

After you've figured out your requirements you need to start considering different approaches and their pros and cons.  This is the time for original thinking and trying to look at the problem from different perspectives.  During this phase I read a ton of papers on all kinds of new techniques trying to get a handle on what the state of the art is in terrain.  

Since the goal here is pretty much to create sphere-like planets, there is a kind of obvious first approach that a lot of engines take for planet geometry.  Basically they map a traditional terrain height map onto a sphere, often using an exploded cube to make the mapping simpler.  So you basically have 6 maps, one on each face of a cube.  You then inflate the cube into a sphere by normalizing all of the vertices and setting them at length r.  Then you simply change r at each vertex by adding the height map value at that vertex's face coordinate from your cube mapping.  By changing the height maps at runtime you can get dynamic terrain.
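The inflate-the-cube step can be sketched like this; the per-vertex `height` would come from the face's heightmap, which is elided here:

```python
import math

def cube_to_sphere(cube_vertex, radius, height):
    # cube_vertex: a point on the cube's surface. Normalizing it "inflates"
    # the cube onto the unit sphere; we then push the vertex out to the base
    # radius plus the height sampled from that face's heightmap.
    length = math.sqrt(sum(c * c for c in cube_vertex))
    unit = tuple(c / length for c in cube_vertex)
    return tuple(c * (radius + height) for c in unit)
```

Changing `height` per vertex at runtime is what makes the terrain dynamic in this scheme.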

Unfortunately, using heightmap terrain has a lot of downsides.  For one, it's difficult to get more geometry detail in places you want it for things like hard edges.  Overhangs aren't possible either.  I also find that most heightmap terrain looks very similar, with a kind of weird "smooth" look to it.  You can extend this algorithm with various hacks by stitching meshes into it and other things like that.  Some full 3D displacement techniques can be used as well.  Overall, though, it's kind of a hacky data structure that I didn't think was the best solution to our problems.  It just doesn't give the kind of flexibility in terrain creation and detail that I'm looking for.

Another really obvious technique for the terrain would be to use voxels, probably with a marching cubes pass to turn the field into a mesh at runtime.  Plenty of games use this technique, and I definitely gave it serious consideration.  However, there are a few downsides, memory usage being one.  Another is that the terrain again has exactly uniform detail, making it hard to do the edges we want.  Getting good detail would require a fairly high-resolution voxel field, which wasn't going to scale well to some of the large planets we want to do.  Overall I think it's possible to make this work, but it still didn't really make me happy.  Giving a voxel terrain interesting texture detail is also challenging because it tends to end up looking generic.

Generating the Geometry

As I was going through this process I was regularly sitting down with Steve, our Art Director, to go over different ideas on direction.  One day at the whiteboard with him I realized that what I really wanted was something like a "geometry displacement map," where I could deform the terrain and get the exact shape and edges I wanted instead of relying on manipulating an underlying height map at grid resolution.  As I thought more about this concept I realized something: what I really needed was some sort of Constructive Solid Geometry type solution.  By using CSG I can completely control the shape of the terrain.  I can dynamically add or remove pieces, etc.

So what is CSG?  The basic idea is to use boolean operators at the geometry level.  Interestingly enough, this system has quite a bit of history to it.  Back in the 90's a lot of game engines, including Quake, used CSG to construct their levels.  At the time id came up with the name "brush" to represent an element that goes into one of these levels.  In Quake all of the brushes would be booleaned together to create a final mesh.  This final mesh then became the actual level mesh (see my earlier article about Thred, my CSG-based Quake editor from 1996).

So what are these boolean operators and how do they work?  At the simplest level we define an inside and an outside for every mesh.  This means the mesh must be closed and sealed (manifold) before you can use it in a CSG operation, for example a cube with all 6 faces.  If one of the faces were missing we would have a non-manifold mesh, which isn't appropriate for CSG operations (although you can technically do them, we want to be watertight in the game for reasons I'll get into later).  In PA the basic operations we support are union and difference, although I prefer to just call them add and subtract.
subtracting the blue ball
adding the blue ball

In the formulation of CSG that I'm using for our terrain, we simply have an ordered list of brushes.  A brush is either add or subtract.  We iterate through the list in order, applying the operations to build our mesh.
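The ordered-list semantics can be illustrated with a point-membership sketch.  The real system operates on mesh geometry; testing sample points against inside/outside predicates is a simplified stand-in for that:

```python
# Each brush is ("add" | "subtract", inside_fn). Applying the list in order
# tells us whether a point ends up inside the final solid.

def point_in_csg(point, base_inside, brushes):
    inside = base_inside(point)
    for op, brush_inside in brushes:  # order matters: later brushes win
        if brush_inside(point):
            inside = (op == "add")
    return inside

# Example: a spherical planet with a smaller sphere subtracted (a crater).
def sphere(center, r):
    return lambda p: sum((a - b) ** 2 for a, b in zip(p, center)) <= r * r

base = sphere((0, 0, 0), 10)
brushes = [("subtract", sphere((10, 0, 0), 3))]
print(point_in_csg((9, 0, 0), base, brushes))  # inside the crater -> False
print(point_in_csg((0, 0, 0), base, brushes))  # deep inside the planet -> True
```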

Now that we know we are using CSG operations we can start to formulate a plan to create a planet using this stuff.   So what's the process?

First off we create a base planet mesh.  In our case this is a sphere mesh that we create in code.  However, this base mesh could really be any shape you want, and it could potentially be modeled in a 3D package, as there is a way to specify these directly.  This gives modders a nice back door for creating planets outside of my pipeline and getting them into the game.  In addition, you can have multiple base meshes if you want.

Once we have the "base" mesh, I distort it using simplex noise to generate some interesting height data.  In fact there are several phases here where I calculate a bunch of things like distance-to-water fields, height data, etc.  This information is then used to generate the planetary biomes.  This process is controllable through the planet archetype spec, which is externally supplied for each planet type.  Biomes are also fully configurable from an external JSON text file.

Once we have the surface covered in different biomes, we use the biome specifications to start spreading brush geometry over the surface.  There are a bunch of properties which drive the placement of these.  Each one is defined as either an add or a subtract operation.  So for example you can carve canyons using subtracts, and add hills and mountains using adds.  Each of these brush pieces has its own set of material information that's used to render it.  This source of texture information allows us to get really interesting detail that comes from the artist and is mapped directly.  Being able to have the artist control this allows us to get a very specific look for any piece of the terrain.

There are a lot of interesting sub-problems that crop up when you decide to map pre-made brushes to a sphere.  To combat the problem of a flat authored brush not mapping to the sphere, we support several different types of projections.  These effectively wrap the brush geometry around the sphere and onto the terrain so that you can nicely control the final look.  There are several mapping options available, but most of the time we project to the actual surface terrain at each vertex of the brush.  Height is handled such that the z=0 plane of the brush is mapped directly to the surface, so you can control how far above or below ground each piece of the brush sits.
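The z=0-to-surface rule might look like this per vertex; `surface_radius_fn` is a hypothetical query for the terrain radius along a direction, not the engine's real API:

```python
def project_to_surface(direction, z_height, surface_radius_fn):
    # direction: unit vector from the planet center through the brush vertex.
    # z_height: the vertex's authored z, where z = 0 means "on the ground".
    # The vertex lands at the terrain surface plus its authored height, so
    # positive z floats above ground and negative z sinks below it.
    r = surface_radius_fn(direction) + z_height
    return tuple(c * r for c in direction)
```

A usage example with a flat 3000 m terrain: `project_to_surface((0, 0, 1), 5, lambda d: 3000.0)` places the vertex 5 m above the surface.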

Once we have the surface geometry properly laid out, we simply perform the CSG operations that give us the final mesh.  An additional detail is that we break the world into plates, each of which is processed separately.  Doing it this way means we can locally rebuild the CSG at runtime when we add a new brush without having to modify the entire planet.  Only the plates affected by a particular brush need to be recalculated.  So for example a giant explosion that cuts a hole in the ground may completely remove some plates and partially update others.  This process is fairly slow, but it's tractable to do it asynchronously from the rest of the sim and only swap in the result once the heavy lifting has been done.

Below are some examples of the technique in action.  This isn't real game art, just example stuff that tries to get across the idea.

The planet is broken up into plates that extend to the center so they are manifold.   Here we are only displaying half the plates so that we can see the internal geometry which will never show up in the actual game.

Example of add brushes on the surface

Example of subtract brushes on the surface


Once we have the base geometry calculated we go through a similar process to place down all of the biome "features".  Features are basically meshes that are interactive and aren't part of the actual terrain geometry.  For example trees, rocks, icebergs etc. that can be reclaimed and destroyed by weapon fire.  These use a similar placement strategy to the brushes and the setup and control is pretty much the same code.

Probably the major difference, besides their not modifying the underlying terrain mesh, is that the engine is set up to handle vast numbers of these using instancing.  This is how we get things like giant forests that can catch on fire.  Effectively these use pretty much the same pipeline as units.


Now that we have all of this great technology to create meshes, modify them at runtime, etc., we actually have to figure out how to render and texture them.  This is a fairly complex problem because the source meshes that go into building the terrain all have their own texturing, which can be fairly complex.  Simply merging the meshes together and sorting by material is still going to give us a huge number of render calls.  In addition, it would be really nice to be able to do things like decals on the terrain for explosion marks, skirts around buildings, or whatever else we feel like.  In previous games I used some runtime compositing techniques which I was never happy with.  But now we have a solution: Virtual Texturing!

So what is virtual texturing?  Probably the best explanation is Sean Barrett's excellent sample implementation, where he goes into great technical detail.  The simple explanation is that you create a unique parameterization of your mesh and map it into a 2D virtual space.  This gives every single piece of your mesh a unique set of pixels.  The GPU implementation then requests pieces of this texture space in page-sized chunks, which you then have to fill in.  In our case we dynamically generate the pages using the original geometry and texture mapping of the individual brushes, as well as dynamic decals applied to the terrain.
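At its core the page handling is a translation from virtual texel coordinates to resident physical pages, filling pages on demand.  A sketch with an assumed page size and table layout (not the engine's actual numbers or data structures):

```python
PAGE_SIZE = 128  # texels per page side (an assumed value)

def lookup(page_table, u, v, fill_page):
    # Translate a virtual texel coordinate into a resident page, generating
    # (filling) the page on a miss, as the GPU feedback pass would request.
    key = (u // PAGE_SIZE, v // PAGE_SIZE)
    if key not in page_table:             # page fault: generate the page
        page_table[key] = fill_page(key)  # composite brush textures + decals here
    page = page_table[key]
    return page, (u % PAGE_SIZE, v % PAGE_SIZE)  # offset within the page
```

The `fill_page` callback is where our brush geometry, texture mapping, and dynamic decals would be composited into the requested chunk.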

Virtual texturing brings all kinds of goodness to the table.  One obvious bit is that, now that we are using a single shader for the terrain, we can render the entire thing in one render call without splitting it up.  It also puts a nice upper bound on how much video memory we use for the terrain.  It's also very scalable, both upwards and downwards.  If you have a video card with little memory you can turn down the texture detail, and it will work with the only side effect being blurrier terrain.  If you have a ton of texture RAM, crank the detail up (of course there is a limit!).  The amount of space we need in the cache texture varies with the resolution of the screen and the chosen texel density on the terrain.

The ultimate result of this is that we get unique pixels on every surface of the planet which is a very good thing when we want to blow the hell out of it.


You can create your own archetypes, brushes, features, etc., so new terrain types are fairly easy to add to the engine.  I'll be doing more posts about modding later, but rest assured adding in new types of terrain is a design goal of the engine.  The art team is using the same tools that the end users will have for putting these terrains together.


Finally, there is the actual gameplay component to this.  The general idea is that the mesh output from the geometry generation process becomes something we call a nav mesh.  Now think back to the fact that the output of our process is manifold, and you can see how we have a very nice closed surface to walk around on and do collision detection with.  The nav mesh is a true 3D data structure that allows multiple layers, caves, overhangs, and whatever other geometry we want to create, as long as it's manifold.  When the terrain gets modified we simply replace nodes in the navigation mesh and notify the units that the terrain has changed.  We'll go into more detail on this structure later, as it's worth an entire post of its own.  Note that since we already have a unique parameterization from the virtual texture, we can map a grid to it if necessary for doing things like flow fields.


When we set out to design the terrain system we had some big challenges in front of us.  I think we've found a fairly good compromise, and at a minimum it's going to be different from other games.  This system meets our requirements, and I hope it turns out to be as awesome to play with as I think it will.

For those wanting to discuss this, I suggest you come over to where most of the active discussion about the game happens.  You can reach me on Twitter @jmavor, and obviously I read the comments here as well.