Random stuff I feel like writing about including stories from my game development career, new tech ideas and whatever else comes to my mind.
Friday, March 22, 2013
Live stream with more tech details
I wanted to drop a link to our last live stream where we go into some more details on the planet generation and the navigation system.
Thursday, March 7, 2013
The Outland Games
We released our first iOS game today.
http://theoutlandgames.com
https://itunes.apple.com/us/app/outland-games/id591262172
I'm really happy with how this turned out. It's a gorgeous little game with some seriously great personality. It's also running a version of the PA engine...
Saturday, March 2, 2013
UberRay - Uber's real-time raytracing API
We had the opportunity a few years ago to do some cutting-edge implementation work on real-time raytracing. The idea was to solve the practical problems of making a commercially viable ray tracer for actual games. The goal was to release a fully raytraced game as well as the UberRay API itself. Think of it as OpenGL or DirectX, but for raytracing specifically. That way other people could ship ray traced versions of their games without worrying about the underlying implementation.
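The API itself never shipped, so purely to illustrate what "OpenGL or DirectX, but scene level" means in practice, here is a minimal sketch of the kind of interface I'm describing. All of the names here (urCreateScene, urAddMeshInstance and so on) are made up for this post; they are not the actual UberRay entry points.

```cpp
// Hypothetical sketch only -- the real UberRay interface was never published.
// The point is that the API operates on a whole scene, not on draw calls.
#include <cstdint>

using SceneHandle    = std::uint32_t;
using MeshHandle     = std::uint32_t;
using ShaderHandle   = std::uint32_t;
using InstanceHandle = std::uint32_t;
using LightHandle    = std::uint32_t;

struct Transform   { float m[16]; };                                        // row-major 4x4
struct CameraDesc  { float position[3]; float forward[3]; float up[3]; float fovY; };
struct FrameBuffer { int width; int height; float* rgba; };

// Scene construction: the tracer has to know about everything at once.
SceneHandle    urCreateScene();
MeshHandle     urCreateMesh(SceneHandle s, const float* positions, int vertexCount,
                            const std::uint32_t* indices, int indexCount);
ShaderHandle   urLoadShader(SceneHandle s, const char* source);
InstanceHandle urAddMeshInstance(SceneHandle s, MeshHandle mesh, ShaderHandle material,
                                 const Transform& xform);
LightHandle    urAddLight(SceneHandle s, const float position[3], const float color[3]);

// Per-frame work: update, rebuild acceleration structures, trace, post-process.
void urUpdateInstance(SceneHandle s, InstanceHandle i, const Transform& xform);
void urBuildAccelerationStructures(SceneHandle s);
void urTrace(SceneHandle s, const CameraDesc& camera, FrameBuffer& target);
void urResolve(FrameBuffer& target);   // AA resolve, tone mapping, etc.
```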
Now, first off, I'm not saying that ray-tracing is the holy grail. I've come to believe that it has a place in graphics, but that hybrid techniques are the more likely path in the near term. I believe that, as always, we will hack together a variety of techniques for any particular use case.
That being said, I'm very concerned about the amount of artist time it takes to create content. How does this relate? When creating a modern engine it can be difficult to make everything work together in every case. We have issues with sorting, issues with techniques that require full-screen passes and don't combine well, and so on. Most of the time we hack something together that works "good enough" and tell the artists to be careful about building the kinds of things that break. This is an inherent inefficiency, and it also means that artists have to understand the technical limitations of the tools. What if we had an engine where every shader could play nicely together, without worrying so much about composition? In other words, we make the rendering regular so that it's more consistent and easier to work with. Can we get there with rasterization? Probably, but I do think raytracing may become the model just because it's regular. Keep in mind that today performance is a top priority, but as computing power increases it may become less of an issue.
So let's talk about some high-level architectural details. First off, ray tracing is a scene-level algorithm: the program needs to know about the current state of the entire scene at once. Normal graphics APIs like OpenGL don't include scene-level functions; they are lower level. You typically combine render calls together in a set of frame buffers and build up the image directly out of those calls.
With ray tracing it's more like this (see the sketch after the list):
- Update the scene graph with current information about the scene
- build the acceleration data structures
- trace rays from a camera into a specific buffer
- any post processing we want to do including things like AA and tone mapping
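Continuing the hypothetical interface sketched above, a single frame maps onto those steps roughly like this; again, this is illustrative only, not the shipped code.

```cpp
// Illustrative only: one frame against the hypothetical API sketched earlier.
void RenderRayTracedFrame(SceneHandle scene, InstanceHandle movedObject,
                          const Transform& newXform,
                          const CameraDesc& camera, FrameBuffer& target)
{
    // 1. Update the scene graph with current information (here, one moved instance).
    urUpdateInstance(scene, movedObject, newXform);

    // 2. Build the acceleration structures (BVH, kd-tree, ...) over the updated scene
    //    so ray/scene intersection queries stay fast.
    urBuildAccelerationStructures(scene);

    // 3. Trace rays from the camera into a specific buffer; material shaders run at
    //    each hit and can spawn secondary rays themselves.
    urTrace(scene, camera, target);

    // 4. Post-processing: AA resolve, tone mapping, etc.
    urResolve(target);
}
```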
Whereas a rasterizer looks like:
- On the CPU figure out what meshes we want to draw in which order
- make draw calls to render the meshes, culling them and potentially multi-passing them
- lighting could be deferred or not; most of the rendering code is similar
- then when we're finished, do a bunch of post processing, again including AA resolve and tone mapping
The main difference between them is really the order in which you iterate over the data. The ray tracer iterates over the pixels and asks what covers each one. The rasterizer iterates over the set of meshes and triangles and calculates which pixels they overlap.
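In very rough, C++-flavoured pseudocode, the two inner loops invert each other. None of these types or helpers are from a real renderer, and shading, clipping and depth testing are all elided:

```cpp
// Ray tracer: iterate over pixels and ask what covers each one.
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        Ray ray = camera.GeneratePrimaryRay(x, y);
        Hit hit = scene.IntersectClosest(ray);       // walks the acceleration structure
        image(x, y) = hit.valid ? Shade(hit) : backgroundColor;
    }
}

// Rasterizer: iterate over meshes and triangles and ask which pixels each one overlaps.
for (const Mesh& mesh : visibleMeshes) {
    for (const Triangle& tri : mesh.triangles) {
        ScreenTriangle projected = ProjectToScreen(tri, camera);
        for (const Pixel& p : PixelsCoveredBy(projected))   // plus depth test, etc.
            image(p.x, p.y) = ShadeFragment(projected, p);
    }
}
```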
So it's obvious that UberRay has to work at the scene level. I hesitate to call its representation a scene graph because it's a very simple one: for the most part it was simply a list of mesh instances, shaders and lights. However, as you'll see, it did start to get a bit more complex by supporting things like volumetric effects, proper depth of field, particle systems, mathematically defined surfaces, etc.
One of the areas where we spent the most time was the shader system. William created an amazing interpreted shader language that could be used to create really complex, modern-era shaders. The language also had a code-gen back-end that could target SSE. One of the base-level shader ops was simply to spawn another ray and return its result, so you could do things like reflections. A material was actually made up of multiple shaders. For example, you would have a shader that directly computes the frame buffer color, but you also have another shader that describes the transmission property of the material. Emission was another shader. We could do real refraction and even chromatic aberration by tracing three rays separately.
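I can't reproduce William's shader language here, but to make the "a material is a bundle of shaders" idea concrete, here is a hypothetical C++-side sketch of the concept. None of these types are from the real system; the spawn-another-ray shader op is represented by the traceRay callback.

```cpp
// Conceptual sketch only -- not the real shader language or its runtime.
// A "material" bundles several shaders, each answering a different question
// about the surface; any of them can spawn more rays via the tracer.
struct Spectrum { float r, g, b; };

struct ShadeContext {
    // hit position, normal, incoming ray, UVs, etc. would live here
};

// Stand-in for the base-level op that spawns another ray and returns its result
// (reflections, refraction, ...).
using TraceRayFn = Spectrum (*)(const ShadeContext&, const float direction[3]);

struct Material {
    // Directly computes the frame buffer color at the hit point.
    Spectrum (*surface)(const ShadeContext&, TraceRayFn traceRay);
    // Describes how much light passes through the material.
    Spectrum (*transmission)(const ShadeContext&);
    // Light emitted by the surface itself.
    Spectrum (*emission)(const ShadeContext&);
};

// Chromatic aberration: trace one refracted ray per color channel, each bent a
// little differently, then recombine the channels.
inline Spectrum RefractWithDispersion(const ShadeContext& ctx, TraceRayFn traceRay,
                                      const float dirR[3], const float dirG[3],
                                      const float dirB[3])
{
    return Spectrum{ traceRay(ctx, dirR).r,
                     traceRay(ctx, dirG).g,
                     traceRay(ctx, dirB).b };
}
```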
Anyway we never got to actually release the thing so I wanted to talk about it a little bit and show some old screen shots. Hopefully we can revive this project down the road when appropriate hardware is available. I don't think it will be that long until someone does a raytraced game just to get a different look going.
At this point the code is just sitting there doing nothing and has been for a couple of years. C'est la vie.
[Screenshot: A shot from our renderer]
[Screenshot: Funhouse mirrors!]
[Screenshot: Another angle]
[Screenshot: The light on his face is transmission from a white light going through a film.]