Saturday, April 28, 2012

Let's Talk Procedural Content


Procedural content has been a buzzword for a while now. The concept isn't new, but people seem to be using it a bit more than in the past. Plenty of games have used procedural techniques in different areas. For example, on TA we used a tool called Bryce to generate landscapes. Diablo and Minecraft used procedural generation for their worlds. Many other examples exist.

When this is brought up, people invariably think that if you are using procedural tools your entire game has to be procedural. Forget about trying to do that. What I want to talk about is using procedural tools as part of the asset creation process to enhance the power of the tools we already use. Think of it as poly modeling with really advanced tools.

The end goal here is to allow artists to create things quickly and efficiently, customize them and save them off for use in the game. This doesn't imply that the procedural part has to be done at game runtime; these tools purely exist as a way for the artists to express themselves. This has a few benefits, including fitting into current pipelines and being able to use procedural techniques that are extremely processor intensive. Pixar doesn't send out the data they use to render their movies; they send out final rendered frames. Same idea. Sure, being able to do it at runtime has some nice things going for it, but it's not necessary in every case.

There are a bunch of issues with creating the procedural models themselves and I'll talk about that some other time. Let's just assume for a minute that we have some procedural models that we want to use. By models I mean something that takes a set of inputs and gives a processed output. For example, a procedural forest model would have inputs like tree type, density, overall canopy height, etc., and would generate trees, bushes and so on in some area you've defined within your world. A procedural terrain model would allow you to specify what the ground is made of, how hilly it is, etc.
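
To make that concrete, here's a rough sketch of the shape of such a model. All the names, parameters and numbers here are invented for illustration:

```cpp
// Minimal sketch of the "procedural model = parameters in, content out" idea.
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

struct Tree { float x, y, height; };

struct ForestParams {
    float density      = 0.01f; // trees per square meter (roughly)
    float canopyHeight = 15.0f; // average tree height in meters
    uint32_t seed      = 1234;  // stored so the result is reproducible
};

// The model itself: a deterministic function from parameters + region
// to generated content. Re-running it with the same inputs gives the
// same forest, which is what lets us "re-run the simulation" later.
std::vector<Tree> GenerateForest(const ForestParams& p, float w, float h) {
    std::mt19937 rng(p.seed);
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    int count = static_cast<int>(p.density * w * h);
    std::vector<Tree> trees;
    trees.reserve(count);
    for (int i = 0; i < count; ++i) {
        float height = p.canopyHeight * (0.7f + 0.6f * u(rng));
        trees.push_back({u(rng) * w, u(rng) * h, height});
    }
    return trees;
}

int main() {
    ForestParams params;
    auto forest = GenerateForest(params, 1000.0f, 1000.0f); // 1 km^2
    printf("generated %zu trees\n", forest.size());
}
```

The key property is determinism: store the parameters and the seed, and you can regenerate the exact same forest later.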

So let’s start out with tessellated terrain that’s slightly hilly.  In your head basically just picture some slightly rolling hills that are about a square kilometer.  Now let’s select that terrain and create a procedural forest model that overlaps it.  We set the canopy to about 50ish feet, set the density fairly high and bam; we have a forest covering our terrain.  I’m sure someone’s engine already does this so I’m not really taking a leap here.
Ok, the next thing is to find a nice flat area, because I want to put down a building. If we don't have a flat area, we go back and fix that at the terrain generation stage. The forest will automatically regen to conform to the new terrain because we can always "re-run" the procedural simulation if we store the models off (duh).

We have our flat area covered by forest because we selected the entire terrain. Simply select the "blank out area" tool (or whatever you want to call it) and start erasing part of the forest model to create a clearing. All of this erasing is itself a procedural model that's being applied as a construction step. It stores off the idea that the area should be cleared, so that if we re-generate with slightly different parameters we still get the same basic result: a clearing in the forest. Of course, if we modify the initial forest parameters we can easily change the type, density, etc. of the forest.
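
In code, I picture the construction-step idea looking something like this. It's a minimal sketch with made-up names; a real tool would need regions, masks and a lot more:

```cpp
#include <memory>
#include <vector>

struct Scene; // whatever holds the generated terrain/forest/etc.

// Every operation (generate forest, erase clearing, ...) is a step.
struct ConstructionStep {
    virtual ~ConstructionStep() = default;
    virtual void Apply(Scene& scene) const = 0;
};

struct GenerateForestStep : ConstructionStep {
    // forest parameters and seed live here...
    void Apply(Scene& scene) const override { /* place trees */ }
};

struct EraseRegionStep : ConstructionStep {
    // the clearing is stored as intent (a region), not as deleted trees,
    // so it survives re-generation with different forest parameters
    float centerX = 0, centerY = 0, radius = 50;
    void Apply(Scene& scene) const override { /* remove content in region */ }
};

// Re-running the whole stack from scratch rebuilds the world; tweak any
// step's parameters and replay to get a consistent variation.
void Rebuild(Scene& scene, const std::vector<std::unique_ptr<ConstructionStep>>& steps) {
    for (const auto& s : steps) s->Apply(scene);
}
```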

Now that we have a hilly forest with a flat clearing in the middle, we can start to add interesting game-play stuff. The forest itself is really a fancy backdrop, but we can customize any particular element of it to our specific needs. So the next thing I want to add is a concrete bunker that has some shiny new metal parts on it, including a door, various panels, bolts, pipes, etc. This bunker could itself be a form of procedural model (e.g. a specialization of procedural buildings) or it could be a bespoke item created for game-play purposes. Either way, picture us plunking down a concrete bunker here.

Now we have a dense forest on rolling hills with a concrete bunker in the middle of a clearing. Let’s get started on the actual interesting stuff! The next level of procedural generation past this is to start doing serious simulation on the world.  This is going to require a format that is flexible and allows both the textures and geometry of the world to be extensively modified.  There are a lot of different formats that could be tried and as long as we can convert between them we can come up with different representations for different algorithms.  Some sort of voxel type format might make a lot of sense for all of the models but I don’t want to make any particular assumption about the best way to do it.

First off, let's make it rain. We set the procedural weather model to rainy (skycube to match!). We literally simulate water coming down from the sky. We see where it bounces; we potentially let it erode dirt. We let it form channels and see where it flows to. We can run this simulation for a little bit of time, or a lot. We can have a lot of rain, or a little. The longer we run it, the more extreme its effects. For example, it could start to rust and streak any metal. Picture our shiny bunker getting long streaks of rust running down from the panels and bolts. Picture the rain causing erosion as it goes down the hills and potentially forming lakes, puddles and streams. At any point we can "lock in" the simulation to capture how full the streams and lakes are. The point being, we've taken a very processor-intensive process and used it to get a lot of bespoke-looking results without having to model these effects in the head of an artist.
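
A toy version of the droplet part of that simulation might look like this. It's deliberately crude and not a real erosion model, but it shows how cheap the basic idea is; run more drops for more extreme results:

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Drop water on a heightfield and let each droplet carry sediment downhill.
void ErodeHeightfield(std::vector<float>& height, int w, int h,
                      int numDrops, uint32_t seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> px(1, w - 2), py(1, h - 2);
    for (int i = 0; i < numDrops; ++i) {
        int x = px(rng), y = py(rng);
        float sediment = 0.0f;
        for (int step = 0; step < 64; ++step) {
            // flow toward the lowest neighbor (steepest descent)
            int bx = x, by = y;
            for (int ny = y - 1; ny <= y + 1; ++ny)
                for (int nx = x - 1; nx <= x + 1; ++nx)
                    if (height[ny * w + nx] < height[by * w + bx]) { bx = nx; by = ny; }
            if (bx == x && by == y) break;     // settled in a pit: puddle or lake
            height[y * w + x] -= 0.01f;        // carve a little dirt out here
            sediment += 0.01f;
            x = bx; y = by;
            if (x < 1 || x > w - 2 || y < 1 || y > h - 2) break; // ran off the map
        }
        height[y * w + x] += sediment;         // deposit whatever was carried
    }
}
```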

Now let's go back to our forest. We started with a simple model that generated a dense forest to make life easy. But is there any reason a forest aging model couldn't be applied at the same time we are simulating the rain? For example, trees that are near water could grow larger. Areas with sparse water could clear and eventually switch to other types of vegetation. We could simulate sunlight and plant growth patterns. We know how to do shadows pretty well; let's simulate the sun moving across the sky and see which patches get more sunlight. Picture our streaked bunker with grass starting to grow around it, vines snaking up the side, bushes growing up beside it and having their shape affected by the structure. If you ran it long enough, maybe the forest itself would start to intrude on the structure and possibly even crack the concrete over time to create a ruin. As we improve our physics model we can certainly do things like finite element analysis to figure out and simulate breakage.

Are you starting to understand the power of this kind of modeling? We don't need to be perfect, just create models that have interesting effects and combine them. This could be really cool even when done simply. Keep in mind I'm not taking any real leaps here; most of this is based on stuff I've seen happening already. I've seen water erosion in real-time, I've seen procedural forests, I've seen rust streaking. What we really need is a structure to build all of this stuff on top of. The majority of the actual work is going to be defining all of the procedural models. But that's something that can be done centrally. Once people have created those models they can be re-used and upgraded in quality over time.

If we could define an extensible procedural model format that could plug into multiple tools, we would really have something. This is an ongoing area of research; see GML, for example. A common net repository of models that anyone could contribute to, with standard (and actually usable) license terms, would mean we could all benefit. You can still build whatever bespoke stuff you need for your game, but the time-wasting work that's the same every time could be made easier.

All that I’ve said here is a straightforward application of stuff we can already do.  There are numerous products out there headed in this direction right now.  CityEngine and SpeedTree are just some of the examples among many.

Obviously there is a lot of research to be done on this stuff going forward. I foresee more research on how best to represent these models, better simulation tools and lots of specialty models that do a domain-specific job.

Speaking of specialty models, human characters are already widely created using procedural models. A lot of game companies have a custom solution for this and they all have different strengths and limitations. This could go away if we had a good commercial package with a high-quality implementation. It would create humans out of the box, but the algorithms could be customized and extended (or downloaded from someone else who has done that). There is simply no reason this needs to be reimplemented by every game company on earth. It's an active area of research already, obviously. Of course creating the geometry is hard; getting it to animate is harder. But we'll crack it eventually.

There are plenty of examples where specialized models would come in handy. This would enable creation of all kinds of animals and plants, specific types of rock, etc. Hopefully the standard modeling language would be expressive enough to do most normal things. Domain-specific ways of thinking about particular models will probably be necessary in some (a lot?) of cases. As long as we have a well-defined environment for these models to interact in, I think we can get away with a lot of hacky stuff at first.
If you are going to have a procedural forest it would make sense to have it include animals. Deer, bears, cattle, birds or whatever could be modeled and then given time-based behavioral simulation. There is already precedent for actual mating of models, so it's possible we could simulate multiple generations and actually evolve new populations. This level of modeling is getting pretty sophisticated, but I don't see any intrinsic limitation.

Once you have a forest populated with procedural animals you can consider extending these algorithms even further. For example, you could simulate the animal populations and their effects on the world so that over time you converge to some interesting results. That natural cave now has a bear living in it. Lakes and streams have fish that have evolved to use particular spawning grounds. Areas trampled often by animals could have less vegetation. Grazing areas and other types of foliage could be eaten, or even spread by the animal population moving seeds about. Most of the behaviors I'm describing could be modeled using simple techniques that would still probably produce neat results. For example, to model foliage destruction you could use a heat map of where the simulated animals walked. Look for seed sources and spread them across the heat-map graph, using high-traffic areas as connections. Then have those seeds germinate over time based on all of the other factors we've talked about.
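
As a sketch of how simple that heat-map technique could be (completely made-up rules, just to show the shape of it):

```cpp
#include <vector>

struct WorldGrid {
    int w = 0, h = 0;
    std::vector<float> traffic;     // heat map: how often animals crossed each cell
    std::vector<float> vegetation;  // 0 = bare dirt, 1 = fully grown
};

// One simulation tick: trampling kills foliage in high-traffic cells,
// while moderate traffic also moves seeds around and helps regrowth.
void StepVegetation(WorldGrid& g, float trampleRate, float regrowRate) {
    for (int i = 0; i < g.w * g.h; ++i) {
        g.vegetation[i] -= trampleRate * g.traffic[i];
        if (g.vegetation[i] < 1.0f)
            g.vegetation[i] += regrowRate * (0.1f + 0.05f * g.traffic[i]);
        if (g.vegetation[i] < 0.0f) g.vegetation[i] = 0.0f;
        if (g.vegetation[i] > 1.0f) g.vegetation[i] = 1.0f;
    }
}
```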

Now, I've used a forest as an example here but that's just one type of operator. There will be meta operators at every level that feed into and affect one another. For example, you could do a planetary generator that runs a meta simulation to decide where to place forest operators. Oceans in tropical areas near shorelines could generate coral reef operators. The weather operator combined with elevation would define temperature, which could feed into other processes like snow level. I can see people creating a ton of custom operators that, when combined together, do fascinating things. Volcanoes that shape the landscape. Craters that are created through simulated bombardment and then eroded. Plate tectonics to shape continents? The ideas are endless if we have a framework to play around in.

Pie in the sky?  Let’s check back in 2040 and laugh at the primitive ideas presented here.  Or maybe our AI assistant will laugh for us.

Just wait until I start talking about augmented reality modeling tools.  But that’s another story…

Friday, April 27, 2012

Strange Happenings at the Circle-K

I couldn't figure out a title for this one. It's just a random story from when we were working on Radix.

We needed to outsource some of the art because we were mostly coders. Specifically we were looking for work on textures, the main ship and enemies.

So a couple of guys we were working with hooked us up with these brothers. These two guys had been playing around with 3D Studio, Photoshop and scanning slides to make cool images. They were, I think, primarily interested in doing web-type work. Gaming always seems to attract people though, and they definitely seemed excited about being involved with a game project.

Anyway, we meet them for the first time and talk about the project. They didn't have any kind of 3D model portfolio or anything because they had never done it before. This should have been a red flag, but back then everything was new and it was kind of hard to tell if someone could do something or not. So we agreed that we would meet up again in a couple of weeks and they would create some demo stuff for us. Specifically, they were going to bring some frames of animation of the main ship flying and banking, so we could get something better looking into the game as a test.

So a week or two afterwards we all meet up at the office (not our office, our friends' office) and they have a disk with them. We are all excited to see what they've come up with. We insert the floppy (!!) into the drive, list the files and they look good. At the time we used Autodesk Animator for all of our texture needs, which used the .cel format natively. We load up AA and try to load the first file.

AA pops up a message that says "corrupt file" and doesn't want to load it. Dammit, we thought. A bad disk. Let's try the next file. Same problem. And so on for all 8 files.

Keep in mind we were excited to see this stuff; we'd been waiting for weeks! So we copy the files to the hard drive and take a closer look. At this point we slip off to go have lunch with the guys. I think it was $1 Big Macs or something because I seem to recall a giant stack of Big Mac boxes. While we were away, one of our coders was looking at the files. Now you have to realize we used this format in our game engine, so we knew what the header was supposed to look like. In fact we used this format so much that we didn't even have to look at it in the debugger; we could literally interpret the files in a hex editor just by looking at them.



It was an early version of this format. Viewing it in a hex editor, it would be pretty apparent if there was a problem. You would expect a size number that was relatively close to width*height, plus the header and the palette. The magic number would be clearly visible, as would the width and the height. In this case they should have been something like 128x128 (0x80).

Instead what we saw was something like:
20 20 20 20 20 20 20 20 20 20 20 20 20 20, with the rest looking like it was valid. E.g. it looks like there was actual image data in the file later, but the header was stomped with 0x20 over and over. What kind of bug could have caused this bizarre stomping, which was slightly inconsistent from file to file?
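
For the curious, the sanity check we were doing by eye amounts to something like this. The field layout below is approximate and from memory, so don't treat it as a reference for the CEL format:

```cpp
#include <cstdint>

#pragma pack(push, 1)
struct CelHeader {
    uint16_t magic;      // expected magic number
    uint16_t width;      // e.g. 0x0080 for a 128-wide image
    uint16_t height;     // e.g. 0x0080
    // ...more fields, then a 256-entry palette, then pixel data
};
#pragma pack(pop)

bool LooksValid(const CelHeader& h, long fileSize, uint16_t expectedMagic) {
    if (h.magic != expectedMagic) return false;   // header stomped?
    if (h.width == 0 || h.height == 0) return false;
    // file should be roughly width*height plus the header and palette
    long expected = (long)h.width * h.height;
    return fileSize >= expected;
}
```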

Before we got any further worrying about that, we sent the brothers home so we could talk about it and look at the files some more. Once they were gone we simply rebuilt the correct header and got the files loading into AA. It soon became apparent what had happened. They did contain some rendered ship frames, but they were utterly atrocious. Yes, we did recover them correctly; they were just really shitty looking. Our theory was that they purposefully corrupted the files to make it look like they had made the deadline because they wanted more time.

We didn't have much patience for that kind of bullshit so we told them not to bother.  I remember the conference call where we confronted them and we had it right on the money.

Eventually we ended up having the guys we were working with (more about them later) do most of the art.  We also outsourced quite a bit to Cygnus Multimedia but they were pricey and it was hard to get the exact results we wanted.

Anyway chalk it up to one more learning experience.  Sometimes bizarre shit happens.

Clear Communication

When you are doing any business deal it's really important to be on the same page. Don't make any assumptions about how the other party sees the deal.  Be direct and upfront about how you interpret the terms so that there are no misunderstandings.  It's not "cool" to sneak some shitty term into a contract.

The reason I bring this up is the very first deal I was involved in. This was at the beginning of Radix, where we hooked up with an existing company. The idea was that they would bring in some artists and also help with cash flow. They were also supplying some (pathetic) equipment and space to work (drafty).

So my understanding was that they were going to pay us $20 an hour, plus we would both own the project and split the proceeds. This was before I even got the concept of game IP or the long-term value it could have.

Anyway they worked in this old building with dodgy wiring and crap heating.  They gave us one computer and a whiteboard to share amongst us.  The "pay" would be deferred until we actually made money somehow.  Basically it was a total waste of time.

But at the time I didn't see it.  One of the other guys was like "wtf are we doing" and refused to participate anymore.  It really took that for the rest of us to look at the deal and realize we had no fucking idea what we were really doing there.

Now, I was the guy that initially hooked us up with these guys. I had met them through the local BBS scene and their main programmer was a good friend of mine. His business partner, on the other hand, was this slick, car-salesman-like guy who ran the business. I also noticed that my friend (who did the work) never had any money while this guy had a new car, a nicely furnished new place, etc. They split up a while later and he got a real job (at, I think, Cognos).

So we all sit down and write up an actual contract that we think represents the deal that we've already negotiated.  We walk in and tell these guys we need to meet and talk about the details and present them with the contract.  The biz guy starts reading the first page and is like, "ok, ok, ok" etc.  So he looks up and says, "No problem guys this looks fine."

Then Dan says, "What about the second page?" which is where all the meat was.

So car guy looks up and goes, "Second page?" in a bewildered voice, flips the page and starts going, "No! No! No! No! No! Completely unacceptable!"  He looks up at me and tells me it's my fault.

That was the last time we were over there. Dan ended up hooking us up with some other guys who actually knew games and they helped us ship Radix.  More on that later...

Thursday, April 26, 2012

Thred - Three Dimensional Editor

The story of Thred.  This is the program that taught me about the emerging power of the internet. 

In December of 1995 we shipped Radix: Beyond the Void as a shareware title.  I'll write up the story of Radix some other time but basically it didn't sell well and the team broke up.  I ended up taking a few months off, going to GDC and meeting Chris Taylor who offered me a job on TA.  However, during the time between when he offered me the job and when I started we needed to deal with visa paperwork crap.  So I was sitting at my desk in Kanata knowing I had a job with nothing to do until the paperwork came in.

So I did what any programmer would do: I started on a new project. Around the time we wrapped up Radix, Shahzad had been working on some new engine tech that could render arbitrary 3D environments. During Radix we used a 2D editor I wrote called RadCad to make the levels. But to make 3D environments we had nothing. I really just wanted to learn how to write a fully 3D editor because I knew that was the next step and I was excited to get going.

Let me digress a bit and talk about BSP trees. This was a really hot topic at the time because Carmack had used 2D BSPs in Doom. When we initially wrote the Radix engine we used a ray-casting approach, but it was just suboptimal compared to the BSP. I somehow managed to shoehorn a 2D BSP into Radix not long before we shipped, so I had an understanding of how they worked.

So I started out writing a simple editor that could draw polygonal shapes with no textures and no shading.  Once I had some basic navigation and movement I started to add the ability to CSG the shapes together to build more complex pieces of level.  There was a lot of this kind of research going on at the time including Quake which was coming soon.

So to do the 3D CSG that was required, I experimented with a ton of different BSP construction methods. The easiest one is to just build the BSP one piece at a time, doing the correct operation at each step, but it results in pretty bad splits. If you rebuild from scratch you can balance the tree better and improve the result. Anyway, the bottom line is that I got it mostly working in about a week or so.
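
For anyone who hasn't played with BSPs, the basic structure looks roughly like this. It's a bare-bones sketch: real CSG also has to split polygons that straddle the splitting plane, which I've skipped to keep it short:

```cpp
#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };   // points p on the plane satisfy dot(n, p) == d
struct Polygon { std::vector<Vec3> verts; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 Sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec3 Cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
Plane PlaneFromPolygon(const Polygon& p) {
    Vec3 n = Cross(Sub(p.verts[1], p.verts[0]), Sub(p.verts[2], p.verts[0]));
    return {n, Dot(n, p.verts[0])};
}

struct BspNode {
    Plane splitter;                        // plane of the first polygon inserted here
    std::vector<Polygon> coplanar;         // polygons lying on the plane
    std::unique_ptr<BspNode> front, back;  // half-space children
};

// Incremental insert: each polygon's plane becomes a splitter as we go.
// This greedy splitter choice is what "results in pretty bad splits";
// rebuilding from scratch with a better heuristic balances the tree.
void Insert(std::unique_ptr<BspNode>& node, const Polygon& poly) {
    if (!node) {
        node = std::make_unique<BspNode>();
        node->splitter = PlaneFromPolygon(poly);
        node->coplanar.push_back(poly);
        return;
    }
    float side = 0.0f;
    for (const Vec3& v : poly.verts)
        side += Dot(node->splitter.n, v) - node->splitter.d;
    if (side > 0.001f)       Insert(node->front, poly);
    else if (side < -0.001f) Insert(node->back, poly);
    else                     node->coplanar.push_back(poly);
}
```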

And that probably would have been it if it wasn't for a certain confluence of events.  Even though Radix didn't do very well we had a few fans of the game.  One of them was a really smart guy named Jim Lowell who pointed out that the editor (which I had shown him) could be used to edit Quake levels.  Now Quake wasn't out yet but there was the quake test and all of that so we generally knew how it worked.

The real genius Jim had, though, was to post this on a web page. Remember, in 1996 the web wasn't that mature yet. I hadn't really grasped how much it was going to change the world. Bottom line, the first version of Quake came out (a test, or maybe the shareware, something like that) and I managed to get a level up and running in about 3 hours.

Jim pretty much immediately posted some screenshots of the level and may have actually released a level.  This caused quite a stir as people were making quake levels using a text editor at the time.  I had just lucked into being the first person in the world with this tech ready to go.

And then I got an email from Carmack. I wish I still had access to my email archives from the Ottawa freenet but I don't.  Basically he asked me how I had cracked the protection... or if he had boneheaded the check.  I said he must have boneheaded it (he had) because I didn't have to do anything to get it to work.  Quake was supposed to need a valid license file before reading 3rd party levels.  Oops.

Anyway id was nice enough to send me a burned version of the full game when it was released. I think I still have this somewhere although it may have humungous stickers on it.

This is where the story goes crazy.  I started getting massive numbers of emails from people looking to license the editor, have me write a book, work for them etc.  Now I was already planning on going to work at Cavedog so I wasn't terribly interested in most of this stuff.  But I got interested quickly when people started wanting to pay serious cash for the editor.  The power of the internet was revealing itself to me.

The guy that intrigued me the most was someone named Steve Foust from Minnesota. Steve called me and told me he would give me a bunch of money upfront and a back-end royalty if I didn't release the editor. He wanted to license it, make a Quake level pack while nobody else could, and make a killing. He was fine with me releasing it after the pack came out. The company Steve was with was called Head Games and it was eventually acquired by Activision.

A book company called me and wanted me to write a book. I'd never considered that and didn't really think it was up my alley.  Based on what I understand of the economics of it I probably dodged a bullet as game books don't usually make any money.  A lot of publishers will pay extremely low royalties like 10% on books.  Bottom line, I was a better programmer than author.   I would probably be more into that today but why not just blog if I have something to say?

Anyway, I eventually got my paperwork and headed for Seattle. Aftershock for Quake came out and I released the editor to the world. I also licensed it to a bunch of companies, including Sierra On-line and Eclipse Entertainment. It also got me an interview at Raven, where they did offer me a job that I declined. Seattle and TA were waiting, and were a lot more exciting.

All of the licensing was my first real exposure to these kinds of contracts.  We had done a shareware publishing deal with Epic before that, but that's pretty much it.  I left a lot of money on the table back then because I was a horrible negotiator.  I had one of the licensees tell me recently that he probably would have paid me double the amount if I had bothered to hold out at all.  This is called paying your dues or stupid tax in Dave Ramsey speak. 

Also, if you feel like I'm focusing on the financials a lot here, it's because I am. My family wasn't very well off and I had a lot of things I wanted that I was working hard for. Money was definitely a way that I measured my progress towards my goals. As Zig Ziglar says, success is the progressive realization of a worthwhile goal. I don't know if a Mustang Cobra was a worthwhile goal or not, but by October of 1996 I had a brand new black Cobra. That was, by far, the most exciting material thing I've ever bought. I had been lusting after 5.0 Mustangs for years before I even got my license. Next I needed to get a WA state driver's license, but that's another story. Bottom line, Thred is the program that for the first time in my life put me in a financial position that wasn't shit.

Thred is also how I met Gabe Newell for the first time. Not long after Valve was founded I went over there to talk to them and see what they were about. They had already hired Yahn Bernier, who wrote the editor BSP, as well as Ben Morris, who wrote WorldCraft, so they didn't need another editor. We didn't really talk jobs because I was happy working at Cavedog. Our office at Uber today is just down the road from the building where Valve started. Valve today blows my mind, btw, but... that's another story.

Anyway, I think the fan community had a lot higher expectations than I had. To me, Thred was just a tool to learn what I needed to know about 3D editing. I think the community expected me to make a career of expanding and supporting Thred. In reality I was busy writing new code for my employer and trying to actually create a top-level game. Jim really took most of the flak for this; sorry about that, Jim.

The fact that the first title I got to work on after Radix was Total Annihilation is just an incredible bit of luck. I did pass up other opportunities, including more Thred work and even working at Raven, because Chris had utterly convinced me that his vision was the next big thing. I was a hardcore C&C fan and I wasn't going to lose the opportunity to work on what I thought was going to be a brilliant game.

Eventually the thought process behind Thred would turn into Eden, which was the Cavedog editor for the Amen Engine. But that's another story...


Playoff Hockey

Well the Senators are officially out again and this may be Alfie's last game.  Oh well, I guess this means I don't have to watch any more hockey and can get back to concentrating on work.

Wednesday, April 25, 2012

Dependency Injection

Does anyone else really enjoy writing code using straight-up injection? In some of the work I've been doing recently I've completely avoided global variables and inject every bit of state. It naturally helps multithreading, but it has a lot of other nice things going for it as well. Global variables are just poison. Sometime I'll go into some detail on the advantages that the MB engine has because of it.
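
If you've never worked this way, here's the basic shape of it (hypothetical systems, obviously):

```cpp
#include <cstdio>

struct Clock { double Now() const { return 0.016; } };
struct Log   { void Info(const char* msg) { printf("%s\n", msg); } };

// The system declares its dependencies in its constructor instead of
// reaching out to global singletons.
class Simulation {
public:
    Simulation(Clock& clock, Log& log) : clock_(clock), log_(log) {}
    void Tick() {
        double dt = clock_.Now();
        (void)dt; // advance the world by dt...
        log_.Info("ticked");
    }
private:
    Clock& clock_;
    Log&   log_;
};

int main() {
    Clock clock;
    Log log;
    Simulation sim(clock, log); // all state wired up explicitly
    sim.Tick();
}
```

Nothing reaches out to a global: tests can pass in fakes, and two threads can each own a completely independent Simulation.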


Total Annihilation Graphics Engine

For a long time I've wanted to spend some time writing down my recollections of what I did on the TA graphics engine. It was a weird time, just before hardware acceleration showed up. Early hardware acceleration had pretty insane driver overhead. For example, the first Glide API did triangle setup in software because the hardware didn't have it yet. Accelerated transform was out of the question. Anyway, none of this was really a factor because that stuff was just showing up while we were working on TA and we couldn't have sold any games on it.

Anyway I met Chris at GDC in 1996 and he fairly quickly offered me a job working on the game.  I had just wrapped up work on Radix a few months before and was looking for something new since most of the Radix guys were going back to school.

So I went back to Ottawa and while I waited for visa paperwork to move to the states I ended up writing Thred which became a whole other story that I'll talk about some other time.  Once the visa paperwork came through I moved to Seattle at the end of July 1996 just in time for Seafair.

Monday morning rolls around and I start meeting my new co-workers and getting the vibe. I got a brand new smoking hot Pentium 166MHz right out of a Dell box. Upgraded to 32MB of RAM even! That was the first time I ever saw a DIMM, incidentally. We all ooo'd and ahhh'd over this amazing new DIMM technology. I was super excited to be there and actually getting paid too!

I had already done a bit of work remotely, so I had a little bit of an idea about the code but I hadn't seen the whole picture. The engine was primarily written in C using mostly fixed-point math. At that point using floats wasn't really done, but it made sense to start using them. So we did. This means we ended up with an engine that was a blend of fixed point and floating point, including a decent amount of floating-point asm code. Ugh. Jeff and I both tried to rip out the fixed-point stuff but it was ingrained too deep. Oh well.

So my primary challenge on the rendering side was to increase the performance of the unit rendering, improve image quality and add new features to support the game play.

The engine was also very limited graphically because it used an 8-bit palette. This meant I had to use a lot of lookup tables to do things like simple alpha blending. Even simple Gouraud shading would require a lookup table for the light intensity value. Nasty compared to what we do today. The artists did come up with a versatile palette for the game, but 256 colors is still 256 colors at the end of the day.
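
To give you an idea of what "blending" even means in a palettized world, here's roughly how one of those tables gets built and used. It's a sketch with invented names, but the table trick itself is the standard one:

```cpp
#include <cstdint>

struct Color { uint8_t r, g, b; };
Color palette[256];                    // filled from the game's palette at load time
static uint8_t blendTable[256][256];

// brute-force nearest-color search, done once at table-build time
uint8_t ClosestPaletteIndex(int r, int g, int b) {
    int best = 0, bestDist = 1 << 30;
    for (int i = 0; i < 256; ++i) {
        int dr = palette[i].r - r, dg = palette[i].g - g, db = palette[i].b - b;
        int dist = dr * dr + dg * dg + db * db;
        if (dist < bestDist) { bestDist = dist; best = i; }
    }
    return (uint8_t)best;
}

// build a (src, dst) -> blended-index table for one blend factor
void BuildBlendTable(float alpha) {
    for (int s = 0; s < 256; ++s)
        for (int d = 0; d < 256; ++d)
            blendTable[s][d] = ClosestPaletteIndex(
                (int)(palette[s].r * alpha + palette[d].r * (1 - alpha)),
                (int)(palette[s].g * alpha + palette[d].g * (1 - alpha)),
                (int)(palette[s].b * alpha + palette[d].b * (1 - alpha)));
}

// at blit time, "alpha blending" collapses to one lookup per pixel
inline uint8_t BlendPixel(uint8_t src, uint8_t dst) {
    return blendTable[src][dst];
}
```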

Getting all of the units to render as real 3D objects was slow. Basically all of the units and buildings were 3D models. Everything else was either a tiled terrain backdrop or what we called a feature, which was just an animated sprite (e.g. trees).

So there were a few obvious things to do to make this faster. One of them was to somehow cache the 3D units and turn them into sprites which could be rendered a lot more quickly. For a normal unit like a tank we would cache off a bitmap that contained the image of the tank rendered at the correct orientation (we call this an imposter today; look up Talisman). There was a caching system with a pool that we could ask to give us the bitmap. It could de-allocate to make room in the cache using a simple round-robin scheme. The more memory your machine had, the bigger the cache was, up to some limit. We would store off the orientation of that image and then simply blt it to the screen to draw the tank. If a tank was driving across flat terrain at the same angle we could just move the bitmap around, because we used an orthographic projection. Units sitting on the ground doing nothing were effectively turned into bitmaps. Wreckage too.
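
A modern sketch of the same idea might look like this. Note I'm using a hash map for brevity where the real thing was a fixed pool with round-robin eviction:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct CachedSprite {
    std::vector<uint8_t> pixels;   // 8-bit palettized image of the unit
    int w = 0, h = 0;
};

class ImposterCache {
public:
    // Quantizing yaw means nearby angles share a sprite, and every unit
    // of the same type at the same angle shares it too.
    const CachedSprite& Get(int unitType, float yawRadians) {
        int yaw = (int)(yawRadians * (64.0f / 6.283185f)) & 63;  // 64 steps
        uint64_t key = ((uint64_t)unitType << 32) | (uint32_t)yaw;
        auto it = cache_.find(key);
        if (it == cache_.end()) {
            CachedSprite s;
            RenderUnitToSprite(unitType, yaw, s);   // the slow 3D path
            it = cache_.emplace(key, std::move(s)).first;
        }
        return it->second;
    }

private:
    // stand-in for the real renderer; fills the sprite from the 3D model
    static void RenderUnitToSprite(int unitType, int yaw, CachedSprite& out) {
        (void)unitType; (void)yaw;
        out.w = out.h = 64;
        out.pixels.assign(out.w * out.h, 0);
    }
    std::unordered_map<uint64_t, CachedSprite> cache_;
};
```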

There was another wrinkle here: the actual units were made from polygons that had to be sorted. But sometimes the animators would move the polys through each other, which caused weird popping, so a static sort was no good. In addition, it didn't handle intersection at all. So I decided to double the size of the bitmap that I used and Z-buffer the unit (in 8 bits) only against itself. So it was still turned into a bitmap, but at least the unit itself could intersect, animate, etc. without having to worry about it. I think at the time this was the correct decision, and actually having a full-screen Z-buffer for the game probably also would have been the correct decision (instead we rendered in layers).

Now all of this sounds great, but there were other issues. For example, a lot of units moving on the screen at the same time could still bring the machine to its knees. I could limit this to some extent by limiting the number of units that got updated any given frame. For example, rotation could be snapped more coarsely, which means not every unit has to get re-rendered every frame. Of course, units of the same type with the same transform could just use the same sprite. Even with everything I could come up with at the time, you could still worst-case it and kill performance. Sorry! I was given a task that was pretty hard and I did my best.

Once I had all the moving units going I realized I had a problem. The animators wanted the buildings to animate, with spinny things and other objects that moved every frame! The buildings were some of the most expensive units to render because of their size and complexity. By animating even one part, they were flushing the cache every frame and killing performance. So I came up with another idea. I split the buildings into animating and non-animating parts. I pre-rendered the non-animating parts into a buffer and kept around the Z-buffer. Then each frame I rendered just the animating parts against that base texture using the Z-buffer, and used the result for the screen. In retrospect I could have sped this up by doing this part on the screen itself, but there were some logistical issues due to other optimizations.

After I had the buildings split out, the animating stuff split out, the Z-buffering and the caching, I still had a few more things I needed to do. I haven't talked about shadows at all. Unit shadows and building shadows were handled differently. Unit shadows simply took the cached texture and rendered it offset from the unit with a special shader (shader, haha, it was a special blt routine really) that used a darkening palette lookup. E.g. if there was anything at that texel, render shadow there, like an alpha-test type deal. This gave me some extra bang for the buck from the caching because I had another great use for that texture, and I think the shadows hold up well.

Not all was well in shadow land when it came to buildings though. Due to their tall spires and general complexity I decided to go ahead and properly project the shadows. This ended up significantly increasing the footprint of the buildings, and the fill rate started to become suboptimal because a single building could really take up a lot of the screen. Render the shadow (which overlaps the building and a lot more), then render the building itself on top, and you are just wasting bandwidth. So the next step was to render the projected shadow, render the building (both into the cache), then cut out the shape of the building from the shadow and RLE-encode the shadow, since it's all the same intensity. Now rendering consisted of rendering the shadow (no longer overlapping, and faster because it's a few RLE spans) and then rendering the building. Ahhh... way faster.
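
The span rendering boils down to something like this (a sketch; the real code was a hand-tuned blt routine, and the darkening table is the same trick as the unit shadows):

```cpp
#include <cstdint>
#include <vector>

struct ShadowSpan { uint16_t skip, run; };   // transparent pixels, then shadow pixels
struct ShadowRLE  { int w = 0, h = 0; std::vector<std::vector<ShadowSpan>> rows; };

uint8_t darkenTable[256];   // built from the palette: maps each color to a darker one

void BltShadow(const ShadowRLE& shadow, uint8_t* screen, int pitch,
               int destX, int destY) {
    for (int y = 0; y < shadow.h; ++y) {
        uint8_t* dst = screen + (destY + y) * pitch + destX;
        for (const ShadowSpan& s : shadow.rows[y]) {
            dst += s.skip;                       // hop over the cut-out / empty area
            for (int i = 0; i < s.run; ++i, ++dst)
                *dst = darkenTable[*dst];        // darken whatever is already there
        }
    }
}
```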

Now the whole way that TA did texture mapping was just screwed.  Frankly we had no idea what we were doing.  Jeff knew it was fucked but it was just so already built that it wasn't changeable in the time we had anyway.  I could do a 100x better job at this today 16 years later.

So we had some pretty serious image quality issues, mostly related to aliasing, especially of textures (there were no UVs; each quad had a texture with a specific rotation stretched onto it). So the one thing I did that I think worked well was anti-alias the buildings. Basically, for the non-animating part of the building I would allocate a buffer that was double the size in each dimension. I rendered the building at this larger size and then anti-aliased that into the final cache. So the AA only happened once, when it got cached, which means I could spend some cycles on it. This only applied to buildings.

Now, doing AA in 8 bits is going to require some sort of lookup table. Since I had 4 pixels that I wanted to shrink down to 1 pixel, I came up with a simple solution. It's very similar to what we use for bloom-type stuff today, which is simply separating the vertical and horizontal elements. The lookup table took two 8-bit values and returned a single 8-bit value that represented the closest color in the palette to the average of the two colors. No, I didn't take gamma into account correctly, or much else to be honest. Anyway, I would simply do a lookup on the top two pixels and the bottom two pixels. The results from those two ops were then looked up to give me the final color, so 3 lookups total. It drastically improved the look of the buildings.
Except I fucked up and left a bug in there. Ever notice that a lot of the buildings have a weird purple halo? Basically, the table broke when dealing with edges and transparency because I didn't have a correct way to represent that in the lookup. Then I ran out of time; I think I could have fixed it.
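
Here's the downsample in sketch form. You can even see where the halo bug hides: the table has no way to represent a transparent input, so edge pixels get averaged as if the transparent index were a real color:

```cpp
#include <cstdint>

// built offline with the same nearest-color search as any palette table:
// avgTable[a][b] = palette index closest to the average of colors a and b
uint8_t avgTable[256][256];

// shrink a double-resolution render down to the cached size, averaging
// each 2x2 block with three table lookups (top pair, bottom pair, combine)
void Downsample2x(const uint8_t* src, int srcPitch,
                  uint8_t* dst, int dstPitch, int dstW, int dstH) {
    for (int y = 0; y < dstH; ++y) {
        const uint8_t* s0 = src + (y * 2) * srcPitch;   // top row of the 2x2 block
        const uint8_t* s1 = s0 + srcPitch;              // bottom row
        uint8_t* d = dst + y * dstPitch;
        for (int x = 0; x < dstW; ++x) {
            uint8_t top    = avgTable[s0[x * 2]][s0[x * 2 + 1]];
            uint8_t bottom = avgTable[s1[x * 2]][s1[x * 2 + 1]];
            d[x] = avgTable[top][bottom];   // no notion of transparency here,
                                            // which is exactly the halo bug
        }
    }
}
```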

Anyway I wrote some particle system stuff, lighting effect stuff and some other cool effects that didn't get used (psionics!).  But the unit rendering was by far the most complicated part of the whole renderer and it's what I ended up spending the most time on.

BTW, I still think TA was an amazing game and I'm still interested in pushing that kind of game more in the future. It seems like every time I do an RTS, a few years later I'm ready to take another shot at it (SupCom was the last one; I'll do some posts about its engine sometime too).

Long enough?

Running an independent game company

Running any company can be challenging. We've completely transformed from a product-based company to a service-based company over the last year. Everything we do going forward is wrapped around either providing services to end users (Super Monday Night Combat) or supplying services to other developers (UberNet).

That's been very challenging, but it's finally starting to pay off. The stress of dealing with cash flow management, getting the services launched, etc. has been pretty high for the last year. We had some opportunities that would have made life easier in the short term, but we chose a tougher route that maintains our independence and is better in the long term.

In any company there is always a tension between short- and long-term goals. How much do you invest in the future, and how does that affect what you do today? Oftentimes there is no simple answer and you have to make choices based on your experience, goals, temperament and the current situation.


Evolution Simulator

One of the things I've been interested in for a long time is trying to simulate evolution in various forms.  There have been quite a few attempts at this over the years and some quite interesting things have been created.

Anyway, the problem I've been trying to wrap my head around is what the correct simulation substrate is. I feel like having a very flexible substrate that can do "anything" is really important if you want enough room to do something really great.

For example, real evolution uses real physics with atoms, molecules, etc. Very complex structures can be created through the magic of protein folding, which is simply a physical side effect of the nature of the protein molecules. What's the equivalent machinery that we could simulate?

A straight-up copy of real physics would do the trick, but that's a fairly tricky proposition. Ultimately I think whatever the simulation is built on has to be flexible and expose the underlying system itself to modification. So I've been thinking along the lines of (very roughly) something like a Core Wars type environment where the organisms can interact. I'm starting to think, though, that there does need to be some sort of underlying "physics" to make all of this work, and the "programming" of the organisms has to itself be represented in this physical environment.

That got me thinking about trying something at the level of cells. Probably not the same as human cells, but basically each cell would contain a physical world representation, a blob of "code" inside it and a set of "inputs". I would expect there to be things like programmed cell death, cell development, etc. In the simulation there would be ways for cells to stick to other cells (if they are amenable), communicate, develop, work together, spawn new cells (assuming they have the resources to do so), etc.

Each cell's representation in the world is basically a physics object. Constraints would allow them to stick together. Motors could use resources to provide locomotion, etc. The shape could be defined using code inside the cell. Possibly the shape is just a convex hull around a defined point cloud, and the cloud can itself animate and change the shape on the fly. Or maybe it's just polygon soup. This would enable some interesting stuff.
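
Very roughly, I picture a cell as something like this (pure speculation, just to pin the idea down):

```cpp
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

struct Cell {
    // physical representation: a convex hull around a point cloud that
    // the cell's code can reshape over time
    std::vector<Vec2> hullPoints;
    Vec2  position{0, 0}, velocity{0, 0};
    float energy = 1.0f;            // resource budget for actions

    std::vector<uint8_t> genome;    // the "code" blob, copied on spawn
    std::vector<float>   inputs;    // senses: contact, chemical signals...

    // hypothetical per-tick actions the genome's program can trigger
    bool wantsToDivide = false;     // spawn a mutated copy if energy allows
    bool wantsToBond   = false;     // form a constraint with a neighbor
};
```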



Reproduction wouldn't be too strictly defined at the high level. Cells can communicate and copy code with each other and spawn new cells when they have the resources available to do so. I'm not sure what the resources are; possibly we fill the environment with "empty" cells that can simply be attacked or absorbed. We'd possibly need to define how matter transforms into energy and vice versa. Maybe connecting cells takes energy and breaking them apart releases it, or something.

Anyway I would love to have some time to tinker with this kind of stuff.  Instead I'm just writing it down as running a software company tends to suck up your time.



Seattle Drivers

No rant is complete without mentioning the utter absurdity of driving in Seattle.

Brains.... rhymes with left laaanes, and the zombies just sit there. It's the passing lane, not the go-immediately-there-and-block-traffic lane. Europeans would be appalled at the complete lack of lane discipline here.

If you drive down I-5 south from Seattle you can literally see the traffic clear as you get to California.  I'm not saying Cali drivers are perfect but at least the concept of a passing lane exists somewhat.

Tuesday, April 24, 2012

Asteroid Mining

So a bunch of guys have announced they are working on the goal of asteroid mining.  I've been saying for a long time that the first trillionaire will be created through commercial exploitation of space.  For a while I've been thinking it's going to be Elon Musk, but there are other contenders for sure.

The guys backing this (http://www.planetaryresources.com/) are already super rich. This is exactly the kind of long-term thinking that's needed to make this a reality.

You can do the research, but the asteroid belt is filled with very expensive metals, water and other things that have tremendous value. One smallish asteroid can hold more platinum than all of the platinum ever mined on Earth in the history of mankind. There is an entire planet's worth of material out there that has been crushed up and sprinkled around. We can only mine a small fraction of the Earth itself because we just can't get at the interior. Imagine that level of resources made available to us.

Oh, and by the way, space-based solar power built with these materials is IMHO completely viable and a great long-term strategy for powering Earth. In my vision of the future we move as much dirty manufacturing as we can off Earth and try to make it a nice liveable garden planet.

Bottom line, long-term thinking like this is needed to solve our true long-term sustainability problems.


Spezza

I don't get Spezza.  When the guy shoots the puck he scores goals.  But I see him dancing around with the puck and passing it off at weird times.  Just shoot the puck and get some rebounds man!

Kickstarter

Kickstarter, we need to talk about it.

First off I think it's a great idea.  Let the customers decide what to fund.  It just makes sense.

That being said, there are some serious downsides. One of which is that customers generally don't have any idea what this stuff actually costs. Someone unscrupulous could game Kickstarter without delivering a product. In fact I expect a lot of these projects to blow up, flame out or simply be disappointing garbage.

In most cases I think the originators of the projects have good intentions. But if you haven't shipped a game before, in a role that gave you financial insight into the project, then you are simply going off half-cocked. I expect a tower of burning project corpses over the next 18 months or so.

Where does Kickstarter's responsibility to vet these projects begin and end?

Random today stuff

So my talk at Amazon today went pretty well. It was a decent crowd and I pretty much just winged it, using my PowerPoint deck as a way to remind me what I wanted to say. I managed to get it all out in the allotted time.

I spent time today working on a lot of legal junk like EULAs, contracts and other administrative stuff like that.

Speaking of which, our lawyer Bill Carleton is freakin' amazing. He was recommended to us at the beginning of Uber and I couldn't be happier. http://www.wac6.com/

BTW, that's not to say there aren't plenty of other great attorneys around who know how to deal with the games business. I really like Tom Buscaglia as well. The problem with Tom is that he's such a good friend that I have a hard time separating out our relationship. Tom is a go-to guy for smaller startups who want to do game-specific deals with people like MS, Steam and a whole lot more. http://www.gameattorney.com/blog/

Interestingly, I actually met Tom before he got into the industry. Around E3 1998 he was taking his first steps into the industry and we were getting rides over to the convention center in his awesome town car rental (with air conditioning, which is nice to have in Atlanta).

So I'm just writing some notes here on some topics I would like to cover in the future.

Definitely a lot of brain dumps on old stuff I've worked on.  Radix, Thred, Total Annihilation, SupCom, MNC, SMNC, some realtime raytracing stuff and UberNet.  I think just writing down my thoughts on previous projects will probably spark some new ideas.

And then future research as well.  More on augmented reality.  Some ideas I have for next-gen procedural pipelines. Maybe some thoughts on some game designs I've got brewing in my head (an RTS for one).  Any techniques I stumble across that I care enough to talk about.  Cars?  Who knows.  It's my fucking blog and I want it now! 


Monday, April 23, 2012

stuff

For some reason I've felt like writing about various topics lately. I think it's mainly because I haven't had time to explore other ideas lately. I've never been one for posting a lot of stuff publicly, so this is kind of an experiment. We'll see if it continues. Certainly don't expect anything here regularly. But when I feel like writing about something I find interesting... well, then I will.

So tomorrow I'm going to give a lunch talk at Amazon. They didn't really give me a topic, so I'm going to talk about what I want, which is Uber, our games, and UberNet, our F2P back-end platform. Generally speaking I don't do a lot of public speaking; not for any particular reason, as I'm quite comfortable in front of a crowd.

I think lately I'm more interested in it because I feel like I'm doing interesting things and thinking about interesting things that need to be talked about more. I was also inspired by John Stevenson's talk last year at DICE. He swore like a motherfucker and got away with it because of how he worked the crowd. I realized I don't need to be uptight and should let most of my real personality shine through.

I also figure that if I give enough talks and people aren't entertained, the whole thing will sort itself out somehow in the end. Think of it as a market response.

a reddit rant

Reddit post I wrote in a very tired fugue state after shipping SMNC.  I've been inspired to think about this stuff by the Google and Valve projects.

http://www.reddit.com/r/programming/comments/sjgm5/michael_abrash_valve_how_i_got_here_what_its_like/c4eud6a

Ok let me take a shot at it.

Warning: extremely long week, very tired, just launched our game this week. I may not be writing coherently at the moment.
First off, there are definitely technical problems, as you've pointed out.
I would say the most difficult thing to nail is going to be the display. As you've pointed out, eye strain can be a problem, for example. You also need to be compatible with glasses and probably contacts. There are interesting pieces of tech that can do a good job of this. Currently these are mostly military in nature, but there is a company close to Valve that has been working on this for years. Check out Microvision, for example. They make laser-based displays, including pico projectors, that are quite sweet. This company has been languishing lately and Valve could buy them for <$100M. In the medium term this is a solved problem. One of the really nice things about these laser-based displays is the low latency, which makes high refresh rates possible and is important for my next point.
The next technical issue is tracking your head movement quickly. If you want to render anything "solid" into the world in 3D you need to be able to keep up with head movement very quickly. Otherwise you tend to make people sick. This is mostly what killed this tech during the 90's. The key to solving this is low-latency algorithms paying attention to the environment using the cameras, as well as the tilt sensor tech that something like an iPhone uses. We need more work on the correct algorithms to really make the display solid as you look around and have things anchored correctly. If we can get the latency down to a couple of ms we'll be fine (say 250Hz or better) based on my research.
Now, once we have glasses with a low-latency "solid" display that can track the world properly, we can start doing really interesting things. With multiple cameras and other tricks we can get a depth buffer for the world and composite our own images in 3D.
The Google glass demo was dumb as you said. They didn't have any imagination and are basically showing off a phone in the corner of your vision. Bah, lame.
So here are a few ideas I think would be cool. Early on it's going to be difficult to "read" the world and figure out what's going on so you can do intelligent stuff. When I think of something with a regular pattern the first thing that springs to mind is instruments.
Guitars have a very regular fretboard that we should be able to track. Imagine wearing the glasses and having the guitar teach you chords by overlaying onto your vision. Imagine a game like Rocksmith that you play with the glasses on that can teach you songs, play along, etc. You could even do a guitar tutor that takes you from beginner all the way to advanced. Want to always be reminded of which notes are where? Want a built-in tuner that hangs in your vision? Want to see a display breaking down the details of your sound in the corner of the room? Want to have notes hover in the air while playing? Want virtual amp controls? Want to be standing on a stage in front of a huge audience? Want to share the stage with your favorite band? Tell me that's doable with a phone.
Now think about pianos and other instruments. Anything with regular patterns should be fairly easy to get right. Hell, the piano doesn't move; it would be even easier than the guitar. Picture all the notes labeled. Picture it showing you how to play songs, etc. You get the picture.
Ok, on to other things. Gaming is really obvious. RTS games that take place in a shared virtual tabletop space. Everyone wears glasses and sees the same "board". You could have spectators using a webcam, or share your feed with the net. You could have players with their own boards in other cities. Just imagine an overlay for something like Settlers that keeps track of what's going on, has animations, etc. Family game night from across the continent. Board games alone, translated and made into interactive experiences, would be huge.
I could go on and on about the game possibilities. Real-life laser tag with your friends. It could have explosions, damage, health bars above people, whatever you want. Basically, bring games into the real world. Location-based games could do crazy stuff with this tech. For example, you could play a country-wide deathmatch-style game that was opt-in. Whenever you saw an enemy player in public, you'd engage. Virtual gangs coming to a hood near you.
There is also the obvious avatar spec type stuff. Everyone agrees on an avatar spec system and, if you so choose, you can squawk what you are supposed to look like. Permanent cosplay without a need to wear real clothes. I would expect multiple groups would co-exist, so you might look different depending on which group someone is looking at. Or they could override it, like you can with a ringtone today.
How about commercial applications? I can think of a ridiculous number of applications in that space. The military uses this stuff for a reason. Plenty of work could benefit from a heads-up display, including IMHO coding. We can always use more monitor space.
All of the obvious world overlay stuff Google showed. Basically knowing anything Google knows about an object just by paying attention to it. Can you imagine getting used to that level of information flow?
Watch movies anywhere on a "big" screen. In 3D.
How about a full streamed recording of your life bonded and encrypted in the cloud? Want to know what you did on a particular day? control-shift-n to blank it out for a bit ;)
I also think that the form factor of glasses makes more sense than a little black box with a tiny screen. My prediction is that within a decade this technology replaces current cell phones in a ubiquitous way. It's the next generation of smartphone, and once it's worked out you'll want one the same as everyone else. The future is going to be awesome. Now I just want a flying car.