Okay, I’m new around here. I don’t know who develops Blender, or when, or how long it takes to make a change, or how deeply grounded in a background of raytracing they are. But whoever these angels of awesome are, who make Blender, I would like to offer a humble suggestion for improving the lighting engine.
Radiosity Solution = BAD. Too many polygons.
Traditional lowpoly video game shadow mapping (aka lightmapping) = GOOD!
What you want to do is have two layers of textures: a colormap that’s just a regular opaque texture, and then a shadowmap layered over it that uses a multiply filter. The shadowmap should be a cube UV map applied to every surface in your game that will receive static shadows. This entire process can be automated the same way the radiosity solution is done now. Only, instead of subdividing the actual mesh and applying brightness data to vertices (which results in thousands of coplanar faces for no apparent reason other than that vertex lighting is the only thing we know how to DO around here), you apply the brightness data to PIXELS in a BITMAP that is then cube UV mapped over your entire level.
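For the curious, here’s what that multiply filter actually computes, as a minimal offline sketch in Python using the Pillow imaging library. (At runtime the video card would do this per-pixel, per-frame; the file names here are made up, and both images must be the same size.)

```python
from PIL import Image, ImageChops

# Hypothetical file names; both images must be the same size and mode.
color = Image.open("brick_color.png").convert("RGB")      # the tiled colormap
shadow = Image.open("room_shadowmap.png").convert("RGB")  # baked light/shadow data

# Multiply filter: out = color * shadow / 255. Pure white in the shadowmap
# leaves the colormap untouched; darker pixels darken it.
lit = ImageChops.multiply(color, shadow)
lit.save("lit_wall_preview.png")
```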
The end result is, even if you have a six-sided room with all kinds of highlights and shadows and spotlights shining on its walls, it’s still only displaying twelve surfaces at runtime: six walls with your nice little brick texture tiled across them, and six UV-mapped surfaces with shadows and highlights. Most modern graphics cards even handle blending the two textures together in hardware. You’re left with far fewer polygons to draw at runtime, which means more polygons that can be devoted to monsters, explosions, particles, and whatever else you need to draw to the screen.
Now, as near as I can tell (and my understanding of the details is pretty hazy, I’ll admit), Blender currently has no built-in ability to blend two textures across a single UV surface in the 3D game engine. If I’m wrong, somebody please explain to me how to do it, as blending two textures is certainly the key to a true realtime lighting solution.
These are my recommendations for future iterations of the Blender Game Engine. I may be wrong and something I’ve listed here may already exist within Blender, but this is my best guess based on my current (barely working) understanding of Blender and its game engine. Also, I realize that these are pie-in-the-sky ramblings that probably no Blender developers have time to work on, but maybe someday somebody will read this and know just how to do it. In any event, it couldn’t hurt to ask:
Separate the controls for the Game Engine from the controls for the raytracing side of Blender. This could probably be done very simply, with a little work. The hardest part would probably be finding someone who already knows all the features of Blender and can tell which ones are only applicable to the game engine. Just add a little button to the User Preferences panel that says “Game Mode.” When you click that button, all the controls that are only relevant to the raytracing engine get greyed out, or have a little marker next to them so the user knows they have no effect on what they’re trying to do. You wouldn’t have to DISABLE the commands in one mode vs. another, just designate them somehow, so it’s easier to pick up. Likewise, any game commands that are never used by the raytracer (i.e. game logic) could be greyed out when not in Game Mode. This would simplify things greatly for Blenderheads only interested in making games, as well as set up a distinction that will probably be pretty handy during the next step, which is:
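To make the idea concrete, here’s a toy Python sketch of the tagging scheme. None of these control names or functions are real Blender code; it’s just the concept of marking each control with the engine(s) it affects and de-emphasizing the rest:

```python
# Hypothetical control names; nothing here is an actual Blender API.
RAYTRACER, GAME = "raytracer", "game"

controls = {
    "Radiosity":  {RAYTRACER},
    "OSA":        {RAYTRACER},
    "Game Logic": {GAME},
    "UV Editor":  {RAYTRACER, GAME},
}

def greyed_out(mode):
    """Controls to visually de-emphasize (not disable) in the given mode."""
    return [name for name, engines in controls.items() if mode not in engines]

print(greyed_out(GAME))  # ['Radiosity', 'OSA']
```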
Implement materials for game textures. Now, don’t panic. These wouldn’t be vertex colors and bump maps and generated patterns and whatnot like the raytracing engine uses. A 3D game engine material is just a series of textures meant to be used in tandem, with simple filters such as add, multiply, semitransparency, and opacity. Now, personally I’d say get rid of vertex colors altogether and make everything textured, but I’m sure some people find vertex coloring useful, and they’re delightfully efficient to draw to the screen, so I guess there’s no need to be too hasty on that one. But still. DirectX can do all kinds of things with textures, and it can do the bulk of it on your computer’s 3D video card, too. It’s just a matter of teaching Blender to talk to DirectX.
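In data terms, such a material could be as simple as an ordered stack of texture layers. A hypothetical Python sketch, assuming nothing about Blender’s internals (the names and fields are illustrative, not an existing API):

```python
from dataclasses import dataclass

@dataclass
class TextureLayer:
    image: str       # path to the bitmap
    blend: str       # "opaque", "add", "multiply", or "alpha"
    uv_channel: int  # which UV set this layer reads its coordinates from

# A brick wall material: opaque colormap, with a multiplied lightmap on top.
brick_wall = [
    TextureLayer("brick.png", blend="opaque", uv_channel=0),
    TextureLayer("lightmap.png", blend="multiply", uv_channel=1),
]
```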
Devise a system that compiles lightmaps, rather than radiosity meshes, and applies them over the top of any existing textures, as a new texture layer. It might even be a good idea to set aside the top level of however many texture layers you allow per material to serve as the lightmap layer. The lightmap will end up being a huge bitmap (in JPEG or whatever format Blender feels like using), usually in the neighborhood of 1024x1024 or bigger. (If automatic visibility culling is available by this point, it might be a good idea to use smaller 512x512 lightmaps, one for each culling sector, rather than one big one for the entire game level.) Most of the lightmaps I’ve seen the nuts and bolts of (Blitz3D, Half-Life, gile[s]) ended up looking like a bunch of abstract 2D white and grey shapes cut out on a black background. Each shape is a low-res “cutout” in the general shape of a group of adjacent coplanar polygons. The lightmap compile process simply cube-maps the level (or for Blender, I guess it would be whatever meshes are selected to receive the lightmapping treatment), assigns each planar segment of the map its own tiny scrap of real estate on the lightmap texture, and then assigns lighting values to the different pixels on the lightmap. (Note that these lightmaps are NOT necessarily to scale with the RGB textures on the same walls. Usually the lightmaps are scaled much larger, for obvious reasons. A section of wall with a 256x256 brick texture on it might only have 16x16 pixels of light and shadow overlaying it. That’s fairly blocky and fuzzy, but it’s equivalent to the kind of results you’d see after about 100 steps of radiosity solving in Blender 2.36.)
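To illustrate that compile step, here’s a toy Python/Pillow sketch that packs a few hypothetical charts (groups of coplanar faces) into a black 1024x1024 atlas and fills each one with a flat grey. Real UV charting and real light transport are stood in for by a naive shelf packer and constant brightness values:

```python
from PIL import Image

ATLAS_SIZE = 1024

def pack_charts(sizes):
    """Naive shelf packing: returns an (x, y) offset for each (w, h) chart."""
    x = y = shelf_h = 0
    offsets = []
    for w, h in sizes:
        if x + w > ATLAS_SIZE:  # row is full, start a new shelf
            x, y = 0, y + shelf_h
            shelf_h = 0
        offsets.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return offsets

# Hypothetical charts: (width, height, flat brightness 0-255) per coplanar group.
charts = [(64, 64, 220), (128, 32, 90), (16, 16, 40)]

atlas = Image.new("RGB", (ATLAS_SIZE, ATLAS_SIZE))  # starts out all black
offsets = pack_charts([(w, h) for w, h, _ in charts])
for (w, h, value), (x, y) in zip(charts, offsets):
    # Fill the chart's scrap of atlas real estate with its grey value.
    atlas.paste((value, value, value), (x, y, x + w, y + h))
atlas.save("lightmap.png")
```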
Wow, that’s a mouthful.
Anyway, I’ve made a ton of assumptions here, not least of which is the assumption that somebody exists within the Blender community who would know how to program all this and actually cares to do so. I am also guilty of assuming that there isn’t already some sort of lighting solution like this in the Blender game engine, or at least a system in place for layering textures (which would make it possible to import B3D maps lit with gile[s], BSPs, etc., or possibly even to create a lightmapping script using Python).
I hope I haven’t offended or annoyed anybody TOO much with my incessant prattling on about lightmapping. It’s just that unlit Blender games don’t look so hot, and the radiosity solution, while pretty, is not feasible for anything larger or more epic than a proof-of-concept. (And even then you’d probably need to demonstrate it on a computer with WAY higher specs than your target user’s machine.)