How do I make halfway-decent game engine lighting?

[Note: This first post has been completely rewritten as my understanding of Blender 2.36 has grown. Every time I learn more, I’ll update this first post, so people reading this thread and trying to learn from my mistakes won’t have to hunt all over the place for valuable info.]

Apparently, HL1-style lighting in the Blender engine can be accomplished using the following process:

Run Blender’s Radiosity Solution
http://download.blender.org/documentation/oldsite/oldsite.blender3d.org/122_Blender%20tutorial%20Radiosity.html

Bake vertex data into the decimated, nicely lit mesh
http://members.lycos.co.uk/legeis/downloads/z3d_0texbaker.py

Transfer UV map from highpoly model to lowpoly model
http://members.lycos.co.uk/legeis/downloads/uVEditor.py

These three steps will result in a map that has white walls and nice shadows cast upon it, rendered as a single UV map that wraps around your entire level. I’m not sure yet, but it’s starting to sound like the only way to combine this with RGB colormaps is to do it externally in a paint program, which of course means you can say adios to concepts like tiled textures.

If I’m understanding the Blender experts in this thread correctly, they’re telling me that I can have tiled wall and floor textures, OR I can have nicely shaded lighting on my walls, but I can not have both at the same time.

[Please reply to this thread if you have anything to add! Especially Blender Game Engine lighting links!]

You can fake light with textures (you have to draw all the shadows yourself).
You can make objects sensitive to light (press the “Light” button in face mode).

You can also use radiosity.
Try searching:
https://blenderartists.org/forum/viewtopic.php?t=19241

Okay, I’m new around here. I don’t know who develops Blender, or when, or how long it takes to make a change, or how deeply grounded in a background of raytracing they are. But whoever these angels of awesome are, who make Blender, I would like to offer a humble suggestion for improving the lighting engine.

Radiosity Solution = BAD. Too many polygons.

Traditional lowpoly video game shadow mapping (aka lightmapping) = GOOD!

What you want to do is have two layers of textures. A colormap that’s just a regular opaque texture, and then a shadowmap layered over it that uses a multiply filter. The shadowmap should be a cube UV map applied to every surface in your game that will receive static shadows. This entire process can be automated like the radiosity solution is done now. Only, instead of subdividing the actual mesh and applying brightness data to vertices (which results in thousands of coplanar faces for no apparent reason other than that vertex lighting is the only thing we know how to DO around here), you apply the brightness data to PIXELS in the BITMAP that is then cube UV mapped over your entire level.

The end result is, even if you have a six-sided room that has all kinds of highlights and shadows and spotlights shining on its walls, it’s still only displaying 12 surfaces at runtime: six walls with your nice little brick texture tiled across them, and six UV-mapped surfaces with shadows and highlights. Most modern graphics cards even optimize the process of blending the two textures together. You’re left with vastly fewer polygons to draw at runtime, which means more polygons that can be devoted to monsters, explosions, particles, and whatever else you need to draw to the screen.
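[To make the “multiply filter” idea concrete, here is a minimal PyOpenGL sketch of the classic fixed-function two-texture-unit setup I mean. It assumes two already-loaded GL texture ids; this is plain OpenGL, not anything the Blender game engine currently exposes.]

    # Minimal sketch of fixed-function lightmapping with two texture units.
    # Assumes PyOpenGL and two already-created GL texture ids; plain OpenGL,
    # not any Blender API.
    from OpenGL.GL import (
        glActiveTexture, glBindTexture, glEnable, glTexEnvi,
        GL_TEXTURE0, GL_TEXTURE1, GL_TEXTURE_2D,
        GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE,
    )

    def bind_lightmapped_wall(colormap_id, lightmap_id):
        # Unit 0: the tiled RGB brick texture.
        glActiveTexture(GL_TEXTURE0)
        glEnable(GL_TEXTURE_2D)
        glBindTexture(GL_TEXTURE_2D, colormap_id)
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)

        # Unit 1: the UV-mapped lightmap; GL_MODULATE multiplies it over the
        # output of unit 0, darkening the bricks wherever shadow falls.
        glActiveTexture(GL_TEXTURE1)
        glEnable(GL_TEXTURE_2D)
        glBindTexture(GL_TEXTURE_2D, lightmap_id)
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)

[Each vertex then carries two sets of texture coordinates, e.g. via glMultiTexCoord2f: tiled coords for unit 0, unique lightmap coords for unit 1.]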

Now, as near as I can tell (and my understanding of the details is pretty hazy, I’ll admit), Blender currently has no built-in ability to blend two textures across a single UV surface in the 3D game engine. If I’m wrong, somebody please explain to me how to do it, as blending two textures is certainly the key to a true realtime lighting solution.

These are my recommendations for future iterations of the Blender Game Engine. I may be wrong and something I’ve listed here already exists within Blender, but this is my best guess based on my current (barely working) understanding of Blender and its game engine. Also, I realize that these are pie-in-the-sky ramblings that probably no Blender developers have time to work on, but maybe someday somebody will read this and know just how to do it. In any event, it couldn’t hurt to ask:

  1. Separate the controls for the Game Engine from the controls for the raytracing side of Blender. This can probably be done very simply, with a little work. The hardest part would probably be finding someone who already knows all the features of Blender and can tell which ones are only applicable to the game engine. Just add a little button to the User Preferences panel that says “Game Mode.” When you click that button, all the controls that are only relevant to the raytracing engine get greyed out, or have a little marker next to them so the user knows they have no effect on what they’re trying to do. You wouldn’t have to DISABLE the commands in one mode vs. another, just designate them somehow, so it’s easier to pick up. Likewise, if there are any game commands that are never used by the raytracer (i.e. game logic), those could be greyed out when not in Game Mode. This would simplify things greatly for Blenderheads only interested in making games, as well as set up a distinction which will probably be pretty handy during the next step, which is:

  2. Implement materials for game textures. Now, don’t panic. These wouldn’t be vertex colors and bump maps and generated patterns and whatnot like the raytracing engine uses. A 3D game engine material is just a series of textures meant to be used in tandem, with simple filters such as add, multiply, semitransparency, and opacity. Now, personally I’d say get rid of vertex colors altogether and make everything textured, but I’m sure some people find vertex coloring useful, and they’re delightfully efficient to draw to the screen, so I guess there’s no need to be too hasty on that one. But still. DirectX can do all kinds of things with textures, and it can do the bulk of it on your computer’s 3D video card, too. It’s just a matter of teaching Blender to talk to DirectX. (A rough sketch of what such a layered material might boil down to follows after this list.)

  3. Devise a system that compiles lightmaps, rather than radiosity meshes, and applies them over the top of any existing textures, as a new texture layer. It might even be a good idea to set aside the top level of however many possible texture layers you allow per material to serve as the lightmap layer. The lightmap will end up being a huge bitmap (in JPEG or whatever format Blender feels like using), usually in the neighborhood of 1024x1024 or bigger. (If automatic visibility culling is available by this point, it might be a good idea to use smaller 512x512 lightmaps, and just use a different one for each culling sector, rather than one big one for the entire game level.) Most of the lightmaps I’ve seen the nuts and bolts of (Blitz3D, Half-Life, gile[s]) ended up looking like a bunch of abstract 2D white and grey shapes cut out on a black background. Each shape is a low-res “cutout” in the general shape of a group of adjacent coplanar polygons. The lightmap compile process simply cubemaps the level (or for Blender, I guess it would be whatever meshes are selected to receive the lightmapping treatment), assigns each planar segment of map its own tiny scrap of real estate on the lightmap texture, and then assigns lighting values to the different pixels on the lightmap. (Note that these lightmaps are NOT necessarily to scale with the RGB textures on the same walls. Usually the lightmaps are scaled much larger, for obvious reasons. A section of wall with a 256x256 brick texture on it might only have 16x16 pixels of light and shadow overlaying it. That’s fairly blocky and fuzzy, but it’s equivalent to the kind of results you’d see after about 100 steps of radiosity solving in Blender 2.36.)
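[As promised in point 2, here is a rough sketch, in plain Python and not any real Blender API, of what such a layered game material might boil down to: just data, a stack of bitmaps, each with a blend mode and a UV channel.]

    # Purely illustrative data structure (not any real Blender API): what a
    # game-engine "material" as described in point 2 might boil down to.
    ADD, MULTIPLY, ALPHA, OPAQUE = 'add', 'multiply', 'alpha', 'opaque'

    class TextureLayer:
        def __init__(self, image_path, blend=OPAQUE, uv_channel=0):
            self.image_path = image_path   # bitmap to sample
            self.blend = blend             # how to combine with the layers below
            self.uv_channel = uv_channel   # 0 = tiled coords, 1 = unique lightmap coords

    class GameMaterial:
        def __init__(self, layers):
            self.layers = layers           # drawn bottom to top

    # A brick wall: tiled colormap underneath, baked shadows multiplied on top.
    brick_wall = GameMaterial([
        TextureLayer('brick256.png', blend=OPAQUE),
        TextureLayer('level_lightmap.png', blend=MULTIPLY, uv_channel=1),
    ])

[And to put numbers on the “not to scale” note in point 3, a tiny sketch of the luxel-density arithmetic; the one-light-pixel-per-world-unit density is an assumption, purely for illustration.]

    # Hypothetical luxel-density arithmetic for point 3: why a lightmap patch
    # can be far smaller than the colour texture tiled across the same wall.
    LUXELS_PER_UNIT = 1.0   # assumed: 1 light pixel per world unit

    def lightmap_patch_size(wall_width, wall_height, density=LUXELS_PER_UNIT):
        """Pixel size of the lightmap 'scrap of real estate' for one wall."""
        return (max(1, int(round(wall_width * density))),
                max(1, int(round(wall_height * density))))

    # A 16x16-unit wall gets a 16x16 lightmap patch, even though a 256x256
    # brick texture might tile across that same wall several times.
    print(lightmap_patch_size(16, 16))   # (16, 16)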

Wow, that’s a mouthful.

Anyway, I’ve made a ton of assumptions here, not least of which is the assumption that somebody exists within the Blender community who would know how to program all this and actually cares to do so. I am also guilty of assuming that there isn’t already some sort of lighting solution like this in the Blender game engine, or at least a system in place for layering textures (which would make it possible to import B3D maps lit with gile[s], BSPs, etc., or possibly even create a lightmapping script using Python).

I hope I haven’t offended or annoyed anybody TOO much with my incessant prattling on about lightmapping. It’s just that unlit Blender games don’t look so hot, and the radiosity solution, while pretty, is not feasible for anything larger or more epic than a proof-of-concept. (And even then you’d probably need to demonstrate it on a computer with WAY higher specs than your target user’s machine.)

Yeahh, you talk too much!!
Now, stop whining, go back to work and try to make a nice game/demo with
Blender or Doom3!
Your choice

Nope, it doesn’t have to be too many polygons. Try searching, and you will find an option to make it not subdivide the object (not make more polygons).

  1. Separate the controls for the Game Engine from the controls for the raytracing side of Blender

Agreed… from a usability POV this will improve things tenfold for intermediate users and will go a long way to making Blender more intuitive for the complete beginner.

I really think it’s about time this actually got done… who do I make the cheque payable to? :slight_smile:

NOR.J, could you please explain this method?

So the problem is that radiosity bloats the polygon count after calculation. The solution would be to copy the vertex colors to a less bloated mesh.

Here is what I found.

Yet I cannot find a “copy vertex colors” option when pressing Ctrl+C.

Was this not included? Any suggestions?
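[For what it’s worth, here is a rough, untested sketch of what copying vertex colours across meshes might look like with the Blender 2.3x NMesh Python API, matching by nearest vertex position. The object names “RadioMesh” and “LowPoly” are placeholders.]

    # Rough, untested sketch against the Blender 2.3x NMesh Python API:
    # copy baked radiosity vertex colours onto a lower-poly mesh by
    # nearest vertex position. Object names are placeholders.
    import Blender

    src = Blender.Object.Get('RadioMesh').getData()   # radiosity-solved mesh
    dst = Blender.Object.Get('LowPoly').getData()     # decimated target mesh
    dst.hasVertexColours(1)                           # ensure the target stores colours

    # Gather (location, colour) samples from every source face corner.
    samples = []
    for f in src.faces:
        for v, c in zip(f.v, f.col):
            samples.append((v.co, c))

    def nearest_colour(co):
        best, best_d = None, None
        for sco, scol in samples:
            d = (co[0]-sco[0])**2 + (co[1]-sco[1])**2 + (co[2]-sco[2])**2
            if best_d is None or d < best_d:
                best, best_d = scol, d
        return best

    # Stamp the nearest sampled colour onto each corner of the target faces.
    for f in dst.faces:
        for i, v in enumerate(f.v):
            c = nearest_colour(v.co)
            f.col[i].r, f.col[i].g, f.col[i].b = c.r, c.g, c.b

    dst.update()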

I’ve pretty much sorted out a method where I take a low-poly object that has set UV coords, subdivide it many, many times, then use radiosity without it subdividing any further (this is done by setting MaxEl to 1 after Collect Meshes), then I bake the radiosity to an image file with the same UV coords as the low-poly model and UV-map the texture onto the low-poly model with great ease.

here is a quick example of the outcome:
http://members.lycos.co.uk/legeis/downloads/radiositytex.JPG

I use two scripts to achieve this: z3r0_d’s texture baker and a little script I made to copy and paste an object’s UV coords.

http://members.lycos.co.uk/legeis/downloads/z3d_0texbaker.py
http://members.lycos.co.uk/legeis/downloads/uVEditor.py

I should make a tute but I’m busy with something else at the moment :frowning:

Hope this is helpful :smiley:

Wow. Very helpful, Siegel. Thank you! :smiley: See, this is the info noob folks need. Nobody says, “this is how you make good lighting in the game engine,” people just say “Aw, do a search for radiosity.” Well, I did a search for radiosity, and they didn’t say anything about baking the vertex details of a mesh onto another mesh with fewer vertices. For that matter, where are the “bake” controls, Siegel?

Well, sorry, folks. Replacing my original post with one that makes more sense. I’m bound to keep plodding along making ignorant posts until I know enough about Blender to make intelligible posts. But I’m damned if I’m gonna say nothing when I’m in a room with twenty people who know how to do something that I need to learn.

Is it possible to combine this texture light UV mapping with a layer of RGB textures? To add brown fur to the 500-poly monkey, for instance?

Would it involve baking the colormap and the lightmap into one texture? I hope not. You’d lose a ton of detail in the colormap or waste a ton of space. Every building or terrain would basically end up being a single UV-mapped object, which again, is not the way modern 3D games work. You need the flexibility offered by materials (or something similar to materials) to allow cubemapped, tiled textures and UV-mapped lighting on the same level mesh. If the tools to do this do not already exist within Blender, then I stand by my previous remark that Blender needs the equivalent of layered textures for the Game Engine.

That is what the z3r0_d script does: open it in the Text window, select the object you want to texture-bake, and press Alt+P over the Text window to run the script.

Yes, no, yes :D What you do is make a colormap beforehand, then take your lightmap, and in a graphics editor program like GIMP or Photoshop you can combine the two to get one colormapped+lightmapped texture.
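[If you’d rather script that combine step than do it by hand, here is a small PIL sketch; the filenames are placeholders and a multiply blend is assumed.]

    # Small PIL sketch of the GIMP/Photoshop combine step described above.
    # Placeholder filenames; multiply blend assumed.
    from PIL import Image, ImageChops

    colormap = Image.open('colormap.png').convert('RGB')
    lightmap = Image.open('lightmap.png').convert('RGB')

    # Match sizes, then multiply: white lightmap pixels leave the colours
    # untouched, darker pixels shadow them.
    lightmap = lightmap.resize(colormap.size)
    ImageChops.multiply(colormap, lightmap).save('baked.png')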

Oh, and for a more realistic fur effect, have a look at this:
https://blenderartists.org/forum/viewtopic.php?t=43780

I am probably forgetting a few steps, which is why I need to make a tutorial, but like I said, busy with something at the mo :wink:

you can bake radiosity

[I link to mine in the same thread… you might want to use mine because it spaces the uv islands so they don’t bleed onto each other]

  • do the radiosity calc, and hit add new meshes
  • move them to a different layer

use this baking script:
https://blenderartists.org/forum/viewtopic.php?t=44144

start out on your layer with your uv-mapped mesh, choose preserve uvs, and set up the layers [in the script] to render the layer with your radiosity solution

you’ll probably want to render as RGBA, then load your image in photoshop, duplicate the background layer, blur the bottom one, and save again without the alpha channel [to lessen or eliminate bleeding between regions, or between regions and the world color]
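[the same blur-under-the-islands trick can be scripted; an untested PIL sketch with placeholder filenames:]

    # Untested PIL sketch of the photoshop steps above: keep the rendered
    # pixels where the alpha is solid, and let a blurred copy fill in the
    # background so island edges don't bleed into the world colour.
    from PIL import Image, ImageFilter

    baked = Image.open('baked_rgba.png')          # the RGBA render
    alpha = baked.split()[3]                      # where the uv islands actually are
    rgb = baked.convert('RGB')

    blurred = rgb.filter(ImageFilter.GaussianBlur(4))   # the blurred "bottom layer"
    result = Image.composite(rgb, blurred, alpha)       # islands on top, blur beneath
    result.save('baked_noalpha.png')                    # saved without the alpha channel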

It should be noted that Randall Rickert’s original walkthrough demo contains no UV mapping or texture baking.

It is a purely radiosity-solved mesh. Readers of the first post in this thread may think you need UV texturing and texture baking to create a file similar to the demo; you don’t.

Ricky Dee

Okay, gotcha.

I stand by the gist of my previous, confused, uninformed ramblings. The Blender Game Engine is a step backwards from modern 3D game graphics practices, and needs to be rewritten.

By combining the textures of AN ENTIRE FREAKING LEVEL into one UVmap, you lose the advantages of using tiled textures in the first place.

It’s fine for an animated mesh like a character, but it sucks for an office building interior or the corridors of an alien spaceship.

Blender is great for modelling, but it won’t be a viable game engine until its designers start thinking like game designers and start applying the tried-and-true optimization tricks that have been industry standard since Quake2.

You don’t treat the walls the same way you treat an enemy soldier. You don’t treat a lightmap the same as a texture map. You want the lights separate from the RGB textures specifically so that one can be optimized differently from the other. You want the walls of your level to be cubemapped and tiled so that you can get close to them and they’ll still look detailed, using a variety of 256x256 textures that tile over walls several times that size. You want the lighting data to be a UV map because you want every nook and cranny of your level to look different from every other spot after it’s lit. Most games combine both techniques by rendering the RGB textures as cubemapped and tiled, and the lighting as a huge UVmap, layered over the RGB textures. If the Blender Game Engine can’t do this, it needs improvement.

If the best you can suggest is to make my entire level into one big UVmapped mesh, and bake everything into a single texture, then I was right all along, and Blender will need some sort of rudimentary Materials support for the game engine before it will be able to hold a candle to the lighting in realtime 3D games that came out over ten years ago.

[quote=“WarpZone”]

Okay, gotcha.

I stand by the gist of my previous, confused, uninformed ramblings. The Blender Game Engine is a step backwards from modern 3D game graphics practices, and needs to be rewritten. [/quote]
lemme see if I’ve got this right: you then go on to complain that since blender offers nothing in the way of multiple uv coordinates and ways to apply multiple textures to a surface, it is completely useless?

do you want to help re-write it? … perhaps it could leverage some of the existing decent open source 3d engines [like ogre]*

yes, the modern practice is to use lightmaps [or dynamic shadow maps somewhat like blender’s render, which only some hardware supports], and blender’s game engine is very lacking in that respect.

blender’s game engine hasn’t had graphical changes… since, well since it was introduced in blender 2.0x [I wasn’t using blender at the time, that was 2000 or so], or perhaps you could consider the major changes [no more sectors! game engine is c++…] in 2.25 [in 2002]… 3 years ago… It was pretty old then too, quake 3 came out in '99

we aren’t forcing you to use the blender game engine, we know it is completely behind the times with graphics features and performance. The cool stuff it has which isn’t common elsewhere is the logic brick setup, which admittedly needs a lot of work to reduce the need for python and fix bugs.

… I don’t really think it is worth trying to convince you that sometimes you have to deal with POS hardware which only barely supports more than blender does in the game engine respect.

the proposals to split gameblender into a separate project exist because the state gameblender is in is really bad [lots of bugs, really behind the times, lots of quirks…]. If it is a separate project it can be more easily developed, new features can be added, and hopefully more developers become interested and it might actually become something which puts blender ahead of some other applications. Currently gameblender seems like a toy, and I don’t mind that.

out of curiosity, how would splitting the GE into another application make more developers want to work on it? wouldn’t that just make it easier to shove it into a corner to collect dust?

isn’t it already nearly abandoned? has it changed much since, what? 2.00 [2000]? 2.25 [2002]?

it being a separate project would make it that much easier to rebuild from scratch, or to make fundamental changes in [make it use ogre and ode instead of its own thing [probably want to keep SOLID for collision detection], redesign or replace the logic brick setup, make logic bricks act as a way of describing transitions or actions in states of a state machine instead of how they are now [acting more as reflexes]… any number of things]

I’m another newbie here, so I don’t know how old a topic has to be before it’s officially “dead”…

Doesn’t CrystalBlend qualify as a version of the GE that is separate from the rest of Blender?

Maybe if it were finished…