One of my main complaints about the BGE has always been about the lighting. More specifically, the lack of flexibility for game engine lighting. I’m going to discuss some of my ideas for improving BGE’s lighting system in this thread.
BGE basically allows only one lighting setup, applied to all objects in the scene. You can get around this by using layers, but lights are still computed even for objects on layers they don’t affect, and the layer system still doesn’t offer nearly the flexibility I’d like. There is also no way (that I know of) to move objects between layers in-game to change their lighting.
Why is this such a problem? Consider the following scenario. You have an outdoor scene where a bright, sunny field transitions directly into a dense, dark forest. You’d want everything in the field to be lit by a bright sun lamp plus ambient light, and everything under the trees to be lit by a dark, subtle hemisphere/ambient light. Dynamic shadows aren’t an option, as the scene is too large to get decent quality and range. When your character moves from the field into the forest, how are you going to stop the sun from shining on them? You can’t turn off the whole sun lamp, and you certainly couldn’t make duplicate lights and layers for every object that needed adjustable lighting: that would kill performance.
Getting to the point:
In nearly every modern game, you’ll find some instance where the lighting changes on a per-object basis. Adding such a feature to BGE would be very useful in many different cases. Here’s what I think should be done, in three points:
1. Support a large number of lighting groups (there aren’t enough layers: imagine a scene with hundreds of lights, each influencing only the objects immediately around it, which are placed in the same lighting group; you’d need a lot of groups). Each light belongs to a single group, but each object can belong to many. Objects can change lighting groups in the middle of a game, and lighting calculations are only performed for lights in the same group as the object being rendered (either by recompiling the shaders every time an object switches light groups, or by adding conditional statements within the pixel shader, which seems like it would probably be slower).
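As a sketch of the bookkeeping involved (hypothetical Python, not anything the BGE exposes today), the membership rule above, each light in exactly one group and each object in many, boils down to a bitmask test:

```python
# Hypothetical light-group bookkeeping: each light stores a group index,
# each object stores a bitmask with one bit per group it belongs to.

def group_mask(*group_indices):
    """Build an object's membership mask from the groups it belongs to."""
    mask = 0
    for i in group_indices:
        mask |= 1 << i
    return mask

def light_affects(object_mask, light_group):
    """A light is computed for an object only if the object is in its group."""
    return bool(object_mask & (1 << light_group))

# An object belonging to groups 0 and 5:
obj_mask = group_mask(0, 5)
assert light_affects(obj_mask, 5)      # lamp in group 5 -> computed
assert not light_affects(obj_mask, 3)  # lamp in group 3 -> skipped
```

The renderer would run this test per object per lamp when building (or selecting) the object’s shader, so lamps outside the object’s groups cost nothing at draw time.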
2. Going back to the scene described above: when your character walks out of the field and under the trees, you might switch their lighting group. However, the transition from light to dark would be instantaneous, and would look quite unnatural. This could be solved by blending between the two groups of lights, but that would require computing the lighting for all the lights in both groups (bad for performance). My solution is to use only one light group for both the forest and the field, and instead of blending between two groups for a smooth transition, blend between two sets of colors and intensities for that single light group. For example, your light setup might be a sun lamp and two opposing hemisphere lamps for basic outdoor lighting. You would be able to specify two sets of colors and intensities for that light group. For the forest set, the hemi lamps would be darker, and the sun’s intensity would be close to zero. Then, using logic bricks or Python, you could tell the BGE to blend between the sets as you wish (either by directly specifying the blend factor or by setting a “transition time” and signaling the BGE to transition between the two “color groups”).
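A minimal sketch of that blend in Python (hypothetical names; the engine would do this per lamp in the group before drawing each object):

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by factor t in [0, 1]."""
    return a + (b - a) * t

def blend_lamp(set_a, set_b, t):
    """Blend two (r, g, b, intensity) sets for one lamp in the group."""
    return tuple(lerp(a, b, t) for a, b in zip(set_a, set_b))

# "Field" set: full-strength white sun. "Forest" set: dim, bluish, sun off.
field  = (1.0, 1.0, 1.0, 1.0)
forest = (0.25, 0.25, 0.5, 0.0)

# Halfway through the transition:
print(blend_lamp(field, forest, 0.5))  # -> (0.625, 0.625, 0.75, 0.5)
```

Each object carries its own blend factor, so the character can be halfway into the forest while a rock back in the field stays at factor 0.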
I’m not much of a programmer, but I think this could be implemented efficiently by storing all the possible color and intensity values for each lamp as constants in the shader, and updating the blending factor before drawing each object that uses it (each object would have its own blending factor).
3. Your terrain in the forest/field scenario would likely be composed of very large objects, too large for the per-object lighting described above to create an effective transition between the bright and dark areas. The solution? Light mapping (already possible in the GE). For best results, you’d create two light maps: a shadow map that masks out the sunlight, and an AO/GI map that either multiplies with the shading color (in the case of AO) or adds itself to it (in the case of GI). AO or GI could easily be done by setting the blend mode to add or multiply, but not the shadow map. The shadow map can currently only multiply itself with the object’s base color, not mask out the sunlight. This is a problem because a sun lamp is highly directional: surfaces that face away from the sun would look very dark, while surfaces that face toward it would look much lighter, so multiplying the final color gives incorrect-looking results. Which brings me to this: textures should be able to mask the contribution of a single, specific light for proper shadow mapping.
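To see why multiplying the final color is wrong for a directional sun, compare the two approaches with scalar intensities (a simplified illustration, not actual BGE shading code):

```python
def shade_wrong(ambient, sun, ndotl, shadow):
    """Multiply the *final* color by the shadow map: the ambient term
    gets darkened too, so shadowed areas go completely black."""
    return (ambient + sun * max(ndotl, 0.0)) * shadow

def shade_masked(ambient, sun, ndotl, shadow):
    """Mask only the sun's contribution: ambient survives in shadow."""
    return ambient + sun * max(ndotl, 0.0) * shadow

# A surface inside a baked shadow (shadow = 0.0) that faces the sun:
print(shade_wrong(0.2, 1.0, 0.7, 0.0))   # -> 0.0 (pitch black, wrong)
print(shade_masked(0.2, 1.0, 0.7, 0.0))  # -> 0.2 (ambient remains)
```

Outside the shadow (shadow = 1.0) both formulas agree; they only diverge where the map actually darkens, which is exactly where per-light masking matters.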
Wow… that was much wordier than I expected. Hopefully this post will at least get some discussion started on these issues. I think my ideas would be quite feasible to add to the GE, but I’m keeping my expectations realistic. I don’t expect any of this to be added to the GE, at least anytime soon. Thanks for reading.
I read the whole thing
I think first we need
A) Point lights can cast real-time shadows
B) A directional light type that can cast real-time shadows, so you could potentially have a massive terrain with real-time shadows.
I’m not familiar with graphics pipelines and such, but here’s my speculation:
I hear the BGE has to render a full frame for every light in the scene. What if it combined all the lights into one frame? So if you have 50 lights in your scene, instead of having to render 51 frames, it would only render 2.
A is already supported in the BGE (though it doesn’t work with all computers, even glsl capable ones like mine) and B has been implemented in Project Harmony. However, Harmony isn’t currently scheduled to be added to the official Blender releases. It’ll come in time, though.
About BGE rendering the whole scene for each lamp, I’m not totally sure on this, but I think BGE combines all the lights into a single shader, and renders in just one pass. The multi pass approach you were referring to is often used in 3D applications, but I don’t think that’s what Blender uses.
You make an interesting point, laser blaster. Here’s what I understand of the situation.
Shadows should work on GLSL capable computers. It might help to install your latest graphics driver. Is your card an integrated Intel card? Integrated cards are usually slower and weaker than other cards.
I believe the BGE does, at least, render the scene once for each shadow-casting lamp, which makes sense.
You could use a shadeless material for your shadow map. This way, the affected objects are shadeless, but would appear shaded because of their texture. You could also use a map and assign it to the Emit value - the dark areas would be unaffected, and the light areas would be lit up (or even shadeless, basically). This way, the faces that face away from the sun, in your example, would be lighter, depending on the colors on the Emit map.
You could ‘shade’ an object by checking the shadow map texture with the bge.texture module to find out what the object’s color should be, and then altering the object.color variable to reflect this.
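That idea could be prototyped along these lines. The sampling itself is plain Python (so it runs anywhere); in the BGE, the pixel list would come from a `bge.texture` image source and the result would be written to `obj.color`. The function name and data layout here are made up for illustration:

```python
def sample_lightmap(pixels, width, height, u, v):
    """Nearest-neighbour sample of a grayscale lightmap stored as a
    flat list of floats in [0, 1]; u, v are normalized coordinates."""
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return pixels[y * width + x]

# A tiny 2x2 map: left half dark (forest), right half bright (field).
pixels = [0.1, 1.0,
          0.1, 1.0]

shade = sample_lightmap(pixels, 2, 2, 0.9, 0.0)  # object standing in the field
print(shade)  # -> 1.0

# In the BGE you would then tint the object each frame, e.g.:
# obj.color = [shade, shade, shade, 1.0]
```

The u, v values would be derived from the object’s world position over the terrain, so walking from field to forest smoothly darkens the object as the sampled texels change.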
For the lights, deferred or inferred lighting would help. Otherwise, what I would do is create a series of empties that represent light values (color, intensity, direction (the orientation of the empty), etc.). Then, loop through the empties, find the ones closest to the camera, and move lights to those positions. This way, you don’t have to create or delete lights (which isn’t possible in Blender Trunk, anyway), and can have minimal FPS loss on lighting.
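Picking the nearest empties could look like this as a sketch (plain tuples instead of KX_GameObjects so it runs outside the BGE; the helper name is made up):

```python
def nearest_empties(cam_pos, empties, count):
    """Return the `count` empties closest to the camera.
    `empties` is a list of (position, light_data) pairs."""
    def dist_sq(pos):
        # Squared distance is enough for ordering; skip the sqrt.
        return sum((a - b) ** 2 for a, b in zip(pos, cam_pos))
    return sorted(empties, key=lambda e: dist_sq(e[0]))[:count]

empties = [((0, 0, 0), "porch"), ((50, 0, 0), "barn"), ((5, 5, 0), "gate")]
picked = nearest_empties((1, 1, 0), empties, 2)
print([name for _, name in picked])  # -> ['porch', 'gate']

# In the BGE you would then reposition your pool of real lamps, e.g.:
# for lamp, (pos, data) in zip(lamp_pool, picked):
#     lamp.worldPosition = pos
```

With a fixed pool of, say, four lamps recycled this way, the shader cost stays constant no matter how many light positions the level defines.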
Before we can have any real-time shadows or advanced lighting, we need the code to be optimized, and that can be done by achieving inferred lighting, like SolarLune said (Harmony)!
So we might have to wait until Moguri and Kupoman finish the inferred rendering mode!
SolarLune: Those are some good ideas. I would prefer not to have to use a shadeless material, because that means I can’t use normal mapping. It actually wouldn’t be too difficult to write a glsl shader that allows for proper shadow mapping, so I could do that if need be. Using the object color variable to control the lighting is a great idea, and if there’s some way to access object color in a glsl script, I could do a lot of different things with it. For instance, I could store an ambient lighting color in the RGB channels and the sun lamp intensity in the alpha channel.
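The channel-packing idea can be sketched in plain Python (hypothetical names; the second function shows what the GLSL side would do with the packed value, per channel):

```python
def pack_lighting(ambient_rgb, sun_intensity):
    """Pack a per-object ambient color (RGB) and sun intensity (A)
    into one RGBA tuple, as you would assign to obj.color."""
    return (*ambient_rgb, sun_intensity)

def apply_packed(diffuse, ndotl, packed):
    """Mirror of the shader math: ambient term plus a sun term
    scaled by the packed intensity, per color channel."""
    ar, ag, ab, sun = packed
    n = max(ndotl, 0.0)
    return tuple(d * (a + n * sun) for d, a in zip(diffuse, (ar, ag, ab)))

# Forest settings: dim bluish ambient, sun fully masked out.
packed = pack_lighting((0.25, 0.25, 0.5), 0.0)
print(apply_packed((1.0, 1.0, 1.0), 0.8, packed))  # -> (0.25, 0.25, 0.5)
```

Since object color is already passed per object by the engine, this gets four free per-object lighting parameters without any new material or shader per object.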
Deferred/inferred lighting would offer a nice speedup and allow for a lot more lights, but that would come at a cost. It depends on the specific implementation, but deferred lighting generally allows a lot less flexibility in the shading models used. That means it would be incompatible with custom GLSL scripts, and things like cel-shading may end up impossible to do (or at least the options would be limited and basic).
So, although inferred lighting would be a welcome improvement, I think improving the forward lighting system should also be a focus. At the very least, we need an efficient implementation of light groups (layers are far from efficient right now, and there aren’t enough of them).
Maybe instead of adding “light color groups”, as I suggested in my first post, it would be more reasonable to ask for per-object variables (like object color) that can be accessed by GLSL scripts (as it is, you’d have to make a new material and script for each object that you wanted to give unique variables, which would be pretty bad for performance)? Then per-object lighting would become a possibility for anyone willing to write a custom GLSL script, and adding that functionality should be a pretty simple task for the devs.
I don’t think that you’d have to make a separate material for each object (do you? I’m not sure). As for the separate script problem, if the scripts between two objects are the same except for some minor changes, then you could write a function to return a script that is custom-tailored to fit the object. What I mean is that there are two methods of sending data into a 3D GLSL shader.
1. Using uniform variables. This allows you to change the shader in real time, like transforming the vertices of the mesh based on time, or whatever.
2. Using a function to return a customized script. These can’t be changed in real time, but you can customize the script to fit the particular object you’re working with. Here’s an example.
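That might look something like this (a hypothetical sketch: the GLSL body and names are purely illustrative, and in the BGE you would hand the returned string to BL_Shader.setSource):

```python
# Hypothetical template: a fragment shader whose tint is baked in per object.
FRAGMENT_TEMPLATE = """
uniform sampler2D tex;
void main() {
    vec4 col = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = col * vec4(%f, %f, %f, 1.0);
}
"""

def make_fragment_shader(tint):
    """Return a fragment shader string customized for one object."""
    return FRAGMENT_TEMPLATE % tint

forest_shader = make_fragment_shader((0.3, 0.35, 0.5))
# The tint is now baked into the source as literal constants:
# "gl_FragColor = col * vec4(0.300000, 0.350000, 0.500000, 1.0);"
```

Because the values end up as compile-time constants, each object gets its own variant of an otherwise shared script; the trade-off is that changing a value later means regenerating and recompiling the shader.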
This is not a literal example, but a general one that should work. Basically, you can break out of the shader script’s string to insert a variable as an argument, and the function returns the customized script. This wouldn’t work for changing the values in real time, but the shader would work pretty much the same across all objects, with the minor differences that are necessary. I use this method myself for my screen filter library: this way, I can have a centralized library that works well across all the games I work on, and which I can easily edit and change.
Thanks for trying to help me out. I’m pretty sure you’d have to make a new material for every object, because I think you can only set glsl shaders on a per-material basis. There isn’t any way to do that dynamically in the GE, is there? If there is, then I would give it a shot.
I think object color gives me the freedom I need, for now. Do you have any idea how to access object color from a GLSL shader? There’s got to be some way to do it, because I doubt Blender generates a unique shader for every object using ObColor.
Now that I think about it, I suppose you’re right: the shaders are linked to the materials. Hm. Well, I’m not sure how to get the object color from a GLSL shader, but you can pass the values in through a shader uniform. Can’t remember the name for it, though; there was a specific page on the Blender wiki that talked about this. In the API, the function is called setUniform4f. You can use it to pass four float values (the color amounts) to your shader script.
Unfortunately, uniform variables can only have one value at a time per shader, whereas object color can have one value for each object. I assume that there has to be a way to access object color, simply because the default shaders that Blender generates are able to access it. I don’t need to know right now, because I’m currently focusing on programming, not graphics. I’ll make a thread about it in the future if I need to.
Anyway, it’s absolutely fine if Kupoman, Moguri, and any other BGE devs I don’t know about have higher priorities than light groups. Any progress at all is good progress. But at some point in the future, I would like to see light groups and some form of per-object lighting considered. Does anyone else feel the same way, or do these features rank pretty low on your BGE wish list?
At the moment I will be more than glad to see inferred lighting getting done and into trunk.
As for light groups, I consider them a medium priority, not a high one.
I already started to “hate” Blender because of lighting; it’s getting harder and harder to keep nice visuals with such limited features and low performance!
About not having so much flexibility: does it really matter if we can’t have cel shading or other custom shaders? Not many users really use custom shaders. I guess what we all want is performance and multiple lights (a scene with 30 lights, imagine that).