GLSL Shadow Map Access?

Hi All,

I am working on a GLSL shader that I want to apply to all mesh objects in the scene. It is a Phong-style shader and it works fine by itself. Imagine a scene: a monkey head above a plane. The monkey casts a shadow on the plane. However, when I apply my shader to the plane, the shadow disappears. I assume this is because my shader does not take shadow information from the lights into account.

How do I gain access to the Blender shadow map inside my GLSL surface shader?


Unfortunately you can’t. This is something that really needs to be addressed (though I guess you could work around it with something like PyOpenGL).

As a user, how would you like this to work?

I have the same question going on the OpenGL forum, here.

This user seems to have a good knowledge of how it could possibly work.

uniform sampler2DShadow ShadowTexture;  // If Blender enables HW depth comparisons/filtering
uniform sampler2D       ShadowTexture;  // If Blender does not and expects the shader to do it


shadow2D() or shadow2DProj()      // If HW depth comparisons/filtering
texture2D() or texture2DProj()    // If not

But I am wondering…the default BGE can cast shadows, so there must already be something in place to achieve this?

Yes, Blender already has something in place. The problem is that you are overriding it by defining your own custom shader. It would be nice if a custom shader could merge with some aspects of the built-in shader.

Another problem is that the custom shader system currently does not handle light data well. Perhaps this could be fixed by having Blender pass in a list of light structs (with one of the items in each struct being that light’s shadow buffer) when the user specifies the correct uniform(s).
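As a rough sketch of what that interface might look like (the struct layout, uniform names, and MAX_LIGHTS cap here are all hypothetical, not anything Blender actually provides; note that samplers generally cannot be struct members in GLSL, so the shadow buffers would have to go in a parallel array):

```glsl
#define MAX_LIGHTS 8

// Hypothetical per-light data the BGE could fill in
struct LightData {
    vec4 position;      // in eye space
    vec4 diffuse;
    vec4 specular;
    mat4 shadowMatrix;  // transforms eye space into this light's shadow clip space
};

uniform LightData lights[MAX_LIGHTS];
// Samplers are opaque types and cannot live inside the struct,
// so each light's shadow buffer goes in a matching array slot
uniform sampler2DShadow shadowMaps[MAX_LIGHTS];
```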

But if it is already in place, shouldn’t it be accessible? Is that what is missing? We can already access gl_LightSource, but it does not seem to contain any shadow information. It would be nice if it worked like RSL (the RenderMan Shading Language): you just call a function named shadow(), pass it a map, and you get back a value that tells you whether your pixel is in shadow. This requires, of course, a shadow pass to generate those maps. But Blender could simply provide a map on-the-fly from each shadow-enabled light (we are only talking about 8 maximum, right?) that could be forwarded to this shadow() function for processing.
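A GLSL equivalent of that RSL-style call might look something like this (a sketch only; the map and coordinate would have to come from whatever Blender eventually exposes):

```glsl
// Hypothetical RSL-style helper: returns 1.0 if the fragment is lit,
// 0.0 if it is in shadow. shadow2DProj performs the perspective divide
// and the depth comparison against the bound shadow map.
float shadow(sampler2DShadow map, vec4 shadowCoord)
{
    return shadow2DProj(map, shadowCoord).x;
}
```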

I found this fragment code here:

// Reconstruct the light-space depth and look up the shadow map.
// far and shadowMapResolution are smuggled in through the light's attenuation uniforms.
float far = gl_LightSource[0].linearAttenuation;      // shadow buffer clip end
float near = 0.1;                                     // shadow buffer clip start
float a = far / ( far - near );
float b = far * near / ( near - far );
float shadowMapResolution = gl_LightSource[0].quadraticAttenuation;
float psx = 1.0 / (shadowMapResolution * 4.0);        // pixel size in x
float psy = 1.0 / (shadowMapResolution * 2.0);        // pixel size in y
// Perspective divide, then remap from [-1, 1] into [0, 1] texture space
smcoord.x /= -smcoord.z / 0.5;
smcoord.y /= -smcoord.z / 0.5;
smcoord.x += 0.5;
smcoord.y += 0.5;
// Convert eye-space z into the non-linear depth stored in the map
smcoord.z = a + b / smcoord.z;
// Depth comparison: 1.0 = lit, 0.0 = in shadow
float shadowcolor = shadow2D(ShadowMap, smcoord).x;

But I am not sure where ShadowMap comes from?

I am not fond of the idea of relying on the fixed-function uniforms. It would be more flexible for the BGE to pass in the light data itself. The tricky thing with passing in the light data is that you cannot declare a variable-length array in GLSL, and I would prefer not to put any restrictions on the user by hard-coding an array size. Perhaps the BGE could pass in lights until it either cannot fit any more into the user-specified array or runs out of lights to pass.
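One way that compromise is often expressed in GLSL (the uniform names here are just illustrative) is a fixed upper bound plus a count uniform: the engine fills as many slots as it can, and the shader only loops over the ones that are live:

```glsl
#define MAX_LIGHTS 8

uniform int lightCount;                  // how many entries the engine actually filled
uniform vec4 lightPositions[MAX_LIGHTS]; // eye-space positions, one per slot

vec3 accumulateLighting(vec3 normal)
{
    vec3 result = vec3(0.0);
    // Loop bound is a compile-time constant; unused slots are skipped at runtime
    for (int i = 0; i < MAX_LIGHTS; ++i) {
        if (i >= lightCount) break;
        // ... per-light shading using lightPositions[i] ...
    }
    return result;
}
```

The user still declares a fixed-size array, but the engine is free to supply anywhere from zero up to MAX_LIGHTS lights without the shader breaking.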