Idea for shadow baking script (maths guru needed!)

I’ve been thinking about a way to create a script to bake (static) shadows into a UV-mapped image for applying in the game engine. I think the algorithm below should work, but are there any maths gurus who can help me with calculating the orientation of the camera? Given a face normal, I want to be able to rotate the camera such that it is exactly perpendicular to the face.


Assign UV coords for all meshes
set up and light the scene as required
set world background to white
create 2 materials - one white/shadow only, one invisible
apply invisible material to all meshes in scene

for each mesh:
    for each face in mesh:
        apply shadow material to this face only
        calculate the 2D bounds of this face in UV map
        set render dimensions to match
         # THIS IS THE TRICKY BIT \/
        move an ortho camera perpendicular to face so it exactly fills render area
        render to image
        copy this face to shadow map at appropriate location (using PIL)

multiply the colour UV map by the shadow map
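For the tricky bit, one way to get the camera orientation is to build an orthonormal basis from the face normal with cross products. This is a pure-Python sketch (no Blender API calls); the camera’s local +Z axis points along the normal, so a camera using this basis looks straight at the face:

```python
import math

def camera_basis(normal):
    # Normalise the face normal; a camera looks down its local -Z,
    # so local +Z should point along the normal, back towards the camera.
    nx, ny, nz = normal
    n = math.sqrt(nx * nx + ny * ny + nz * nz)
    z = (nx / n, ny / n, nz / n)
    # Pick any vector not parallel to the normal as a temporary "up".
    up = (0.0, 0.0, 1.0) if abs(z[2]) < 0.999 else (0.0, 1.0, 0.0)
    # x = up x z, normalised: the camera's local X (right) axis.
    x = (up[1] * z[2] - up[2] * z[1],
         up[2] * z[0] - up[0] * z[2],
         up[0] * z[1] - up[1] * z[0])
    xl = math.sqrt(sum(c * c for c in x))
    x = tuple(c / xl for c in x)
    # y = z x x completes the right-handed basis (camera's local Y / up).
    y = (z[1] * x[2] - z[2] * x[1],
         z[2] * x[0] - z[0] * x[2],
         z[0] * x[1] - z[1] * x[0])
    return x, y, z
```

In Blender the three vectors would become the rows (or columns) of the camera’s rotation matrix; mathutils could do the same with its cross-product methods, but the explicit arithmetic keeps the maths visible.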

Although this sounds slow (calling the renderer for potentially thousands of faces), each individual image will be just a few pixels square and will render in a fraction of a second. It should take only a couple of minutes, even for fairly high-polygon scenes.
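The two PIL steps in the outline (pasting each face render into the shadow map, then multiplying the colour map by it) might look something like this; all sizes, values and offsets below are made-up placeholders, not output from a real render:

```python
from PIL import Image, ImageChops

# Hypothetical sizes; in the real script these come from the UV bounds.
shadow_map = Image.new("L", (256, 256), 255)   # start fully lit (white)
face_render = Image.new("L", (8, 6), 0)        # stand-in for one rendered face, fully in shadow
shadow_map.paste(face_render, (40, 100))       # paste at the face's UV pixel offset

# Final step: darken the colour map by the baked shadows.
colour_map = Image.new("RGB", (256, 256), (200, 180, 150))
baked = ImageChops.multiply(colour_map, shadow_map.convert("RGB"))
```

`ImageChops.multiply` scales per channel by `pixel / 255`, so white areas of the shadow map leave the colour map untouched and black areas go to black.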

any takers?

That’s exactly what some guy from Serbia proposed in another forum.
If I remember it properly it was Blender3d.org. I guess that will work; the standard
maths can be taken from any decent book, so it’s no rocket science. But you might
have to detect faces that are in the way of the target face and make them transparent in Python to keep a constant distance. When I saw his post there it was orphaned. Strange for such a good idea. I am pretty sure that with
your experience it can be done.

searching blender.org reveals this: http://www.blender.org/modules.php?op=modload&name=phpBB2&file=viewtopic&t=5548&highlight=shadow+map

Pretty much exactly what I imagined, only better, as I hadn’t considered lightmaps as well as shadows. It doesn’t look like it got completed though, so I may have a go.

To move the camera perpendicular to the face is easy.

I have in my matrix extrude script the math that does that. http://sulley.dm.ucf.edu/~pope/blender/matrixExtrude_1.0.py

The hard part is working out how far away to put the camera so it captures the face. I hope that little bit at least helps.

I’m curious as to how this goes. Good luck.

That’s the easy part :) - it can be arbitrary; let’s say it’s 2 units away from the target polygon. Then set the camera clipping planes very close to that distance, like 1.999 and 2.001, and the target polygon will be isolated. Well, that and any adjacent coplanar polygons.
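That trick can be sketched in a few lines of plain Python; the names here are illustrative, not real Blender attributes:

```python
def place_camera(face_centre, unit_normal, dist=2.0, eps=0.001):
    # Put the camera `dist` units out along the face normal, then pinch
    # the clipping planes around that distance so only geometry at the
    # target polygon's depth survives the render.
    cx, cy, cz = face_centre
    nx, ny, nz = unit_normal
    location = (cx + dist * nx, cy + dist * ny, cz + dist * nz)
    clip_start, clip_end = dist - eps, dist + eps
    return location, clip_start, clip_end
```

For a face at the origin with normal (0, 0, 1) this gives a camera at (0, 0, 2) with clipping planes at 1.999 and 2.001, as suggested above.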

The real hard part is taking care of the seams between images. I have a feeling they’d be all over the place.

Also, the images will need an alpha channel with the polygon’s shape so they won’t overwrite the surrounding areas when added to the shadow map. One way to do that would be to render another pass with just that polygon given a shadeless material, and let that be your alpha channel for the shadow render.

Finally, multiplying the color UV map by the shadow map wouldn’t be the best thing - you’d be locking yourself to the resolution and colors of the color map, and thereby lose a lot of flexibility. A better way would be to add the shadow map as another layer in the material; this way it could modulate diffuse, specular, etc., and you could control the strength of the effect easily.

Anyways, good luck with this project! It’s something I’d love to use.

Thanks, there’s some really useful feedback there. I think I have the information I need to start something.

You may want to look at the source code of some lightmappers; they are
mainly in C++ but I don’t think that will be an obstacle for you. Though you won’t be able to reuse the code, you can make sure to have all the necessary
steps included. Get a UML viewer from SourceForge or elsewhere for that
purpose; it might come in handy. Fsrad, lmtools, lightmapper. I even have one
in Pascal if you prefer that; I will mail the source to you. Same if you can’t find
the others. I am looking forward to your work. It will be awesome despite
the lack of multitextures and a second UV set. Imagine we had that; your work would almost be a breeze. No ugly seams anymore with a second UV set - see
Lightwave.

Hum, I’ve thought about this before (have a look at this message’s subject) :wink:

would the following algo work:



meshes have UVmap
find all lamps positions
NumLamps = Number of lamps in scene

for mesh in scene:
    for each pixel of uvmap in mesh (shameless shortcut):
        get 3Dspace position of the pixel (as average)
        visibility = lamp_visibility(pixel_position)
        store visibility as a fraction of total lamps in the same [x,y] position 
                                          of original pixel, in same size buffer

lamp_visibility(pix_pos):
#this involves raytracing stuff, but this version is a dirty one.

     visible = NumLamps
     for each lamp:
        get vector-coords of lamp_to_pixel ray-segment.
        divide segment length into N sub-segments (depth sampling)

            #first dirty filter:
            for each sub-segment in ray:
                make a cube sized to that sub-segment
                                   and centred on it
            LOV = a List Of the Verts that fall inside any of those cubes


            #second filter:
            #optionally re-divide the subsegments for more precision.
            #The division of the initial ray-segment creates points along
            #it. let's call these points TP (for Test Points), redividing 
            #the sub-segments creates more TPs.
            #Here we'll use an occlusion radius. If the distance 
            #from a vertex to a TP is smaller than the occlusion 
            #radius, then there is occlusion and the loop can stop.

             occluded = False
             for each TP:
                for each vert in the LOV:
                    if length(vert_pos - TP_pos) < occlusion_radius:
                        occluded = True
                        break
                if occluded:
                    break
             if occluded:
                #take one lamp off the number of visible ones:
                visible = visible - 1

            #all this is meant to return a non-occlusion float.
            #If no lamp is occluded, it returns 1,
            #if all lamps are occluded, it returns 0,
            #and if one lamp out of eight (example) is visible, it returns 1/8.

        return (visible / NumLamps)
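That pseudocode can be made runnable fairly directly. This sketch skips the cube pre-filter and tests every vertex against every test point; all names and default values here are my own, not anything from the post above:

```python
import math

def lamp_visibility(pixel_pos, lamps, verts, n_sub=5, occlusion_radius=0.1):
    # Fraction of lamps with an unobstructed path to the pixel.
    # Positions are (x, y, z) tuples; `lamps` and `verts` are lists of them.
    visible = len(lamps)
    for lamp in lamps:
        occluded = False
        # Interior test points only, so neither the surface itself
        # nor the lamp registers as its own occluder.
        for i in range(1, n_sub):
            t = i / n_sub
            tp = tuple(p + t * (l - p) for p, l in zip(pixel_pos, lamp))
            for v in verts:
                if math.dist(v, tp) < occlusion_radius:
                    occluded = True
                    break
            if occluded:
                break
        if occluded:
            visible -= 1
    return visible / len(lamps)
```

For example, a pixel at the origin with one lamp straight above (blocked by a vertex at (0, 0, 2)) and one lamp off to the side (unblocked) returns 0.5.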

This would return a grey-level shadow map for every UV-mapped object. No transparencies and no soft shadows, although these could be faked using a Gaussian blur and the relative distance of the occluding vertex to the map’s pixel compared with the pixel-to-lamp distance, or something like that.

i.e. for an [x][y] pixel in the shadow map you do not only store the amount of light it receives, you also store the (pixel_to_occluder)/(pixel_to_lamp) result.

let’s say:
pixel_pos = 0
occluder_pos = 2
lamp_pos = 10
visibility = 1 lamp out of 8

pixel = (incident_light_factor, occlusion_factor)
pixel = (0.125 , 0.2)

Then when you run the gaussian blur, the radius of the filter for one pixel will be altered by the occlusion_factor, for that specific pixel, on that specific map.

Last step is to mix up the shadow map to the color map…

Ugh, I just re-read quickly… looks too complicated… but who knows?
this may be slow…

Say you have ten 1000-vertex models, each with a 512*512 map, and eight lamps.
That makes 2 621 440 pixels to evaluate.
For each pixel you test each lamp, and for each lamp you have to go through 10 000 verts at least (the first filter might be redundant).
And as there are N sub-segments, there are N+1 TPs… say N = 5.
That makes 2 621 440 * 8 * 10 000 * 6 = 1 258 291 200 000 vertex tests… that’s a lot, no?
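The arithmetic can be checked in a couple of lines:

```python
pixels = 10 * 512 * 512             # ten models, one 512x512 map each
tests = pixels * 8 * 10_000 * 6     # 8 lamps, 10 000 verts, 6 test points per ray
print(pixels, tests)                # 2621440 1258291200000
```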

There are surely ways to make things faster…

Dani

What you’re trying to do sounds very similar to something that GL already can do using the stencil buffer. To render a scene with shadows, you first
render from the light source. Then, using the results of that render, you render from the camera. Two renders total and accurate shadows (albeit very sharply defined). The neat thing is that it is fast enough for games.

(As I recall, you can chain the first type of renders together to produce softer shadows [i.e. by oversampling] at the expense of framerate.)

Try googling “stencil buffer shadows” and you might find something that will help. I’ve never done this myself… but it doesn’t appear too difficult to set up.