Hum, I’ve thought about this before (have a look at this message’s subject). Would the following algo work?
meshes have UVmap
find all lamp positions
NumLamps = number of lamps in scene

for mesh in scene:
    for each pixel of uvmap in mesh (shameless shortcut):
        get 3D-space position of the pixel (as an average)
        visibility = lamp_visibility(pixel_position)
        store visibility, as a fraction of the total lamps, at the same
        [x, y] position as the original pixel, in a buffer of the same size
# this involves raytracing stuff, but this version is a dirty one.
def lamp_visibility(pixel_position):
    visible = NumLamps
    for each lamp:
        get vector coords of the lamp_to_pixel ray segment
        divide the segment length into N sub-segments (depth sampling)
        # first dirty filter:
        for each sub-segment in the ray:
            make a cube whose size parameter is the sub-segment's length
            and that is centered on that sub-segment
        LOV = a List Of the Verts that are inside those cubes
        # optionally re-divide the sub-segments for more precision.
        # The division of the initial ray segment creates points along
        # it; let's call these points TPs (Test Points). Re-dividing
        # the sub-segments creates more TPs.
        # Here we'll use an occlusion radius: if the distance from a
        # vertex to a TP is smaller than the occlusion radius, then
        # there is occlusion and the loop can stop.
        for each TP:
            for each vert in the LOV:
                if distance(vert_pos, TP_pos) < occlusion_radius:
                    # takes one lamp off the number of visible ones:
                    visible -= 1
                    stop testing this lamp and move on to the next
    # all this is meant to return a non-occlusion float:
    # if no lamp is occluded it returns 1,
    # if all lamps are occluded it returns 0,
    # and if one lamp out of eight (example) is visible, it returns 1/8
    return visible / NumLamps
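In case the pseudocode is hard to read, here’s a minimal Python sketch of that lamp_visibility() test. Everything concrete here is assumed for illustration: positions are plain 3-tuples, the LOV cube filter is skipped (every vert is tested), and the two ray endpoints are left out of the TPs so the surface doesn’t occlude itself.

    import math

    def lamp_visibility(pixel_pos, lamp_positions, verts,
                        n_subsegments=5, occlusion_radius=0.05):
        # returns 1.0 if no lamp is occluded, 0.0 if all of them are
        visible = len(lamp_positions)
        for lamp_pos in lamp_positions:
            occluded = False
            # N sub-segments give N+1 TPs; the two endpoints (pixel and
            # lamp) are skipped so the surface doesn't occlude itself
            for i in range(1, n_subsegments):
                t = i / n_subsegments
                tp = tuple(p + t * (l - p) for p, l in zip(pixel_pos, lamp_pos))
                for vert in verts:
                    if math.dist(vert, tp) < occlusion_radius:
                        occluded = True   # this lamp is blocked, stop testing it
                        break
                if occluded:
                    break
            if occluded:
                visible -= 1
        return visible / len(lamp_positions)

    # one lamp straight above, one vert sitting right on a TP: fully shadowed
    print(lamp_visibility((0, 0, 0), [(0, 0, 10)], [(0, 0, 4)]))   # 0.0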
This would return a grey-level shadow map for every UV-mapped object. No transparency, no soft shadows, although soft shadows could be faked with a Gaussian blur whose radius depends on the distance from the occluding vertex to the map’s pixel, compared to the pixel-to-lamp distance, or something like that.
i.e. for an [x][y] pixel in the shadow map you don’t only store the amount of light it receives, you also store the (pixel_to_occluder)/(pixel_to_lamp) ratio.
pixel_pos    = 0
occluder_pos = 2
lamp_pos     = 10
visibility   = 1 lamp out of 8

pixel = (incident_light_factor, occlusion_factor)
pixel = (0.125, 0.2)
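Written out as a helper (shade_pixel is just a made-up name), with the numbers from the example:

    def shade_pixel(visible_lamps, num_lamps, pixel_to_occluder, pixel_to_lamp):
        incident_light_factor = visible_lamps / num_lamps
        occlusion_factor = pixel_to_occluder / pixel_to_lamp
        return (incident_light_factor, occlusion_factor)

    print(shade_pixel(1, 8, 2.0, 10.0))   # (0.125, 0.2)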
Then when you run the Gaussian blur, the radius of the filter for each pixel is scaled by that pixel’s occlusion_factor, on that specific map.
The last step is to mix the shadow map into the color map…
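Something like this, maybe. A sketch only: it uses a plain box filter instead of a real Gaussian to keep it short, and shadow_map / color_map are assumed to be nested lists of tuples; only incident_light_factor and occlusion_factor come from the description above.

    def soften_and_mix(shadow_map, color_map, max_radius=4):
        # shadow_map[y][x] = (incident_light_factor, occlusion_factor)
        h, w = len(shadow_map), len(shadow_map[0])
        out = [[None] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                occ = shadow_map[y][x][1]
                # occluder near the surface -> sharp shadow,
                # occluder near the lamp -> wide blur
                r = max(1, round(occ * max_radius))
                total, count = 0.0, 0
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        if 0 <= y + dy < h and 0 <= x + dx < w:
                            total += shadow_map[y + dy][x + dx][0]
                            count += 1
                light = total / count
                # last step: multiply the softened shadow into the color map
                red, green, blue = color_map[y][x]
                out[y][x] = (red * light, green * light, blue * light)
        return out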
Ugh, I just re-read it quickly… it looks too complicated… but who knows?
It may be slow, though…
Say you have ten 1000-vertex models, each with a 512*512 map, and eight lamps.
That makes 2 621 440 pixels to evaluate.
For each pixel you test each lamp, and for each lamp you have to go through 10 000 verts at least (the first filter might be redundant).
And as there are N sub-segments, there are N+1 TPs… say N = 5.
That makes 2 621 440 * 8 * 10 000 * 6 = 1 258 291 200 000 vertex tests… that’s a lot, no?
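Quick sanity check of that count:

    pixels = 10 * 512 * 512   # ten 512*512 maps
    lamps  = 8
    verts  = 10 * 1000        # ten 1000-vert models
    tps    = 5 + 1            # N = 5 sub-segments -> N+1 TPs
    print(pixels * lamps * verts * tps)   # 1258291200000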
There are surely ways to make things faster…
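One that comes to mind (not part of the algo above, just a common trick): hash the verts into a uniform grid once per mesh, so each TP only has to look at the verts in its own cell and the 26 neighbouring ones instead of all 10 000. With cell_size >= occlusion_radius, no occluder within the radius can be missed. A sketch, with made-up names:

    from collections import defaultdict
    import math

    def build_grid(verts, cell_size):
        grid = defaultdict(list)
        for vert in verts:
            key = tuple(math.floor(c / cell_size) for c in vert)
            grid[key].append(vert)
        return grid

    def nearby_verts(grid, tp, cell_size):
        cx, cy, cz = (math.floor(c / cell_size) for c in tp)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    yield from grid.get((cx + dx, cy + dy, cz + dz), ())

The inner loop of lamp_visibility() would then iterate over nearby_verts(grid, tp, cell_size) instead of the full vert list.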