Real-time radiosity: has it been done? Is it even possible?

So, SSGI (Screen Space Global Illumination), which is a term that I (wrongly) use to refer to real-time diffuse radiosity.
I have been fascinated with this topic ever since I heard about some demos doing diffuse GI in real time. At the time I didn't really think about how it worked or what the limitations might be, but it seems to me that with today's technology we could get something like this running in Blender via a custom 2D filter.

If you don't know what I mean, check this out: https://www.youtube.com/watch?v=Dd8yMPZzWfE

I'd like to work on something like that, but my GLSL knowledge is VERY limited and I don't want to start a project that's absolutely impossible to complete, especially since it would take not only the time to build the filter but also the time to learn a shading language.

After thinking about it, I figured that for each pixel on the screen we would need to iterate through every single pixel on the screen if we wanted perfect results:


for(int x = 0; x < WIDTH; x++){
    for(int y = 0; y < HEIGHT; y++){
        // gather the light contribution from pixel (x, y)
    }
}

which, as you may guess, is a bit over the top and kind of impossible to run on today's hardware.
But what if, instead of running through every single pixel, we skipped 25 or 50 pixels between each sample?


uniform sampler2D sampler;

const float RAD_AMOUNT = 0.05; // how strongly the gathered light gets added

vec2 texcoord = vec2(gl_TexCoord[0]).st;

vec3 pPos = vec3(texcoord, /* how do I get Z depth? */);
vec3 pNormal = vec3(/* how do I get normals? */);
vec4 radcolor = vec4(0.0, 0.0, 0.0, 0.0);

void main(){
    // step across the screen in blocks of 50 pixels instead of visiting every pixel
    for(int x = 0; x < WIDTH; x += 50){
        for(int y = 0; y < HEIGHT; y += 50){
            // texture2D wants normalized coordinates, so divide by the resolution
            vec2 currentCoords = vec2(x, y) / vec2(WIDTH, HEIGHT);
            vec4 newcol = texture2D(sampler, currentCoords);

            vec3 currentPos = vec3(currentCoords, /* here is where I would get the Z depth */);
            vec3 lightDir = pPos - currentPos;

            // weight the sample by how much it faces this pixel
            radcolor += newcol * cos(angleBetween(lightDir, pNormal)); /* just imagine that angleBetween is a defined function that returns the angle between two vec3's */
        }
    }
    gl_FragColor = radcolor * RAD_AMOUNT + texture2D(sampler, texcoord);
}

(You should treat this as pseudo-code, as I am pretty sure this is not exactly how all of this works.)

Granted, this won't be as accurate, but it could make the filter run in approximately 1/2500th of the time on each frame!

However, a game running at 1920×1080 would still have to do (1080/50) × (1920/50) ≈ 830 samples PER PIXEL, for a total of roughly 1.7 billion per frame, which would still make many graphics cards fall to their knees.

However, this is not thinking smart. What I showed above is a very brute-force method which, even when heavily optimized, is pretty bad. Instead, what could be done is running a clustering pass on each frame, getting about 50 points scattered around the screen which represent the average position and color of the visible pixels, and using those instead! Or not… because Blender's 2D filters are fragment shaders, which means they run once per pixel and not once per frame (at least I think so).

And to run at all, this would require the Z buffer and the normal buffer, which I've heard are not accessible from a 2D filter (please correct me if I'm wrong or if you know a workaround).

In this post I present some of the technicalities of creating real-time SSGI (more like diffuse radiosity really, or to put it in a single acronym, RTSSDR :stuck_out_tongue: ) in Blender. And now I ask you, the community, to help with any tips or information you might have about real-time radiosity in Blender and/or how to implement it.

Good luck and take care.

step 1: create a grid of static cube map reflection probes

step 2: have actors on a layer separate from the static objects

step 3: real-time cube maps (UPBGE) are blended with the static maps based on the point in the grid; this blended map is used for global illumination as well as real-time reflection (rough sketch below)
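
A very rough sketch of the blending in step 3 might look like this; the uniform names here are made up for illustration and are not what UPBGE actually exposes:


// blend a baked probe with a real-time one (uniform names are hypothetical)
uniform samplerCube staticProbe;   // nearest baked cube map from the grid
uniform samplerCube realtimeProbe; // cube map rendered this frame
uniform float blendWeight;         // 0..1, based on where the actor sits in the grid

varying vec3 reflDir;              // reflection direction from the vertex shader

void main(){
    vec4 baked   = textureCube(staticProbe, reflDir);
    vec4 dynamic = textureCube(realtimeProbe, reflDir);
    // the blended result can feed both reflections and a rough GI term
    gl_FragColor = mix(baked, dynamic, blendWeight);
}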

as for an actor glowing, you can use a point lamp, or soon even a textured area light [UPBGE]

There are tons of ways to do real-time GI these days, some more accurate than others. VXGI, light cuts, and VPLs are the cutting edge at the moment. Unfortunately, none of them are even close to simple to implement in an efficient AND robust way for games. The Tomorrow Children is probably the most ambitious big-name use of high quality real-time GI yet, and there’s a fantastic technical overview for how they did it here. Several others in this generation have gone with similar approaches, but mostly rely on static worlds to make it work. Anyway, there’s no shortage of literature out there. Doing a quick search of Ke-Sen Huang’s repository of graphics white papers will give you plenty of reading material for the next few weeks if you should desire it.

Oooh, seems like a nice resource. I've been on vacation since yesterday, so I have plenty of time to read. Thanks!

I am not really interested in using UPBGE. The reason for attempting this is mostly the knowledge gained rather than the final result, and I don't plan on making a game in the BGE anytime soon. Even if that were my target, this approach wouldn't work for me: light probes are usually just normal-based and don't care about location (besides that of the probe itself), so if I wanted a set of objects to receive indirect lighting from each other while they aren't directly exposed to each other, I would either have to place an enormous number of probes or accept that the lighting will be completely incorrect (which isn't good enough for my cynical self).

I've tried this before, and I managed to create a very basic SSGI, but I don't think I'm going to develop it much further. Yes, it's possible to create SSGI in the BGE, but I thought: why create an SSGI filter if we could just fake it? You can see my SSGI work in my signature.

No, there is a default Z-buffer in the BGE. Import it with


uniform sampler2D bgl_DepthTexture;

and you can reconstruct a normal texture from that Z-depth buffer.
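
For reference, a minimal sketch of that depth-to-normal reconstruction, assuming the camera clip values are known (znear/zfar below are placeholders) and using screen-space derivatives instead of neighbour samples:


uniform sampler2D bgl_DepthTexture;

const float znear = 0.1;  // camera clipping start (placeholder)
const float zfar  = 50.0; // camera clipping end (placeholder)

float linearDepth(vec2 uv){
    float z = texture2D(bgl_DepthTexture, uv).x;
    return -zfar * znear / (z * (zfar - znear) - zfar);
}

vec3 viewPosition(vec2 uv){
    // rough view-space position from the screen coordinate and linear depth
    return vec3(uv * 2.0 - 1.0, 1.0) * linearDepth(uv);
}

void main(){
    vec3 p = viewPosition(gl_TexCoord[0].st);
    // screen-space derivatives give two tangent vectors along the surface;
    // their cross product approximates the surface normal
    vec3 n = normalize(cross(dFdx(p), dFdy(p)));
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0); // visualize the reconstructed normals
}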

OK, I've been working on something for a few hours and I've got something kinda working. I used this deferred shading filter as a base (http://devlog-martinsh.blogspot.com.ar/2011/11/basic-deferred-shading-system-in-glsl.html) and then built upon it.

This is what I've got so far:



As you can see, it (sort of) works. The bottoms of the spheres are getting some light and so is the ceiling. However, it seems to only be getting lighting information from the four pixels that are exactly on the corners, which is quite problematic, and I can't figure out why. As an added bonus, the top-right corner pixel brightens up the image hundreds of times more than the other corners, which leads me to believe that the calculations from every iteration are getting bunched up in that corner for some reason. If you want to take a look at the code (and hopefully fix it; if you do, please let me know what I did wrong), feel free to do so. Here it is:


uniform sampler2D bgl_RenderedTexture;
uniform sampler2D bgl_DepthTexture;
uniform float bgl_RenderedTextureWidth;
uniform float bgl_RenderedTextureHeight;


float width = bgl_RenderedTextureWidth;
float height = bgl_RenderedTextureHeight;


vec2 texCoord = gl_TexCoord[0].st;
vec2 screenCoord = vec2(0,0);


vec4 direct = vec4(texture2D(bgl_RenderedTexture,texCoord));


vec4 bounced = vec4(0,0,0,1);
vec4 sample = vec4(0,0,0,1);
/* martin start */
vec2 canCoord = gl_TexCoord[0].st;
float aspectratio = 1/1.777;
float znear = 0.1; //camera clipping start
float zfar = 50.0; //camera clipping end
float getDepth(vec2 coord){
    float zdepth = texture2D(bgl_DepthTexture,coord).x;
    return -zfar * znear / (zdepth * (zfar - znear) - zfar);
}
vec3 getViewPosition(vec2 coord, vec2 cancoord){
    vec3 pos;
    pos =  vec3((cancoord.s*2.0-1.0),(cancoord.t*2.0-1.0)/aspectratio ,1.0);
    return (pos*getDepth(coord));
}
vec3 getViewNormal(vec2 coord, vec2 cancoord){
    vec3 p1 = getViewPosition(coord+vec2(1.0/width,0.0), cancoord+vec2(1.0/width,0.0)).xyz;
    vec3 p2 = getViewPosition(coord+vec2(0.0,1.0/height), cancoord+vec2(0.0,1.0/height)).xyz;
    vec3 dx = p1-getViewPosition(coord,cancoord);
    vec3 dy = p2-getViewPosition(coord,cancoord); 
    return normalize(cross( dx , dy ));
}
/* martin end */


void main(void){    
    float depth = getDepth(texCoord);
    vec3 viewPos = getViewPosition(texCoord,canCoord);
    vec3 viewNormal = getViewNormal(texCoord,canCoord);
    vec3 lightDir;
    vec3 viewPos2;
    float amount;
    for(screenCoord.x = 0; screenCoord.x < width; screenCoord.x += 50){
        for(screenCoord.y = 0; screenCoord.y < height; screenCoord.y += 50){
            if(screenCoord.x != texCoord.x && screenCoord.y != texCoord.y){
                viewPos2 = getViewPosition(screenCoord,canCoord);
                lightDir = viewPos - viewPos2;
                amount = dot(lightDir,viewNormal)/(length(lightDir)*length(viewNormal));
                sample = texture2D(bgl_RenderedTexture,screenCoord);
                bounced += sample * amount;
            }
        }
    }
    
    bounced = bounced /((width*height)/ 2500);
    
    gl_FragColor = direct+bounced;
}

EDIT: After some debugging I found that the script doesn't seem to care at all about how big or small a step size I use for screenCoord.y, which is… odd. And it isn't related to the order of the nested loops.
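
One likely culprit, offered as a guess: texture2D() and getViewPosition() expect normalized coordinates in the 0–1 range, but the loop feeds them raw pixel coordinates (0, 50, 100, …). Anything at or above 1.0 gets clamped to the texture edge, so only the edge and corner texels ever contribute, and since almost every (x, y) pair in the loop has both components above 1.0, the vast majority of samples collapse onto the top-right corner, which would explain why that corner dominates. A minimal, untested sketch of a fix is to normalize inside the loop (and use local loop variables, which some GLSL compilers are happier with):


// inside main(), replacing the sampling loop
for (float x = 0.0; x < width; x += 50.0){
    for (float y = 0.0; y < height; y += 50.0){
        vec2 sampleUV = vec2(x / width, y / height); // now in the 0..1 range
        if (sampleUV != texCoord){
            viewPos2 = getViewPosition(sampleUV, sampleUV);
            lightDir = viewPos - viewPos2;
            // cosine of the angle between the incoming light and the surface normal,
            // clamped so back-facing samples don't subtract light
            amount = max(dot(normalize(lightDir), viewNormal), 0.0);
            sample = texture2D(bgl_RenderedTexture, sampleUV);
            bounced += sample * amount;
        }
    }
}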