So, SSGI (Screen Space Global Illumination), which is a term that I (wrongly) use to refer to real-time diffuse radiosity.
I have been fascinated with this topic ever since I heard about some demos doing diffuse GI in real time. At the time I didn't really think about how it worked or what the limitations could be, but it seems to me that with today's technology we could get something like this running in Blender via a custom '2d filter'.
If you don't know what I mean, check this out: https://www.youtube.com/watch?v=Dd8yMPZzWfE
I'd like to work on something like that, but my GLSL knowledge is VERY limited, and I don't want to commit to a project that's absolutely impossible to complete if it means not only doing the project itself but also learning a shading language along the way.
After thinking about it, I figured that for perfect results, for each pixel on the screen we would need to iterate through every single pixel on the screen:
for (int x = 0; x < WIDTH; x++) {
    for (int y = 0; y < HEIGHT; y++) {
        // gather light from the pixel at (x, y)
    }
}
Which, as you may guess, is a bit over the top and practically impossible to run on today's hardware.
But what if, instead of visiting every single pixel, we skipped 25 or 50 pixels between each sample?
uniform sampler2D bgl_RenderedTexture;  // Blender's 2d filters expose the rendered frame under this name
uniform float bgl_RenderedTextureWidth;  // built-in uniforms holding the screen resolution
uniform float bgl_RenderedTextureHeight;
const float RAD_AMOUNT = 0.05; // how strongly the gathered light gets blended in

// returns the angle (in radians) between two vec3's
float angleBetween(vec3 a, vec3 b) {
    return acos(dot(normalize(a), normalize(b)));
}

void main() {
    vec2 texcoord = gl_TexCoord[0].st;
    vec3 pPos = vec3(texcoord, 0.0); /* how do i get Z depth? */
    vec3 pNormal = vec3(0.0, 0.0, 1.0); /* how do i get normals? */
    vec4 radcolor = vec4(0.0);

    // step over the screen in blocks of 50 pixels instead of visiting every pixel
    for (int x = 0; x < int(bgl_RenderedTextureWidth); x += 50) {
        for (int y = 0; y < int(bgl_RenderedTextureHeight); y += 50) {
            vec2 currentCoords = vec2(float(x) / bgl_RenderedTextureWidth,
                                      float(y) / bgl_RenderedTextureHeight);
            vec4 newcol = texture2D(bgl_RenderedTexture, currentCoords);
            vec3 currentPos = vec3(currentCoords, 0.0); /* here is where i would get the Z depth */
            vec3 lightDir = pPos - currentPos;
            // weight the sampled colour by how much the light direction faces this pixel's surface
            radcolor += newcol * cos(angleBetween(lightDir, pNormal));
        }
    }
    gl_FragColor = radcolor * RAD_AMOUNT + texture2D(bgl_RenderedTexture, texcoord);
}
(You should treat this as pseudo-code, as I am pretty sure this is not exactly how all of this works.)
Granted, this won't be as accurate, but skipping 50 pixels in both directions means 50*50 = 2500 times fewer samples, so the filter could run in approximately a 2500th of the time on each frame!
However, a game running at 1080x1920 would still have to do (1080/50)*(1920/50) ≈ 830 calculations PER PIXEL, for a total of roughly 1.7 billion per frame, which would still make many graphics cards fall to their knees.
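Just to make the scaling explicit (ballpark numbers only): with a screen of W x H pixels and a step size of s between samples, the total work per frame is

samples per frame = (W/s) * (H/s) * (W * H) = (W * H)^2 / s^2

so doubling the step size cuts the work to a quarter. At 1080x1920:

s = 50  -> ~830 samples per pixel -> ~1.7 billion per frame
s = 100 -> ~207 samples per pixel -> ~430 million per frame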
However, this is not thinking smart. What I showed above is a very brute-force method which, even when brutally optimized, is pretty bad. Instead, what could be done is running a clustering pass on each frame, getting about 50 points scattered around the screen which represent the average position and color of the visible pixels, and using those instead! Or not... Because Blender's 2d filters are fragment shaders, which means they run once per pixel and not once per frame (at least I think so).
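One thing that might get part of the way there without a separate clustering pass: a block average is exactly what mipmaps compute. IF the rendered texture had mipmaps (I don't think bgl_RenderedTexture does in the BGE, so treat this as wishful thinking), the bias argument of texture2D could fetch a pre-averaged 'cluster colour' for a whole block of pixels in a single tap:

// hypothetical: this only does anything useful if the sampled texture has mipmaps
vec4 gathered = vec4(0.0);
for (int i = 0; i < 8; i++) {
    for (int j = 0; j < 8; j++) {
        // center of cell (i, j) of an 8x8 grid laid over the screen
        vec2 cellCenter = vec2((float(i) + 0.5) / 8.0, (float(j) + 0.5) / 8.0);
        // a big LOD bias makes texture2D return (roughly) the average colour
        // of a whole block of pixels around cellCenter
        gathered += texture2D(bgl_RenderedTexture, cellCenter, 6.0);
    }
}
// 64 taps per pixel instead of ~830, and each tap is already an average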
And to run at all, this would require the Z buffer and the normal buffer, which I've heard are not possible to use in a 2d filter (please correct me if I'm wrong or if you know a workaround).
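That said, from what I've seen in other BGE 2d filters (SSAO-style effects), the depth buffer at least might be available through a built-in sampler called bgl_DepthTexture, and a rough normal can be reconstructed from depth using screen-space derivatives. An untested sketch, where the 0.1/100.0 near/far clip distances are made-up placeholders you'd have to match to the actual camera:

uniform sampler2D bgl_DepthTexture; // depth buffer, if the BGE really binds it

// turn the non-linear depth buffer value into a linear view-space depth;
// the 0.1 (near) and 100.0 (far) clip distances are hypothetical
float linearDepth(vec2 coord) {
    float d = texture2D(bgl_DepthTexture, coord).r;  // in [0, 1]
    float zn = 2.0 * d - 1.0;                        // back to NDC [-1, 1]
    return 2.0 * 0.1 * 100.0 / (100.0 + 0.1 - zn * (100.0 - 0.1));
}

// then, inside main(), something like:
//     vec3 pPos = vec3(texcoord, linearDepth(texcoord));
//     vec3 pNormal = normalize(cross(dFdx(pPos), dFdy(pPos)));
// dFdx/dFdy compare values across neighbouring fragments, so this gives a
// faceted approximation of the surface normal without a real normal buffer

No idea how robust the derivative trick is at depth edges, but it would at least avoid needing a normal buffer at all.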
In this post I presented some of the technicalities of creating real-time SSGI (more like diffuse radiosity really, or to put it in a single acronym, RTSSDR) in Blender. And now I ask you, the community, for any tips or information you might have about real-time radiosity in Blender and/or how to implement it.
Good luck and take care.