(SSR) Screen-Space Reflections Shader v0.7

Do what HG1 told you and it’ll work fine.

By the way, thank you HG1 for answering for me, I was at school.

How do I find the width and height of my camera?

30 fps on my GT 755M laptop, no wonder your screenshots don't have the profiler. :eek:

Camera settings (screenshot):


Under the Render tab, use the Embedded Player resolution.
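
Side note: the filters posted further down in this thread read the resolution automatically through BGE's built-in 2D filter uniforms, so only znear, zfar and fov have to be copied from the camera by hand. These are essentially the same declarations the shaders below already contain:

uniform float bgl_RenderedTextureWidth;  //filled in by BGE, no manual setup needed
uniform float bgl_RenderedTextureHeight; //filled in by BGE, no manual setup needed

float aspectratio = bgl_RenderedTextureWidth / bgl_RenderedTextureHeight;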

Nah, that's not the reason, I am just very new to working in the BGE and I don't know how to activate it xD. Besides, I am using a laptop with Intel integrated graphics, so I figured the fps wouldn't be relevant to anyone with a dedicated graphics card. I'll test this on my main rig and see how it works…

Also, I just uploaded a new version with some optimizations (mainly scaling the reflection ray's step size depending on its distance), so maybe that'll make it run better? You should also try lowering the quality of the reflections (change the values between lines 22 and 24 of the filter; line 24 is for the new feature, set it to 1.0 to disable it).

This new feature took me from an average of approximately 4.5 fps to approximately 15 fps on a test scene I made (again, on an Intel integrated graphics chip).
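
The idea behind the distance scaling is simply to make each ray step proportional to the view-space depth of the current sample point, so nearby reflections keep fine steps while far-away ones advance in bigger, cheaper increments. A rough loop-body sketch, using the same variable names as the shaders posted later in this thread (the exact factors in v0.7 may differ):

for(int i = 0; i < samples; i++) {
    //the step length scales with position.z (the view-space depth of the
    //current sample), so distant parts of the ray need far fewer iterations
    position += vector * stepSize * position.z;
    //...then project 'position' to screen space and compare it against the depth buffer...
}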

(attached is my test scene)

Attachments

SSRR.blend (628 KB)


This is one of the coolest 2D filters I've ever come across. This would be great for weather effects when it's raining.

Yeah this is so cool

Your shader doesn't work for me with an FoV other than 90… (and yes, I also set zfar and znear to the correct values).
You asked in this thread how to retrieve the view projection matrix and how to get the view vector. I made an SSR shader (the code is not perfect, but it does what it should do) using the VPM and the view vector, plus a better getViewNormal() function (no ugly rims around objects anymore), so if you want to make a shader using any of these functions, feel free to use them :')

uniform sampler2D bgl_DepthTexture;
uniform sampler2D bgl_RenderedTexture;

uniform float bgl_RenderedTextureWidth;
uniform float bgl_RenderedTextureHeight;

float width = bgl_RenderedTextureWidth;
float height = bgl_RenderedTextureHeight;

vec2 texCoord = gl_TexCoord[0].st;

//THIS NEEDS TO MATCH YOUR CAMERA SETTINGS---------------------
const float znear = 0.25; //Z-near
const float zfar = 50.0; //Z-far
const float fov = 50.0; //FoV
//-------------------------------------------------------------

float aspectratio = width/height;

const vec3 skycolor = vec3(0.5,0.7,0.8); //color if the ray fails
float threshold = 5.1; //how thick everything is (try to make it as low as possible without getting weird errors in the reflection)
const float reflectance = 0.04; //a.k.a. reflectance at incidence
const float stepSize = 0.001; //how small the smallest step is (in world coordinates), the smaller, the cleaner the image but the worse the performance
const int samples = 64; //more samples mean longer rays, keep this number as low as possible
const int startScale = 8; //power of two, min 1, the higher the number, the faster it is but the choppier the reflections get

float thfov = tan(fov * 0.0087266462597222); //0.0087266462597222 = pi/360, so this is tan(radians(fov) / 2.0)

vec3 getLinearColor(vec2 coord)
{    
    vec3 C = vec3(texture2D(bgl_RenderedTexture, coord));
    C.r = pow(C.r, 2.2);
    C.g = pow(C.g, 2.2);
    C.b = pow(C.b, 2.2);
    return C.rgb;
}

vec3 sRGBToLinear(vec3 C)
{
    C.r = pow(C.r, 2.2);
    C.g = pow(C.g, 2.2);
    C.b = pow(C.b, 2.2);
    return C.rgb;
}

vec3 linearTosRGB(vec3 C)
{
    C.r = pow(C.r, 0.45454545);
    C.g = pow(C.g, 0.45454545);
    C.b = pow(C.b, 0.45454545);
    return C.rgb;
}

//reconstructs linear view-space depth from the non-linear value stored in the depth buffer
float getLinearDepth(vec2 coord)
{
    float zdepth = texture2D(bgl_DepthTexture,coord).x;
    return zfar*znear / (zfar + zdepth * (znear - zfar));
}

vec3 getViewVector(vec2 coord)
{
    vec2 ndc = (coord * 2.0 - 1.0);
    return vec3(ndc.x*thfov, ndc.y*thfov/aspectratio, 1.0);
}

vec3 getViewPosition(vec2 coord)
{
    return getViewVector(coord) * getLinearDepth(coord);
}

vec3 getViewNormal(vec2 coord)
{
    float pW = 1.0/width;
    float pH = 1.0/height;
    
    vec3 p1 = getViewPosition(coord+vec2(pW,0.0)).xyz;
    vec3 p2 = getViewPosition(coord+vec2(0.0,pH)).xyz;
    vec3 p3 = getViewPosition(coord+vec2(-pW,0.0)).xyz;
    vec3 p4 = getViewPosition(coord+vec2(0.0,-pH)).xyz;

    vec3 vP = getViewPosition(coord);
    
    vec3 dx = vP-p1;
    vec3 dy = p2-vP;
    vec3 dx2 = p3-vP;
    vec3 dy2 = vP-p4;
    
    if(length(dx2)<length(dx)&&coord.x-pW>=0||coord.x+pW>1) {
    dx = dx2;
    }
    if(length(dy2)<length(dy)&&coord.y-pH>=0||coord.y+pH>1) {
    dy = dy2;
    }
    
    return normalize(cross( dx , dy ));
}

float rand(vec2 co)
{
   return fract(sin(dot(co.xy,vec2(12.9898,78.233))) * 43758.5453);
}

mat4 getViewProjectionMatrix()
{
    mat4 result;
    
    float frustumDepth = zfar - znear;
    float oneOverDepth = 1 / frustumDepth;

    result[0][0] = 1 / thfov;
    result[1][1] = aspectratio * result[0][0];
    result[2][2] = zfar * oneOverDepth;
    result[2][3] = 1;
    result[3][2] = (-zfar * znear) * oneOverDepth;
    
    return result;
}

vec2 ComputeFOVProjection(vec3 pos, mat4 VPM)
{
    vec4 offset = vec4(pos, 1.0);
    offset = VPM * offset;
    offset.xy /= offset.w;
    return offset.xy * 0.5 + 0.5;
}

vec2 raymarch(vec3 normal, vec3 vector, vec3 origin, mat4 VPM)
{
    vec3 position = origin;
    float speedUp = pow(dot(normalize(origin), normal),2)*2-1; //take further steps at sharper angles
    speedUp = 1/pow(1-pow(speedUp, 2), 0.5);
    threshold *= speedUp*position.z*stepSize*float(startScale); //scale the threshold with distance and angle; otherwise you get banding at low angles or far away (increase the threshold value above if you want it bigger, and vice versa)
    for(int i = 0; i<samples; i++) {
        position += vector*speedUp*stepSize*float(startScale)*position.z;
        vec2 offset = ComputeFOVProjection(position, VPM);
        
        if(offset.x<0||offset.y<0||offset.x>1||offset.y>1||position.z<znear||position.z>zfar) { //you need it just here, because the smaller steps are before the big steps
            return vec2(2,2);
        }
        
        float sampleDepth = getLinearDepth(offset.xy);
        
        if(sampleDepth <= position.z && abs(sampleDepth - position.z)<=threshold) {
            if(startScale<=1) {
                return offset.xy;
            }
            
            int compare = int(log(float(startScale))/log(2.0));
            vec3 helpPosition = position;
            vec2 helpOffset = offset.xy;
            
            for(int j = 1; j <= compare; j++) {
                vec3 startPosition = helpPosition; //position at the beginning of the smaller loop
                vec2 startOffset = helpOffset;
                helpPosition -= vector*speedUp*stepSize*(float(startScale)/pow(2, float(j)))*helpPosition.z; //step back by startScale/(2^j)
                helpOffset = ComputeFOVProjection(helpPosition, VPM);
                float helpSampleDepth = getLinearDepth(helpOffset.xy);
                
                if(helpSampleDepth <= helpPosition.z && abs(helpSampleDepth - helpPosition.z)<=threshold/pow(2, float(j))) {
                    if(j == compare) {
                        if(helpOffset.x>0 && helpOffset.y>0 && helpOffset.x<1 && helpOffset.y<1) {
                            return helpOffset.xy;
                        }
                        else {  //if the coordinates are outside of the screen
                            return vec2(2,2);
                        }
                    }
                }
                else {
                    if(j == compare) {    //if the first sample was the only with collision
                        if(startOffset.x>0 && startOffset.y>0 && startOffset.x<1 && startOffset.y<1) {
                            return startOffset.xy;
                        //return offset.xy;
                        }
                        else {    //if the coordinates are outside of the screen
                            return vec2(2,2);
                        }
                    }
                    helpPosition = startPosition; //no collision => go back to the last collision point
                    helpOffset = startOffset;
                }
            }
        }
        else {
            if(i == samples-1) { //no collision with anything
                return vec2(2,2);
            }
        }
    }
    return vec2(2,2); //not normally reached, but added so the function always returns a value
}

void main()
{
    vec3 normal = getViewNormal(texCoord);
    vec3 origin = getViewPosition(texCoord);
    vec3 reflection = normalize(reflect(normalize(origin), normal));
    mat4 VPM = getViewProjectionMatrix();
    
    vec2 coord = raymarch(normal, reflection, origin, VPM);
    vec3 Color = sRGBToLinear(skycolor);
    
    if(coord!=vec2(2,2)) {
    Color = getLinearColor(coord);
    }
    

    float Fresnel = reflectance + (1-reflectance)*pow(1-dot(-normal, normalize(origin)), 5);
    gl_FragColor = vec4(linearTosRGB(mix(getLinearColor(texCoord), Color, Fresnel)), 1);
}

EDIT: I discovered a bug: you get a small stripe of no reflection at a certain distance (imagine a sphere centered at the camera with a radius of about 2; wherever geometry crosses it, you get the stripe).
EDIT2: I found out why and I fixed it


And I've made a mod of the script to support "roughness" (it's very sketchy and not really accurate, but it does what it should do, at least for low roughness values).
Note: you need a float property called roughness in the logic brick… thing.
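
To spell out the GLSL side of that note: in a custom BGE 2D filter, a uniform declared with the same name as a float game property on the object that owns the Filter 2D actuator is filled from that property each frame (at least, that is how the property-to-uniform binding works as far as I know), so the shader side is just the declaration that already appears below:

uniform float roughness; //bound from the float game property "roughness" on the filter's owner object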

uniform sampler2D bgl_DepthTexture;
uniform sampler2D bgl_RenderedTexture;

uniform float bgl_RenderedTextureWidth;
uniform float bgl_RenderedTextureHeight;

uniform float roughness;//if set to one you get a black circle...

float width = bgl_RenderedTextureWidth;
float height = bgl_RenderedTextureHeight;

vec2 texCoord = gl_TexCoord[0].st;

//THIS NEEDS TO MATCH YOUR CAMERA SETTINGS---------------------
const float znear = 0.25; //Z-near
const float zfar = 50.0; //Z-far
const float fov = 90.0; //FoV
//-------------------------------------------------------------

float aspectratio = width/height;

const vec3 skycolor = vec3(0.5,0.7,0.8); //color if the ray fails
float threshold = 5.0; //how thick everything is (try to make it as low as possible without getting weird errors in the reflection)
const float reflectance = 0.04; //a.k.a. reflectance at incidence
const float stepSize = 0.04; //how small the smallest step is (in world coordinates), the smaller, the cleaner the image but the worse the performance
const int samples = 8; //more samples mean longer rays, keep this number as low as possible
const int startScale = 4; //power of two, min 1, the higher the number, the faster it is but the choppier the reflections get
const int roughnessSamples = 4;



float counter = 0.123456789;

float thfov = tan(fov * 0.0087266462597222);

vec3 getLinearColor(vec2 coord)
{    
    vec3 C = vec3(texture2D(bgl_RenderedTexture, coord));
    C.r = pow(C.r, 2.2);
    C.g = pow(C.g, 2.2);
    C.b = pow(C.b, 2.2);
    return C.rgb;
}

vec3 sRGBToLinear(vec3 C)
{
    C.r = pow(C.r, 2.2);
    C.g = pow(C.g, 2.2);
    C.b = pow(C.b, 2.2);
    return C.rgb;
}

vec3 linearTosRGB(vec3 C)
{
    C.r = pow(C.r, 0.45454545);
    C.g = pow(C.g, 0.45454545);
    C.b = pow(C.b, 0.45454545);
    return C.rgb;
}

float getLinearDepth(vec2 coord)
{
    float zdepth = texture2D(bgl_DepthTexture,coord).x;
    return zfar*znear / (zfar + zdepth * (znear - zfar));
}

vec3 getViewVector(vec2 coord)
{
    vec2 ndc = (coord * 2.0 - 1.0);
    return vec3(ndc.x*thfov, ndc.y*thfov/aspectratio, 1.0);
}

vec3 getViewPosition(vec2 coord)
{
    return getViewVector(coord) * getLinearDepth(coord);
}

vec3 getViewNormal(vec2 coord)
{
    float pW = 1.0/width;
    float pH = 1.0/height;
    
    vec3 p1 = getViewPosition(coord+vec2(pW,0.0)).xyz;
    vec3 p2 = getViewPosition(coord+vec2(0.0,pH)).xyz;
    vec3 p3 = getViewPosition(coord+vec2(-pW,0.0)).xyz;
    vec3 p4 = getViewPosition(coord+vec2(0.0,-pH)).xyz;

    vec3 vP = getViewPosition(coord);
    
    vec3 dx = vP-p1;
    vec3 dy = p2-vP;
    vec3 dx2 = p3-vP;
    vec3 dy2 = vP-p4;
    
    if(length(dx2)<length(dx)&&coord.x-pW>=0||coord.x+pW>1) {
    dx = dx2;
    }
    if(length(dy2)<length(dy)&&coord.y-pH>=0||coord.y+pH>1) {
    dy = dy2;
    }
    
    return normalize(cross( dx , dy ));
}

float rand(vec2 co)
{
   return fract(sin(dot(co.xy,vec2(12.9898,78.233))) * 43758.5453);
}

mat4 getViewProjectionMatrix()
{
    mat4 result;
    
    float frustumDepth = zfar - znear;
    float oneOverDepth = 1 / frustumDepth;

    result[0][0] = 1 / thfov;
    result[1][1] = aspectratio * result[0][0];
    result[2][2] = zfar * oneOverDepth;
    result[2][3] = 1;
    result[3][2] = (-zfar * znear) * oneOverDepth;
    
    return result;
}

vec2 ComputeFOVProjection(vec3 pos, mat4 VPM)
{
    vec4 offset = vec4(pos, 1.0);
    offset = VPM * offset;
    offset.xy /= offset.w;
    return offset.xy * 0.5 + 0.5;
}

vec2 raymarch(vec3 normal, vec3 vector, vec3 origin, mat4 VPM)
{
    float speedUp = pow(dot(normalize(origin), normal),2)*2-1; //further distance on sharper angles
    speedUp = 1/pow(1-pow(speedUp, 2), 0.5);
    vec3 position = origin-vector*(rand(texCoord*counter)-0.25)*speedUp*stepSize*float(startScale)*origin.z; //jitter the start position along the ray; the -0.25 offset is for preventing errors
    counter += 1.324;
    float threshy = threshold*speedUp*position.z*stepSize*float(startScale); //scale the threshold with distance and angle; otherwise you get banding at low angles or far away (increase the threshold value above if you want it bigger, and vice versa)
    for(int i = 0; i<samples; i++) {
        position += vector*speedUp*stepSize*float(startScale)*position.z;
        vec2 offset = ComputeFOVProjection(position, VPM);
        
         if(offset.x<0||offset.y<0||offset.x>1||offset.y>1||position.z<znear||position.z>zfar)  { //you need it just here, because the smaller steps are before the big  steps
            return vec2(2,2);
        }
        
        float sampleDepth = getLinearDepth(offset.xy);
        
        if(sampleDepth <= position.z && abs(sampleDepth - position.z)<=threshy) {
            if(startScale<=1) {
                return offset.xy;
            }
            
            int compare = int(log(float(startScale))/log(2.0));
            vec3 helpPosition = position;
            vec2 helpOffset = offset.xy;
            
            for(int j = 1; j <= compare; j++) {
                vec3 startPosition = helpPosition; //position at the beginning of the smaller loop
                vec2 startOffset = helpOffset;
                helpPosition -= vector*speedUp*stepSize*(float(startScale)/pow(2, float(j)))*helpPosition.z; //step back by startScale/(2^j)
                helpOffset = ComputeFOVProjection(helpPosition, VPM);
                float helpSampleDepth = getLinearDepth(helpOffset.xy);
                
                if(helpSampleDepth <= helpPosition.z &&  abs(helpSampleDepth - helpPosition.z)<=threshy/pow(2, float(j))) {
                    if(j == compare) {
                        if(helpOffset.x>0 &&  helpOffset.y>0 && helpOffset.x<1 &&  helpOffset.y<1) {
                            return helpOffset.xy;
                        }
                        else {  //if the coordinates are outside of the screen
                            return vec2(2,2);
                        }
                    }
                }
                else {
                    if(j == compare) {    //if the first sample was the only with collision
                        if(startOffset.x>0 &&  startOffset.y>0 && startOffset.x<1 &&  startOffset.y<1) {
                            return startOffset.xy;
                        //return offset.xy;
                        }
                        else {    //if the coordinates are outside of the screen
                            return vec2(2,2);
                        }
                    }
                    helpPosition = startPosition; //no collision => go back to the last collision point
                    helpOffset = startOffset;
                }
            }
        }
        else {
            if(i == samples-1) { //no collision with anything
                return vec2(2,2);
            }
        }
    }
    return vec2(2,2); //not normally reached, but added so the function always returns a value
}

void main()
{
    vec3 origin = getViewPosition(texCoord);
    vec3 normal = getViewNormal(texCoord);
    vec3 Color = vec3(0.0); //initialise, we accumulate samples into these below
    float divideBy = 0.0;
    if(origin.z<zfar)
    {
//    vec3 reflection = normalize(reflect(normalize(origin), normal));
    mat4 VPM = getViewProjectionMatrix();
    
    
    float randomCounter = 0.245796789;
    
    vec3 rvec =  vec3(0,0,1);
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);
    
    for(int s = 0; s<roughnessSamples; s++)
    {
        float r1 = pow(roughness,2)*(rand(texCoord*randomCounter)*2-1);
        randomCounter += 0.1;
        float r2 = pow(roughness,2)*(rand(texCoord*randomCounter)*2-1);
        randomCounter += 0.1;
        vec3 reflection = normalize(reflect(normalize(origin),normalize(tbn*normalize(vec3(r1,r2,1.0)))));
        if(dot(reflection, normal)>0) {
        vec2 coord = raymarch(normal, reflection, origin, VPM);
        if(coord!=vec2(2,2)) {
        Color += getLinearColor(coord);
        }
        else {
        Color += sRGBToLinear(skycolor);
        }
        divideBy += 1.0;
        }
    }
    Color /= divideBy;
    }
    if(divideBy==0) {
    Color = sRGBToLinear(skycolor);
    }
    float Fresnel = reflectance + (1-reflectance)*pow(1-dot(mix(-normal, vec3(0,0,1), pow(roughness,2)), normalize(origin)), 5);
    gl_FragColor = vec4(linearTosRGB(mix(getLinearColor(texCoord), Color, Fresnel)), 1);
//    gl_FragColor = vec4(linearTosRGB(vec3(Fresnel, Fresnel, Fresnel)), 1);
}

Woah, I've been away for a few days, so I haven't had the chance to play around with your shader (I didn't know it existed either). I am on a bus right now, so I will plug it into my Blender when I get home.

How do I make it so that not all objects reflect? For example, so that only objects with a certain property are reflected?

This filter is truly amazing! It works on almost every scene I created :smiley: Superb work!

@TheLumcoin

I have finally been able to get some time on my hands to take a look at your shader and, to be honest, I am thoroughly impressed. It looks and runs so much better than my filter, it has more features, and proper colorspace conversion. An awesome piece of work all around!
I have not started digging through the code yet, but I definitely will. I'm sure there is plenty of stuff to learn in there.

The reason my shader wasn't working is that I had messed up the FoV ratio calculation (it is really crude compared to what you did): I had a calculation that was (fov / 90.0) instead of (90.0 / fov).
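
For anyone who runs into the same problem: the FoV factor needs the tangent of half the angle, not a linear ratio of FoVs. A one-line sketch of the accurate factor, written the same way TheLumcoin's code computes it (the constant 0.0087266462597222 is pi/360, so it converts degrees to radians and halves the angle in one multiply); a plain ratio like (90.0 / fov) only coincides with it at fov = 90 and drifts for other values:

float thfov = tan(fov * 0.0087266462597222); //== tan(radians(fov) / 2.0), the factor that scales NDC coordinates into view space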

I think you should post your shader in the resources forum, it would be really helpful for some people! A lot more than mine could, anyway.

@Lingul

I am not sure, but I think I have seen something about render layers in the BGE somewhere on the forum once; try looking that up.

@CGSky

Not sure if that’s for me or for TheLumcoin. In any case, thank you!

I mean, maybe I can get information about the properties of objects in the shader, like with 'texture2D'? As far as I know, render layers are unavailable in the game engine.

Amazing stuff!
Will this be in the next versions of BGE / UPBGE?

@SebastianMestre and TheLumcoin:
Why don't you guys work together and send / share this amazing stuff with the UPBGE people?

Thanks, this is running very smoothly on my machine. Keep up the sorcery, guys.

How do I use this script, in what sense, and in which instances?

Fred/K.S

How can I texture map this reflection effect to specific materials?

I made a slightly cleaner version of the SSR with roughness (really, just a few changes, and it still is not perfect):

uniform sampler2D bgl_DepthTexture;
uniform sampler2D bgl_RenderedTexture;

uniform float bgl_RenderedTextureWidth;
uniform float bgl_RenderedTextureHeight;

uniform float roughness;//if set to one you get a black circle...

float width = bgl_RenderedTextureWidth;
float height = bgl_RenderedTextureHeight;

vec2 texCoord = gl_TexCoord[0].st;

//THIS NEEDS TO MATCH YOUR CAMERA SETTINGS---------------------
const float znear = 0.25; //Z-near
const float zfar = 50.0; //Z-far
const float fov = 90.0; //FoV
//-------------------------------------------------------------

float aspectratio = width/height;

const vec3 skycolor = vec3(0.5,0.7,0.8); //color if the ray fails
float threshold = 5.0; //how thick everything is (try to make it as low as possible without getting weird errors in the reflection)
const float reflectance = 0.04; //a.k.a. reflectance at incidence
const float stepSize = 0.04; //how small the smallest step is (in world coordinates), the smaller, the cleaner the image but the worse the performance
const int samples = 8; //more samples mean longer rays, keep this number as low as possible
const int startScale = 4; //power of two, min 1, the higher the number, the faster it is but the choppier the reflections get
const int roughnessSamples = 4;



float counter = 0.123456789;

float thfov = tan(fov * 0.0087266462597222);

vec3 getLinearColor(vec2 coord)
{    
    vec3 C = vec3(texture2D(bgl_RenderedTexture, coord));
    C.r = pow(C.r, 2.2);
    C.g = pow(C.g, 2.2);
    C.b = pow(C.b, 2.2);
    return C.rgb;
}

vec3 sRGBToLinear(vec3 C)
{
    C.r = pow(C.r, 2.2);
    C.g = pow(C.g, 2.2);
    C.b = pow(C.b, 2.2);
    return C.rgb;
}

vec3 linearTosRGB(vec3 C)
{
    C.r = pow(C.r, 0.45454545);
    C.g = pow(C.g, 0.45454545);
    C.b = pow(C.b, 0.45454545);
    return C.rgb;
}

float getLinearDepth(vec2 coord)
{
    float zdepth = texture2D(bgl_DepthTexture,coord).x;
    return zfar*znear / (zfar + zdepth * (znear - zfar));
}

vec3 getViewVector(vec2 coord)
{
    vec2 ndc = (coord * 2.0 - 1.0);
    return vec3(ndc.x*thfov, ndc.y*thfov/aspectratio, 1.0);
}

vec3 getViewPosition(vec2 coord)
{
    return getViewVector(coord) * getLinearDepth(coord);
}

vec3 getViewNormal(vec2 coord)
{
    float pW = 1.0/width;
    float pH = 1.0/height;
    
    vec3 p1 = getViewPosition(coord+vec2(pW,0.0)).xyz;
    vec3 p2 = getViewPosition(coord+vec2(0.0,pH)).xyz;
    vec3 p3 = getViewPosition(coord+vec2(-pW,0.0)).xyz;
    vec3 p4 = getViewPosition(coord+vec2(0.0,-pH)).xyz;

    vec3 vP = getViewPosition(coord);
    
    vec3 dx = vP-p1;
    vec3 dy = p2-vP;
    vec3 dx2 = p3-vP;
    vec3 dy2 = vP-p4;
    
    if(length(dx2)<length(dx)&&coord.x-pW>=0||coord.x+pW>1) {
    dx = dx2;
    }
    if(length(dy2)<length(dy)&&coord.y-pH>=0||coord.y+pH>1) {
    dy = dy2;
    }
    
    return normalize(cross( dx , dy ));
}

float rand(vec2 co)
{
   return fract(sin(dot(co.xy,vec2(12.9898,78.233))) * 43758.5453);
}

mat4 getViewProjectionMatrix()
{
    mat4 result;
    
    float frustumDepth = zfar - znear;
    float oneOverDepth = 1 / frustumDepth;

    result[0][0] = 1 / thfov;
    result[1][1] = aspectratio * result[0][0];
    result[2][2] = zfar * oneOverDepth;
    result[2][3] = 1;
    result[3][2] = (-zfar * znear) * oneOverDepth;
    
    return result;
}

vec2 ComputeFOVProjection(vec3 pos, mat4 VPM)
{
    vec4 offset = vec4(pos, 1.0);
    offset = VPM * offset;
    offset.xy /= offset.w;
    return offset.xy * 0.5 + 0.5;
}

vec2 raymarch(vec3 vector, vec3 origin, mat4 VPM)
{
    float speedUp; //take further steps at sharper angles
    vec3 position = origin;
    counter += 1.324;
    float tempThreshold; //distance- and angle-dependent threshold; without it you get banding at low angles or far away (increase the threshold value above if you want it bigger, and vice versa)
    for(int i = 0; i<samples; i++) {
        speedUp = 1/pow(1-pow(dot(normalize(position), vector), 2), 0.5);
        position += vector*speedUp*stepSize*float(startScale)*position.z;
        vec2 offset = ComputeFOVProjection(position, VPM);
        
         if(offset.x<0||offset.y<0||offset.x>1||offset.y>1||position.z<znear||position.z>zfar)  { //you need it just here, because the smaller steps are before the big  steps
            return vec2(2,2);
        }
        
        float sampleDepth = getLinearDepth(offset.xy);
        tempThreshold =  threshold*speedUp*position.z*stepSize*float(startScale);
        
        if(sampleDepth <= position.z && abs(sampleDepth - position.z)<=tempThreshold) {
            if(startScale<=1) {
                return offset.xy;
            }
            
            int compare = int(log(float(startScale))/log(2.0));
            vec3 helpPosition = position;
            vec2 helpOffset = offset.xy;
            
            for(int j = 1; j <= compare; j++) {
                vec3 startPosition = helpPosition; //position at the beginning of the smaller loop
                vec2 startOffset = helpOffset;
                speedUp = 1/pow(1-pow(dot(normalize(startPosition), vector), 2), 0.5);
                helpPosition -= vector*speedUp*stepSize*(float(startScale)/pow(2, float(j)))*helpPosition.z; //step back by startScale/(2^j)
                helpOffset = ComputeFOVProjection(helpPosition, VPM);
                float helpSampleDepth = getLinearDepth(helpOffset.xy);
                tempThreshold = threshold*speedUp*helpPosition.z*stepSize*(float(startScale)/pow(2, float(j)));
                
                if(helpSampleDepth <= helpPosition.z && helpPosition.z-helpSampleDepth <= tempThreshold) {
                    if(j == compare) {
                        if(helpOffset.x>0 && helpOffset.y>0 && helpOffset.x<1 && helpOffset.y<1) {
                            return helpOffset.xy;
                        }
                        else {  //if the coordinates are outside of the screen
                            return vec2(2,2);
                        }
                    }
                }
                else {
                    if(j == compare) {    //if the first sample was the only with collision
                        if(startOffset.x>0 &&  startOffset.y>0 && startOffset.x<1 &&  startOffset.y<1) {
                            return startOffset.xy;
                        //return offset.xy;
                        }
                        else {    //if the coordinates are outside of the screen
                            return vec2(2,2);
                        }
                    }
                    helpPosition = startPosition; //no collision => go back to the last collision point
                    helpOffset = startOffset;
                }
            }
        }
    }
    return vec2(2,2);
}

void main()
{
    vec3 origin = getViewPosition(texCoord);
    vec3 normal = getViewNormal(texCoord);
    vec3 Color = vec3(0.0); //initialise, we accumulate samples into these below
    float divideBy = 0.0;
    if(origin.z<zfar)
    {
    mat4 VPM = getViewProjectionMatrix();
    
    float randomCounter = 0.245796789;
    
    vec3 rvec =  vec3(0,0,1);
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);
    
    for(int s = 0; s<roughnessSamples; s++)
    {
        float r1 = pow(roughness,2)*(rand(texCoord*randomCounter)*2-1);
        randomCounter += 0.1;
        float r2 = pow(roughness,2)*(rand(texCoord*randomCounter)*2-1);
        randomCounter += 0.1;
        vec3 reflection = normalize(reflect(normalize(origin),normalize(tbn*normalize(vec3(r1,r2,1.0)))));
        if(dot(reflection, normal)>0) {
        float speedUp = 1/pow(1-pow(dot(normalize(origin), reflection), 2), 0.5); //further distance on sharper angles
        vec3 tempOrigin = origin-reflection*(rand(texCoord*randomCounter)-0.25)*speedUp*stepSize*float(startScale)*origin.z; //jitter the start position along the ray; the -0.25 offset is for preventing errors
        randomCounter += 0.1;
        vec2 coord = raymarch(reflection, tempOrigin, VPM);
        if(coord!=vec2(2,2)) {
        Color += getLinearColor(coord);
        }
        else {
        Color += sRGBToLinear(skycolor);
        }
        divideBy += 1.0;
        }
    }
    Color /= divideBy;
    }
    if(divideBy==0) {
    Color = sRGBToLinear(skycolor);
    }
    float Fresnel = reflectance + (1-reflectance)*pow(1-dot(mix(-normal, vec3(0,0,1), pow(roughness,2)), normalize(origin)), 5);
    gl_FragColor = vec4(linearTosRGB(mix(getLinearColor(texCoord), Color, Fresnel)), 1);
//    gl_FragColor = vec4(linearTosRGB(vec3(Fresnel, Fresnel, Fresnel)), 1);
}

The main changes in this version are that the raymarch method no longer takes the normal as input, and that "tempThreshold" and "speedUp" are recalculated every step instead of just once at the start (this is rather pointless unless you use a large FoV, but I wanted it to be more accurate and the decrease in performance is very, very small).

I also made a modification of the roughness version of the SSR script which calculates the lighting from a sun lamp in image space (Lambertian diffuse and soft shadows; you need an "external" float property roughness for controlling the "softness" ^.^). This is just a proof of concept, but it's nevertheless worth sharing, imho ;):

uniform sampler2D bgl_DepthTexture;
uniform sampler2D bgl_RenderedTexture;

uniform float bgl_RenderedTextureWidth;
uniform float bgl_RenderedTextureHeight;

uniform float roughness;//if set to one you get a black circle...

float width = bgl_RenderedTextureWidth;
float height = bgl_RenderedTextureHeight;

vec2 texCoord = gl_TexCoord[0].st;

//THIS NEEDS TO MATCH YOUR CAMERA SETTINGS---------------------
const float znear = 0.25; //Z-near
const float zfar = 50.0; //Z-far
const float fov = 90.0; //FoV
//-------------------------------------------------------------

float aspectratio = width/height;

const vec3 light = vec3(2.0,4.0,-3.0); //direction of the Light

float threshold = 5.0; //how thick everything is (try to make it as low as possible without getting weird errors in the reflection)
const float reflectance = 0.04; //a.k.a. reflectance at incidence
const float stepSize = 0.01; //how small the smallest step is (in world coordinates), the smaller, the cleaner the image but the worse the performance
const int samples = 64; //more samples mean longer rays, keep this number as low as possible
const int startScale = 1; //power of two, min 1, the higher the number, the faster it is but the choppier the reflections get
const int roughnessSamples = 2;



float counter = 0.123456789;

float thfov = tan(fov * 0.0087266462597222);

vec3 getLinearColor(vec2 coord)
{    
    vec3 C = vec3(texture2D(bgl_RenderedTexture, coord));
    C.r = pow(C.r, 2.2);
    C.g = pow(C.g, 2.2);
    C.b = pow(C.b, 2.2);
    return C.rgb;
}

vec3 sRGBToLinear(vec3 C)
{
    C.r = pow(C.r, 2.2);
    C.g = pow(C.g, 2.2);
    C.b = pow(C.b, 2.2);
    return C.rgb;
}

vec3 linearTosRGB(vec3 C)
{
    C.r = pow(C.r, 0.45454545);
    C.g = pow(C.g, 0.45454545);
    C.b = pow(C.b, 0.45454545);
    return C.rgb;
}

float getLinearDepth(vec2 coord)
{
    float zdepth = texture2D(bgl_DepthTexture,coord).x;
    return zfar*znear / (zfar + zdepth * (znear - zfar));
}

vec3 getViewVector(vec2 coord)
{
    vec2 ndc = (coord * 2.0 - 1.0);
    return vec3(ndc.x*thfov, ndc.y*thfov/aspectratio, 1.0);
}

vec3 getViewPosition(vec2 coord)
{
    return getViewVector(coord) * getLinearDepth(coord);
}

vec3 getViewNormal(vec2 coord)
{
    float pW = 1.0/width;
    float pH = 1.0/height;
    
    vec3 p1 = getViewPosition(coord+vec2(pW,0.0)).xyz;
    vec3 p2 = getViewPosition(coord+vec2(0.0,pH)).xyz;
    vec3 p3 = getViewPosition(coord+vec2(-pW,0.0)).xyz;
    vec3 p4 = getViewPosition(coord+vec2(0.0,-pH)).xyz;

    vec3 vP = getViewPosition(coord);
    
    vec3 dx = vP-p1;
    vec3 dy = p2-vP;
    vec3 dx2 = p3-vP;
    vec3 dy2 = vP-p4;
    
    if(length(dx2)<length(dx)&&coord.x-pW>=0||coord.x+pW>1) {
    dx = dx2;
    }
    if(length(dy2)<length(dy)&&coord.y-pH>=0||coord.y+pH>1) {
    dy = dy2;
    }
    
    return normalize(cross( dx , dy ));
}

float rand(vec2 co)
{
   return fract(sin(dot(co.xy,vec2(12.9898,78.233))) * 43758.5453);
}

mat4 getViewProjectionMatrix()
{
    mat4 result;
    
    float frustumDepth = zfar - znear;
    float oneOverDepth = 1 / frustumDepth;

    result[0][0] = 1 / thfov;
    result[1][1] = aspectratio * result[0][0];
    result[2][2] = zfar * oneOverDepth;
    result[2][3] = 1;
    result[3][2] = (-zfar * znear) * oneOverDepth;
    
    return result;
}

vec2 ComputeFOVProjection(vec3 pos, mat4 VPM)
{
    vec4 offset = vec4(pos, 1.0);
    offset = VPM * offset;
    offset.xy /= offset.w;
    return offset.xy * 0.5 + 0.5;
}

vec2 raymarch(vec3 vector, vec3 origin, mat4 VPM)
{
    float speedUp; //take further steps at sharper angles
    vec3 position = origin;
    counter += 1.324;
    float tempThreshold; //distance- and angle-dependent threshold; without it you get banding at low angles or far away (increase the threshold value above if you want it bigger, and vice versa)
    for(int i = 0; i<samples; i++) {
        speedUp = 1/pow(1-pow(dot(normalize(position), vector), 2), 0.5);
        position += vector*speedUp*stepSize*float(startScale)*position.z;
        vec2 offset = ComputeFOVProjection(position, VPM);
        
          if(offset.x<0||offset.y<0||offset.x>1||offset.y>1||position.z<znear||position.z>zfar)   { //you need it just here, because the smaller steps are before the  big  steps
            return vec2(2,2);
        }
        
        float sampleDepth = getLinearDepth(offset.xy);
        tempThreshold =  threshold*speedUp*position.z*stepSize*float(startScale);
        
        if(sampleDepth <= position.z && abs(sampleDepth - position.z)<=tempThreshold) {
            if(startScale<=1) {
                return offset.xy;
            }
            
            int compare = int(log(float(startScale))/log(2.0));
            vec3 helpPosition = position;
            vec2 helpOffset = offset.xy;
            
            for(int j = 1; j <= compare; j++) {
                vec3 startPosition = helpPosition; //position at the beginning of the smaller loop
                vec2 startOffset = helpOffset;
                speedUp = 1/pow(1-pow(dot(normalize(startPosition), vector), 2), 0.5);
                helpPosition -= vector*speedUp*stepSize*(float(startScale)/pow(2, float(j)))*helpPosition.z; //step back by startScale/(2^j)
                helpOffset = ComputeFOVProjection(helpPosition, VPM);
                float helpSampleDepth = getLinearDepth(helpOffset.xy);
                tempThreshold = threshold*speedUp*helpPosition.z*stepSize*(float(startScale)/pow(2, float(j)));
                
                if(helpSampleDepth <= helpPosition.z && helpPosition.z-helpSampleDepth <= tempThreshold) {
                    if(j == compare) {
                        if(helpOffset.x>0 && helpOffset.y>0 && helpOffset.x<1 && helpOffset.y<1) {
                            return helpOffset.xy;
                        }
                        else {  //if the coordinates are outside of the screen
                            return vec2(2,2);
                        }
                    }
                }
                else {
                    if(j == compare) {    //if the first sample was the only with collision
                        if(startOffset.x>0 &&   startOffset.y>0 && startOffset.x<1 &&   startOffset.y<1) {
                            return startOffset.xy;
                        //return offset.xy;
                        }
                        else {    //if the coordinates are outside of the screen
                            return vec2(2,2);
                        }
                    }
                    helpPosition = startPosition; //no collision => go back to the last collision point
                    helpOffset = startOffset;
                }
            }
        }
    }
    return vec2(2,2);
}

void main()
{
    vec3 origin = getViewPosition(texCoord);
    vec3 normal = getViewNormal(texCoord);
    vec3 Color = vec3(0.0); //initialise, we accumulate shadow samples into these below
    
    vec3 L = normalize(light);
    
    float divideBy = 0.0;
    mat4 VPM = getViewProjectionMatrix();
    
    float randomCounter = 0.245796789;
    
    vec3 rvec =  vec3(0,0,1);
    vec3 tangent = normalize(rvec - L * dot(rvec, L));
    vec3 bitangent = cross(L, tangent);
    mat3 tbn = mat3(tangent, bitangent, L);
    
    for(int s = 0; s<roughnessSamples; s++)
    {
        float r1 = pow(roughness,2)*(rand(texCoord*randomCounter)*2-1);
        randomCounter += 0.1;
        float r2 = pow(roughness,2)*(rand(texCoord*randomCounter)*2-1);
        randomCounter += 0.1;
        vec3 reflection = normalize(tbn*normalize(vec3(r1,r2,1.0)));
        float speedUp = 1/pow(1-pow(dot(normalize(origin), reflection), 2), 0.5); //further distance on sharper angles
        vec3 tempOrigin = origin-reflection*(rand(texCoord*randomCounter)-0.25)*speedUp*stepSize*float(startScale)*origin.z; //jitter the start position along the ray; the -0.25 offset is for preventing errors
        //tempOrigin = origin;
        randomCounter += 0.1;
        vec2 coord = raymarch(reflection, tempOrigin, VPM);
        if(coord!=vec2(2,2)) {
        Color += vec3(0,0,0);
        }
        else {
        Color += vec3(1,1,1);
        }
        divideBy += 1.0;
    }
    Color /= divideBy;
    if(divideBy==0) {
    Color = vec3(1,1,1);
    }
    vec3 lighting = getLinearColor(texCoord)*(dot(normal, L));
    gl_FragColor = vec4(linearTosRGB(mix(vec3(0,0,0), lighting, Color)), 1);
//    gl_FragColor = vec4(linearTosRGB(vec3(Fresnel, Fresnel, Fresnel)), 1);
}

If you wonder why I always write several posts immediately after each other: the code is too long, so I have to split it up into two replies.
@SebastianMestre:
Thanks :') Posting my shader in the resources forum? Not yet, because I'm not perfectly satisfied with the shader; e.g. I guess that calculating the FoV projection by multiplying with the ViewProjectionMatrix is not even close to the speed of your/Martins Upitis's method (my version does unnecessary math), so I'll probably replace that. Also, the roughness version definitely does look good, but its performance is very bad… Maybe I'll find a way to speed it up.
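
For what it's worth, the matrix-free projection is just the inverse of getViewVector(): since position = viewVector * depth and viewVector.z == 1.0, the screen coordinate falls out of a single perspective divide. Below is an untested sketch of what such a replacement for ComputeFOVProjection() could look like, using thfov and aspectratio as defined in the shaders above (projectViewPosition is only an illustrative name):

vec2 projectViewPosition(vec3 pos)
{
    //invert getViewVector(): ndc.x = pos.x / (pos.z * thfov),
    //ndc.y = pos.y * aspectratio / (pos.z * thfov)
    vec2 ndc = vec2(pos.x, pos.y * aspectratio) / (pos.z * thfov);
    return ndc * 0.5 + 0.5; //back to 0..1 texture coordinates
}

It gives the same result as multiplying by the matrix from getViewProjectionMatrix() and dividing by w (which equals pos.z there), just without the redundant matrix math.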
@jovlem:
Nope, this (probably? :wink: ) won't be in any version of the BGE/UPBGE/whatever, because for that it would have to be possible to apply it per material with custom inputs, and that's not possible, at least not in a way that is (1) performant and (2) clean.
We don't work together… because xD
Writing the shader was entertaining for me, and that's why I wrote it; I guess it's the same for SebastianMestre.

Don't think the UPBGE developers don't read your work :smiley:

They probably do :smiley: