Extracting world position from a depth texture?

Is there any way to do this? Because when I tried
this:
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html
and this:
https://www.opengl.org/discussion_boards/showthread.php/185571-Convert-to-Worldspace-from-depth-buffer
and this:
https://www.opengl.org/discussion_boards/showthread.php/179823-Help-reconstructing-pixel-position-from-depth
I got nothing!
Those guys get the world position (even if it's kind of weird), so why not me?

Is it just because the BGE doesn't support this, so I have to create my own alternative?
If so, can you please tell me how to do it?

My code:


uniform sampler2D bgl_DepthTexture;  // bound automatically by the BGE for 2D filters
uniform mat4 viewMatrixInverse;      // inverse view matrix; has to be supplied from Python

vec4 pos = vec4( gl_TexCoord[0].st * 2.0 - 1.0, texture2D(bgl_DepthTexture, gl_TexCoord[0].st).r * 2.0 - 1.0, 1.0 );
pos = viewMatrixInverse * gl_ProjectionMatrixInverse * pos; // clip -> view -> world (note: in a 2D filter, gl_ProjectionMatrixInverse is not guaranteed to match the scene camera)
pos.xyz /= pos.w; // perspective divide
gl_FragColor = pos;

For something that seems relatively simple to do, I have found that reconstructing position from depth can be surprisingly difficult to implement. First of all, there is nothing in Blender that should prevent you from doing this. A couple of things to keep in mind: using the inverse projection matrix does not work well with an orthographic camera, and Blender's depth buffer is non-linear (as is common for depth buffers).
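
For reference, here is a minimal sketch of the whole reconstruction as a 2D filter fragment shader. It assumes a perspective camera and that the inverse of the combined view-projection matrix is supplied as a uniform from Python; the name viewProjectionMatrixInverse is illustrative, not something the BGE provides on its own:

uniform sampler2D bgl_DepthTexture;
uniform mat4 viewProjectionMatrixInverse; // assumed: inverse(projection * view), sent from Python

void main()
{
    vec2 uv = gl_TexCoord[0].st;
    float depth = texture2D(bgl_DepthTexture, uv).r;

    // rebuild the fragment's position in normalized device coordinates
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);

    // back-project through the inverse view-projection
    vec4 world = viewProjectionMatrixInverse * ndc;
    world /= world.w; // the divide by w is where the depth buffer's non-linearity is undone

    gl_FragColor = vec4(world.xyz, 1.0);
}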

I have found the articles on Matt Pettineo's blog to be helpful any time I need to do this:



Is this for games like Final Fantasy 8, etc.?

Never mind, it looks like BGE 2D filter GLSL can only import uniform values from the owner object's game properties.
For example, if you want to pass a mat4 to your 2D filter, you have to split it into 16 floats and store each piece as a separate property,
like:


# world -> clip transform: the projection has to be applied last, so it comes
# first in the multiplication (the right-hand matrix acts on a point first)
cameraMatrix = camera.projection_matrix * camera.modelview_matrix

# transpose so that the row-by-row split below feeds the column-major
# GLSL mat4 constructor in the right order
cameraMatrix.transpose()

# store each of the 16 floats as a game property, w0 .. w15
for i in range(16):
    own['w' + str(i)] = cameraMatrix[i // 4][i % 4]

and in the GLSL, just combine those pieces back into a single mat4:


uniform float w0, w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, w11, w12, w13, w14, w15;
// the mat4 constructor consumes its arguments in column-major order,
// which is why the matrix was transposed on the Python side
mat4 modelViewProjectionMatrix = mat4(w0, w1, w2, w3, w4, w5, w6, w7,
                                      w8, w9, w10, w11, w12, w13, w14, w15);
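
One caveat if you want to use this for the reconstruction above: the shader needs the inverse of the camera matrix, and the old GLSL used by BGE 2D filters has no inverse() function. A small sketch of doing the inversion on the Python side instead, reusing the same w0..w15 property trick (assumes the script runs on the object that owns the filter actuator):

import bge

cont = bge.logic.getCurrentController()
own = cont.owner
camera = bge.logic.getCurrentScene().active_camera

# inverse view-projection, transposed for the column-major mat4 constructor
invCameraMatrix = (camera.projection_matrix * camera.modelview_matrix).inverted().transposed()

for i in range(16):
    own['w' + str(i)] = invCameraMatrix[i // 4][i % 4]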

Got this idea from Martins :P