How does the "Reflection" texture work in the BGE?

Reason I’m posting in the Game Engine thread: Even though it’s texture related, I’m intending to implement this feature into my “Game” engine, based on what’s in Blender.

CALLING ALL OPENGL / GLSL GURUS (Coders) :stuck_out_tongue:

Hey guys,

I’m trying to get my head around how the “reflection” texture works in Blender, and how to calculate the same results in GLSL/OGL.

So I first looked up “Sphere Mapping”, because “Reflection Mapping” seems to refer to cube maps. Blender can use a single sphere texture without taking any light source into consideration, and display that sphere map smoothly along a surface - somehow based on nothing but the camera position (I think?). That’s what I want to do, in GLSL.

I followed:
http://www.ozone3d.net/tutorials/glsl_texturing_p04.php (mid page, Specular Highlights with Sphere Map)

The results:

Artifacts on side: https://www.youtube.com/watch?v=hx66_xWhVu4

Correct on the side but now artifact has moved to top: https://www.youtube.com/watch?v=MncCOyGuaOs

I’m doing everything in the vertex shader.

Please, could anyone help by looking into the calculations? You might be able to catch what I’m not seeing, as it really isn’t even a big calculation…

Thank you in advance guys

It was just pointed out on Stack Exchange that you need to perform the operation per pixel rather than per vertex, since the interpolation is not linear. Since you are doing this for your game engine, you need to find a good balance.
The denser the mesh gets, the more the error is hidden. Depending on the use case, this can work sufficiently well. If you are aiming for high-end devices with planned usage of tessellation, this approach alone might be good enough. If tessellation is not an option, it becomes more complicated, of course; it might still be usable, but the context becomes a lot more relevant.

Don’t do it in the vertex shader, use the fragment shader for this.

This is what Blender does (taken from the source code):


// Reflect the view vector about the surface normal.
void texco_refl(vec3 vn, vec3 view, out vec3 ref)
{
    ref = view - 2.0 * dot(vn, view) * vn;
}

vn = vertex normal
view = view vector (the camera-space position of the point, pointing from the camera towards the surface)
ref = resulting texture coordinate
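
Side note: that expression is exactly what GLSL’s built-in reflect(view, vn) computes, so the two are interchangeable.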

You can also get a similar result by using a normalized normal vector as texture coordinate.

Awesomeness! Thanks for the comments guys, I’ll be looking into this more extensively tonight then. I have tried moving the calculation to the fragment shader, but it gave me literally the exact same result… makes me feel like I messed something up at the time :confused:

So I’ve experimented a bit, submitting different values, still no luck. Here’s what I’m doing, stripped to the relevant bits. It really is just a few lines of code in the shaders, so if there’s a mistake, it has to be somewhere below:

Vertex Shader:

// FTM = fullTransformMatrix (model matrix), V = view matrix
vec4 worldPosition = FTM * vec4(vertexPosition_modelspace, 1.0);
vec3 surfaceNormal = (FTM * vec4(vertexNormal_modelspace, 0.0)).xyz; // world space

// Send values to FragShader (both vec3)
VN = surfaceNormal;               // world-space normal
VIEW = (V * worldPosition).xyz;   // view-space position of the vertex

Fragment Shader:

// Normalize the values taken from the vertex shader:
vec3 vn = normalize(VN);
vec3 view = normalize(VIEW);

vec3 ref_txcoord = view - 2.0 * dot(vn, view) * vn;

// Have to divide by 4, otherwise the tiling is at the wrong scale
// (texture displays too small, repeating too many times)
out_color = texture(reflectionSampler, ref_txcoord / 4.0);

Any ideas?

I must divide the tex-coord by 4 to get a “roughly similar” look to what I get in the Blender window. Not going to lie, it’s still very bad, but if I don’t, the texture appears extremely small, repeating a million times. Circles everywhere.
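
For reference, the tiling probably comes from skipping the sphere-map projection step: the 3D reflection vector still has to be mapped into a 2D UV in [0, 1]. A minimal sketch of the classic GL_SPHERE_MAP formula, assuming vn and view are both in view space and reusing reflectionSampler and out_color from the snippets above:

vec3 r = view - 2.0 * dot(vn, view) * vn;  // same as reflect(view, vn)
float m = 2.0 * sqrt(r.x * r.x + r.y * r.y + (r.z + 1.0) * (r.z + 1.0));
vec2 sphere_uv = r.xy / m + 0.5;           // map into [0, 1] UV space
out_color = texture(reflectionSampler, sphere_uv);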

I also tried submitting the camera world position (a vec3) into the fragment shader, normalizing it and using it in the equation, but that gave very different results depending on where my camera was in the world, the whole object getting darker as I moved away from it. Not gonna lie, it’s still very “foggy” what’s going on.
Are you supposed to put in the camera position relative to the object instead of the world position, or…?

Nevertheless, the weird artifacts on the sides seem to have gone. Though it still only looks “kinda similar”, not quite the same…

After 3 days of being stuck on this… aarrgggghhh. Not gonna lie, I didn’t think it would be this difficult :smiley:

Why don’t you just use a normalized normal vector as texture coordinate?

Hey, the normalised normal vector seems OK when modifying the fullTransformMatrix, but not when the camera moves: when only the camera moves (not the object), the texture just stands still. It does work with rotating objects even if the camera is still. Is that what’s supposed to happen?
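
That is expected if the normal is in world space: it doesn’t change when only the camera moves, so a lookup based on it can’t react to the camera. Transforming the normal into view space makes it camera-dependent. A minimal sketch for the vertex shader, reusing V and surfaceNormal from the snippets above (fine for rotation and uniform scale; otherwise use the inverse-transpose normal matrix):

VN = (V * vec4(surfaceNormal, 0.0)).xyz;  // view-space normal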

The best result I’ve got so far was with the “Blender reflection calculation”: no artifacts, and it responded to both rotations and camera position changes. It just had insanely small tiling… in Blender it looks like the texture is not repeated at all, but instead stretched over the model based on the view.

For me it tiles many, many times.


Results of using the Blender method without division

[EDIT]

Is it possible that I’m putting the wrong data into “VIEW”? That’s all I can think of…

View = normalize(eyePosition - worldPosition)
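
Two things worth checking there: (V * worldPosition).xyz points from the camera towards the surface, while normalize(eyePosition - worldPosition) points from the surface back to the camera, which negates the reflection vector. Also, as written above, VN is in world space while VIEW is in view space, so the dot product mixes spaces. A sketch of one consistent convention (everything in view space, matching Blender’s texco_refl):

VN = (V * vec4(surfaceNormal, 0.0)).xyz;  // view-space normal
VIEW = (V * worldPosition).xyz;           // camera -> surface, view space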

Sphere map:
https://youtu.be/f9T_8Y98f6o

All maps together:
https://youtu.be/pyXynwBDP68

After testing from various angles and rotations, it seems to be fixed! Even though I ended up using the original method, with a bit of fiddling around the artifacts are now gone! And it looks almost exactly the same as in Blender, which is the most important thing… so… happy!

Thanks guys, your ideas helped!

Let me know what you think of the results!

Hey, thanks Twister for this code. I was wondering how the reflection feature works some time ago. For those of you who would like to have it in the node editor, here is a simple setup. The one on the left is from the node editor and the one on the right is from a Blender material with the texture set to Reflection.


I’m glad I could help!
So it’s your own engine huh? Then I’m not alone :smiley:

It’s coming along to be one indeed. The majority of the data management is now there, plus Bullet physics has been integrated with a basic scenario for testing :slight_smile: I just want to implement a few more things I really liked in Blender; once all the visuals are there, I can move on to setting up the batched forward renderer, because right now it’s a draw call per renderable… not exactly the most efficient way :confused:

[EDIT]

Do you by any chance know if Blender uses grouping for rendering, or is it a draw call per object?

Cool. Mine’s 2D though; yours looks pretty damn cool.

Oh, that’s beyond my knowledge. But it’s probably draw call per object.

Thank you! It’s no UE4 (haha) but it’s getting close to the BGE, so I’m well happy :slight_smile:

Are you working with OpenGL as well? How is it going?

Also, I just looked into MatCap implementation and it looks about 90% the same as the reflection code… interesting, I’ll be looking into that next!
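
For reference, a typical MatCap lookup really is almost the same code, minus the reflection step: remap the view-space normal’s xy into [0, 1]. A minimal sketch (matcapSampler is a hypothetical uniform; VN is assumed to be a view-space normal as above):

vec3 n = normalize(VN);             // view-space normal
vec2 matcap_uv = n.xy * 0.5 + 0.5;  // remap [-1, 1] -> [0, 1]
out_color = texture(matcapSampler, matcap_uv);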

Where did you find the BGE source, by the way? Like, the official one. On GitHub there seem to be a few different versions, and if I remember well, none of them were from an official Blender source?

Wouldn’t mind looking into the animation system implementation.

You can find an unofficial repository here. It’s UPBGE, the improved BGE.
You can also join #upbgecoders on Freenode IRC chat to ask questions or just hang out :slight_smile:

> Are you working with OpenGL as well? How is it going?

Yes, it’s working fine so far, no problems. https://github.com/DCubix/Sigma-Engine

That’s great! I’m glad it’s so far so good! Love working with OGL myself, just can’t wait to see what the Vulkan API’s gonna be about :stuck_out_tongue:

AFAIK, the Vulkan API is focused on low-level control, nothing like OpenGL.

DerpGoose, he wanted the reflection formula for his engine. Node setups won’t work :stuck_out_tongue: