First, I am talking about a MatCap (Material Capture) shader implemented with nodes, not the matcap option in the properties panel. While they do roughly the same thing, it's the node setup I'm asking about.
Now, I don't know Blender nodes, but I know a bit of GLSL, and a matcap shader is so simple I figured it would be the perfect little shader to work out in nodes as a learning exercise. And it works… but I don't understand how or why.
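For reference, the core of a GLSL matcap shader is just a remap of the view-space normal into texture coordinates. A minimal sketch (the uniform/varying names here are my own, not from any particular codebase):

```glsl
uniform sampler2D matcapTex;   // the captured material image
varying vec3 vViewNormal;      // interpolated view-space normal

void main() {
    vec3 n = normalize(vViewNormal);
    // remap n.xy from [-1, 1] into [0, 1] for the texture lookup
    vec2 uv = n.xy * 0.5 + 0.5;
    gl_FragColor = texture2D(matcapTex, uv);
}
```

That `* 0.5 + 0.5` remap is the step I expected to see reflected somewhere in the node tree.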
I start with a Material node and feed its Normal output into a Mapping node, the Mapping node's Vector output into a Texture node, and the Texture node's Color output into the Output node. Pretty straightforward, but I've attached an image if you're having trouble keeping up.
Now for the thing that has me confused: the correct result is only obtained by scaling X and Y by -1 in the Mapping node (or, equivalently, rotating 180° on Z?). The problem, to my understanding, is that the normal components are in the range -1 to 1, while the UV needs to be in the range 0 to 1.
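To put my confusion in GLSL terms, here is the difference between the remap I expected the node tree to need and what the Mapping node is actually doing (purely illustrative, assuming `n` is the view-space normal):

```glsl
// What I expected to be necessary: scale and offset [-1, 1] into [0, 1]
vec2 uvExpected = n.xy * 0.5 + 0.5;

// What the node setup actually does: just a sign flip, no 0.5 scale/offset
vec2 uvNodes = n.xy * vec2(-1.0, -1.0);
```

There is no node doing the scale-by-0.5 and offset-by-0.5 step, yet the result comes out correct.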
How is that node setup extracting a usable (correct) UV from the normal?
In the attached image, my OpenGL GLSL render is on the right and Blender's is on the left. The results look near pixel-perfect, but the method used to get there seems completely backwards.