Inaccurate normal map output from compositor nodes

Hello,

I’m trying to use compositing nodes to render out the normals for a set of scenes I have, and I want to render an output that will let me recover the exact normals, i.e. three channels of values for the x, y, and z normal components in the range -1 to 1, with 0 = no normal component in a given direction.

Since common image formats won’t store negative values, I have been trying to use mysterious Blender math to map the output into the range 0-1 (or 0-2, or 0-255), render that out to an image (.png or .hdr), and then convert back to -1 to 1 in Matlab. But I haven’t yet been able to map the normals correctly.
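The round-trip I’m after can be sketched in plain Python (the decode step is what I’d do in Matlab; the function names are just for illustration):

```python
# Encode: map a normal component from [-1, 1] into [0, 1] for image
# storage (the compositor equivalent: multiply by 0.5, then add 0.5).
def encode(n):
    return n * 0.5 + 0.5

# Decode: recover [-1, 1] from the stored [0, 1] value
# (in Matlab this would be: img * 2 - 1).
def decode(v):
    return v * 2.0 - 1.0

# A zero normal component should encode to exactly 0.5, i.e. mid-gray
# (pixel value ~128 in an 8-bit image).
print(encode(0.0))                                           # 0.5
print(decode(encode(-1.0)), decode(encode(0.0)), decode(encode(1.0)))  # -1.0 0.0 1.0
```

If the mid-gray band doesn’t land where a zero normal should be, something upstream of this mapping is off.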

I tried this setup:

http://twentyfourbecks.wordpress.com/2009/08/30/153/

… and the normals seem to be skewed toward +1 in both the x and y dimensions. That is, normals that should have a value of 0 (or 128, or 1) end up with higher values. Without access to some output of the original normal data, I can’t figure out why.

Here is my test setup. The node setup from the web site is on the bottom; another version that I tried is on top (more on that below). The “multiply” and “add” nodes at the bottom both use a color of [.5, .5, .5] (mid-gray).

http://geon.usc.edu/~mark/Images/NormalRenderTest_ScrShot.png

… and here is my evidence that the normals aren’t mapping as expected:
(The gray band - pixel intensity = 128 - should show up right at the center of the sphere, but it doesn’t!)
http://geon.usc.edu/~mark/Images/NormalRenderTest_01_nor1cMappedX.png
http://geon.usc.edu/~mark/Images/NormalRenderTest_01_nor1cMappedY.png

My alternative node setup uses “NORMALIZE” nodes, and seems to get me values pretty close to the (true?) values.

http://geon.usc.edu/~mark/Images/NormalRenderTest_01_nor2cMappedX.png
http://geon.usc.edu/~mark/Images/NormalRenderTest_01_nor2cMappedY.png

However, the “normalize” nodes make me nervous, because I’m not sure what they’re doing. What if, for example, I have a scene without any normals pointing (close to) straight left? Will the normal vectors be “normalized” properly? I doubt it. And even as I have implemented this, the normal vectors don’t map exactly from 0 to 2 (they map from .0039 to 2.0078). Why the offset?

I realize this is a bit of a fussy question, but I’m doing computations on these normals that are for a research project, so “looking about right” isn’t quite good enough.

Any ideas how I can better implement this are MUCH appreciated. Thanks!

Mark

A demo of why my second solution won’t work in all cases. The following are normal maps rendered from the same scene with the same node setup (the one with the normalize nodes from above). The only difference between the scenes is the presence of the ball (the camera position is the same). Without the sphere to “calibrate” the maxima and minima for the normals, you get very different normals out!

With Ball:
http://geon.usc.edu/~mark/Images/NormalRenderTest_01_nor2sc2cMappedX.png
No Ball:
http://geon.usc.edu/~mark/Images/NormalRenderTest_01_nor2sc3cMappedX.png

I’m not entirely sure what the normalize node does, since it has a scalar as both input and output. It would make more sense to me if it were a vector. The documentation says it works on vectors, however: http://wiki.blender.org/index.php/Doc:2.6/Manual/Composite_Nodes/Types/Vector

So I’m just wondering: since you use three normalize nodes, one for each channel, what is the expected output?

I had to normalize some vectors a while ago, and I couldn’t get it to work until I made my own normalize node group using math nodes.

So I guess, if you are actually trying to scale the normal vector to unit length, you might not get what you want with that setup.

For the record, here is my normalize setup:


I’d be glad if someone could shed some light upon what the normalize node is really supposed to do.

OK, I’ve basically solved it. I now have a much simpler setup, with no normalize nodes (see below).

I used the normalize nodes in the first place because there is some weirdness with the values not scaling exactly to the range -1 to 1 (or, as I’ve manipulated them to be, 0 to 2). Some values come out to be ~2.02. (Why? Were these normals greater than 1 to begin with?) This is obviously a much smaller problem than the ones created by the normalize nodes, but I was puzzled by it.

teppic: I think the normalize node scales all values in the image by the minimum and maximum of the data, so whatever the minimum was before is now 0, and whatever the maximum was before is now 1.
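If that’s what it does, a quick sketch shows why the output is scene-dependent (plain Python, with made-up sample values standing in for one channel of the normal pass):

```python
# Min-max normalization: whatever the minimum of the data was maps to
# 0, and the maximum maps to 1. This is my guess at what the compositor
# Normalize node does per channel.
def normalize_minmax(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Scene with the sphere: X-normals span the full -1..1 range.
with_ball = [-1.0, -0.5, 0.0, 0.5, 1.0]
# Same view without the sphere: nothing points far left.
no_ball = [-0.2, 0.0, 0.5, 1.0]

print(normalize_minmax(with_ball))  # a true normal of 0 maps to 0.5
print(normalize_minmax(no_ball))    # the same 0 now maps to ~0.167
```

So the mapping depends entirely on what extremes happen to be visible, which matches the with-ball / no-ball comparison above.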

Working node setup:
http://geon.usc.edu/~mark/Images/NormalRenderTest_WorkingNodes.png

Showing that it’s working:
http://geon.usc.edu/~mark/Images/NormalRenderTest_01_nor3sc2cMappedXYZ.png
http://geon.usc.edu/~mark/Images/NormalRenderTest_01_nor3sc3cMappedXYZ.png

My understanding is that the normalize node takes a vector and scales it to unit length (length == 1); not sure what the actual math name for this op is, though.

So… if you split out the different components and plug them into a vector socket, Blender will do its best to turn them into normalized vectors, which probably isn’t quite what you want.

Yes, that is what the documentation says, but to normalize a vector you need access to all of its components. Using the Pythagorean theorem you can calculate the length of the vector, and then divide each component by that length. There is no way the normalize node can do that, since it only takes a single float value as input.
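For reference, a true vector normalize, the operation the documentation describes, looks like this in plain Python (a sketch, not Blender’s actual code):

```python
import math

# Scale a 3D vector to unit length: divide each component by the
# vector's Euclidean length (Pythagoras in 3D). Note that this needs
# all three components at once, which a single-float node can't see.
def normalize_vector(x, y, z):
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

print(normalize_vector(3.0, 0.0, 4.0))  # (0.6, 0.0, 0.8)
```

This is exactly the node group I ended up building out of math nodes.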

I guess this is a bit off topic now, but I think either the documentation or the implementation needs to be revised.

I guess it’s meant to normalize an image (since it maps pixel values into the range 0.0 to 1.0), not to normalize vectors.

Yes, that might be so. But then both the documentation and the category are misleading. :)