Images that are used for texture normals

Hi. I am curious about images that are used to affect normals on a texture.

I’ve noticed that most of them are generally blue.

[image: NRM (normal map example)]

Is there any particular reason for that? I would think that black and white would be more responsive.

Thanks.

(0.5, 0.5, 1) blue means the normal is aligned with the interpolated face normal: no alteration is applied. Since most normals shouldn’t represent strong deviations from the surface, the colors tend to stay close to that value.
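To make that concrete, here is a small illustrative sketch (not anyone’s production code) of how a renderer decodes a normal-map color back into a tangent-space vector:

```python
def decode_normal(color):
    """Map an RGB color in the 0..1 range to a tangent-space vector in -1..1."""
    return tuple(2.0 * c - 1.0 for c in color)

# The "flat" lilac color decodes to (0, 0, 1): straight along the face normal.
print(decode_normal((0.5, 0.5, 1.0)))
```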


Thanks bandages.

So, it’s some kind of standard I guess. Same for all programs, or just Blender?

I must do more research.

Uhh, kinda. (0.5, 0.5, 1) is a standard, yes, the same for everything. There is a difference in how OpenGL and DirectX applications read normal maps: they interpret the green channel as a vector component pointing in opposite directions.

edit: should maybe add, your pic is a “tangent space” normal map. There are also object space normal maps, which don’t have the same bias to that lilac color.
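As an aside on that OpenGL/DirectX difference: converting a map from one convention to the other is just a matter of inverting the green channel. A quick sketch (assuming a float image array in the 0..1 range, not any particular tool’s API):

```python
import numpy as np

def flip_green(normal_map):
    """Convert a tangent-space normal map between the OpenGL (Y+) and DirectX (Y-)
    conventions by inverting the green channel. Expects an HxWx3 array in 0..1."""
    out = np.array(normal_map, dtype=np.float32, copy=True)
    out[..., 1] = 1.0 - out[..., 1]
    return out
```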

Thanks bandages, I’ve done some research, downloaded a few normal maps, and applied them to a plane just to learn the basics.

However… are you familiar with MakeHuman? I’ve created a few characters with it.

Here is a piece of one skin texture for an old lady:

[image: old_lightskinned_female_diffuse3 (skin diffuse texture)]

In the past, using this file - not a normals file - I have set the “Normals” setting to 0.3–0.6 and noticed that the wrinkles and general skin texture do become a little more believable.

Any thoughts?

Thanks

My thoughts are, it’s an utterly awful idea to do that. If you really want to try to use a diffuse image to create apparent depth, run it into a bump map node instead. (It’s still not good, but using a diffuse image as a normal map is such a terribly bad idea that almost anything would be better.)
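If it helps, here is a minimal sketch of that suggestion as a Blender Python snippet; the object and image names are placeholders for whatever you actually have, and in practice you would just wire the Bump node up by hand in the shader editor:

```python
import bpy

# Placeholder names -- substitute your own object and image.
mat = bpy.data.objects["OldLady"].active_material
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images["old_lightskinned_female_diffuse3"]

bump = nodes.new("ShaderNodeBump")
bump.inputs["Strength"].default_value = 0.3  # keep the effect subtle

# Feed the diffuse image in as a height guess, and the bump result into the shader's normal.
bsdf = nodes["Principled BSDF"]
links.new(tex.outputs["Color"], bump.inputs["Height"])
links.new(bump.outputs["Normal"], bsdf.inputs["Normal"])
```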


Again, bandages, thank you for your input. You seem quite convinced. I’ll try to avoid it.

Kind of like comparing “South Park” to “The Mona Lisa”, sometimes beauty is in the eye of the beholder.

Just an illustration of how geometry is represented in a tangent-space normal map. Based on this, imagine where the normals would be pointing if you used a regular image as an input.

Let’s check, shall we?

Here are two renders. One has a normal map, one uses the diffuse image.

Which is which?

(Good luck. Sadly, no prizes are available at this time)

Neither has a normal map that properly represents the geometry visible in the image. The question is how the normal map was made. Since it seems to be a photo, the normal map is derived from the image, a method that won’t give you completely correct normals anyway.

There’s also lighting information (both diffuse and specular, and the latter is view dependent, making it worse) baked into the texture, something you should minimize with physically based rendering.

Edit: Oh wow, I totally didn’t even notice you used the exact same picture twice. You might want to at least upload multiple pictures next time so they don’t use the same url.

bandages, see below…sorry, above

I don’t care if you use the diffuse as a normal. Do what you want to do. It’s your image. You asked me my thoughts, I gave them, the rest is up to you.

If you want to see what your normal map does vs. your diffuse-as-normal map (and maybe compare with a diffuse-as-bump version), load a 0 roughness glossy shader on those objects and an HDRI world to give you something to reflect, and take a look. That’s how you can most clearly see the differences between the various ways of normal mapping an object.
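For reference, a rough Blender Python sketch of that test setup; the object name and HDRI path are placeholders, and doing it by hand in the shader editor is just as quick:

```python
import bpy

# Placeholder object name -- use whichever object carries the normal map you want to inspect.
obj = bpy.data.objects["NormalMapTest"]

# Perfectly sharp glossy material, so the reflections reveal the shading normals.
mat = bpy.data.materials.new("MirrorCheck")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()
glossy = nodes.new("ShaderNodeBsdfGlossy")
glossy.inputs["Roughness"].default_value = 0.0
out = nodes.new("ShaderNodeOutputMaterial")
links.new(glossy.outputs["BSDF"], out.inputs["Surface"])
obj.data.materials.clear()
obj.data.materials.append(mat)

# World with an HDRI environment, so there is something interesting to reflect.
world = bpy.data.worlds.new("ReflectWorld")
world.use_nodes = True
wnodes, wlinks = world.node_tree.nodes, world.node_tree.links
env = wnodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("/path/to/your.hdr")  # placeholder path
wlinks.new(env.outputs["Color"], wnodes["Background"].inputs["Color"])
bpy.context.scene.world = world
```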

A normal map is an image representation of a surface’s normal vectors. Each color channel stands in for an axis, storing XYZ information as RGB. While experimenting is good, it won’t take you far without understanding what’s behind it. I highly encourage you to read about it (1 2).

Now, keep in mind you’re likely talking to people who make a living out of 3D, so… they probably know what they’re talking about given other people are willing to throw money at them in exchange for their expertise and everything. What’s up with that attempted gotcha? That was needlessly hostile if not indistinguishable from outright trolling.

“attempted gotcha”?

Attempted wit… and a legit comparison

Sorry

Using a color map as a normal map produces some kind of result, yes… but it’s meaningless.

A simple illustration of this nonsense: the renderer asks a point on the surface which direction the surface is oriented at that point, and the shader responds “Cyan”. Since the color ‘cyan’ also uses 3 components (RGB vs XYZ), the renderer takes that response as a valid answer and ‘understands’ that the direction points somewhere between N and V…

Does a vector [0.0, 1.0, 1.0] have anything to do with the ‘cyan’ color (also [0.0, 1.0, 1.0])? No.
They use the same ‘characters/signs/letters’, but they have totally different meanings.
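A quick sketch of what that “cyan” answer actually decodes to, assuming the standard 2*C - 1 mapping discussed earlier in the thread:

```python
# What does cyan mean if the renderer reads it as a tangent-space normal?
cyan = (0.0, 1.0, 1.0)
n = tuple(2.0 * c - 1.0 for c in cyan)           # (-1.0, 1.0, 1.0)
length = sum(v * v for v in n) ** 0.5
print(tuple(round(v / length, 3) for v in n))    # (-0.577, 0.577, 0.577): a steeply tilted normal
```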

About the difference between normal maps (blueish) and bump maps (grayscale):
Calculating a normal based on a bump map requires at least 3 texture lookups in memory, so that the derivatives in the U and V directions can be calculated.

Just reading one pixel of 0.75 light grey doesn’t tell you whether the surface is sinking or rising… you need to check whether the neighbouring pixels have different values, and which direction results from their derivatives.
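A small sketch of that idea, converting a grayscale height map into a tangent-space normal map with central differences (plain numpy, not what any particular renderer does internally):

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Turn a grayscale height map (HxW array, 0..1) into a tangent-space normal map.
    Each output pixel needs its left/right and up/down neighbours: the 'extra lookups'."""
    du = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * 0.5
    dv = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * 0.5
    nx, ny, nz = -du * strength, -dv * strength, np.ones_like(height)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Re-encode into the usual 0..1 colour range: flat areas come out as (0.5, 0.5, 1).
    return n * 0.5 + 0.5
```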

This isn’t much of a problem now, but in the early days of 3D computer games, sampling a texture three times and then still doing the derivative calculation was very expensive.

The solution appeared in the form of normal maps, which, instead of storing just some height, store the pre-calculated derivatives as a normal in U, V and W; that needs just one memory lookup, with the rest kept in the fragment processor’s registers.

Nowadays, this has little impact in most scenes (especially for offline rendering), but games still prefer normal maps, as they perform faster.

OH CRAP!

Sooo sorry!

Fixed