Generating tangent-space normal maps directly in Blender

I mean somebody could actually PROGRAM it.

yes, that’s what I supposed, too…
:rolleyes:

S.

Really great node stuff! It may also open some interesting ways to get the displacement map of a model (sculpted with Blender’s sculpt tools).

maybe…

Daniel

Well done! Nice trick until we get a real tracing algorithm for using differently topologized meshes and a UV-free high-poly. Really, thumbs up! Nodes amaze me every day :slight_smile:

Hi Zelouille,

great node setup. Please post your nodes to the "Blenderart Special Project: Material Node set-up ‘Cookbook’ ".

Good, that’s what you had to do ;).
I asked because I thought it was a little too light.

Thank you :).
I’ve looked into it, but it seems you can’t run the bake through a Python script.

If someone has an idea for automating that, just speak up! :p

OK, I didn’t understand.
Well, I’m sure it’s not so complex to program. But I have no programming skills, sorry. I’ll try to talk to some French developers…

I’m trying to find another temporary solution to bake the displacement map…

Thank you.

I don’t think my solution needs to be there. It’s not really something you can “cook” each time you need it. The whole setup counts 58 nodes. You also need a specific texture, specific shader options, and to bake 3 maps…


PS: I just posted a new version, 2007.10.24. Nothing new, but the Bake “Normals” button is now checked by default.

I think I’ve managed to bake my nmap_vert and nmap_pix, but I’m at a loss trying to understand what nmap_V is, or where it comes from, like what’s the V texture one has to input.
And another question: Would there be a way to modify the noodle so that it bakes a model space tangent map? We (vegastrike) presently have a rudimentary shader for normal mapping that assumes the tangent to point towards the right in the texture, which doesn’t work too well. We’d like to put a per-vertex tangent into the mesh, as a vertex color; but programming that is pretty difficult. If it were easy to modify this noodle to bake a tangent map, then all we’d have to do is make our mesh conversion tool read the tangents from the texture.
TIA

Well, nmap_V saves the Y axis (I call it V) of the UV map for each face, in global space (camera space).
It is used to determine the orientation of the faces, because we need to know at least two axes. The normal vector alone can’t give us the orientation of a face.

The V texture is just flat green, RGB = (0.5, 1, 0.5), which represents the vector XYZ = (0, 1, 0). It is used to identify the Y axis of the UV map.
This texture is applied in “Nmap TS” mode, so each face is virtually rotated to face the Y axis of its own UV coordinates. That’s why we get nmap_V when baking.

I know it’s confusing; I’ve never succeeded in explaining this, even in French :D.
So, here are some drawings (taken from an MSN conversation):

fig1: the UV map with U and V axes.
fig2: the cube with seams.
fig3: the U, V and normal vectors for each face.

I hope this will help you understand.

I think I didn’t understand this question… but I will try to answer it. Tell me if I’m wrong.

There are three types of space for normal maps:

  • camera space,
  • object space,
  • tangent space.

I think you want an object-space normal map. I’ve never used that sort, but in my opinion it is the same as camera space, with the object’s orientation as the axis reference instead of the camera’s.
So, by rotating the camera correctly, you should be able to bake this map directly.

First, try clearing the camera’s rotation (Alt+R); if that’s not correct, try some orthogonal orientations.
If an axis needs to be inverted, you only have to invert the colors of the generated map.

If I didn’t understand you, just show me a map that is correct for your shader, or try to do some sketches.
A picture’s worth a thousand words.

Even with the drawings, I still don’t seem to get it. In a tangent-space normal map the Y axis is the blue channel. I’m sure that’s not what you’re talking about. OK: nmap_pix and nmap_vert are the model-space normals for the fine and coarse meshes, so I suspect that nmap_V is then the tangent in model space. If so, this texture might be precisely what I need, regarding the second question.

The V texture is just flat green, RGB = (0.5, 1, 0.5), which represents the vector XYZ = (0, 1, 0). It is used to identify the Y axis of the UV map.
This is where I’m completely lost. If it’s flat green, then it carries no information.

This texture is applied in “Nmap TS” mode, so each face is virtually rotated to face the Y axis of its own UV coordinates. That’s why we get nmap_V when baking.
Precisely, the most important question is how I produce this nmap_V. Is this something one bakes? All the original instructions say is to create a texture named V and to turn on the button for tangent space, but not where it comes from.

I think I didn’t understand this question… but I will try to answer it. Tell me if I’m wrong.

There are three types of space for normal maps:

  • camera space,
  • object space,
  • tangent space.

I think you want an object-space normal map. I’ve never used that sort, but in my opinion it is the same as camera space, with the object’s orientation as the axis reference instead of the camera’s.
No, not at all; we use tangent-space normal maps, like (just about) everybody else. The problem with using tangent-space normal maps in a game, though, is that the GPU has no way of telling the rotation of faces on the texture.
This is not a problem for Blender’s renderer, which obviously deduces the tangent vectors on the fly; but it’s a problem in real-time, game graphics.
Either the information has to be passed in to the shader (e.g. a GLSL shader, i.e. the shader program you pass to the video card), or else the shader has to be able to deduce it or assume it.
Deducing it is pretty much impossible in a shader because the hardware in the GPU would need access from each vertex to its neighbors. This is a planned feature for future GPUs but it’s not there presently.
Guessing the rotation involves a rule of thumb, which is what we’re presently doing. Our game is a spaceship-combat game. We unwrap ships, taking special care that the forward direction of the ship is up in the UV unwrap. Our GLSL shader then assumes that the tangent always points to the right in the UV map. Thus, for example, if I were to take one of the islands in the UV edit window and turn it around 180 degrees, the result in-game would be that all prominences would look like depressions, and vice versa.
It sort of works, but not very well. Suppose you are unwrapping a piece of surface on a ship that is back-facing or forward-facing… Which direction do you rotate it? There is no “forward” direction to choose as “up”, because, for a front- or back-facing surface, the forward direction is perpendicular to it.
So, what we want to do is pass the tangent vector to the shader, rather than rely on a rule of thumb that doesn’t always work, and doesn’t work too well even when it does.
But passing the information has its complications…
Since the normal vector is routinely passed along in the mesh, one’s first thought would be to include one more vector in it, namely the “tangent vector”: the direction, in the mesh, that would point to the right (positive X) in the UV unwrap. And this is something that is often done.
There’s a problem, though: it would make our engine incompatible with Apple computers. I’m not familiar with the details, but for some reason Macs don’t provide room for an additional mesh vector. They do, however, allow passing a “vertex color”, so what we want to do is encode the tangent vector as a color and put it in the mesh as a vertex color.
Our final problem is that we don’t know how to compute this tangent vector. It’s actually pretty tricky.
So, I was hoping that, since your noodle’s algorithm already has to somehow know the direction of the tangent vector, it could be extended to encode it as a color and save it to a texture. All we’d then have to do is read the color off this texture, for each vertex, and transfer it to the mesh as vertex color.
Such a feature might be useful to others in our same situation, btw.
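For reference, the tangent computation described above is the same one most game engines use: for each triangle, solve for the model-space direction of increasing U from the edge vectors and their UV deltas. A minimal per-triangle sketch (my own illustration, not code from the noodle; it assumes non-degenerate UVs):

```python
# Standard per-triangle tangent from positions and UVs (a sketch).
# p0..p2: vertex positions (x, y, z); uv0..uv2: their UV coordinates.

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    # Edge vectors in model space
    e1 = [p1[i] - p0[i] for i in range(3)]
    e2 = [p2[i] - p0[i] for i in range(3)]
    # Matching deltas in UV space
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)  # assumes non-degenerate UVs
    # Tangent: the model-space direction of increasing U
    t = [(dv2 * e1[i] - dv1 * e2[i]) * r for i in range(3)]
    length = sum(c * c for c in t) ** 0.5
    t = [c / length for c in t]
    # Encode as a vertex colour with channel = component * 0.5 + 0.5
    colour = [c * 0.5 + 0.5 for c in t]
    return t, colour
```

Per-vertex tangents are then usually obtained by averaging (and renormalising) the tangents of the triangles sharing each vertex.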

Please try to edit your post instead of replying.
(Maybe an admin can delete chuck’s first two posts.)

Well, I think I understand this time. You want to bake the tangent vector in a global space.
It’s exactly what I do with the V_nmap, but with the other vector (the binormal).

To bake the U vector, here is what you have to do:

  1. Create a new image (1 px² should be enough) and fill it with the RGB color (1, 0.5, 0.5).
  2. Load this image in a new texture. Enable the Normal Map button.
  3. Assign this texture to your mesh with the Nor (negative) button enabled and the Nor slider at 1.00 in the Map To panel.
  4. Enable the Nmap TS button in the Shaders panel.
  5. Bake the new normals.

If you want per-face tangents, set your mesh to Solid.
If you want per-vertex tangents, set your mesh to Smooth.

(With a Solid mesh, you may want to use an EdgeSplit modifier to prevent some baking bugs.)

The reference space will depend on the orientation of the camera. Try with no rotation (Alt+R), or with the same orientation as the object.

You can invert the colors of the map if you want to reverse the space.
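Once such a map is baked, the conversion-tool side of the plan (reading a tangent back out for each vertex) could be sketched like this; the nearest-pixel sampler and the flat `pixels` list of (r, g, b) tuples are assumptions of mine, not part of the .blend:

```python
# Sample a baked tangent map at a vertex's UV and decode the colour
# back into a vector (or store the raw colour as a vertex colour).
# `pixels`: assumed flat row-major list of (r, g, b) tuples in [0, 1].

def sample_tangent(pixels, width, height, uv):
    u, v = uv
    # Nearest-pixel lookup; UV (0, 0) is the bottom-left corner
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    r, g, b = pixels[y * width + x]
    # Invert channels here instead, if the reference space is reversed
    return (r * 2.0 - 1.0, g * 2.0 - 1.0, b * 2.0 - 1.0)
```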


But I’m surprised that your GLSL doesn’t support tangent-space normal maps… I think you can upgrade your script.

Thanks a million, Zelouille; I had no idea that Blender could bake tangents and binormals. I made a little test and posted it here. I’d love to learn more about how it works. I’ll try the normal-map noodle next.

Sorry about the triple post; I thought I was editing my post. (Probably all the login problems I always have on this forum; whenever I’m editing a post and try to submit, it tells me I’m logged out; I log in and it still tells me I’m logged out…)

Hmmm… This doesn’t look right…
http://deeplayer.com/dan_w/WCUships/tangents/outsm.jpg
I’m doing something wrong, obviously, but I think I followed the instructions exactly. My feeling is that some step in the instructions is missing that some people know enough to fill in, but others, like me, don’t. (Well, there’s a lot of stuff in Blender I’ve never been able to figure out; for instance, I was just trying to bake the V texture at 2048 and tried about a dozen times, with Texture->New->2048… but it always came out at 1024, so I finally gave up and scaled the V texture up in Gimp. Three years working with Blender and I still can’t understand how the interface works…) Anyway, I started with a green 2048 texture, (0.5, 1.0, 0.5), made it a Normal Map, turned on Nmap TS, negated (yellow) Nor (right?), Nor at 1, and got this:
http://deeplayer.com/dan_w/WCUships/tangents/V.jpg
My other two textures look very similar to each other, as they should.
But somehow it’s not working…

EDIT:
BTW, it’s not a problem with GLSL; it’s a problem with bugs in the Mac’s OpenGL. At least that’s what I’ve been told: that for the sake of Mac compatibility we can’t pass tangents as vectors in the mesh, but must pass them as vertex colors.

So, here’s a test, having baked tangents as per your instructions:
http://deeplayer.com/dan_w/WCUships/tangents/tangent_test.jpg
This is the texture:
http://deeplayer.com/dan_w/WCUships/tangents/tangents.jpg
Now, the front of the ship is up in the texture, so I would expect flat areas on the bottom of the ship to be (1, 0.5, 0.5) red, top flat areas to be (0, 0.5, 0.5) blue-green (tangent pointing in the negative-X direction), and so on. Instead, measuring colors in Gimp, I get (0.88, 0.80, 0.42) on a fairly flat bottom area and (0.39, 0.48, 0.98) on a fairly flat top area, so I really don’t understand what’s going on. I checked colors in the binormal bake (V texture), and they don’t seem to make a whole lot of sense either… :-/

Where’s the documentation for Blender’s baking of tangents and binormals, btw?

This is the texture:

Your tangent map looks good, but I can’t verify without your .blend…
If you need the tangent for your GLSL shader, I think you should use the tangent of the low-res mesh.


Hmmm… This doesn’t look right…

The normal map you’re trying to generate with my node tree is bad. Did you try to use your tangent (U) map instead of the binormal (V)? You can’t do that without modifying the nodes.
So it’s simpler to use the V_map for this task.
Tell me where the “how to” is unclear.

If you still don’t succeed, try doing some tests with the default object in my .blend. It’s already multires, sculpted and unwrapped, and I will be better able to judge the results.


Where’s the documentation for Blender’s baking of tangents and binormals, btw?

I don’t know, but it’s quite logical… if you can rotate the normals with a tangent-space normal map, you can get tangents and binormals too.

There is just an example with the cube in my .blend:

U_map is baked using a red tangent-space normal map (RGB = 1, 0.5, 0.5),
V_map is baked using a green tangent-space normal map (RGB = 0.5, 1, 0.5). The latter is the map we use in the node tree.
Maybe it can help you verify your tests.

Good news, sort of; I got the binormals to bake right.

http://deeplayer.com/dan_w/WCUships/tangents/realVsm.jpg

I know it’s right because I got what appears to be a correct result from the noodle…

http://deeplayer.com/dan_w/WCUships/tangents/bettersm.jpg

The only snag now is that I got it right by replacing the cube in your file with my mesh and baking there; yet I don’t see what the difference is… Here’s my blend file:
http://deeplayer.com/dan_w/WCUships/tangents/test.blend
Everything appears to me to be set up identically, yet I keep getting wrong results from it…

UPDATE: Here’s the right tangents texture:

http://deeplayer.com/dan_w/WCUships/tangents/realUsm.jpg

The values are exactly right, now.

Got it to work by modifying your file again (adding a U texture); but I still can’t get it to work with my original file, and I still have no idea which setting is different… :-/