baking normals?

What is the difference between normal maps created with the Nvidia plug-in and those made with the bake normals option in Blender?

I’ve noticed that when I use the bake normals option in Blender with the space set to Object, the maps turn out really greenish, and when loaded into a texture channel of a new material they are really intense, so I have to set the Nor value really, really low. I’ve also tried setting the space to Tangent, but it just bakes to a solid blue image.

Any help would be appreciated.

Normal maps are RGB-based: red for X, green for Y, blue for Z. If your map is green, I’d guess you either have a lot of detail along the Y axis or normals pointing the wrong way. You don’t have to do anything special with baked normal maps (except save them); use them as you would any normal map for rendering.
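To make the RGB mapping concrete, here is a minimal sketch (function name is mine, not from any Blender API) of how a unit normal’s components get packed into 8-bit channels, each remapped from [-1, 1] to [0, 255]:

```python
def encode_normal(nx, ny, nz):
    """Map a unit normal's X/Y/Z components to R/G/B byte values."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in (nx, ny, nz))

# The "flat" straight-up normal becomes the familiar light blue:
print(encode_normal(0.0, 0.0, 1.0))   # (128, 128, 255)

# A normal tilted toward +Y picks up extra green, which is why a bake
# full of Y-leaning normals looks greenish:
print(encode_normal(0.0, 0.6, 0.8))
```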

Maybe you’re talking about the different options in the bake tab - tangent, object, world and camera.

If that’s the case, game engines almost exclusively use tangent space normal maps. They’re the only ones that work for moving and deformable objects. Just use tangent space.

Thanks for the help guys, but I don’t think it’s solved anything for me.

The Nvidia plugin is used to create normal maps from an image, if I’m not mistaken, whereas baking normal maps is used to make a low-poly mesh look like it has the same details as a higher-poly mesh. Like the posts above said, you want a tangent-space normal map, not anything else. With anything else, when the object moves the details will get all screwy. Hope that kinda makes sense.


Below is a normal map created with the bake normals option set to object space. From what I’ve gathered so far, these are not usable for moving and deformable objects, correct? So I will not be able to use these in the BGE? In what situation would I use this kind of normal map? And why are they different from tangent-space normals? And could anyone perhaps explain why they are a different color compared to standard normal maps?

http://img407.imageshack.us/img407/8080/normalviaobjectspacewu2.th.png

Here is how to do it. You need a high-res object and a low-res one, which will be getting the map. UV-map the low-res to a new image, select both with the low-res as the active object, then Bake → Normals with Selected to Active enabled and the space set to Tangent. Save the image.
http://img201.imageshack.us/img201/5426/normalza3.jpg

Info from http://wiki.blender.org/index.php/Manual/Render_Bake
Normals can now be baked in different spaces:

Camera space - The already existing method.
World space - Normals in world coordinates, dependent on object transformation and deformation.
Object space - Normals in object coordinates, independent of object transformation, but dependent on deformation.
Tangent space - Normals in tangent space coordinates, independent of object transformation and deformation. This is the new default, and the right choice in most cases, since then the normal map can be used for animated objects too.

edit: sorry for the huge image resolution.
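The difference between those spaces comes down to which axes the normal is measured against. A rough sketch (names T/B/N are my own shorthand for the per-polygon tangent, bitangent, and normal axes, not Blender API names): converting a normal into tangent space is just projecting it onto the polygon’s own axes with a dot product per axis.

```python
def dot(a, b):
    """Dot product of two 3-component vectors."""
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(n, T, B, N):
    """Express the normal n in the polygon's own T/B/N frame."""
    return (dot(n, T), dot(n, B), dot(n, N))

# For a face whose frame happens to line up with the object axes,
# the undisturbed face normal lands on pure +Z in tangent space,
# which bakes to the typical flat blue:
T, B, N = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(to_tangent_space((0, 0, 1), T, B, N))  # (0, 0, 1)
```

Because the T/B/N frame moves and deforms with each polygon, the stored values stay valid no matter how the object is transformed, which is why tangent space works for animated meshes.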

My best advice to you is to try it out. Apply your different normal maps to different planes and look at them. You don’t even need to start a game, I think; just enable GLSL and set your viewport to textured mode.

The explanation for why they look different is hard to make in simple terms.

The thing is that even though we can make out shapes in the normal map when viewed as an image, it’s not really meant to be looked at directly. It’s a clever way to store data that isn’t quite bitmap data in a bitmap. The reason why you want to do this is that graphics cards work very efficiently with images, so you can use them in games. A side benefit is that you can adjust them in Gimp or PS.

What the image really contains is precomputed data for light computation. The RGB channels represent different components of a normal vector. (Look up “surface normal” if that sounds strange). All this is technical, but you can essentially think of it as a 3D angle. The key point is that you need to decide which “master angle” to measure this relative to. It could be the global XYZ axes, the camera’s XYZ axes, the object’s own (local) XYZ axes, or each polygon’s own coordinate system.
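As a rough illustration of that “precomputed data for light computation” (names and the simple Lambert model here are mine, not taken from Blender), a shader effectively decodes each pixel back into a vector and dots it against the light direction:

```python
import math

def decode_normal(r, g, b):
    """Map R/G/B bytes back to components in [-1, 1] and renormalize."""
    v = [c / 255 * 2 - 1 for c in (r, g, b)]
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """Diffuse brightness: clamped dot product of normal and light."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# A flat "blue" pixel lit straight on is fully bright:
n = decode_normal(128, 128, 255)
print(round(lambert(n, (0.0, 0.0, 1.0)), 2))  # prints 1.0
```

The crucial detail is that the decoded vector only means anything relative to the chosen “master angle” (world, camera, object, or tangent space), which is what the bake options select.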

That last one is tangent space, and it’s useful for something like a character. That way he can, say, bend his arm without the light looking wrong. When deformed, his arm would be at different angles to the world axes, the object axes, and the camera axes compared to the rest pose in which the normal map was baked, so it would look wrong. With tangent space, the axes of the normal map are “glued” onto each polygon of his arm, so the deformation is always taken into account. The bumps then follow his arm and look natural.

Note that while tangent-space maps most often look bluish, you could also make world-space maps that look blue. You could use any bake, or even a random RGB image, as a normal map, but it would probably look strange.

I have a much better understanding of normal maps now. Thank you, everybody.

http://img301.imageshack.us/img301/4308/itworkedwa7.th.png

If I understand properly, would I use Bake on surfaces to make them reflect light better? Or is it something else entirely?

In regards to the Game Engine…