Tangent vs Object Space vs World Space

http://www.surlybird.com/tutorials/TangentSpace/index.html

According to the above article, and according to my logic (impeccable), running tangent normal maps is simply a waste of power in many cases. This may be old news, but to me, it was quite a surprise, so forgive me if my information is useless :slight_smile:

World Maps: Cheapest to shade, can’t deform, can’t move.
Object Maps: Second cheapest, can’t deform, can move.
Tangent Maps: Most demanding, can deform, can move.

Seems to me that using tangent maps on, say, a sword (which will never deform) is a waste. My computer cannot run GLSL in the game engine, but it can in the viewport, where object space seems to work fine, so presumably it would work in the game engine too. So why don’t we use it?
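To make the “power” comparison above a bit more concrete, here is a rough Python sketch of the per-sample work each flavour implies; the function names are made up for illustration, not any engine’s actual API. A world-space map is just decoded and used, an object-space map needs one extra rotation by the object’s orientation, and a tangent-space map needs a per-vertex tangent basis built and applied on top of that.

```python
# Illustrative sketch only: pure Python with tuples, hypothetical names.

def decode(rgb):
    """Map an 8-bit RGB texel to a roughly unit-length normal in [-1, 1]."""
    v = tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def rotate(matrix, v):
    """Multiply a 3x3 rotation matrix (given as rows) by a vector."""
    return tuple(sum(matrix[r][c] * v[c] for c in range(3)) for r in range(3))

def world_space_normal(texel):
    # World space: the texel already is the final normal -> decode and done.
    return decode(texel)

def object_space_normal(texel, object_to_world):
    # Object space: decode, then one rotation by the object's orientation,
    # which is also what lets the object move/rotate but not deform.
    return rotate(object_to_world, decode(texel))

def tangent_space_normal(texel, tangent, bitangent, vertex_normal, object_to_world):
    # Tangent space: decode, rebuild the local basis from the interpolated
    # tangent/bitangent/normal (x*T + y*B + z*N), then rotate to world.
    # More work per sample, but the basis follows the surface, so it deforms.
    x, y, z = decode(texel)
    n_obj = tuple(x * tangent[i] + y * bitangent[i] + z * vertex_normal[i]
                  for i in range(3))
    return rotate(object_to_world, n_obj)
```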

Tangent space allows the highest quality normal-maps, as in you get to actually set the normal map strength to max without dramatically changing the shading of the rest of the material.

Actually if you want really good normal map effects you pretty much have to use tangent space.

@Sammaron: Interesting link there. I guess that if a scene has extensive use of normal maps then appropriate use of normal map space could improve performance. Possibly the reason that tangent space normal maps are used almost exclusively is that it is fairly hard to go wrong with them since they will produce correct shading in all situations.

From what I read, it seems that world space normal maps have very few applications. Where objects are linked into a scene then object space normal maps could be used, but world space normal maps would need to be recalculated for each object that is placed in a different orientation to the base model. This would be quite labour intensive and would increase the number of normal map images in memory.

I’d like to see a performance comparison between say a city scene using tangent space normal maps on buildings versus one with object space normal maps.

Big thanks, bookmarked.

Object Space generally produces better results than Tangent Space. Tangent Space is still dependent on the original normals of the model, so if you’ve got weird smoothing, it’ll look weird too with Tangent Space. That’s why you have to add bevels for sharp angles. Object Space isn’t bothered by this at all, so it’ll be cleaner. But since TS can be used for everything and OS is situational (as it can’t deform), a lot of engines simply don’t support anything but TS. Another drawback is that every polygon needs unique UV space with Object Space, so they’re not as texture efficient as Tangent Space, where you can just mirror (unless you use Blender, grumble).

So as far as I can tell, you can use OS normals for a sword, but a lot of people don’t bother, since it requires a somewhat different workflow, can’t be mirrored, and those few extra polygons you need for correct shading don’t really matter all that much these days. Darkest of Days uses OS normals for all its weapons, I believe.
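On the point about tangent space depending on the model’s normals, here is a tiny sketch (hypothetical numbers, not Blender code) of why that is: the same tangent-space texel is re-oriented by whatever interpolated vertex normal the low poly provides, so odd smoothing bends the result, while an object-space texel already encodes the final direction relative to the object and the low poly’s smoothing never enters into it.

```python
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def apply_tangent_texel(texel, tangent, vertex_normal):
    """Rotate a tangent-space texel into 3D using the mesh's interpolated
    vertex normal and tangent, so the result inherits the mesh's smoothing."""
    n = normalize(vertex_normal)
    t = normalize(tangent)
    b = (n[1] * t[2] - n[2] * t[1],     # bitangent = N x T
         n[2] * t[0] - n[0] * t[2],
         n[0] * t[1] - n[1] * t[0])
    x, y, z = texel
    return normalize(tuple(x * t[i] + y * b[i] + z * n[i] for i in range(3)))

texel = (0.3, 0.0, 0.95)  # one and the same baked texel

clean = apply_tangent_texel(texel, (1, 0, 0), (0.0, 0.0, 1.0))   # expected smoothing
weird = apply_tangent_texel(texel, (1, 0, 0), (0.0, 0.4, 0.9))   # "weird smoothing"
# clean != weird: the shaded result bends along with the vertex normals,
# whereas an object-space texel would give the same answer either way.
```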

Also, please ignore what Cyborg Dragon said about normal map strength. It should always be set to 1. Unlike a bump map, a normal map stores normals, which are directions, and directions can’t have a strength. Setting it lower than 1 will only make Blender partially ignore the normal map, and there isn’t any good reason to want to ignore normal maps.

Lastly, here’s a good piece of information on normal maps.

For the record, it is possible to set at least a tangent space normal map’s strength to 0.40 or 0.078 (or any value less than one) with the normal map effect still noticeable in the Blender viewport. I speak from experience; in some cases, like walls, it is useful: say you want a strong normal map effect on red bricks, then you use that same map at half strength for yellow bricks, which are the red bricks with a different material (the yellow colour coming from using No RGB).

I also know from experience that the usual bluish normal maps give slightly different shading on objects if you set them to world or object space, and some programs have that type as their only normal map option.

You can mirror normal maps in Blender. I just did it?

The value between 0 and 1 sets how strongly the normal map affects the shading, making the effect more or less subtle.

In the attached file there are three planes. I made a plane, halved it, UV unwrapped it and baked normals, then added and applied a mirror modifier. I then duplicated it and its light (set low so it doesn’t overpower the effect) and made materials differing only in normal map strength: 0.25, 0.5 and 1. I could be missing your point, but if not, you are wrong on at least these two points.

Attachments

Normal_map_stuff.blend (422 KB)

Normal maps replace the original normals. You can’t half-replace them. The result is somewhat like everything in the high poly having only 50% depth for a .5 strength. It’s a bit worse than that, because what’s ‘behind’ it is the pure Gouraud shading, and since you don’t use hard edges with normal maps, that isn’t always pretty; often it’s not.
A sphere at .5 Nor strength is like a sphere that’s squashed to half its depth; why not just make the actual highpoly squashed, then? At least if you do that, you can still add full depth spheres next to it, whereas with a .5 Nor, you simply eliminate the possibility of some angles. Anything other than a strength of 1 is not an accurate representation of the high poly. If you bake from a monkey, the lighting on the normal map with 1 is (shadows excluded) exactly how the actual high poly monkey would light up. And since you can change the highpoly however you want, just change that if you’re not happy with the results, since at least then your normal map keeps the integrity to add a full depth object if need be. This isn’t a value you should play around with like on a bump map.
I suppose Cyborg Dragon’s example would pass most of the time, but even so, you a) flatten everything, which may not be realistic, b) the Gouraud shading will show through, and c) you can’t ever add something with full depth. It’s a poor man’s solution and practically never yields the same results as a new normal map baked from a softer highpoly.
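For what it’s worth, here is a small numeric sketch of the “squashed” effect being described, under the assumption that a strength value below 1 simply blends the baked normal toward the flat surface normal and renormalises (I can’t vouch that this is Blender’s exact internal formula). With that assumption, strength 0.5 exactly halves every baked angle, which is the squashed-sphere look:

```python
import math

def blend_normal(mapped, strength):
    """Assumed behaviour of a strength slider: linearly mix the baked normal
    with the flat surface normal (0, 0, 1), then renormalise."""
    flat = (0.0, 0.0, 1.0)
    v = tuple((1.0 - strength) * f + strength * m for f, m in zip(flat, mapped))
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def angle_from_flat(n):
    """Angle in degrees between a normal and the flat (0, 0, 1) direction."""
    return math.degrees(math.acos(max(-1.0, min(1.0, n[2]))))

# A baked normal tilted 60 degrees from the surface (a fairly steep slope):
steep = (math.sin(math.radians(60)), 0.0, math.cos(math.radians(60)))

for strength in (1.0, 0.5, 0.25):
    print(strength, round(angle_from_flat(blend_normal(steep, strength)), 1))
# 1.0 keeps the full 60 degrees; 0.5 flattens it to 30; 0.25 to about 14.
# No strength below 1 can ever reach the steep angles again, which is why
# blending is not the same as baking from a genuinely softer highpoly.
```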

You wish, good sir! Everything that’s dented out on the right side is dented in on the left side. The brightness of the angles in the middle is the same every time, which is impossible, because they should be mirrored (and one should face the light while the other should face away from it). Try it with something where you can actually see the results better; I made a monkey version, everything but the monkey is your mesh. The top version is a mirrored normal map, the bottom is a real monkey. The left side of the top monkey looks to be lit from the left, even though the only light is far to the right. It doesn’t look immediately jarring with the grain on the plane you posted, but over a whole character or something, this is, of course, completely unacceptable.
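Here is a sketch of why naive mirroring goes wrong, assuming the common tangent-space setup where the shader rebuilds the bitangent from the normal and tangent. On a mirrored UV island the tangent frame becomes left-handed; bakers and engines normally store a per-vertex handedness sign for this, and if it is ignored, one tangent-plane component of every decoded bump points the wrong way, which is exactly the “dented out on one side, dented in on the other” look. The numbers and names below are made up for illustration:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def shade_normal(texel_normal, tangent, vertex_normal, handedness=1.0):
    """Rotate a decoded tangent-space normal into 3D; 'handedness' is the
    per-vertex sign (+1 or -1) that marks mirrored UV islands."""
    bitangent = tuple(handedness * c for c in cross(vertex_normal, tangent))
    x, y, z = texel_normal
    return tuple(x * tangent[i] + y * bitangent[i] + z * vertex_normal[i]
                 for i in range(3))

# A flat, upward-facing surface.  The right half has its UVs running one way;
# the mirrored left half reuses the same texels with U flipped, so its 3D
# tangent points the other way and its frame is left-handed.
up = (0.0, 0.0, 1.0)
texel = (0.4, 0.5, 0.77)  # a bump leaning into +U and +V

right = shade_normal(texel, (1.0, 0.0, 0.0), up, handedness=+1.0)
wrong = shade_normal(texel, (-1.0, 0.0, 0.0), up, handedness=+1.0)  # sign ignored
fixed = shade_normal(texel, (-1.0, 0.0, 0.0), up, handedness=-1.0)  # sign honoured
print(right, wrong, fixed)
# 'wrong' differs from 'fixed' in one tangent-plane component, so the same
# detail catches the light from the opposite direction on the mirrored half.
```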

@Zwebbie: At first, I thought you were wrong. I have certainly set many normal maps to lower than 1! However, your point is well taken. I believe you are saying that you are simply lowering the detail in your normal map, blurring it and making everything more of an average, losing all the subtleties that can be found in a true-baked map. Is this what you were saying?

I saw that article while I was searching for mine, but I never really looked into it. I appreciate the link, it’s a good one :slight_smile:

@Mirroring (the idea): I don’t do it. Why would I? I find it much easier to do both halves. Not only has it become habit for me, but it also avoids any issues that can come with it (secretly merging vertices, screwing with tangents, as I now know…). So that’s not really an issue for me.

@FunkyWyrm: Good point. I don’t think global maps could really come in handy, except to optimize a whole level, but that’d be a pain. It really depends on how much extra performance I would get from it.

I guess I might as well start using object-space baked maps for my rocks and such. Minor changes in workflow are fine, especially if it will help speed up the engine at all. Thanks for the replies :slight_smile:

It’s a poor man’s solution and practically never yields the same results as a new normal map baked from a softer highpoly.

But what if you need the same normal map for all sorts of similar materials and you need each one at a different strength? Not packing a separate, similar-looking normal map for each strength would reduce the file size (by megabytes if they were all high-res).

This may be much less of a problem with modern games and 3D scenes, but it does matter if you’re using one of those file hosts with a limited maximum file size, for example.

OK, I think I understand what you mean now. But as far as adjusting the normal strength goes: perhaps you want the same map to imply different amounts of depth, or you made a normal map from an image and it’s too strong. I still see a good case for adjusting the value to get the appearance you want.

Having only one image and repeating it around at different strengths also saves you RAM, which consequently makes your game run faster. Right? Or maybe Blender just copies all your modified textures to memory as if they were different textures anyway, so it would make no difference? Does anyone know?

I’d love to know how the BGE manages memory with textures. I always wonder whether it’s more efficient to have a small texture tiled or a big texture not tiled, and similar things.

That’s a good question. I’d assume it treats textures that are the same as the same, memory-wise, I mean. I see no advantage to having a texture put in memory for each different use of it. Load it once, and be done. After that, it’s just a matter of which UVs it goes with. Right?

It’s not actually blurring, but it’s an inaccurate representation. When bump mapping, a lot of artists don’t really pay attention to the real values of the bump map - light grey sticks out, dark grey dents in, but the amount of this, they’ll just figure out in Blender. Because what does a brightness of 255 mean anyway?
In normal mapping, the worth of the values isn’t something you can slide around. The RGB values are XYZ coordinates for the normal to point to, and changing them changes the normal you’re aiming for. The big difference here, of course, is that normal maps are generally baked and bump maps aren’t. So if you want your normal maps to look like the highpoly, keep the value at 1. And if you want stuff to be flatter, the ‘background’ shading will be more accurate if you just bake from a flattened highpoly than if you flatten it through the Nor value. Lastly, you simply lose the possibility of some angles. Say you’ve got your map, set it to .5. You decide, then, that you really want to add a 45 degree slope somewhere on your normal map. But since it’s all only .5, you can’t, because only a 90 degree angle would become 45 if multiplied by .5, and it’s impossible to bake 90 degree angles. You lose any angle above 44 degrees if you set it to .5, so any Nor value lower than 1 can’t display a proper sphere, should you want one.
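To make the “RGB values are XYZ coordinates” point concrete, this is the decode convention I believe Blender and most bakers use (treat the exact numbers as an assumption): each 8-bit channel is stretched to the -1..1 range, which is why the flat “do nothing” normal (0, 0, 1) encodes to roughly (128, 128, 255), the familiar light-blue base colour.

```python
def decode(rgb):
    """8-bit RGB texel -> tangent-space normal, each channel mapped to [-1, 1]."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

def encode(normal):
    """Inverse mapping, back to 8-bit channels."""
    return tuple(int(round((c * 0.5 + 0.5) * 255.0)) for c in normal)

print(encode((0.0, 0.0, 1.0)))  # (128, 128, 255): the flat light-blue colour
print(decode((255, 128, 128)))  # roughly (1, 0, 0): a normal lying completely
                                # sideways -- the 90-degree case that can't
                                # really be baked from geometry anyway
```

It also hints at why feeding that same bluish map into an object- or world-space slot shades differently: the identical colour is then read as “point along the object’s or world’s Z axis” instead of “leave the surface normal alone”.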

I obviously haven’t had much experience with values lower than 1, so I can’t say for sure that it’ll work, but I’ll concede that using the Nor strength for such things as Cyborg Dragon described does sound like it’d work if you want to save textures.

Well, suit yourself, but you can save yourself a whole lot of texture space by mirroring stuff. Here’s a model that’s almost completely mirrored, and as a result, he got twice as much texture detail as he could’ve gotten without mirroring (while keeping the 2048 limit). Look at those fancy details!

I’m not a programmer, so take all of this with a grain of salt and don’t quote me on this, but I believe the main thing slowing renders down these days is the number of draw calls, one of which is added for every material and texture. Smaller textures are always easier on the memory, of course, but if you can have one material and a 512^2 texture, that’s quicker than four materials and four 256^2 textures. The problem is, of course, that if you pack the textures for four objects into one texture and only one of those objects is seen on screen, you’ll still have the texture for the other three in memory, which can quickly add up. Again, though, that’s just how I understand it.
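Some rough arithmetic for the texture side of that, assuming plain uncompressed RGBA and a full mipmap chain (compression changes the absolute numbers, not the comparison):

```python
def rgba_bytes(size, mip_chain=True):
    """Approximate GPU memory for a square RGBA8 texture of the given size."""
    base = size * size * 4
    # A full mip chain adds roughly a third on top of the base level.
    return int(base * 4 / 3) if mip_chain else base

one_atlas = rgba_bytes(512)
four_small = 4 * rgba_bytes(256)
print(one_atlas, four_small)  # both around 1.4 MB: the memory cost is the same,
                              # but the single atlas means one texture bind and
                              # one material, while four textures mean more
                              # state changes / draw calls per frame.
```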
