Normal mapping and resource use

I’m slowly learning what the BGE is capable of, and my background in the other components of Blender isn’t that strong, so I’m trying to build up good habits as I go so that later on I don’t have to backtrack and undo a lot of things.

I just learned about UV unwrapping + normal maps and am very pleased with the results, but I’m curious how resource intensive it is for the machine. I mean, if I were happy with how things looked with a normal map and wanted to do that with everything visible in a game, would that just be absolutely brutal for the computer?

From my understanding, normal maps let you cut the poly count significantly, but that might not mean much if the machine still has to calculate how the light hits those millions of fake polygons, right? Or am I just being paranoid about performance? Thanks for any help you can give me.

I don’t know much detail about normal maps, but in my experience they don’t seem to affect performance.

Lower polygon counts reduce CPU usage; more extensive use of normal maps increases the load on the GPU. It’s all a matter of shifting the load. Generally speaking, a reasonable graphics card should be able to handle normal, specular, diffuse and stencil maps on all objects without trouble. It all depends on the target system.
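To give a rough idea of where that GPU work goes, here’s a minimal sketch of a per-pixel shader applied through the BGE’s Python shader API (BL_Shader). It’s not a drop-in material: the sampler names, the texture slot numbers, and the simplified normal perturbation (no proper tangent-space basis) are all assumptions made for the example.

```python
# shaders.py -- meant to run once as a module-mode Python controller
# (e.g. "shaders.apply_shader") on the object whose materials should use it.

vertex_src = """
varying vec3 normal;
varying vec3 position;
void main()
{
    normal   = normalize(gl_NormalMatrix * gl_Normal);
    position = vec3(gl_ModelViewMatrix * gl_Vertex);
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
"""

fragment_src = """
uniform sampler2D diffuse_map;
uniform sampler2D normal_map;
varying vec3 normal;
varying vec3 position;
void main()
{
    // This runs once per visible pixel, no matter how much "fake" detail
    // the normal map suggests -- that is where the GPU cost lives.
    vec3 n = normalize(normal);
    // Simplified: nudge the interpolated normal with the map value instead
    // of building a proper tangent-space basis.
    n = normalize(n + (texture2D(normal_map, gl_TexCoord[0].st).rgb * 2.0 - 1.0));
    vec3 light_dir = normalize(vec3(gl_LightSource[0].position) - position);
    float diff = max(dot(n, light_dir), 0.0);
    vec3 base = texture2D(diffuse_map, gl_TexCoord[0].st).rgb;
    gl_FragColor = vec4(base * diff, 1.0);
}
"""

def apply_shader(cont):
    obj = cont.owner
    for mesh in obj.meshes:
        for mat in mesh.materials:
            shader = mat.getShader()
            if shader is not None and not shader.isValid():
                shader.setSource(vertex_src, fragment_src, True)
                # Texture slot numbers are an assumption about how the
                # material's textures happen to be arranged.
                shader.setSampler("diffuse_map", 0)
                shader.setSampler("normal_map", 1)
```

The point is that the lighting maths runs once per visible pixel, so the cost depends mostly on how much of the screen the object covers, not on how much fake detail the normal map encodes.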

A lower polygon count frees up CPU time for handling logic and physics, so it’s a very good thing for anything but the simplest games.

Vertex shaders handle mesh deformation on the GPU, while fragment shaders handle things like normal/spec/diffuse maps. I’ve read that future graphics cards will have geometry shaders too, which will allow creating and editing geometry directly on the GPU. That would leave the CPU free to do only the things it was designed for, such as handling complex branching logic.
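As a toy illustration of keeping deformation on the GPU, here’s a sketch where the vertex shader ripples the mesh with a sine wave and the CPU only pushes a single time uniform each frame. The property name, uniform name and displacement amount are made up for the example.

```python
# ripple.py -- toy GPU deformation, meant to run every frame as a
# module-mode Python controller (e.g. "ripple.ripple").

vertex_src = """
uniform float time;
void main()
{
    vec4 v = gl_Vertex;
    // Displace each vertex along its normal; this runs on the GPU for
    // every vertex in parallel, instead of in a Python loop on the CPU.
    v.xyz += gl_Normal * 0.1 * sin(v.x * 4.0 + time);
    gl_FrontColor = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
"""

fragment_src = """
void main()
{
    gl_FragColor = gl_Color;  // just pass the interpolated colour through
}
"""

def ripple(cont):
    obj = cont.owner
    # Crude per-frame clock stored as a game property; the name and step
    # size are arbitrary choices for this example.
    obj["time"] = obj.get("time", 0.0) + 0.02
    for mesh in obj.meshes:
        for mat in mesh.materials:
            shader = mat.getShader()
            if shader is not None:
                if not shader.isValid():
                    shader.setSource(vertex_src, fragment_src, True)
                shader.setUniform1f("time", obj["time"])
```

Hooked up to something like an Always sensor with true pulse enabled, the mesh animates every frame without the CPU ever looping over vertices.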