Here is a first attempt at a normal map generator. This script is meant to generate a normal map for a low-polygon object from a high-polygon version. The low-polygon version must be UV mapped and must lie entirely inside the high-polygon mesh. Sadly, Blender's normal maps can't be mapped from UVs, it seems. Because this can't be used in Blender, I'm not certain these are actually generated correctly; I will be working on a little app to test it, but I think my actual calculations are probably wrong. I am doing high_poly_normal - low_poly_normal and then encoding that as the color, which is probably too simple. Most of the hard work is already done though, so if anyone has a good reference on what to do, please reply, or confirm whether what I'm doing is right or wrong.
Anyway, I thought I would share what I've got to get some input on it.
for one thing, it’s slow… I kinda wish I could say how much
it has been going for 20 minutes
looks like it is only maybe 25% done, and I’m trying to do only ~8000 quads
from the meshes you put in there, it looks like the normals aren't the correct colors, and you don't do anything about the edges of the UV mapping [are you re-normalizing?]
also, your note to apply scale and rotation implies you aren’t doing anything to account for them, so could differences in location be an issue?
I’d have to think about how the math should work to get it right…
hitting submit now, I’ll ramble more later [I hope]
__ edit __
it has been nearly an hour now, seems mostly done
As I have chores to do I will probably not return until it has completed.
then, we’ll see how well it has done
yep, an hour or so… looks completely wrong…
off I go again
a tangent-space normal map [the blue one] specifies how the normal should be changed relative to the interpolated normal. It should be of unit length and in texel space [up is blue, right is green, down is red… I think, it depends on the implementation]
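For reference, the usual way to store a unit-length normal in a texture is to remap each component from [-1, 1] into [0, 255]; a minimal sketch (the x→red, y→green, z→blue convention here is an assumption, since, as noted above, it varies by implementation):

```python
def encode_normal(n):
    """Remap a unit-length normal from [-1, 1] per component to RGB [0, 255].

    Assumed convention: x -> red, y -> green, z -> blue, so an undisturbed
    tangent-space normal (0, 0, 1) encodes as the familiar flat blue.
    """
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n)

print(encode_normal((0.0, 0.0, 1.0)))  # -> (128, 128, 255)
```

This is why untouched areas of a tangent-space normal map look uniformly light blue.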
Yes, it is very slow. I am working on an octree to make the hit collisions faster. There is a hit test for each pixel in the UV image being generated, brute-forced against every polygon in the high-resolution mesh. The UV lookups use a quadtree, so at least that speedup is there, but it's definitely quite slow. Even with this I doubt it will exactly fly.
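Since the per-pixel hit test is the bottleneck, for reference here is what a single ray/triangle test can look like. This is the standard Möller–Trumbore algorithm, offered as a sketch; it is not necessarily what the script itself does:

```python
def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection test.
    Returns the distance t along the ray, or None on a miss."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:  # ray is parallel to the triangle plane
        return None
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:  # barycentric coordinate out of range
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv
    return t if t > eps else None  # ignore hits behind the ray origin
```

An octree then only narrows down which triangles get fed to this test for each ray.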
As for the code, I think I am encoding the rays correctly as a color; I think the ray I am calculating is the issue. I've been looking for info on the actual ray calculation. The rays I am encoding are normalized, so I'm not sure what you mean about the edges. I'll keep looking for more info but haven't found much.
I really hope you succeed with this. A normal map generator is a necessity for my project, and I'd love to be able to do it all in Blender. If you need any testers or anything, let me know and I'd be more than happy to help.
Well, I know encoding the normal directly wouldn't be correct except at one angle. I had a debug version that created a mesh of all the rays I cast and their end points, so I am pretty confident those aspects are correct.
z3r0 d: The question is how I find the offset and represent it as a vector. The dot product could give me the angle between the vectors but not the direction, so that's obviously not what I want. A simple subtraction isn't correct either. My only other thought is to use trig and solve along each axis. Is that what I am supposed to do?
the normal of the highpoly mesh is one vector, and you have 3 vectors from the tangent space on the low poly mesh
the x normal on the texture [you have to rotate this considering your UV mapping and mesh] dot the highpoly normal is the red value, the y normal on the texture dot the highpoly normal is the green value, and z dot the highpoly normal is the blue value
this should result [assuming unit-length normals all around] in a unit-length normal.
also, you need to be sure to interpolate normals across your lowpoly mesh [and highpoly mesh]
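The projection described above can be sketched like this, assuming all vectors are in the same (object) space and the lowpoly face provides a tangent/bitangent/normal basis; with an orthonormal basis and a unit-length input, the result stays unit length:

```python
def to_tangent_space(n_high, tangent, bitangent, n_low):
    """Express the highpoly surface normal in the lowpoly face's
    tangent basis: one dot product per basis vector.

    tangent / bitangent / n_low are the three tangent-space vectors on
    the lowpoly mesh; all inputs are assumed unit length.
    """
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    return (dot(n_high, tangent),    # -> red channel
            dot(n_high, bitangent),  # -> green channel
            dot(n_high, n_low))      # -> blue channel
```

With an identity basis, an unchanged normal comes back as (0, 0, 1), which encodes to the flat blue of an "empty" normal map.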
z3r0 d: After doing a fair bit of research, what you said makes some sense. My question about tangent space is this: tangent space is made up of 3 vectors on the low-polygon mesh. One vector is the normal, and the other 2 vectors are the texture-space vectors… the vectors along u and v for that face, I assume. Let's say I am using only triangles for simplicity: can I just use the two unshared edges of the triangle as those 2 vectors? Or do I have to find the two vectors some other way, and if so, how?
you need to find those vectors based on the UV mapping of that face
also, because the face is not necessarily mapped 1 to 1 [it may be squashed in one direction, for example], you'll have to interpolate across the vertices
to keep things simple, do start with triangles. Also, consider that the normal should be interpolated across the face
You'll probably want to keep the vectors unit length, but I'm not sure you'll want to keep them orthogonal [all perpendicular to each other]
good luck, I have some ideas about how to do this completely differently that I may expand on when I find a nice spatial partitioning technique
ok, well the method I envision for finding this is: find the UV point of a vertex (maybe the center would be best), then advance 1 pixel along the image x and y, remap back to 3D, and subtract to form vectors, normalize, and then do my calculations. I will have to worry about those new points staying inside the same face though… so it's going to be messy, from what I can tell.
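An alternative to stepping pixel by pixel is the standard per-triangle formula that derives the u and v direction vectors directly from the three positions and their UVs, with no need to stay inside the face; a sketch, with hypothetical argument names:

```python
def tangent_basis(p0, p1, p2, uv0, uv1, uv2):
    """Per-triangle tangent (u direction) and bitangent (v direction)
    from 3D positions p0..p2 and their (u, v) coordinates uv0..uv2.

    Solves the 2x2 system  e1 = du1*T + dv1*B,  e2 = du2*T + dv2*B.
    A degenerate UV mapping (zero area) raises ZeroDivisionError.
    """
    def sub3(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    e1, e2 = sub3(p1, p0), sub3(p2, p0)
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)
    tangent = tuple((e1[i] * dv2 - e2[i] * dv1) * r for i in range(3))
    bitangent = tuple((e2[i] * du1 - e1[i] * du2) * r for i in range(3))
    return tangent, bitangent
```

The results should still be normalized afterwards, and as noted above they generally won't be exactly orthogonal on a stretched mapping.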
I think it would be much easier to let blender’s renderer do the work [as my texture baking script does], but the technique I’m thinking about still isn’t trivial
here goes
essentially I’m thinking about mostly unfolding the highpoly mesh [like my texture bake script does to the lowpoly] into the uv map of the lowpoly mesh
EXCEPT
some kind of height has to be preserved; this will make the normals of the resulting mesh no longer all point straight up, and will allow a normal map to be created [or a height map, which can easily be made into a normal map]
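The height-map-to-normal-map step really is easy; a minimal central-difference sketch (the `strength` factor is a made-up tuning knob for how steep the bumps look):

```python
def height_to_normal(height, x, y, strength=1.0):
    """Compute a unit-length normal at (x, y) from a 2D height field
    (a list of rows) using central differences, clamping at the borders."""
    h, w = len(height), len(height[0])
    xl, xr = height[y][max(x - 1, 0)], height[y][min(x + 1, w - 1)]
    yd, yu = height[max(y - 1, 0)][x], height[min(y + 1, h - 1)][x]
    # Slope along x and y becomes the x and y of the normal; z points "up".
    n = (-(xr - xl) * strength, -(yu - yd) * strength, 1.0)
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)
```

A flat height field gives (0, 0, 1) everywhere, i.e. the unperturbed normal.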
also, this will leverage Blender's z-buffer [though I will have to do depth calculations on my own, so that, for example, the front side of an arm doesn't get normals from the back], resulting in even less work
however, new vertices will have to be added where edges would be sliced by edges of the lowpoly mesh which lie on UV seams [so that the highpoly mesh can be properly unfolded], and perhaps at any face in general.
This would probably work better with a highly tessellated highpoly mesh, and could probably even be set up to allow procedural textures [including those that would generate a normal map] to be baked [again, with the same restrictions as the previous texture bake].
the problem however is interpolating the distances between highpoly and lowpoly and spatial partitioning [so that proper highpoly faces are associated with proper lowpoly faces]
I have thought about this a lot… but I really haven’t started coding yet