I am trying to develop a Python script which essentially performs the practical inverse of the "Apply Rotation" operation: it looks through the scene and identifies all of the duplicates of a <selected source object>, each of which was rotated by a different amount and then later had its individual rotation applied, resetting its rotation values to (0, 0, 0). What I want to do is re-link all of these meshes to the single origin mesh without losing the final placement and rotation of the duplicate, unlinked meshes. The source and duplicate meshes all have a scale of (1, 1, 1).
For simplicity's sake during development I've been implementing an alternate version of the same problem: I rotate the duplicate objects to match the placement/rotation of the <selected source mesh>, instead of doing it in the opposite direction. I'll flip the directionality later on, once I have this alignment issue worked out.
My strategy to do this is the following:
My first step is to loop through all of the meshes in the scene and to match them based upon their count of verts, edges, and faces. (Complete and working)
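For reference, the matching step can be sketched in plain Python. I'm using a dict of object name → (verts, edges, faces) counts to stand in for the scene, and the function names here are my own; in Blender the counts would come from `len(obj.data.vertices)`, `len(obj.data.edges)`, and `len(obj.data.polygons)`:

```python
# Sketch of step one: group objects whose meshes share the same
# (vertex, edge, polygon) counts as the selected source object.
# `scene_counts` stands in for iterating bpy.context.scene.objects.

def mesh_signature(counts):
    """counts: (n_verts, n_edges, n_faces) -> hashable key."""
    return tuple(counts)

def find_duplicates(source_counts, scene_counts):
    """Return names of objects whose counts match the source's.

    scene_counts: dict of object name -> (n_verts, n_edges, n_faces).
    """
    target = mesh_signature(source_counts)
    return [name for name, counts in scene_counts.items()
            if mesh_signature(counts) == target]
```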
The second step is setting the translation of each duplicate object such that the center of a shared given polygon (I just used the polygon with index 0) is aligned to the same face-center point on the <selected source object>. Let's call that location the <PP> (pivot point). (Complete and working)
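The translation step is just vector subtraction; here is a sketch with numpy standing in for `mathutils` (in Blender, the world-space face center would be `obj.matrix_world @ obj.data.polygons[0].center`, and the function name is mine):

```python
import numpy as np

# Sketch of step two: move each duplicate so its polygon-0 center
# lands on the source's polygon-0 center (the pivot point, PP).

def translation_to_pp(source_center, dup_center):
    """Offset that moves the duplicate's face center onto the source's (PP)."""
    return np.asarray(source_center, float) - np.asarray(dup_center, float)

pp = np.array([1.0, 2.0, 0.0])     # source face center (PP), made up for the demo
dup = np.array([4.0, -1.0, 3.0])   # duplicate's face center, made up for the demo
offset = translation_to_pp(pp, dup)
# applying it (dup.location += offset in Blender) puts the face center at PP
```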
I then use the normals of these matched faces to calculate an angle and rotation axis which should make each duplicate object's face normal point in the same direction as the <selected source normal>. This rotation is applied at PP, which should in theory make the two meshes point in the same direction.
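A rotation taking one normal onto another can be built from their cross product (axis) and dot product (angle). Here is a Rodrigues-formula sketch in numpy; in Blender, `mathutils.Vector.rotation_difference` does the equivalent job and returns a quaternion. Note the antiparallel case needs special handling, since the cross product vanishes there:

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues)."""
    a = np.asarray(a, float); a /= np.linalg.norm(a)
    b = np.asarray(b, float); b /= np.linalg.norm(b)
    v = np.cross(a, b)            # rotation axis (unnormalized, |v| = sin(angle))
    c = np.dot(a, b)              # cos(angle)
    if np.isclose(c, -1.0):       # opposite vectors: pick any perpendicular axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)  # 180-degree rotation
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])                 # cross-product matrix
    return np.eye(3) + K + K @ K / (1.0 + c)
```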
My problem is that the final rotations are still off. When I try to align the mesh faces manually with the add-on MeshAlignPlus (which performs roughly the same functionality) I end up getting the same rotation values as what my script produces when given the same starting rotations – so I don’t think my implementation thus far is incorrect, but rather my strategy.
From this page, it seems like what I'm doing wrong is orienting only the given z-axis (the duplicate face normal) to match the source face's normal, instead of matching the orientation in all three dimensions. That is, I'm ignoring how much the duplicate object should be rotated about that (now matching) shared-face-normal axis so that each of the edges forming that polygon is parallel to its counterpart in the source mesh (which, if matched, would in turn make the final rotation match in all three dimensions)… Right? That doesn't sound too bad? Just finding one last angle to rotate about the face normal to make everything sync?
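Assuming that reading is right, the leftover twist is a single signed angle measured about the shared normal, between an edge of the duplicate's face and the matching edge of the source's face, with both edge directions projected into the face plane first. A sketch, with function and variable names of my own choosing:

```python
import numpy as np

def twist_angle(normal, src_edge, dup_edge):
    """Signed angle about `normal` that rotates dup_edge's in-plane
    projection onto src_edge's in-plane projection."""
    n = np.asarray(normal, float); n /= np.linalg.norm(n)

    def project(v):
        v = np.asarray(v, float)
        p = v - np.dot(v, n) * n   # drop the component along the normal
        return p / np.linalg.norm(p)

    a, b = project(dup_edge), project(src_edge)
    # atan2(sin, cos) with the sign taken from the normal gives a
    # signed angle in (-pi, pi]
    return np.arctan2(np.dot(np.cross(a, b), n), np.dot(a, b))
```

Rotating the duplicate by this angle about the normal, pivoting at PP, would be the "one last angle" step.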
Or is it more complicated than what I’m thinking?
I believe this post also describes my problem. It addresses it by generating extra orthogonal axes within a rotation matrix based upon the axes you already have, which sounds like the problem I've outlined above. Unfortunately, its implementation (I think it calculates each XYZ axis individually) goes beyond my understanding of the underlying mathematics, and thus of how to translate it to my current implementation of the problem.
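For what it's worth, my current understanding of that approach is: build one orthonormal frame per face (z = the face normal, x = an edge direction made perpendicular to it via Gram-Schmidt, y = their cross product), then compose the two frames into a single rotation. A hedged numpy sketch, with function names that are mine:

```python
import numpy as np

def basis_from_face(normal, edge):
    """Orthonormal frame (3x3, columns = x, y, z) built from a face:
    z = normal, x = edge direction made perpendicular to z, y = z cross x."""
    z = np.asarray(normal, float); z /= np.linalg.norm(z)
    x = np.asarray(edge, float)
    x = x - np.dot(x, z) * z       # Gram-Schmidt: remove the normal component
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])

def alignment_rotation(src_normal, src_edge, dup_normal, dup_edge):
    """Rotation taking the duplicate's face frame onto the source's frame."""
    S = basis_from_face(src_normal, src_edge)
    D = basis_from_face(dup_normal, dup_edge)
    return S @ D.T                 # maps D's axes onto S's axes in one step
```

This handles the normal and the in-plane twist in one rotation, rather than as two separate steps.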
Am I way off here? Any pointers or solutions anyone could provide me would be greatly appreciated!