I have a rigged human character in Blender and can access all the global bone coordinates and bone lengths. However, I want to translate all this 3D bone coordinate data into the camera's frame, i.e. into camera coordinates.
world_to_camera_view works great for getting the 2D camera coordinates of each bone, but I am struggling a lot to get the z coordinate into the same scale.
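For reference, here is roughly how I am calling it (a sketch, not my exact code; the object names "Camera" and "Armature" are placeholders for my actual scene objects):

```python
import bpy
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = scene.objects["Camera"]       # placeholder name for my camera object
arm = scene.objects["Armature"]     # placeholder name for my rig

for bone in arm.pose.bones:
    # bone.head is in armature space; bring it into world space first
    head_world = arm.matrix_world @ bone.head
    # x and y come back as normalized render-frame coordinates (0..1),
    # while z is the distance from the camera in world units, so z is
    # on a completely different scale than x and y
    co = world_to_camera_view(scene, cam, head_world)
    print(bone.name, co.x, co.y, co.z)
```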
My test for whether the z coordinate is scaled correctly is that the bone lengths must not change when the camera is rotated. (Bone lengths are normalized, e.g. SpineLength = 1 unit, so that bones do not change size when zooming in/out.)
So far I have not been able to achieve this. I have tried directly scaling the z coordinate returned by world_to_camera_view; converting the 0-1 2D coordinates to global units trigonometrically, by calculating the width of the camera's view at a given z depth; and a few other geometric methods. I feel like there must be a simpler way to get the z coordinate from world_to_camera_view into the same scale as the x/y coordinates.
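To illustrate, the trigonometric attempt looked roughly like this (a sketch, not my exact code; ndc_to_camera_space is a made-up helper name, and it assumes a perspective camera whose horizontal field of view is known):

```python
import math

def ndc_to_camera_space(u, v, z, fov_x, aspect):
    """Convert world_to_camera_view-style output (u, v in 0..1, z = depth
    from the camera in world units) into camera-space coordinates in the
    same units as z.
    fov_x:  full horizontal field of view in radians
    aspect: render width / height
    """
    # at depth z the view frustum is 2 * z * tan(fov_x / 2) wide
    half_w = z * math.tan(fov_x / 2.0)
    half_h = half_w / aspect
    # remap 0..1 to -half..+half so x/y share the scale of z
    x = (u - 0.5) * 2.0 * half_w
    y = (v - 0.5) * 2.0 * half_h
    return (x, y, z)
```

The idea is that rescaling the 0-1 coordinates by the frustum size at depth z should put x, y, and z in the same world units, but in practice my bone lengths still changed when I rotated the camera.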
Any help would be much appreciated! Thanks in advance