Question: can Blender do TWO-camera motion tracking to provide depth? (Or three…?)
Example: if I take video of myself from the front and from the right simultaneously, the front camera would give Blender the X-axis and Z-axis tracking, while the side camera would provide depth via the Y-axis (and Z-axis).
At issue: While I can easily load in the two views and track points (and track both cameras in 3D space), I do not know how to tell Blender that the object tracking points in the front video are the same points as the object tracking points obtained from the side video.
If this could be done, we could do full-body motion capture in Blender, and not just the facial tracking we see over and over on YouTube.
Hope I’m not posting in the wrong place, but I can’t find a dedicated “motion capture” section.
This should really be in the “Compositing and Post Processing” sub-forum.
So we are asking Blender to create a 3D representation of a moving object from two motionless 2D cameras? And we want to end up being able to combine the data from six or so cameras to achieve a (non-live) motion capture stage like this?
To my knowledge, Blender's motion tracking features do not yet support "witness cameras". You may need to try SynthEyes (or similar) and then export the data into Blender.
You can add another empty and give it a Copy Location constraint limited to the X and Z axes that copies the location of your first track, then give the same empty a second Copy Location constraint limited to the Y axis that copies the location of your second track… but I would recommend using an addon like this: [Addon] Mocap with Multiple Cameras
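The geometry behind that two-constraint trick can be sketched in plain Python. This is only an illustration of the idea (the front view resolves X and Z, the side view resolves Y and Z, and the shared Z axis is averaged); the function name and the sample coordinates are made up for the example, not anything from Blender's API:

```python
def merge_views(front_xz, side_yz):
    """Combine a front-view (x, z) track and a side-view (y, z) track
    into one 3D point. The Z axis is seen by both cameras, so we
    average the two measurements to smooth out tracking noise."""
    x, z_front = front_xz
    y, z_side = side_yz
    return (x, y, (z_front + z_side) / 2.0)

# One frame of a hypothetical marker seen by both cameras:
point = merge_views(front_xz=(1.0, 2.0), side_yz=(0.5, 2.2))
print(point)  # (1.0, 0.5, 2.1)
```

In Blender the constraint stack does this per frame automatically: the first Copy Location (X and Z only) plus the second Copy Location (Y only, with influence applied after) leaves the empty at the combined 3D position, with no averaging of Z unless you dial the influence values.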