I was wondering about the best way to do the following with Blender (if indeed it can be done):
- Do real time motion capture on a prop that acts as a fake camera. (The motion capture can be done via web cams or, say, an Android phone that transmits location/rotation data to the computer.)
- Have Blender apply the data from the fake camera to the actual Blender camera during real-time playback.
- See what the scene looks like in Blender with the “moving camera” while I’m actually moving the camera.
Basically, I want to set up a scene in Blender with characters interacting, then play the scene back while recording motion data from a real-life hand-held “camera.” It would be as if I were shooting the scene with the (invisible) characters in the room with me. I could pan and tilt toward anything in the shot, walk around the characters, even put the “camera” on a physical dolly or physical Steadicam rig if desired.
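To make the question concrete, here's a minimal sketch of the data path I have in mind, assuming the phone streams its pose over UDP as a flat packet of seven floats (position x/y/z plus a rotation quaternion). The packet layout, port, and object names are my own assumptions, not any established protocol:

```python
import struct

# Assumed packet layout from the phone app: 7 little-endian floats --
# position (x, y, z) in meters, then a rotation quaternion (w, x, y, z).
POSE_FORMAT = "<7f"
POSE_SIZE = struct.calcsize(POSE_FORMAT)  # 28 bytes

def parse_pose(packet):
    """Unpack one pose packet into (location, quaternion) tuples."""
    values = struct.unpack(POSE_FORMAT, packet[:POSE_SIZE])
    return values[0:3], values[3:7]

# Inside Blender, a bpy.app.timers callback (or a modal operator) would
# read packets from a non-blocking UDP socket and apply them each tick:
#
#   cam = bpy.data.objects["Camera"]
#   cam.rotation_mode = 'QUATERNION'
#   cam.location = loc
#   cam.rotation_quaternion = quat
```

If something like this is roughly the right shape, my question is really about the capture side and about whether the viewport can keep up in real time.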
(Bonus points if I can control the zoom of the camera in real time!)
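On the zoom point, my understanding is that a Blender camera's zoom is just its focal length (`Camera.lens`, in millimeters), so a real-time zoom control would only need to map whatever the prop sends — say, a field-of-view angle — to a lens value. A sketch of that conversion, assuming the standard pinhole relation and Blender's default 36 mm sensor width:

```python
import math

def fov_to_lens(fov_degrees, sensor_width_mm=36.0):
    """Convert a horizontal field-of-view angle to a focal length in mm,
    via the pinhole relation: lens = sensor / (2 * tan(fov / 2))."""
    fov = math.radians(fov_degrees)
    return sensor_width_mm / (2.0 * math.tan(fov / 2.0))

# In Blender this would drive the camera data-block, e.g.:
#   bpy.data.objects["Camera"].data.lens = fov_to_lens(fov_from_prop)
```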
FWIW, I can set up a second computer to translate the motion capture data before passing it on to Blender.
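I imagine that translation step could be as small as a coordinate-system remap — for example, many phone sensor APIs report a right-handed Y-up frame while Blender is right-handed Z-up. A sketch of that remap (the axis convention here is an assumption; it would depend on what the capture app actually sends):

```python
def yup_to_zup(x, y, z):
    """Remap a right-handed Y-up position to Blender's right-handed Z-up
    frame: Blender X = source X, Blender Y = -source Z, Blender Z = source Y."""
    return (x, -z, y)

# The relay on the second computer would apply this (and any scaling or
# smoothing) to each incoming packet, then forward the result to Blender.
```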
I’ve searched the forums, and it seems that some people have come up with various ways to do real-time motion capture in Blender, but I’m wondering what the recommended approach is for something like this given the current state of Blender. Likewise, if there’s some big feature coming to Blender in a few months that will make all of this very easy, I’d like to know what that feature is so I can keep an eye on it.