Real-time motion capture with a fake camera?

I was wondering about the best way to do the following with Blender (if indeed it can be done):

  • Do real-time motion capture on a prop that acts as a fake camera. (The motion capture can be done via webcams or, say, an Android phone that transmits location/rotation data to the computer.)
  • Have Blender interpret the data from the fake camera as the actual Blender camera during real-time playback.
  • See what the scene looks like in Blender with the “moving camera” while I’m actually moving the camera.

Basically, I want to set up a scene in Blender with characters interacting, then play back the scene while recording data from the real-life hand-held “camera.” It would be as if I were shooting the scene with the (invisible) characters in the room with me. I could pan and tilt toward anything in the shot, walk around the characters, even put the “camera” on a physical dolly or Steadicam rig if desired.

(Bonus points if I can control the zoom of the camera in real time!)

FWIW, I can set up a second computer to translate the motion capture data before passing it on to Blender.
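If the translator just streams simple text packets, the Blender side could be as small as a parser plus a handler that applies each pose to the scene camera. This is a hypothetical sketch – the "x y z rx ry rz" line protocol and the function name are my own assumptions, not an established format:

```python
# Hypothetical sketch of the receiving end inside Blender: a helper that
# parses one pose packet from the tracking rig. The line protocol
# ("x y z rx ry rz", metres and radians) is an assumption for
# illustration, not an established format.

def parse_pose_packet(line):
    """Split 'x y z rx ry rz' into a (location, rotation) pair of tuples."""
    values = [float(v) for v in line.split()]
    if len(values) != 6:
        raise ValueError("expected 6 fields: x y z rx ry rz")
    return tuple(values[:3]), tuple(values[3:])

# Inside Blender, a timer or modal operator could apply each packet:
#   import bpy
#   cam = bpy.context.scene.camera
#   cam.location, cam.rotation_euler = parse_pose_packet(line)
```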

I’ve searched the forums, and it seems some people have come up with various ways to do real-time motion capture in Blender, but I’m wondering what the recommended approach is given the current state of Blender. Likewise, if some big feature landing in Blender in a few months will make all this easy, I’d like to know what it is so I can keep an eye on it.


OK, from the lack of responses, I take it that nobody is doing this with Blender. I’ve decided to build my own virtual camera rig, similar to what was used in Avatar, Rango, and other motion capture films.

This video is a prototype application that I wrote to control Blender. It’s a proof-of-concept as well as a foundation for pulling in information from a Kinect and a wireless gamepad (both of which have yet to be implemented) and sending the appropriate information to Blender.

Once everything is built and the software is written, I’ll be able to walk around my basement with a real, mid-'90s, over-the-shoulder video camera and aim it at things in Blender’s virtual world (which I’ll view in anaglyph 3D via a small monitor attached to the camera). I’ll be able to set up a shot in Blender with, say, two characters talking to each other, then aim my real camera at whoever is speaking, or walk around the scene and capture the action – as if I were a cinematographer on a real set. The virtual camera rig will record my camera moves in real time as an actual recorded ‘performance.’ Besides camera location and rotation, Blender’s zoom lens and interaxial distance (the distance between the two cameras in a 3D camera rig) can also be controlled and recorded on the fly.

I’ll be using an Xbox Kinect to track the location and rotation of the camera, and a hacked wireless Xbox gamepad (which will be wired up to the physical camera’s buttons) for the various camera functions. I’ve already ordered most of the parts and I’m waiting for them to arrive. If anybody’s interested, I can post updates in this thread.

Sounds interesting to me. Don’t get discouraged when people don’t reply – maybe they will once they see it, or see some results.
I’m interested in doing real-time things with Blender, but I haven’t had the time to dive into it. The costs are also discouraging me. But I’m still interested.

Thanks, Rob. I’ll post here as I have more updates.

This would actually be really easy to do with a game engine like Unity and VR-style tracking technology. The idea is that you capture the motion of the camera itself and write it to an animation file that you can import into Blender and attach to a camera object.

The other possibility is capturing yourself holding the camera and moving around, putting that on a dummy character in your scene, and parenting the camera to a hand bone. You wouldn’t be able to capture the zoom, but you’ll probably want to animate the zoom and focus in Blender anyway – it’ll be a lot easier with everything in the shot. After that, you can mask the dummy character out of the scene, or just use a rig without a mesh.

Correct. In fact, if/when I do motion capture for characters, I’ll do it that way: record the character motion in some other program, import the files into Blender, and then tweak the results. But the camera is different. I want the ability to react in real time to the “performance” happening in a Blender scene, along with the audio that’s playing back. The only way that can happen with an external program is if everything in a Blender scene were somehow exported to the external program (including sound)…which is too much hassle, and would probably make me question why I was using Blender in the first place if I had to export everything.

I really like tactile feedback when working with the computer, but I also want to make the workflow as frictionless as possible.

This is how I plan to wire up the video camera:

The Xbox controller, Xbox receiver, and Kinect unit came in today…whoohoo! If they all work properly, I’ll be able to start integrating these peripherals in the next few days.


OK, after a lot of trial and error…partial success! I’ve got the Kinect tracking the camera operator’s position and translating that to a camera position in Blender:

While it’s not shown in this video, it’s totally possible to record the camera position in real time in Blender.
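For anyone curious how the position hand-off can work: the Kinect reports skeleton joints in its own sensor space (x right, y up, z out from the sensor, in metres), while Blender’s world is z-up. A minimal, assumption-laden sketch of the axis remap might look like this – the exact mapping and signs would be tuned to the room:

```python
# A sketch of the coordinate hand-off, assuming the Kinect's skeleton
# space (x right, y up, z out from the sensor, in metres) and a Blender
# world with z up. The axis mapping and sign choices are assumptions
# you'd tune for your own room setup.

def kinect_to_blender(x, y, z):
    """Map a Kinect joint position to a Blender world location (z-up)."""
    return (x, z, y)   # Kinect depth becomes Blender's forward (y) axis

# In Blender, applied per tracking sample:
#   bpy.context.scene.camera.location = kinect_to_blender(*joint)
```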

It looks like I’ll need to incorporate an Android phone into the physical camera prop and use the phone to track the camera’s rotation. While the Kinect is great at tracking the position of human beings, it has no built-in capability to track objects. I dug into OpenCV and implemented some basic object tracking last night, but it looks like that route will be too slow for real-time tracking.

In any case…location information is implemented! Now onto researching Android phones and scoring something cheap on eBay…
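For the rotation side, a phone’s rotation-vector sensor typically reports a unit quaternion. As an illustration (not the actual code from this project), here is a sketch that normalises such a quaternion and extracts a yaw/heading angle – tracking only yaw is a deliberate simplification:

```python
import math

# Illustrative sketch: the phone's rotation-vector sensor reports a
# quaternion in the device frame. Here we normalise it and extract yaw
# (rotation about the vertical axis) to pan the camera; handling only
# yaw is a simplification for the example.

def quat_normalize(w, x, y, z):
    """Return the quaternion scaled to unit length."""
    n = math.sqrt(w * w + x * x + y * y + z * z)
    return (w / n, x / n, y / n, z / n)

def quat_yaw(w, x, y, z):
    """Heading angle (rotation about the vertical axis) in radians."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# In Blender, the full quaternion could be applied directly:
#   cam.rotation_mode = 'QUATERNION'
#   cam.rotation_quaternion = quat_normalize(w, x, y, z)
```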

I received the video camera and ripped out the S-VHS tape mechanism to make room for the electronics I want to put in it:

Here’s the current state of the camera:

The S-VHS tape mechanism is totally gutted, and all of the buttons I plan to use are wired up. Unfortunately, the Xbox controller (or wireless receiver – I’m not sure which) that I received was defective, so I’m waiting for a replacement pair.

Oh, I want to take back something I said earlier in this thread:

Even though this qualifies as feature creep, it looks like it’ll be relatively easy to take the code that I’ve written so far for tracking the camera and use it to control armatures in Blender in real time. It might be a week or two until I get around to implementing it, but it’s in the pipeline.

Latest update: I’ve got preliminary code to track up to two actors at a time and translate that into armature movement in Blender. The app also reads input from the wireless gamepad and translates it into Blender commands.

I’ve torn apart the wireless Xbox gamepad and mounted it on the side of the camera so the joysticks poke out:

And here’s the Xbox wireless controller, mounted to the VHS tape door:

I have yet to wire up the gamepad circuit board to the buttons on the camera, but that’s next.

You’ll notice a bit of Velcro on the camera. I’ll need that for attaching the Android phone on the back as well as keeping the VHS tape door closed. Unfortunately, I’m all out of Velcro, so it’s time to go to bed.

The miniature monitor arrived today, so it’s now attached:

It uses quite a few AA batteries…

Following with great interest!!

Awesome, nice hack!

Thanks, Photox. The camera is basically done. I plan to make some tweaks in the future, but all of the basic functionality is there. I still need to receive the Android phone so I can implement rotation.

Here’s 16 hours of camera modifications in 23 seconds.

I’ve completed the Android app that sends rotation information to the C# app – it’s all working nicely. I’ll post a video eventually, after I’m done fixing the game controller circuit board in the camera. (Long story short: I had to connect wires to some contact points on the game controller circuit board which were completely resistant to solder. So I’m using conductive glue to attach the wires…and conductive glue takes a lonnnnnng time to dry.)

On the character motion capture front, I’ve decided to split out that functionality to a separate app instead of handling both camera and character control in the same C# app. Here’s a short video of me controlling a Blender armature’s left arm in real time via the Kinect. The image on the right is from a Kinect avatar programming demo from Microsoft, which I’ve altered so that it sends character information to Blender.
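As an illustration of the kind of retargeting involved (not the code from the video), one can derive a crude pan/tilt pair for an upper-arm bone from two Kinect joint positions. Real retargeting has to account for the bone’s rest orientation; this sketch assumes a rest pose pointing straight down the y axis, and the bone name in the comment is hypothetical:

```python
import math

# Illustrative only: derive a pan/tilt pair for an upper-arm bone from
# two Kinect joint positions (shoulder and elbow). Real retargeting
# needs the bone's rest orientation; this sketch assumes a rest pose
# pointing straight along the y axis.

def arm_angles(shoulder, elbow):
    """Yaw and pitch (radians) of the shoulder-to-elbow direction."""
    dx = elbow[0] - shoulder[0]
    dy = elbow[1] - shoulder[1]
    dz = elbow[2] - shoulder[2]
    yaw = math.atan2(dx, dy)                    # swing around the vertical
    pitch = math.atan2(dz, math.hypot(dx, dy))  # lift out of the plane
    return yaw, pitch

# In Blender, applied to a pose bone (the bone name is hypothetical):
#   bone = armature.pose.bones["upper_arm.L"]
#   bone.rotation_mode = 'XYZ'
#   bone.rotation_euler = (pitch, 0.0, yaw)
```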

Fred Brooks said, “Plan to throw one away; you will, anyhow.” I didn’t have to throw the whole camera away, but after unsuccessfully experimenting with solder and silver paste, I replaced the entire Xbox controller circuit board tonight and attached the wires correctly with the new conductive paste that arrived in the mail. 24 hours from now, the paste should be dry and I’ll finally have my virtual camera ready to go.

I’ve got to fix and refine some stuff, but the camera is basically working:

The camera is complete! It works as well as I hoped it would. After some real world experimentation, I ended up going with the following layout:

It takes a little bit of practice to get the workflow down (e.g. remember to toggle computer tracking on/off before recording or playing back a camera performance), but operating the camera is starting to feel pretty natural.

A couple of explanations about the buttons:

  • The Operator Tracking On/Off button lets me operate the camera on a tripod without having my position tracked by the Kinect. Rotation information from the Android phone is still tracked, so it’s just like operating a locked-off camera, but in the virtual world.
  • Room scale controls how far real-world camera movement carries in Blender’s virtual world: halving the room scale makes one real footstep cover twice the virtual distance, and vice versa.
  • One of the Xbox joysticks in the camera acts as a crane, and the other moves the camera to any location on the ground. (The joysticks are not present in this picture, but you can see them in the video I posted.) When the computer is tracking the camera position and/or rotation, the joysticks are deliberately disabled – they’re just meant to get you into the general vicinity of where you need to be for the next shot.
  • In my original diagram, I planned to control the interaxial distance between two cameras in a 3D session. I ended up dropping that because it’s impossible to effectively analyze 3D on a tiny monitor – especially when the end product will end up on a movie theater screen.
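The room-scale behaviour described above (lower scale → bigger virtual steps) implies dividing real-world movement by the factor before applying it in Blender. A tiny sketch, with the function name being my own invention:

```python
# Sketch of a room-scale mapping where real-world movement is divided by
# the room-scale factor before being applied in Blender, so halving the
# scale doubles the virtual stride. The function name is illustrative.

def apply_room_scale(real_delta, room_scale):
    """Scale a real-world movement vector (metres) into virtual units."""
    return tuple(d / room_scale for d in real_delta)
```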

I ended up sending keyframe data to Blender every 83 milliseconds, which equates to a keyframe being written every other frame at 24 FPS. If I try to send a keyframe for every frame at 24 FPS, Blender eventually chokes and starts skipping frames. I’m not sure whether the issue lies with Blender, Windows, or my custom app, but essentially recording at 12 FPS, playing back at 24 FPS, and using linear interpolation between the keyframes feels very smooth and Steadicam-like on playback.
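The numbers check out: 83 ms per sample at 24 FPS is roughly one keyframe every other frame. A small sketch of that cadence (the `keyframe_insert` call in the comment is standard Blender API; everything else is illustrative):

```python
# Back-of-the-envelope check of the cadence: one keyframe every 83 ms
# works out to roughly one keyframe every other frame at 24 FPS.

FPS = 24
SEND_INTERVAL_MS = 83

def keyframe_frames(duration_s):
    """Frame numbers that receive a keyframe over `duration_s` seconds."""
    frames = []
    t = 0.0
    while t < duration_s * 1000:
        frames.append(round(t * FPS / 1000) + 1)  # Blender frames are 1-based
        t += SEND_INTERVAL_MS
    return frames

# In Blender, each sample would be written with, e.g.:
#   cam.keyframe_insert(data_path="location", frame=f)
# with the F-curve interpolation set to 'LINEAR'.
```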

I need to make another video of all of this in action. For now, the previously posted video (without the fancy zoom, crane, tracking on/off, room scale, undo, and rewind features) will have to do.

Although the camera is done, I still plan to be able to control Blender armatures via the Kinect. I’ll post updates on this thread as they happen.