[Addon] Mocap with Multiple Cameras

Overview
Optical MoCap systems use two or more calibrated cameras to track markers on a mocap actor or prop. The tracking data and the known locations of the cameras allow software to determine the 3D location of each marker by triangulation. I wanted to try mocap, but I didn’t want to buy a Kinect or other hardware.

Blender’s tracking abilities have improved so much recently that its standard functionality can be used for most of the calibration and tracking required in a mocap session. It just needs the bit that uses the tracking data from more than one movie clip to locate the markers in 3D. I thought someone must have done this already, but I haven’t been able to find any examples (please tell me if I’ve missed something!).

It sounds pretty easy compared to what Blender already does. I have somewhat limited scripting skills, but I’ve made a ‘Triangulate’ addon to do this. Before running the addon, you need to have set up two or more movie clips with tracking data for the markers on a mocap actor. There must also be matching cameras in a 3D scene which accurately match the location, rotation, and lens/sensor data of the real cameras that produced the movie clip footage. There are some rules about naming the elements in the scene, covered in the documentation.

The ‘Triangulate’ addon combines the movie clip tracking data with the 3D camera locations, and adds keyframes to a series of empties so that they match the 3D positions of the actor’s tracking markers.
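For anyone curious about the maths, the core of this kind of triangulation is a least-squares intersection of the rays from each camera through its tracked 2D marker. Here’s a standalone sketch of that idea using NumPy — not the addon’s actual code. In Blender, the ray origin would come from each camera’s `matrix_world` and the direction from the tracked 2D coordinate plus the lens/sensor data:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection point of N rays.

    Each ray is p = o + t*d. Minimising the sum of squared
    perpendicular distances gives the linear system:
        sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)          # unit direction
        M = np.eye(3) - np.outer(d, d)     # projects onto plane normal to d
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Two rays that cross at (1, 1, 0):
origins = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
dirs = [np.array([1.0, 1.0, 0.0]), np.array([-1.0, 1.0, 0.0])]
point = triangulate(origins, dirs)  # → approximately [1, 1, 0]
```

With more than two cameras seeing the same marker, the same formula simply averages out the tracking noise, which is one reason extra cameras improve accuracy.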

The empties can then be used by the 3D artist to drive an armature, or for any other purpose. The built-in ‘Motion Capture’ addon can also be used to retarget the captured actions to a target armature or rig.

Download:
The Zip file contains the Triangulate.py addon file and a documentation text file.

Triangulate.zip (6.44 KB)

Edit: Now updated for Blender 2.8
Github page with download links

Demonstration:
I’ve made a half-hearted demonstration video that might give you some idea how this works. It’s a bit quick to be a tutorial - I hope there’s enough detailed information in the ‘Documentation.txt’ file in the download to explain exactly what’s required. Also, the video refers to the addon as ‘CamCap’; after recording it I decided ‘Triangulate’ was a better name, as it’s by no means a full MoCap solution.

Note that the setup of the cameras in the 3D scene to match the real cameras is the critical step, and this addon doesn’t do it for you. I’ve started to think about a calibration function to help align the cameras to one master camera, which might make this easier.


I might just go over this bit again!

Before the addon can be used, for each real camera you use, you have to put a camera object into the 3D scene, and each camera object must have location, rotation, and lens data matching its real counterpart.

My original thought was to use Blender’s tracking functions to do this, and I’ve briefly shown this in the video. However, it now seems not accurate enough and too much of a hassle. I’ve found it better to physically measure the location of each camera in the real world, and build a simple 3D virtual set to match the real set, including the cameras. If you make sure the cameras are mounted with level horizons, you just have to move the 3D cameras until the virtual set elements match the real video. This is pretty easy and more accurate.

Also, the addon uses the names of the movie clips to match them to the 3D cameras. The names are case sensitive and must match exactly. For example, a movie clip called “Camera1.mov” won’t match a camera object called “camera1”.

The addon looks at every empty in the current scene, and tries to find at least two tracks within the movie clips with the same name as the empty. This matching is also case sensitive. It doesn’t matter much if a track is occluded and lost for a short period within a movie clip: keyframes are only added to the empties for frames where there is valid tracking data.
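The matching and frame-filtering logic can be pictured with plain dictionaries. This is a hypothetical sketch, not the addon’s source — in the real addon the clips, tracks, and per-frame markers come from `bpy.data.movieclips` — but the rule is the same: exact case-sensitive name matches, and a keyframe only where at least two cameras saw the marker:

```python
def collect_keyframes(empty_names, clips):
    """For each empty name, gather same-named tracks from every clip,
    keeping only frames where at least two clips have valid data.

    clips: {clip_name: {track_name: {frame: (x, y)}}}  (stand-in for bpy data)
    Returns: {empty_name: {frame: [(clip_name, (x, y)), ...]}}
    """
    out = {}
    for name in empty_names:
        per_frame = {}
        for clip_name, tracks in clips.items():
            track = tracks.get(name)  # exact, case-sensitive match
            if track is None:
                continue
            for frame, coord in track.items():
                per_frame.setdefault(frame, []).append((clip_name, coord))
        # triangulation needs two views, so drop single-camera frames
        out[name] = {f: obs for f, obs in per_frame.items() if len(obs) >= 2}
    return out

clips = {
    "Camera1.mov": {"Hip": {1: (0.5, 0.5), 2: (0.6, 0.5)}},
    "Camera2.mov": {"Hip": {1: (0.4, 0.5)}},  # marker occluded on frame 2
}
result = collect_keyframes(["Hip", "Knee"], clips)
# "Hip" gets a keyframe only on frame 1; "Knee" has no matching tracks
```

This also shows why short occlusions are harmless: the empty simply gets no keyframe on those frames, and Blender interpolates across the gap.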

Would it be possible to put down coloured tape on the ground in the shape of a square and then use BLAM to calibrate the cameras individually?

Hi blazraidr, using BLAM is certainly a possibility. Although I’ve used BLAM a bit, I can’t remember if there’s a scaling option. You’d have to do the calibration on all cameras, and they would have to be scaled so that both cameras see the marked-out square as the same physical size. I guess you would also have to match up the squares from each camera to the same physical space manually.

I’m finding that measuring the camera position is easy, and there’s almost no calibration work to do the next time if you use the same space and put the cameras back in the same locations. I’ll post another quick video to show this.

Here’s another short demonstration. I used two cameras - a 550D SLR and a Samsung S3 phone - located fairly close to each other. I measured the location of the cameras and built some simple geometry which can be used to set up the pan and tilt of the virtual cameras to match the real scene, and to check the lens assumptions.
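As a rough illustration of what lining up pan and tilt involves, here’s a small helper that computes the pan and tilt angles needed to aim a camera at a measured reference point. The conventions here (Z up, +Y as the zero-pan direction) are an assumption for the sketch; this isn’t part of the addon:

```python
import math

def pan_tilt(cam_pos, target):
    """Pan (heading about Z) and tilt (pitch) in degrees to aim a
    camera at `target`. Assumes Z-up with +Y as the zero-pan axis."""
    dx = target[0] - cam_pos[0]
    dy = target[1] - cam_pos[1]
    dz = target[2] - cam_pos[2]
    pan = math.degrees(math.atan2(dx, dy))               # yaw from +Y toward +X
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # pitch above horizon
    return pan, tilt

# Camera at the origin, reference point 1 m forward and 1 m right:
pan, tilt = pan_tilt((0.0, 0.0, 0.0), (1.0, 1.0, 0.0))  # → (45.0, 0.0)
```

In practice you’d set the virtual camera’s rotation from these angles and then nudge it until the virtual set geometry overlays the real footage.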

I’ve also had a try at mapping the movements to a standard Rigify rig. (I’ve yet to do the head, feet and hand movements). The results seem not too bad considering there is no special hardware.

Wow man, this looks really good! As you said, considering there’s no special hardware or software, the results look pretty solid.

Could using a third camera (on the back maybe) help in getting a more fluid and accurate capture?

Looks really promising. I know witness camera support for the tracker was proposed before Gooseberry took off, but it’s on the back burner now.

Hi julperado, in general, if both cameras could see and track the markers, I got a pretty accurate 3D track. Extra cameras would help ensure that at least two cameras can see every marker all the time. For example, in the setup I had, the actor couldn’t turn around, or even lift his hands, without the markers being lost to view. Having said that, one more camera at the back wouldn’t really help, as it’s unlikely it could see a marker at the same time as a camera at the front.

The next setup I was thinking of trying was to add a GoPro mounted high up and looking down at around 45 deg. Alternatively, one camera at the front and one each side but still a little in front. That way, markers on each side of the actor could be tracked by the front and side cameras. This would allow a lot more movement.

The more cameras the better, but the more work and the more room you need to have too.

Hi 3pointEdit, yes, I figured something like this would find its way into Blender eventually, and done better than I can do it. As I said, it feels like all the hard work is already done.

I wonder what your neighbours think seeing you all day long doing that stuff on the porch :smiley:

Seriously though, looks really good! Results already look very promising.
However, manual camera calibration is the weakest point here. Is BLAM capable of resolving multiple cameras with respect to each other?

Hi Kilbee - I do much stranger things than that, so the neighbours are used to it.

BLAM is great for reconstructing geometry from a still, and working out the focal length, but I think more accuracy is needed to get the camera positions.

My current method of measuring the camera locations and using some simple 3d geometry to line up the pan and tilt seems to be working, and is very little work after the first time.

I’m also having a look at some auto calibration methods.

Hello,

great work!
I plan to test the addon with my BMPC 4K as the main camera, and I will be looking for a second camera.

What about replicating markers behind or beside the body in the same positions?
In that case we would be able to put more cameras around the actor.

Any progress on this?

This add-on seems to work pretty well for the most part. Thanks for writing it. I’m having a bit of a problem, though. Only a little over half of my tracking data gets converted to keyframes. My entire frame range is tracked, but only some of the frames end up applied to the empties. Has anyone else run into this problem?

Cool stuff! I lack some knowledge in tracking to actually give this a try but I’ll keep it in mind for the future.

love this!

How did this not attract more attention? This is awesome with a capital ‘Oh’. Any update on how this has been going?

GSoC included this as a formal student project this year. You might check it out http://graphicall.org/hlzz001

Does that mean this is going to be merged into Blender proper?

This could save so much time in animation, even just as a starting point for character animation. Awesome.

Thanks for your awesome work!
Is it possible to host the code on GitHub so we can all help improve it?