Motion Tracking

Hey everyone, this is a newbie, but don't underestimate my imagination. Lol. I have a question for all Blender users, newbie or pro. Can you use regular motion tracking and then apply that to your model to animate it? I'm not talking about tracking a location or the ground; I'm talking about tracking points on a human body and importing that data (the way you usually import a script into Blender) to apply it to a human, animal, or whatever body, so it animates the way the points move. If the answer is yes, or if someone just said "oh my God, that is an awesome idea" and has some idea how to do it, can you please tell me: can you do it with Icarus? And if the answer is again yes, could someone put up a tutorial to help everyone out?

What you mean is motion capture. Yes, it can be done with BVH files.

http://wiki.blender.org/index.php/Doc:Tutorials/Animation/Advanced/MoCap

http://www.centralsource.com/blender/bvh/what.htm
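For the scripting-inclined: in newer Blender builds (2.5+) the BVH importer is also exposed through the Python API, so you can bring a capture in without touching the menus. A minimal sketch (the file path is just a placeholder):

```python
# Run inside Blender's Python console or text editor.
# Imports a BVH motion-capture file as an animated armature.
import bpy

# Placeholder path -- point it at your own capture file.
bpy.ops.import_anim.bvh(filepath="/path/to/capture.bvh")

# The importer creates a new armature whose bones carry the
# captured keyframes; it should be the active object afterwards.
arm = bpy.context.active_object
print(arm.name, "imported with", len(arm.pose.bones), "bones")
```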

Thanks, sorry about that.

Yes, but can you do it with Icarus?

I don't believe so. Icarus tracks fixed objects that don't move and triangulates them. It tracks over space, not time; time is irrelevant to Icarus. When something moves, Icarus sees it as a bad track point and deletes it. Icarus is old software; The Pixel Farm bought it and turned it into PFTrack, which I think can do motion capture as of version 5.0, but it's expensive.
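To make the "bad track point" behaviour concrete: a static-scene tracker solves for one 3D position per feature and throws the feature away if its reprojection drifts between frames. Here's a toy illustration of that test (NumPy; this is the general idea, not how Icarus actually implements it):

```python
import numpy as np

def is_static(track_2d, camera_matrices, threshold_px=2.0):
    """Crude test in the spirit of a static-scene tracker:
    triangulate ONE 3D point from all observations, reproject it
    into every frame, and reject the track if any frame disagrees.
    track_2d: list of (u, v) pixel observations, one per frame.
    camera_matrices: list of 3x4 projection matrices, one per frame."""
    # Linear triangulation (DLT) over all frames at once.
    rows = []
    for (u, v), P in zip(track_2d, camera_matrices):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    X = X / X[3]  # homogeneous 3D point

    for (u, v), P in zip(track_2d, camera_matrices):
        proj = P @ X
        err = np.hypot(proj[0] / proj[2] - u, proj[1] / proj[2] - v)
        if err > threshold_px:
            return False  # point moved between frames -> bad track
    return True
```

A marker on a moving body fails this test in every tracker built on that assumption, which is why Icarus deletes it.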

I’m a grad student in an academic research group that does a lot of work related to motion capture.

What you’re looking for is called optical motion capture. It typically uses an array of IR cameras tracking retroreflective markers in fixed positions on the body. You need at least two cameras (usually more like 6+) and some pretty heavy-duty software. The cheapest solution I’ve used is OptiTrack, by Natural Point (http://www.naturalpoint.com/optitrack/); it costs about $5,200 for a basic setup, which includes 6 cameras and software (you provide your own camera mounts, etc.). The Natural Point software can recognize user-created rigid bodies (i.e., “put some markers on a prop”) and one person at a time wearing a fixed marker set. I’ve also used a much more expensive system (Eagle/EVaRT by Motion Analysis, Inc.), which is higher-framerate, higher-resolution, and much more flexible (it can use arbitrarily defined marker sets for articulated models), but also significantly more expensive. I haven’t seen the invoice for our 5-year-old Eagle system, but I’m under the impression it was in the mid six figures.

I’m not aware of anyone distributing free or open-source optical motion capture software, although I’d be happy to be proven wrong!

Anyway, you typically calibrate the system (using a fixed “wand” and “L-frame” to define the space, so the software can determine the positions of all the cameras), then put markers on a person and capture the move. Most of these packages can export the data as BVH, which contains the bone positions/angles, and you then apply that to a rig. Your rig must be built to be articulated at the same points as the mocap model (see the sketch below).
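Since the rig has to match the mocap skeleton, it helps to list the joints a BVH file actually defines before you build anything. A quick standalone sketch (plain Python, no Blender needed; the path is a placeholder):

```python
# Print the joint hierarchy of a BVH file, indented by depth,
# so you can check your rig's bones against the mocap skeleton.
def print_bvh_joints(path):
    depth = 0
    with open(path) as f:
        for line in f:
            tok = line.split()
            if not tok:
                continue
            if tok[0] in ("ROOT", "JOINT"):
                print("  " * depth + tok[1])
            elif tok[0] == "End":       # "End Site" -- a leaf, no name
                print("  " * depth + "(end site)")
            elif tok[0] == "{":
                depth += 1
            elif tok[0] == "}":
                depth -= 1
            elif tok[0] == "MOTION":    # hierarchy section is over
                break

print_bvh_joints("/path/to/capture.bvh")  # placeholder path
```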

Another student in my research group is working on markerless motion capture, but it’s very experimental and I don’t think anyone is selling or distributing open-source solutions to do this yet. You’d still need several cameras, and probably a studio with controlled lighting conditions and a uniform background. Give it a couple of years, though. =]

If you don’t have the resources to own your own mocap studio, you may be able to rent time, and there are a few different websites that offer BVH files to download and play with (Google!)…

Hope this is useful. I haven’t had a chance to use our gear for animation yet (we primarily do realtime analysis/feedback, e.g., streaming marker positions over the network and analyzing them), but I’m a pretty experienced mocap operator, so I’ll try to answer any questions I can.
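To give a flavour of what I mean by streaming marker positions: here's a toy sketch assuming a made-up plain-text protocol (one marker name plus xyz per packet). Real systems like OptiTrack or EVaRT each have their own streaming APIs, so everything below (the format, the port) is invented for illustration:

```python
# Toy illustration of streaming marker positions over UDP.
import socket, time, math

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

t = 0.0
while t < 2.0:                      # stream two seconds of fake data
    # Fake a wrist marker moving in a circle -- stands in for real
    # tracked data from the mocap software.
    x, y, z = math.cos(t), math.sin(t), 1.0
    msg = f"WRIST {x:.4f} {y:.4f} {z:.4f}\n"
    sock.sendto(msg.encode(), ("127.0.0.1", 9000))  # port is arbitrary
    time.sleep(1 / 120)             # ~120 Hz, a typical mocap frame rate
    t += 1 / 120
```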