DIY motion capture setup

Researching and aggregating various motion capture solutions.

Marker Motion Capture using normal cameras

1. Facial Motion Capture with 1 camera

I found some YouTube videos of people doing facial motion capture with just one camera, here.

2. Body Motion Capture using 2 or more cameras for triangulation by nerk

3. Body Motion Capture using 2 or more cameras for triangulation by Benjy Cook

4. Body Motion Capture using 2 or more cameras for triangulation by Tianwei37

  • You can mix different camera models as long as you set up the matching cameras in Blender with the same focal length and other settings. I am looking at cheap action cameras, which usually offer higher frame rates like 720p @ 120 frames per second or 1080p @ 60 fps.

5. Body Motion Capture using Sony PSEyes and IPISoft

The Sony PlayStation Eyes, or PSEyes, are very cheap and are supported by some software, like the IPISoft suite. However, the software can get very expensive when using multiple cameras.
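The multi-camera triangulation these setups rely on can be sketched in plain NumPy: given each camera's 3x4 projection matrix (built from the focal length and pose you enter in Blender) and the 2D marker position in each view, the 3D point is the least-squares solution of the DLT system. This is a minimal illustration of the principle with toy cameras of my own invention, not code from any of the tools mentioned above.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker seen from two cameras.

    P1, P2: 3x4 projection matrices (intrinsics @ [R | t]).
    x1, x2: (u, v) pixel positions of the same marker in each view.
    Returns the estimated 3D point in world coordinates.
    """
    # Each view contributes two linear equations in the homogeneous point
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector for the smallest
    # singular value; dehomogenize to get (X, Y, Z)
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# Toy setup: identical intrinsics (same focal length, as noted above),
# with the second camera shifted 1 unit along X
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])
P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])

X_true = np.array([0.2, 0.1, 5.0])
Xh = np.append(X_true, 1.0)
x1 = (P1 @ Xh)[:2] / (P1 @ Xh)[2]   # marker as seen by camera 1
x2 = (P2 @ Xh)[:2] / (P2 @ Xh)[2]   # marker as seen by camera 2

print(triangulate(P1, P2, x1, x2))  # recovers ~[0.2, 0.1, 5.0]
```

This is also why matching focal lengths matters: the projection matrices must describe the real cameras, or the recovered point drifts.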

Motion Capture using accelerometers

1. VIVE Trackers and/or VIVE HMD and/or Hi5 VR Glove with IKINEMA Orion

2. Xsens MVN sensors and software

Markerless Motion Capture using depth cameras

1. Kinect v1, Kinect v2 or Creative BlasterX Senz3D with NI-mate

It uses the Xbox One Kinect v2 with a program called NI-mate, which has a Blender addon. NI-mate outputs its data to empties in Blender, which requires setting up a skeleton to copy their movements.

Video tutorial by Remington:

2. Kinect v2 with Kinector and Manuel Bastioni Lab

It uses the Xbox One Kinect v2 with a program called Kinector. It also has a Blender addon and works with Manuel Bastioni Lab rigs, with no setup required for Manuel Bastioni Lab characters. It can do finger tracking and facial motion capture too.


3. Kinect v1, Kinect v2, ASUS Xtion and PrimeSense Carmine 1.08 with IPISoft

Site here.

Compatibility of USB Controllers with Kinect v2

Kinect v2 needs USB 3.0 and Microsoft says only Intel and Renesas controllers are supported.

NEC's chip business is now Renesas, so NEC and Renesas controllers are the same: the Renesas µPD720200 and the NEC D720200F1 are the same chip, and both work.

ASMedia controllers either don't work at all or have issues. The ASM1042A is reported as not working. Some have gotten it to work by using the Windows driver instead of ASMedia's driver (they may have been using a newer ASMedia chipset, covered below).

Etron controllers don’t work. More reports here.

Running other USB devices on the same controller can leave the Kinect v2 without enough bandwidth, resulting in lower frame rates or an error saying there aren't enough resources available. Some USB 3.0 PCI-E cards are reported to get lower frame rates, while others hold steady at the maximum of 30 fps. In my experience, the frame rate may be low at first but eventually climbs to 28-30 fps. I read that the Kinect v2 reserves 3 Gbit/s of bandwidth but only uses about 1 Gbit/s, so PCI-E 2.0 x1 has enough bandwidth.
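Those bandwidth figures can be sanity-checked with back-of-the-envelope arithmetic. The resolutions below are the Kinect v2's known stream sizes (1920x1080 color, 512x424 depth and IR); the ~2 bytes per pixel per stream is my own ballpark assumption, not a spec figure, so treat the result as a rough estimate.

```python
# Back-of-the-envelope Kinect v2 data-rate estimate at 30 fps.
FPS = 30
color_bytes = 1920 * 1080 * 2   # color frame, assuming ~16 bits per pixel
depth_bytes = 512 * 424 * 2     # depth frame, 16-bit samples
ir_bytes    = 512 * 424 * 2     # infrared frame, 16-bit samples

total_gbit = (color_bytes + depth_bytes + ir_bytes) * FPS * 8 / 1e9
print(f"streams: ~{total_gbit:.2f} Gbit/s")   # ~1.2 Gbit/s

# PCI-E 2.0 x1: 5 GT/s with 8b/10b encoding -> 4 Gbit/s usable,
# comfortably above the ~1 Gbit/s the camera actually streams
pcie2_x1_gbit = 5 * 0.8
print(f"PCI-E 2.0 x1: {pcie2_x1_gbit:.0f} Gbit/s")
```

The estimate lands near the reported ~1 Gbit/s actual usage, and a single PCI-E 2.0 lane covers it several times over; it is the 3 Gbit/s USB-side reservation that makes sharing a controller risky.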

USB Hubs
It has been reported that using a USB Hub can work. More reports here.

USB 3.1
I was looking at USB 3.1 cards; most are x4, but all of the ones I found use an ASMedia controller. One person said the ASM1142 works and another said it had issues; you may have to switch to the Windows driver. I also read that the USB 3.1 cards don't actually use all 4 lanes: they run x1 on PCI-E 3.0 and x2 on PCI-E 2.0. Nothing on the ASM2142.

Multiple Kinect v2s
A few people have gotten multiple Kinect v2s (up to 5) working by installing multiple USB 3.0 PCI-E cards; each Kinect v2 needs its own controller. Most software doesn't support multiple Kinect v2s, though: the OpenKinect libfreenect2 driver supports them, but the Microsoft driver does not. There are PCI-E cards with more than one controller, such as a 4-port card with dual or quad channels using the supported Renesas µPD720200 controllers, but they are more expensive; it would be cheaper to buy two separate cards. I did not find any confirmation that PCI-E cards with multiple controllers work.

Transcend TS-PDU3 2-port PCI Express USB Adapter (multiple reports, multiple websites)
SIIG 2-Port Dual Profile PCIe Adapter with SuperSpeed USB 3.0 (JU-P20612-S1) (reported here)
Rosewill 2-Port USB 3.0 PCI Express Card Components RC-505 (multiple reports in Amazon reviews and me)

You see the little triangle at the bottom left hand corner of your message? Click on that and type the reason to move the thread. An admin will come over and move it.

Moved from “Artwork > Animation” to “Support > Animation and Rigging”


I am currently looking at buying several Thieye i60e cameras, which are below $70. The video quality is good, the microphone is terrible, and it can do 720p at 120 fps (probably good for motion capture) or 1080p at 60 fps. It also has WiFi with a mobile app and supports SD cards up to 128 GB. Seems like it could get the job done. I found cheaper options, but they look sketchy.

Added Kinect options to first post.

I’m not sure how many skeletons the Kinect v2 can track simultaneously; it’s at least two, but I think four would be pushing it. It’s the easier setup as of writing. With either approach, occlusions (likely, considering the example) will cause the tracking to interpolate and/or guess joint positions, which will very likely require cleanup. If you plan the shot, actor paths, timing, etc., you could shoot one actor at a time at the best angle and just merge the animations in Blender. Preview them before wrapping up whatever you do.

If this is live action you can use tracking data / markers to composite cg elements into shots, bypassing some of the above joys.

I read that the Kinect v1 tracks 2 skeletons with 20 joints each, and the v2 tracks 6 skeletons with 25 joints each.

I was just thinking about the SWAT scene; like you said, if I shot the Kinect head-on and did each SWAT member separately, it would probably be okay.

moved to first post

My open source Kinect motion capture toolkit, please test it on Windows if you have a Kinect for 360 sensor.

Sorry, don’t have a v1/360 sensor.

Just saw a story that Microsoft won’t be manufacturing any more Kinects.

I am tracking speech with a Python script that opens a v4l device from a $5 webcam running at 5 fps. I am also tracking a 3D skeleton of myself with that same camera, using colored rings on my joints and another small Python script to track their orientation in 3D space. Of course, all of this is simply color differencing; as long as there are no other objects of the same color in the scene, it's easy.
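The color-differencing idea described here can be sketched in a few lines of NumPy: threshold the frame against a reference color and take the centroid of the matching pixels. This is my own minimal illustration on a synthetic frame, not the poster's actual v4l script, and the marker color and tolerance are made-up parameters.

```python
import numpy as np

def track_color(frame, target, tol=30):
    """Return the (row, col) centroid of pixels near `target` color, or None.

    frame: HxWx3 uint8 RGB image.
    target: (r, g, b) reference color of the marker/ring.
    tol: per-channel tolerance for the color match.
    """
    diff = np.abs(frame.astype(int) - np.array(target, dtype=int))
    mask = (diff < tol).all(axis=2)      # pixels within tolerance on all channels
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()      # centroid of the matching blob

# Synthetic 100x100 frame with a red patch standing in for a marker ring
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[35:45, 55:65] = (255, 0, 0)
print(track_color(frame, (255, 0, 0)))  # -> (39.5, 59.5)
```

As the poster notes, this only works while nothing else in the scene shares the marker color; per-frame centroids in two or more views could then feed the same kind of triangulation used elsewhere in this thread.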

So wait, you have a ring around your joints and the ring is different colors and your script can determine which way your joint is rotated using the color of the ring that it can see?

Perhaps this would help.
The only problem with this approach is tracking a camera to the character, so that the distance from the camera is constant and tracked, reducing the overhead of changes in what I call "quadrature", or the pieces of elements from which the 2D arc is determined.

I got my Kinect v2 and Rosewill USB 3.0 PCI-E 2.0 x1 card, and everything is working! I was able to get NI-mate to send its data to the NI-mate Blender plugin, which created empties representing each joint. I followed Remington Graphics' tutorial, which has you set up a Capture Armature that copies the empties using Stretch To constraints, and then a Retarget Armature that uses Copy Rotation bone constraints to copy the Capture Armature. The results on the empties and the capture skeleton look right, but the Retarget Armature isn't copying them correctly. Remington has a blend file with his setup, and I don't see any differences from mine.