Motion capture system [openHardware] [WIP]

Hi everyone.
What I want to show you is not precisely an addon, and it's not yet released :yum:
Over the last few years I've been working on a human motion capture system: a hardware-software framework able to capture and digitize human movements and apply them to a digital armature.
During development I relied heavily on Blender as a 3D sandbox for testing and visualising lots of intermediate steps, and now I'm developing the first official client for the system as a Blender addon. It will be capable of visualising the movements of the performer wearing the suit, recording them as armature actions, and retransmitting them to another program.
I began the development because I wanted such a system for my own 3D creations, so I think some of you here will find it interesting.

The whole framework will soon be released under an open licence, so users will only need the will to build it in order to start capturing.

Here’s more info: http://chordata.cc
And this is a video of a live performance that uses the system. What you see on screen is a custom piece of software built with openFrameworks, receiving the output of the Blender client addon.

Hope you like it, more news to come…


Wow, this looks interesting! I will have a better look at the website, but at first glance I couldn't tell whether you intend to provide the hardware as well, or if that will be completely up to the user to take care of.


Would it include hand posing?

I couldn't tell whether you intend to provide the hardware as well, or if that will be completely up to the user to take care of.

The idea behind this project is to allow anyone (with or without previous knowledge) to build their own system. All the building blocks are already created and the information will be available; it is up to the user to put them all together.
For example, on the hardware side I've created a sensing unit based on inertial sensors (like the ones that make smartphones aware of their orientation in 3D space). It can be arbitrarily connected to other "siblings" to create a capturing suit. I call it the K_CEPTOR.

At the moment I'm working on the "user experience sugar" to facilitate the process of building the system from all the parts. Once this step is finished I have to figure out the best way of making the hardware available to users. There will always be the option of downloading the sources and building everything from scratch, but I think most users would prefer an easier path.
So I would like to know the opinion of the Blender community: do you think that having access to a DIY kit containing the PCB and the components would fit your needs? The kit would have to be finished at home by hand-soldering the components, which is not a hard process at all, but it requires some dedication. Of course, all the instructions would be available.

The positive aspects of this kind of distribution are:

  • The final cost gets reduced significantly!
  • In the process you get to learn something about how this technology works

Would it include hand posing?

Modularity and adaptability are two of the key concepts in the philosophy behind this project. It is capable of capturing any kind of jointed structure, not only the human body. If you manage to put the sensors all over a dog and create a dog-shaped Blender armature, you should be able to capture its movements. This of course also applies to human hands.
The only caveat at the moment is that the current K_CEPTOR is a little bigger than it would need to be to let each individual finger move.
I do have plans to shrink it in the future, but for now I'm concentrating on making the system easy to reproduce as it is.
Being an open hardware project, if you have some electronics knowledge you can always modify the design yourself to make it fit on the fingers, or maybe someone else will, and then share it…

I might give it a try. I'd like to sharpen my soldering skills anyway :smile:

What is the expected date and price for such a DIY project?
How does the system interface with a computer running Blender? Does it require a Wi-Fi network?
How do you deal with the drifting/sliding that occurs with IMUs in general?

Also, I've always wondered why such projects don't use flex sensors for the bending of the hand. Do they not work well for that purpose? Or maybe they're expensive?

Really interesting! Unfortunately I can't say I have much experience in DIY, but it looks like a solution to take into consideration when I want to get closer to the motion capture world. I will keep an eye on this! :slight_smile:

All the sensing nodes are read by software running on an SBC (single-board computer), which then sends the information over a networking protocol (OSC). So the client PC (the one running Blender and the addon) should be on the same Local Area Network. The easiest way to achieve that is to connect the SBC to your home Wi-Fi, or to use your phone's hotspot.
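For anyone curious about what receiving that stream looks like, here's a minimal Python sketch using the python-osc library. Note that the port number here is something I made up for illustration, not necessarily the system's actual one:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def handle_message(address, *args):
    # e.g. one rotation per bone arriving on its own OSC address
    print(address, args)

dispatcher = Dispatcher()
dispatcher.set_default_handler(handle_message)

# Listen on the same LAN as the SBC; 6565 is a placeholder port
server = BlockingOSCUDPServer(("0.0.0.0", 6565), dispatcher)
server.serve_forever()
```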

To be precise they are not IMUs but MARGs, so they include a gyroscope, an accelerometer and a magnetometer. I'm using a very popular and stable sensor fusion algorithm created by S. Madgwick, which gives good results even in long captures. For example, in the video I posted above you can see that the results stay consistent over a performance of about 5 minutes…
Other sensor fusion algorithms can easily be plugged into the system.
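To give an idea of what one fusion step looks like, here is a rough NumPy sketch of the gyro + accelerometer (IMU) variant of Madgwick's update; the full MARG version adds an analogous magnetometer correction on top of this:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two (w, x, y, z) quaternions."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def madgwick_imu_update(q, gyro, accel, dt, beta=0.1):
    """One filter step: integrate the gyro, then nudge the result
    toward the orientation implied by gravity (the accelerometer).
    q: (w, x, y, z) quaternion, gyro in rad/s, beta: filter gain."""
    # Rate of change of orientation from the gyroscope alone
    q_dot = 0.5 * quat_mult(q, np.array([0.0, *gyro]))

    # Gradient-descent correction against the measured gravity vector
    a = accel / np.linalg.norm(accel)
    w, x, y, z = q
    f = np.array([
        2*(x*z - w*y) - a[0],
        2*(w*x + y*z) - a[1],
        2*(0.5 - x*x - y*y) - a[2],
    ])
    J = np.array([
        [-2*y,  2*z, -2*w, 2*x],
        [ 2*x,  2*w,  2*z, 2*y],
        [ 0.0, -4*x, -4*y, 0.0],
    ])
    step = J.T @ f
    step /= np.linalg.norm(step)

    q = q + (q_dot - beta * step) * dt
    return q / np.linalg.norm(q)
```

The single gain beta is what makes the filter so stable in practice: the gyro dominates in the short term, while gravity slowly pulls the pitch/roll drift back.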

:sweat_smile: that's the harder question.
It was born as a personal project and I'm working on it in my spare time. I would like to publish a beta release by August.
The price part is even harder: it depends on many factors, starting with the possibility of distributing it as a DIY kit. I need to gather as many opinions as I can on this aspect, so please, anyone reading this: tell me honestly whether you, as a Blender user, would be willing to dedicate yourself to soldering these things, or would prefer another solution.
What I can say is that I want to make it cheaper than the other mocap suits available on the market.

And… flex sensors for the hands… yeah, they are a good idea and not expensive at all. Perhaps you lose some precision, but depending on your needs it can be enough.

When the system is available, just give it a try! It may sound scary, but you don't really need previous experience to do it.

So does that mean that, in theory, we could record the motion capture session to a phone (by using a special app on the phone)? Would working away from magnetic fields help the magnetometer readings? Or maybe your system is already capable of correcting/estimating the magnetic field error on the sensors.

I watched the video and, as you say, it looks very consistent, but I couldn't tell much about the floor contact and whether there is any foot sliding.

Why not give options for anyone interested?
-Purchase fully soldered, ready to roll.
-Purchase just the kit, with "how to" docs.

Options are good because you can widen the audience and serve anyone interested. But if giving options makes the project harder, then I can't say anything.

Personally I’d be happy to have a kit to solder, but I can see some people being intimidated by that, even though it isn’t really all that difficult.

Suggestion: offer kits. If there is enough interest and money, you could later offer assembled parts at a price that reflects the labor involved.

I also think the kits are the best idea to start with.
Right now offering more than one option would be a little complicated.

Absolutely. The protocol used for transferring the captured pose information is completely unobfuscated, so writing new clients for visualizing or recording is really simple (assuming one has access to a 3D graphics library with skinning support on the target platform, but nowadays most platforms have at least one). In fact, I started working on a mobile client based on Three.js, but at the moment I prefer to put all my effort into making the Blender client solid and usable.
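On the Blender side, applying an incoming rotation is only a few lines. A minimal sketch (the armature and bone names here are placeholders):

```python
import bpy
from mathutils import Quaternion

def apply_bone_rotation(armature_name, bone_name, quat_wxyz):
    """Set a pose bone's rotation and key it, so the capture
    accumulates into an armature action."""
    pbone = bpy.data.objects[armature_name].pose.bones[bone_name]
    pbone.rotation_mode = 'QUATERNION'
    pbone.rotation_quaternion = Quaternion(quat_wxyz)
    pbone.keyframe_insert('rotation_quaternion')

# Hypothetical usage with one received quaternion (w, x, y, z):
apply_bone_rotation("Armature", "forearm.L", (1.0, 0.0, 0.0, 0.0))
```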

It's true that these kinds of systems are easily disturbed by ferromagnetic objects or electromagnetic sources in the vicinity, so yes: performing the capture away from that kind of interference is better.
At the moment the system doesn't perform any runtime correction on the magnetic readings, but the IMU/MARG research field is expanding fast and new algorithms come out all the time. It shouldn't be hard to implement one of these magnetic-correction methods.
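To give a taste of what the simplest such correction looks like: a hard-iron calibration estimates the constant bias that nearby metal adds to every magnetometer reading and subtracts it. A rough sketch (not something the system currently does):

```python
import numpy as np

def hard_iron_offset(mag_samples):
    """Estimate a constant (hard-iron) magnetometer bias.
    mag_samples: (N, 3) raw readings gathered while slowly rotating
    the sensor through as many orientations as possible. The midpoint
    of the min/max envelope approximates the constant offset."""
    m = np.asarray(mag_samples)
    return (m.max(axis=0) + m.min(axis=0)) / 2.0

# corrected = raw_reading - hard_iron_offset(calibration_samples)
```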

A curiosity about inertial capture is that it doesn't directly deliver translation data, but orientation data instead. What you are really capturing are the internal relations of all the limbs to the body's root (normally the hips).
Again, there are lots of methods for estimating translation in space, but none of them is completely accurate. In the video there wasn't any translation; the hips were always anchored to the center of the scene.
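In other words, the pose is reconstructed by chaining the captured rotations along the skeleton's rest offsets, which is why the root stays put unless its translation is estimated separately. A rough sketch with made-up data structures:

```python
import numpy as np

def fk_positions(parents, offsets, world_rots):
    """Forward kinematics from orientations only.
    parents[i]: index of bone i's parent (-1 for the root/hips),
                listed so parents come before their children.
    offsets[i]: bone i's rest offset from its parent (3-vector).
    world_rots[i]: 3x3 world rotation of bone i (from the sensors)."""
    pos = np.zeros((len(parents), 3))
    for i, p in enumerate(parents):
        if p < 0:
            continue  # the hips stay anchored at the origin
        pos[i] = pos[p] + world_rots[p] @ offsets[i]
    return pos
```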

Really cool project. It might not be long before we can all have very inexpensive mocap gear.


Hi! Today we've launched a new video explaining all the features of Chordata, where you can quickly get an idea of how the system works and how you will be able to build your own.

We've also launched our new website, which features links to all of our social networks. On those channels we will be publishing more regular updates, so if you are interested in the evolution of the project, be sure to follow us.
We hope you enjoy them!

We are also working hard to have a first public release by this summer, so expect news soon!