Xbox 360 Kinect controllerless controller = mocap system?

OK, I saw this video on YouTube for the new Xbox 360 ‘Kinect’. Now this is a great thing for the Xbox 360, but my first thought (sad, I know) was how long it’s going to take for some enterprising young coder to hack it so that we have an in-home mo-cap system! :RocknRoll:

Basically it builds a field in front of the unit. You do your hand motions, and you don’t need a controller to do lightsaber battles*, steer cars, and the like.

Now, of course, if it is interpreting your movements and feeding them to a computer character, that is in essence what a mo-cap system does. So how long before we have a Blender plug-in? :stuck_out_tongue:

*The Star Wars game will be kind of awesome. It will be like actually having the Force at your command!

What kind of field? Isn’t it just a camera?

You’d have to place sensors all around you to track your movements in 3D; otherwise anything hidden behind other body parts won’t be tracked.

I foresee a lot of people dancing around their lounge covered in ping pong balls.

It’s not just a camera; it has a depth sensor as well. Together with its software, it can recognize and track human motion in 3D. It will be interesting to see what hackers do with this device; an inexpensive mocap system is one possibility.

Yes it’s an idea… but my guess is that the sensor doesn’t actually output anything that’s directly usable for mocap… I’d expect there to be a bunch of software that’s needed to make it practical. I sure hope I’m wrong and we’ll be able to mocap with it!

I was waiting for the Kinect to launch because I’m pretty sure it could be used as a mocap system, so it would be a huge tool for animators. Remember that we can already use Wii controllers, Xbox controllers, PS3 controllers, and even the EyeToy on a PC for many applications, so why should it be any different with the Kinect?

Because the Kinect is probably not just a piece of hardware that outputs coordinates. Instead, it probably gives a bunch of “readings” that you have to “interpret” with some really complex code that I think would be hard to get hold of… But again, I hope I’m wrong about that.

XNA? The API will need to be exposed for Xbox programmers, and it shouldn’t be too difficult to convert and transmit those calls as usable custom parameters?

I hope you’re right! I so do! I have access to a mocap suit, but it’s just too much work to use that… so this would be a great alternative!

I’ve read an article about that system some time ago, and if I remember right, it actually is some kind of mocap system. It has a basic skeleton stored, detects your joints based on their positions in space, and maps them to the virtual skeleton.
So where is the difference from a real mocap system?

A real mocap system is very, very precise. The Kinect is not as precise, but it offers fast and cheap mocap recording, so even if it’s not 100% accurate it’s better than having to set keyframes manually. After you have the motion recorded you always have to tweak it a bit with manual keyframes anyway; that’s true even with the most precise mocap systems.

So in order to use this for games on the 360, will you have to be running around your room? Because I can see that you would run out of space pretty quickly. In a game where you need to walk, will you just jog in place?

But it would be cool to use it for mocap. I’m intending to buy it when it comes out, so if someone writes a program that lets you use it for mocap, I will definitely use it. That would be so awesome; being a bad animator, that would make life much more fun for me.

Hey guys, should this thread be revived? Since the “3d scan with kinect” folk didn’t like the people more interested in mocap hijacking their thread…

I’m seeing lots of interesting stuff happening, even one commercial app offering mocap with kinect, but nothing yet that seems to work cleanly/simply.

There’s an API for the Kinect available, but I don’t know how easy it’ll be to get ahold of that with the hacked-for-use-with-the-PC Kinects. Lots of people are interested in doing this (I can name at least one who’s interested in doing this for Blender specifically) so I have pretty high hopes.

The simplification of animating characters would be kinda huge.

You should try to install the OpenNI drivers and OSCeleton; then, with some Python and pyliblo, you can quite easily get the coordinates of the joints…

I don’t have a proper model to test with, but I think I could help with the Python part… If someone has a skeleton to share that corresponds to the Kinect detection scheme, it could help.
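For anyone who wants to experiment without pyliblo: OSCeleton just streams plain OSC packets over UDP, and OSC is simple enough to decode with nothing but the standard library. Here’s a minimal sketch that parses one OSC message, assuming OSCeleton-style `/joint` messages with an `sifff` type tag (joint name, user ID, x, y, z); the exact address and tag layout may differ between OSCeleton versions, so check what your build actually sends.

```python
import struct

def _read_padded_string(data, offset):
    """Read a null-terminated OSC string, padded to a multiple of 4 bytes."""
    end = data.index(b'\x00', offset)
    s = data[offset:end].decode('ascii')
    length = end - offset + 1          # string plus its null terminator
    offset += (length + 3) // 4 * 4    # skip padding to the next 4-byte boundary
    return s, offset

def parse_osc_message(data):
    """Parse a single OSC message into (address, [arguments]).

    Handles the 's' (string), 'i' (int32), and 'f' (float32) type tags,
    which is all an OSCeleton-style /joint message should need.
    """
    address, offset = _read_padded_string(data, 0)
    typetags, offset = _read_padded_string(data, offset)
    args = []
    for tag in typetags.lstrip(','):
        if tag == 's':
            s, offset = _read_padded_string(data, offset)
            args.append(s)
        elif tag == 'i':
            args.append(struct.unpack_from('>i', data, offset)[0])
            offset += 4
        elif tag == 'f':
            args.append(struct.unpack_from('>f', data, offset)[0])
            offset += 4
    return address, args
```

Feed it the payload of each UDP datagram from OSCeleton’s port and you get `('/joint', ['head', 1, x, y, z])`-style tuples you can push straight at a rig.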

Well, if you can get the coordinates, then you should be able to generate a .bvh file, I think… that’s all coordinates of joints (or am I wrong?). Then when you import that into Blender, it can generate a rig for you.

Making your own rig that’s different in scale from the one the Kinect sees is trouble; you’d get sliding feet and such. So it’s a better idea to generate one from the Kinect data than to make your own, at least for a clean result.
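To make the scale problem concrete: the naive retarget is to scale every tracked position uniformly by the ratio of your rig’s hip height to the tracked hip height. This little sketch (the joint names and the hip-height measure are my own assumptions for illustration, not anything the Kinect prescribes) shows the idea; it only keeps feet planted when the rig’s proportions actually match the tracked skeleton’s, which is exactly why generating the rig from the Kinect data is cleaner.

```python
def retarget(joints, rig_hip_height, tracked_hip_height):
    """Uniformly scale tracked joint positions to fit a target rig.

    joints: dict mapping joint name -> (x, y, z) in the tracker's units.
    Scales everything by rig_hip_height / tracked_hip_height. A uniform
    scale preserves foot contact only if the rig's limb proportions match
    the tracked person's; any mismatch shows up as sliding feet.
    """
    scale = rig_hip_height / tracked_hip_height
    return {name: (x * scale, y * scale, z * scale)
            for name, (x, y, z) in joints.items()}
```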

I think that’s the way to go: keep it simple. Don’t make it work 100% with Blender yet; just get a .bvh file.

Well… the thing is that creating a .bvh file means recording the coordinates, then importing them into Blender…
The way it would work with OSCeleton now is importing the coordinates directly into Blender in realtime (BGE), then recording them. (And basically this is what motivates me: realtime interaction.)

But I’ll take a look at how a .bvh file is structured…
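For reference, a .bvh file is just text: a HIERARCHY section describing the joint tree, then a MOTION section with one line of channel values per frame. Here’s a deliberately minimal sketch that writes a single root joint with position channels only; a real Kinect export would need the full joint tree and rotation channels, which this leaves out.

```python
def write_bvh(path, frames, frame_time=1.0 / 30.0):
    """Write a minimal single-joint BVH file.

    frames: list of (x, y, z) root positions, one per captured frame.
    The hierarchy is a lone 'Hips' root with only position channels,
    just enough to see the file layout and import it somewhere.
    """
    with open(path, 'w') as f:
        f.write("HIERARCHY\n")
        f.write("ROOT Hips\n{\n")
        f.write("  OFFSET 0.0 0.0 0.0\n")
        f.write("  CHANNELS 3 Xposition Yposition Zposition\n")
        f.write("  End Site\n  {\n    OFFSET 0.0 1.0 0.0\n  }\n}\n")
        f.write("MOTION\n")
        f.write("Frames: %d\n" % len(frames))
        f.write("Frame Time: %.6f\n" % frame_time)
        for x, y, z in frames:
            f.write("%.4f %.4f %.4f\n" % (x, y, z))
```

So recording would just mean buffering the joint coordinates per frame and dumping them with something like this at the end of the capture.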

It shouldn’t be too hard (i.e., easy) to drive an armature in realtime… it’s almost exactly what I’ve been working on in my ARToolKit experimentation.

If I weren’t much too poor to afford a Kinect, I could probably have something working in a couple of hours, but for now it looks like I’m stuck with a (borrowed) webcam…

Has anyone tried this?

It mentions that it exports to Blender; does it work?

There is a simplified animation program called MikuMikuDance (MMD), and it has an interface for the Kinect. I think there was an MMD-to-Blender importer somewhere…