OK, I saw this video on YouTube for the new Xbox 360 ‘Kinect’. Now this is a great thing for the Xbox 360, but my first thought (sad, I know) was how long it’s going to take for some enterprising young coder to hack it so that we have an in-home mo-cap system! :RocknRoll:
Basically it senses a field in front of the unit. You do your hand motions and you don’t need a controller to do lightsaber battles*, steer cars and the like.
Now, of course, if it is interpreting your movements and feeding them to a computer character, that is in essence what a mo-cap system is doing. So how long before we have a Blender plug-in?
*The Star Wars game will be kind of awesome. It will be like actually having the Force at your command!
It’s not just a camera, it has a depth sensor as well. Together with its software it can recognize and track human motion in 3D. It will be interesting to see what hackers do with this device; an inexpensive mocap system is one possibility.
Yes it’s an idea… but my guess is that the sensor doesn’t actually output anything that’s directly usable for mocap… I’d expect there to be a bunch of software that’s needed to make it practical. I sure hope I’m wrong and we’ll be able to mocap with it!
I’ve been waiting for the Kinect to launch because I’m pretty sure it could be used as a mocap system, so it will be a huge tool for animators. Remember that we can already use Wii controllers, Xbox controllers, PS3 controllers, and even the EyeToy on a PC for many applications, so why should it be any different with Kinect?
Because the Kinect is probably not just a piece of hardware that outputs coordinates. Instead it probably gives a bunch of “readings” that you have to “interpret” with some really complex code that I think would be hard to get hold of… But again… I hope I’m wrong about that.
I’ve read an article about that system some time ago, and if I remember right, it is actually some kind of mocap system. It has a basic skeleton stored, detects your joints based on their positions in space, and maps them to the virtual skeleton.
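If it really does map detected joints onto a stored skeleton, the basic idea can be sketched in a few lines. This is purely my guess at the approach, with made-up joint names and sample positions, not anything from Kinect’s actual code: normalize for overall scale, then match joints by name.

```python
# Hypothetical sketch: map detected 3D joint positions onto a stored
# virtual skeleton by name, compensating for the person's height.

DETECTED = {  # joint -> (x, y, z) in sensor space (made-up sample data)
    "head": (0.0, 1.7, 2.0),
    "torso": (0.0, 1.2, 2.0),
    "left_hand": (-0.5, 1.1, 1.8),
}

STORED = {  # the virtual skeleton's rest heights for each joint (assumed)
    "head": 1.8, "torso": 1.25, "left_hand": 1.1,
}

def retarget(detected, stored):
    """Scale detected positions so the detected torso height matches the
    stored skeleton's torso height, then map joints over by name."""
    scale = stored["torso"] / detected["torso"][1]
    return {name: tuple(c * scale for c in pos)
            for name, pos in detected.items() if name in stored}

mapped = retarget(DETECTED, STORED)
```

A real system would solve for bone rotations rather than copying positions, but the name-based mapping and scale normalization are the core of retargeting one skeleton onto another.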
so where is the difference to a real mocap system?
A real mocap system is very, very precise. Kinect is not as precise, but it offers fast and cheap mocap recording, so even if it’s not 100% accurate, it’s better than having to set keyframes manually. And once you have the motion recorded you always have to tweak it a bit with manual keyframes anyway; that’s true even of the most precise mocap systems.
So in order to use this for games on the 360, will you have to be running around your room? I can see that you’d run out of space pretty quickly. In a game where you need to walk, will you just jog in place or what?
But it would be cool to use it for mocap. I’m intending to buy it when it comes out, so if someone writes a program that lets you use it as mocap, I will definitely use it. That would be so awesome; being a bad animator, that would make life much more fun for me.
There’s an API for the Kinect available, but I don’t know how easy it’ll be to get ahold of that with the hacked-for-use-with-the-PC Kinects. Lots of people are interested in doing this (I can name at least one who’s interested in doing this for Blender specifically) so I have pretty high hopes.
The simplification of animating characters would be kinda huge.
Well, if you can get the coordinates, then you should be able to generate a .bvh file, I think… that’s all coordinates of joints (or am I wrong?). Then when you import that into Blender, it can generate a rig for you.
Making your own rig that’s different in scale from the one the Kinect sees is trouble; you’d get sliding feet and such. So it’s a better idea to generate one from the Kinect data than to make your own… for a clean result, that is.
I think that’s the way to go… go simple. Don’t make it work 100% with Blender yet, just get a .bvh file.
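For what it’s worth, a .bvh file is just plain text: a HIERARCHY block describing the skeleton and its channels, then a MOTION block with one line of channel values per frame. So a minimal writer isn’t much code. A rough sketch with a single made-up root joint (a real exporter would of course emit the full skeleton the Kinect tracks):

```python
def write_bvh(frames, frame_time=1.0 / 30.0):
    """Build a minimal .bvh string: one 'Hips' root with an End Site,
    6 channels (position + rotation), one line of values per frame."""
    lines = [
        "HIERARCHY",
        "ROOT Hips",
        "{",
        "  OFFSET 0.0 0.0 0.0",
        "  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation",
        "  End Site",
        "  {",
        "    OFFSET 0.0 10.0 0.0",
        "  }",
        "}",
        "MOTION",
        "Frames: %d" % len(frames),
        "Frame Time: %f" % frame_time,
    ]
    for f in frames:  # each frame is a 6-tuple matching CHANNELS above
        lines.append(" ".join("%f" % v for v in f))
    return "\n".join(lines) + "\n"

# Two made-up frames: the hips move up along Y, no rotation.
bvh_text = write_bvh([(0, 0, 0, 0, 0, 0), (0, 1, 0, 0, 0, 0)])
```

Dump that to a file and Blender’s .bvh importer should build the armature for you, which is the point made above: generate the rig from the data instead of hand-building one.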
well … the thing is that creating a .bvh file means recording the coordinates and then importing them into Blender…
the way it works with OSCeleton now is importing the coordinates directly in realtime (into the BGE) in Blender, then recording them. (and basically this is what motivates me: realtime interaction)
but I’ll take a look at how a .bvh file is structured…
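In case it helps anyone following along: OSCeleton streams each tracked joint as an OSC message, as far as I can tell an address of `/joint` followed by the joint name, a user ID, and x/y/z floats (treat that exact layout, typetag `,sifff`, as my assumption from its docs). OSC itself is a simple binary format with NUL-terminated 4-byte-aligned strings and big-endian numbers, so decoding a message by hand looks roughly like this:

```python
import struct

def _read_padded_string(data, offset):
    """Read a NUL-terminated OSC string padded to a 4-byte boundary."""
    end = data.index(b"\x00", offset)
    s = data[offset:end].decode("ascii")
    offset = (end + 4) & ~3  # skip NUL terminator plus padding
    return s, offset

def decode_joint(packet):
    """Decode an OSCeleton-style '/joint' message: address, typetag
    ',sifff', then joint name, user id, and x, y, z (assumed layout)."""
    address, off = _read_padded_string(packet, 0)
    typetag, off = _read_padded_string(packet, off)
    name, off = _read_padded_string(packet, off)
    user, x, y, z = struct.unpack(">ifff", packet[off:off + 16])
    return address, name, user, (x, y, z)

# Build a sample packet by hand to exercise the decoder (made-up data).
def _pad(s):
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

sample = (_pad("/joint") + _pad(",sifff") + _pad("l_hand")
          + struct.pack(">ifff", 1, 0.1, 0.2, 0.3))
addr, joint, user, pos = decode_joint(sample)
```

In practice you’d receive these packets on a UDP socket and feed the positions into the BGE each frame, exactly the realtime path described above, with the .bvh export as an offline alternative.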