Alternative ways of animating using standard PC hardware

Like most people, I animate with a keyboard and mouse, rotating bones directly or using Auto-IK for limbs. But it’s a slow and painful process when making long animations, and I’m wondering if there are other ways too. I don’t have wearable sensors like the big animation studios, but I do have a pressure-sensitive tablet, a gamepad with two thumb sticks, and a standard (but high-resolution) webcam. And maybe even the keyboard and mouse can be used with a different animation technique!

So I’m wondering if there are any addons or hidden features that offer alternate ways of animating, using standard PC hardware (drawing tablet, gamepad, webcam, etc). In other words, anything apart from rotating bones or dragging them with Auto-IK.

One way I had in mind is recording animations dynamically. As the timeline plays, you drag a selected bone across the screen, with keyframes automatically inserted when appropriate. For example, if I wanted to animate a walk cycle: I’d select the foot bone, click “play” in the timeline, then move the mouse back & forth and left & right to simulate the foot traveling through the air. Or using my gamepad with its two thumb sticks, I could select both foot bones and animate both feet at once. Of course you can only animate in 2D with a standard sensor, but the functionality could be extended so that buttons push / pull the bone along the third axis.
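The “keyframes inserted when appropriate” part could be as simple as distance-thresholded sampling. A minimal sketch in Python (all names here are my own invention, not an existing Blender API; a real add-on would run this inside a modal operator and call `keyframe_insert()` on the pose bone for each kept frame):

```python
# Hypothetical sketch: turn cursor positions sampled during playback into
# keyframes, skipping samples that barely moved so near-static stretches
# don't get flooded with keys.

def sample_to_keyframes(samples, min_delta=0.01):
    """samples: list of (frame, x, y) tuples captured while the timeline plays.
    Keep a sample only when the position moved enough since the last kept key."""
    keys = []
    for frame, x, y in samples:
        if not keys:
            keys.append((frame, x, y))
            continue
        _, px, py = keys[-1]
        if abs(x - px) > min_delta or abs(y - py) > min_delta:
            keys.append((frame, x, y))
    return keys

# Example: the cursor sits almost still for a frame, then moves.
samples = [(1, 0.0, 0.0), (2, 0.001, 0.0), (3, 0.2, 0.1), (4, 0.4, 0.2)]
print(sample_to_keyframes(samples))
# keeps frames 1, 3 and 4; frame 2 barely moved
```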

Another way is using a pressure-sensitive tablet to paint the movement of bones: you can click and drag any bone with the pen. Left / right and up / down move the bone in the 2D view, while pressure pushes / pulls it toward or away from the camera. Once you lift the pen from the tablet, the new position is keyed X frames after the last keyframe (where X is how many frames apart you want to paint the animation).
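The tablet mapping above could be sketched like this (the function names and the pressure-to-depth scaling are assumptions, purely illustrative):

```python
# Hypothetical sketch of the tablet idea: x/y move the bone in the view
# plane, pen pressure maps to depth toward/away from the camera.

def stroke_to_position(x, y, pressure, depth_range=2.0):
    """Map a tablet sample to a 3D offset in camera space.
    pressure in [0, 1]; 0.5 is neutral, harder presses push the bone away."""
    depth = (pressure - 0.5) * depth_range
    return (x, y, depth)

def end_of_stroke_frame(last_frame, paint_span=10):
    """When the pen lifts, the new position is keyed `paint_span` frames
    after the previous keyframe, as described above."""
    return last_frame + paint_span

print(stroke_to_position(0.3, -0.1, 0.75))  # (0.3, -0.1, 0.5)
print(end_of_stroke_frame(20))              # 30
```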

Here’s yet another way: body tracking, following the movement of a person in a video. It could use the same system as camera tracking, except that each track follows a limb and is translated to an armature bone. Obviously, the video has to be bright and clear, and the actor must keep body parts from disappearing out of the camera’s view. To make it easier, people could wear bracelets / collars / belts in a distinctive bright color, which would act as tracking points.
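The bright-marker idea boils down to finding where a distinctive color sits in each frame. A toy sketch, using plain lists of pixels instead of real video frames (a real tracker would work on image buffers from the footage):

```python
# Rough sketch of the "bright bracelet" idea: find the centroid of pixels
# matching a distinctive marker colour in one frame.

def marker_centroid(frame, target, tol=30):
    """frame: rows of (r, g, b) pixels; target: the marker colour.
    Returns the (x, y) centroid of matching pixels, or None if none match."""
    xs, ys = [], []
    tr, tg, tb = target
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if abs(r - tr) <= tol and abs(g - tg) <= tol and abs(b - tb) <= tol:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A 3x3 frame with two bright-red marker pixels at (1,0) and (1,2):
BG, RED = (0, 0, 0), (250, 20, 20)
frame = [[BG, RED, BG],
         [BG, BG, BG],
         [BG, RED, BG]]
print(marker_centroid(frame, (255, 0, 0)))  # (1.0, 1.0)
```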

There are likely even more ideas along these lines. Not only would this make animating faster, it should also make the result more realistic, since being directly involved in the movement of the bone gives a more natural and fluid result than programming bone positions at each frame. So does anything even related to those concepts exist for Blender yet?

Look into motion capture with the Kinect. People are trying out a bunch of techniques:

Thanks, but that’s not an option for me. I don’t have a Kinect, and even if I could afford one I don’t know if I’d want it. One reason is that I’m a Linux user and wish to avoid proprietary drivers and technologies, and the Kinect doesn’t look like something Microsoft will “allow” Linux to support anytime soon. It also seems to be an Xbox device, and I don’t own one of those and won’t be buying one.

Can’t something similar be done with a normal webcam too? Or even better, a video the user can load and process in Blender. Yes, it’s harder without sensors… but like I said, the user could wear distinctive clothing that would allow tracking body parts more easily. I do camera tracking frequently in Blender, but I haven’t learned object tracking yet. I assume you can already track the body, and it shouldn’t be impossible to bind armature bones to tracking points either. Has anyone tried this so far?

There are some programs that can track things like colored balls, human faces, or LED patterns in 3D using just a single view, and feed the information into virtual joysticks and the like in close to realtime (there is usually a bit of delay if you’re not using top-of-the-line hardware, and depending on the lighting conditions, camera quality, etc., the values may be a bit noisy). There are also ways to feed the input from such devices into Blender, but I don’t think it does anything like that by itself; you’ll need some scripts. Unfortunately I don’t have anything memorized or bookmarked at the moment, so you’ll have to look for those programs and scripts yourself.
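For the noise mentioned above, a simple exponential smoothing pass over the tracked values often helps before they drive anything. This is a generic filter, not something Blender ships; a script feeding tracker data into bone transforms could run each axis through it:

```python
# Exponentially weighted moving average: a cheap way to damp jitter in
# noisy tracker values. Lower alpha = smoother but laggier.

def smooth(values, alpha=0.3):
    out = []
    s = None
    for v in values:
        s = v if s is None else alpha * v + (1 - alpha) * s
        out.append(s)
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]
print([round(v, 3) for v in smooth(noisy)])
```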

Or if you don’t need to do it interactively, you could simply film yourself moving things around and use motion tracking on the footage (you might need to look into some tutorials to see what does and doesn’t work well with Blender’s mocap tools).

@TiagoTiago: Yes, that’s what I was thinking of. It doesn’t need to be done in realtime, but rather from filming myself and tracking objects I’d be wearing for easy detection.

IIRC addons are scripts, but I don’t know if anyone has made such an addon yet. And like I said, I’m also wondering how feasible this is with the current camera tracking system, where I could add one or more tracking points per limb, then bind armature bones to those points. Has anyone tried this, or does anyone know of such an addon?

If it’s just motion capture after the footage has been shot, just about everything you need is already in Blender; the only thing left is for you to find/build some nicely trackable things to move around. IIRC, for full 6dof tracking, for each thing you wanna track Blender needs at least 8 points that don’t lie on the same plane and aren’t easy to confuse with the background (or anything else nearby). You should look into some camera and object tracking tutorials if you can’t figure out how to use it by yourself.
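The non-coplanar requirement can be sanity-checked numerically before shooting: pick three markers as a plane and see whether the rest sit off it. A sketch (my own helper, not part of Blender’s tracker):

```python
# Coplanarity check: build a plane normal from the first three points and
# test whether every other point lies on that plane.

def is_coplanar(points, eps=1e-9):
    """points: list of (x, y, z). True if all points lie on one plane."""
    if len(points) < 4:
        return True
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b):
        return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    p0 = points[0]
    n = cross(sub(points[1], p0), sub(points[2], p0))  # plane normal
    return all(abs(dot(n, sub(p, p0))) < eps for p in points[3:])

flat = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]   # bad marker layout
bent = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]   # usable layout
print(is_coplanar(flat))  # True
print(is_coplanar(bent))  # False
```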

Ouch… that’s harder than I thought, then. 8 tracking points per limb sounds like way too much. I’d need to attach some special large marker to my hands, feet, torso, and head in real life… which I can’t really imagine working out :stuck_out_tongue: I do camera tracking a lot, but I’ve never tried object tracking yet. If you need 8 markers for object tracking too, this method is probably impossible.

This is really interesting. I like the idea of using a camera to record motions and then mapping those motions to an armature. It would work best with two cameras (not mandatory, but it gives better results): one for a side view and one for a front view. This would make motion capture far easier to work with and cheap for everyone. Check out the link below and look at the red lines in the run cycle animation; imagine if Blender could track those red lines as markers and map them to corresponding bones. The reason I used an animal reference is to point out one very important thing: it can be very hard or next to impossible to get motion capture of some animals. This tool would help big time in creating motion capture for animals that are hard to work with (and the ones that want to eat you, LOL). And if it were possible to extract still images from your favorite live-action movie or cartoon, draw those lines on the images as markers, and have a script map each line to a bone, the possibilities would be endless. 3DS Max and Maya users would be forced to use Blender just so they could use this tool to animate quickly and easily.

Currently you can’t combine motion capture files from multiple sources, but with the idea I’m talking about, I believe you could combine BVH files with no problems.

This method definitely has its own advantages over how motion capture is currently done. Since still images can be extracted from videos and used to generate 3D animations, I think it would become the most popular way to make and share BVH files.

Where are all the coders?

You need 8 points to compensate for lens distortion and get full 6DoF tracking. For just 3D position tracking (without lens distortion compensation), you might be able to do it with 2 cameras shooting from orthogonal directions; it’s relatively easy to get 2D tracks for each point from a single view.
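The two-orthogonal-cameras setup reduces to merging two 2D tracks: the front view supplies x and y, the side view supplies z and (redundantly) y. A hedged sketch, assuming the two cameras are synced and calibrated to the same scale:

```python
# Combine per-point 2D tracks from two orthogonal views into one 3D
# position. Averaging the shared vertical axis damps a little noise.

def combine_views(front_xy, side_zy):
    """front_xy: (x, y) track from the front camera.
    side_zy:  (z, y) track from the side camera."""
    x, y1 = front_xy
    z, y2 = side_zy
    return (x, (y1 + y2) / 2.0, z)

print(combine_views((0.4, 1.0), (0.2, 1.5)))  # (0.4, 1.25, 0.2)
```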

You can actually get 3D-ish tracks per point from a single view, and possibly even rough rotation (possibly up to 360° on the axis aligned with the view direction, and less than 180° on the orthogonal ones), depending on the motion model you use on the tracker points and what you’re tracking. But as you move away from ideal conditions, the quality of the results degrades quite fast.

Prepare to buy either multiple Kinects, multiple PSEyes, or lots of webcams. Your aversion to using anything proprietary or closed is kind of silly and is only going to hurt you as an artist.

Just seen a video which instantly brought me back to this question:

It shows how the face of a character is posed using a video of a real person’s face. The author paints several dots on their face with a black marker to serve as tracking points. Apart from easy facial animation, this could also be used to get perfect lip-sync.

Can Blender do this too, using the current camera tracking system? If not I hope someone considers adding this.

By the looks of it, it’s pretty much just 2d tracking driving the mixing of shapekeys. Shouldn’t be too hard if you know what you’re doing.

The hardest part would probably be building your own headmounted camera rig for the performance capture (without that, your actor would have to keep their head very still; or there could be significant cleanup work involved in order to get clean-ish readings for all the dots).

edit: could be done with classic rigging instead of shapekeys as well; or even a mix of the two to get both the flexibility of bones and the manually crafted details from shapekeys
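The “2D tracking driving shapekeys” idea mostly amounts to remapping a marker’s displacement onto the 0–1 range a shapekey expects, which a driver or small script could do. A sketch with made-up calibration values:

```python
# Map a tracked face dot's position to a shapekey value. rest_y and max_y
# are calibration numbers you'd measure from the footage, not real APIs.

def track_to_shapekey(y, rest_y, max_y):
    """rest_y: marker position at rest; max_y: position at full expression."""
    v = (y - rest_y) / (max_y - rest_y)
    return max(0.0, min(1.0, v))  # clamp to the shapekey's 0..1 range

print(track_to_shapekey(1.5, 1.0, 2.0))  # 0.5: halfway to a full smile
print(track_to_shapekey(0.8, 1.0, 2.0))  # 0.0: below rest, clamped
```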

Keeping the head still is the preferred option for me. If not, wearing something on the head that holds the camera should be easy enough.

Any tutorials on how it’s done in Blender 2.6, and how to translate the tracks to face bones or shape keys?

It’s a head rig; the movie Avatar was made with that technology:

If the actor is just gonna play to the camera, without turning their head away much, all you would need is a fancy hat, like this guy:

I’m not sure if he tracked the whole face in 3d, or if he just used the head motion to “dewarp” the position of the face markers and tracked those in 2d though. Probably the latter.
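The “dewarp” step mentioned here would essentially subtract the head marker’s motion from each face dot, leaving only the expression. A rough sketch, assuming per-frame 2D positions for one rigid head marker (e.g. on the hat) and one face dot:

```python
# Cancel overall head motion by expressing each face dot relative to a
# rigid head marker, so only the facial expression remains.

def stabilize(face_track, head_track):
    """Both tracks: lists of (x, y) per frame."""
    return [(fx - hx, fy - hy)
            for (fx, fy), (hx, hy) in zip(face_track, head_track)]

head = [(0, 0), (2, 1), (4, 2)]   # head drifts right and up
face = [(1, 5), (3, 6), (5, 8)]   # dot follows the head, then moves up
print(stabilize(face, head))  # [(1, 5), (1, 5), (1, 6)]
```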

You can probably find many on Youtube

I was intrigued enough to look into this again today. Although most answers so far are informative, I haven’t seen any tutorial about how to do it precisely in Blender. Can anyone explain how motion capture is done in Blender 2.6x?

I did find a Blender-specific tutorial earlier about how to capture full facial animation. But the method described there doesn’t convince me, since it’s somewhat complex and the results aren’t too good either. Still, it’s useful to see:

Otherwise, I saw a non-Blender video about how the big studios achieve body tracking using markers and standard footage. The actor dresses entirely in black while wearing white dots over their body, and the software maps those dots onto the armature bones, resulting in a fully accurate animation. Again, I’d like to know in detail how this is done in Blender, if there are any tutorials.

Lastly, I saw a video about how spatial sensors can be used in Blender… the model in question is called the “YEI-3 Space Sensor”. Such sensors accurately detect their position in space and report it to the computer, so with a sensor per limb an actor can easily animate their character. But that video is only a description… does anyone have a full tutorial? Also, what are the cheapest sensors of this type in existence? The YEI ones are about $300 per sensor… I’m not insane enough to buy 15 of them even if I could miraculously afford it :stuck_out_tongue:

Learn how to do camera tracking, object tracking, and 2d tracking; and learn about drivers and stuff. And that is pretty much all you need to know (besides the rest of the stuff you need to know to work and animate in Blender).

edit: though depending on your approach, you’ll probably need to understand trigonometry as well.
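On the trigonometry point: turning two tracked joint markers into a bone angle is mostly `atan2`. A sketch (the marker names are just examples):

```python
import math

# Given 2D positions for, say, the knee and ankle markers, atan2 gives
# the lower leg's angle in the view plane.

def bone_angle(parent_xy, child_xy):
    """Angle (radians) of the bone pointing from parent joint to child."""
    dx = child_xy[0] - parent_xy[0]
    dy = child_xy[1] - parent_xy[1]
    return math.atan2(dy, dx)

knee, ankle = (0.0, 1.0), (0.0, 0.0)          # leg hanging straight down
print(math.degrees(bone_angle(knee, ankle)))  # -90.0
```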

I know how to do camera tracking. Object tracking I didn’t learn yet but will get to. I’ll also learn about drivers later.

It does, however, seem to involve more than that. Apparently there’s already an addon (included in Blender but disabled by default) called “Animation: Motion Capture Tools”. I looked up a tutorial for it and found a video. But just when I was hoping it would show me how to do my own motion capture animation, it said that such captures need to be imported from other files (BVH format). Great :mad:

I assume this question is unrelated to Blender, but I’m asking just in case: if Blender doesn’t have its own BVH generator, where can I get a free and Linux-native program that can generate motion captures in a format importable into Blender with this addon? Or does this addon work with Blender’s internal features? I’m confused there.

The addon you’re talking about lets you export your animation as a BVH file if you want: just go to File → Export. Make sure the rig is selected.