Lifelike details in animations

This is an issue that has troubled me ever since I started doing animations. I'd say I'm fairly good at animating characters by now, and at properly adding details and bone rotations to keyframes. But no matter how good the keyframes are, there's always something missing that doesn't seem possible to solve by simply inserting the right bone rotations: the life factor.

Let's say I'm animating a character extending its hand, holding it up for a second, then lowering it. Normally that's easy: at frame 0 the hand is resting, at frames 30 and 60 it's extended, and at frame 90 it's resting again… with potential small details in between. Yet if I extend my hand and hold it up in real life, I notice a lot of subtle movements: my hand doesn't follow a perfect trajectory, it shakes constantly to some extent, and my fingers keep changing position no matter how still I try to hold them.

Most of those details are indeed possible to add as in-between keyframes, and I already do that as best I can. But after trying hard to get them right, I came to the conclusion that simply rotating bones (or using AutoIK) cannot simulate the correct movement of a body in such detail. The way the fingers and hand shake is very small and precise in reality, and attempting to keyframe it yields movement that looks partly jumbled and absurd.

This doesn't apply only to the hand and fingers. Consider yourself sitting on a chair for several seconds: there's no way parts of your body won't move, slip, or shake. An extremely good animator might be able to do this convincingly via keyframes… but from experience I'm pretty sure most OK animators would have a hard time. Not to mention the excruciating amount of work it would take to simulate the exact shaking of every noticeable muscle on someone's body, even for a few seconds of animation.

Are there any tips on how to add lifelike details to animations that seem impossible to achieve via keyframes alone? Perhaps an addon or trick to automatically generate and insert movement imperfections in animations?

I can offer a method myself, though from what I've tested so far it works rather poorly: using a Noise modifier on such bones. I suspect that adding a noise curve to the X rotation of a finger can somewhat approximate the lifelike shaking. But to properly achieve such detail, more than that is needed… and the noise curve might have to be added to more prominent bones such as the head. What are your suggestions?
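For what it's worth, the kind of noise I have in mind could be prototyped outside Blender first. This is a plain-Python sketch (no bpy; the amplitude, frequencies, and base motion are all made-up numbers) of layering small band-limited jitter over a rotation channel before baking it to per-frame keys:

```python
import math

def tremor(frame, amplitude=0.02, fps=24.0):
    """Small band-limited jitter in radians: a sum of sines with
    incommensurate frequencies, so it never visibly repeats."""
    t = frame / fps
    return amplitude * (
        0.5 * math.sin(2.1 * t)
        + 0.3 * math.sin(5.3 * t + 1.7)
        + 0.2 * math.sin(11.9 * t + 4.2)
    )

def bake_with_tremor(base_curve, start, end, amplitude=0.02):
    """base_curve: frame -> rotation (radians). Returns a per-frame
    list of (frame, value) pairs with the jitter layered on top."""
    return [(f, base_curve(f) + tremor(f, amplitude))
            for f in range(start, end + 1)]

# Hypothetical base motion: hand eases from 0 to 1.2 rad over 30 frames.
def base(f):
    u = min(max(f / 30.0, 0.0), 1.0)
    return 1.2 * (3 * u * u - 2 * u ** 3)  # smoothstep ease in/out

baked = bake_with_tremor(base, 0, 60)
```

The jitter never deviates from the base pose by more than the amplitude, which is the sort of constraint a real Noise modifier's strength setting would give you.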

It really wouldn't make sense to put an animator through all that work when motion capture could do it. Motion capture was designed as a supplement, a tool for animators to use when faced with such a difficult task. Since there are time limits and budgets involved in most projects, you probably wouldn't waste time on tiny details unless it's a close-up shot. However, as an artist, I can see you want to push the limits even higher and see what's humanly possible; nothing wrong with that either.

Now I’m curious to see your work. Could you post something?

Motion capture is a great thing, and I've wanted to try it for some time. Sadly, it means special hardware that I'm not sure where to get or how much it would cost. Someday I hope I can obtain a device that will let me animate 3D characters directly with my body. I do have a good webcam, but I doubt any animation system can track my motion using just that.

Most of the work I do involves camera tracking. I haven't finished and posted much of it yet, and what I have is from a time when I was worse at this than I am now (not that I'm much better currently). But this is my first camera track containing an animated character:

The animation is fairly lifelike, for such a character. With an immobile face and no eyes, you must rely on gesture to convey the impression of life.

One thing that is working well is camera jitter. It covers for not animating all the little tics and shakes in the model: since the whole scene is jittering, the viewer doesn't notice their absence. So I wouldn't worry that those motions are not there. This might not hold for extreme close-ups or if your camera doesn't move, but it is much easier to introduce a bit of camera jitter into an animation than to introduce tons of minor involuntary motions into your puppets.

Another thing that is working well is the large involuntary or unconscious motions. The robot breathes, and works some tension out of his shoulder. Well done; a dead thing would not need to do that. One large involuntary motion that seems to be missing, though, is shifting weight from foot to foot. The robot seems to be carrying his weight equally on both feet, something rarely seen in life.

Finally, there are voluntary motions that reflect the inner state or thoughts of the puppet, like when he turns his head toward the speaker, checks his fingernails, and puts his hand on his hip. These motions must be planned out to mirror what the puppet is thinking. Is he bored waiting for things to get started? That is the impression I got, until he dropped the thing he was holding. In any event, these actions must be purposeful, not random. I don't think there are any tricks to this, just acting.

The timing on the falling object seems off, though, like it fell in slow motion. Also, why did he drop it? He didn’t seem distracted just before it fell, nor does he seem clumsy.

All in all, your animation is very good. I wouldn't worry too much about the minor involuntary actions; rather, get inside the head of your characters and figure out voluntary gestures that let your viewers see what is going on in there.

I’m looking forward to seeing more of your work.

Camera jitter can be an option, but there are two issues with it. The first is that one might not want the view shaking all the time in an animation. Of course, if the camera is supposed to be a hand-held device, the shake must be added as well… but it makes little sense for a floating (imaginary) camera or one meant to pivot on a rail.

The second issue is that this doesn't work for camera tracking, because all camera movement is determined by the reconstructed footage. I could work around that with compositor nodes, zooming into the render output and then randomizing the crop rectangle's location within a range. But tracked footage already has its own shake, so that would be overkill.
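If someone does want synthetic shake on a non-tracked shot anyway, the offset part of that node trick can be sketched in plain Python (the numbers are hypothetical; in Blender the actual shift would be driven through a compositor Translate node, which is not shown here):

```python
import random

def jitter_offset(frame, max_px=4, seed=42):
    """Deterministic per-frame pixel offset: seeding from the frame
    number means re-rendering any single frame reproduces the same
    shake, instead of changing on every render."""
    rng = random.Random(seed * 100003 + frame)
    return rng.randint(-max_px, max_px), rng.randint(-max_px, max_px)

# With the render zoomed in slightly, each frame's crop rectangle is
# nudged by (dx, dy) pixels before writing the final output.
offsets = [jitter_offset(f) for f in range(5)]
```

Deriving the offsets from the frame number rather than a running random state is the design choice that keeps renders repeatable and lets frames be rendered out of order.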

That animation with the soldier is just the only example I had at the moment. I'm also making non-armored, non-robot characters for which I'm aiming at a lifelike feel. Overall I'm unhappy with the animation in the video I linked above, since it's very easy to see how fake it is. But today I'm much better at fixing that.

Also, any opinions on my idea of using a Noise modifier on some bones (in the Graph Editor)? I haven't properly tried it yet, but I have the feeling it should work. Unfortunately I think it will only function on bone rotations… it would be awesome if I could combine it with AutoIK and shake the hand bone while having the entire arm adapt.

Well, I tried adding a Noise modifier to the bone curves. Can't say it looked very good, so I dropped my attempt to animate those details. The idea might still work in some circumstances. The most likely answer is that there's no way to do those details yet… except perhaps manually, for insanely skilled animators.

The subtleties you talk about are not noise; they are very specific actions and reactions of a skeletal/muscular system responding to real-world physics. Keep this in mind: keyframe animation is not the way to go if you want to imitate such minute details completely. Keyframe animation lends itself to a much more distilled version of the “real” motion, both simplified and under complete artistic control. To create convincing humanoid animation you have to study human motion until it becomes a second language, and even then, always use reference for actions you need to be very specific about. Most pro animators shoot video of themselves acting out various moves and/or facial expressions, “learning by doing” and keeping a visual record to look back on if needed. They often keep mirrors on hand at the workstation for facial work.

The point is that if you need a “fast & easy” way to do this kind of highly naturalistic animation, motion capture or performance capture is really the only viable method. But you can also learn to use more stylized and distilled motion that is very convincing using only keyframes, and in many cases a great deal more expressive as well. It is a lot of work, though: work and study both.

Some important factors when animating with keyframes:

Avoid robotic motion by not having every aspect of a move land on exactly the same frame. EX: Arm swing in a walk cycle – the arms tend to lag the other motions during the walk, and the joints don’t all react in exact sync. Slipping the sync in music is called syncopation, and the idea applies just as well in animation.

Use the F-curves to introduce both syncopation (by shifting groups of keyframes by a frame or two, for example) and to build in bits of overshoot and recovery. The latter can be done by adjusting the bezier handles of the F-curves, resulting in a smoother motion flow than scads of keyframes. EX: When a running character comes to a stop, the mass of the body cannot instantly halt, so it sways a bit, using the legs as spring-like shock absorbers. In cartoons this is often highly exaggerated, but in naturalistic animation it is just as important. Adjusting F-curves is an efficient way to incorporate such “wobbles” into the motion.
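As a toy illustration of those two ideas (plain Python; the channels and numbers are invented, and in Blender you would do this directly on the F-curves), syncopation is just a small per-channel time shift, and overshoot adds a key past the target value before settling:

```python
def syncopate(keyframes, offset_frames):
    """Shift one channel's (frame, value) keys in time so it no
    longer lands in lockstep with the rest of the body."""
    return [(frame + offset_frames, value) for frame, value in keyframes]

def add_overshoot(keyframes, amount=0.1):
    """Append an extra key past the final value, then a settle key:
    a crude stand-in for pulling bezier handles in the Graph Editor."""
    f_prev, v_prev = keyframes[-2]
    f_last, v_last = keyframes[-1]
    direction = v_last - v_prev
    return keyframes + [
        (f_last + 2, v_last + direction * amount),  # overshoot past target
        (f_last + 5, v_last),                       # recover and settle
    ]

torso = [(0, 0.0), (30, 1.0)]
arms = syncopate([(0, 0.0), (30, 1.0)], 2)   # arms lag the torso by 2 frames
stop = add_overshoot([(0, 0.0), (20, 0.8)])  # body sways past the stop pose
```

The point of both helpers is only that the adjustments are tiny and systematic, not random: a two-frame lag and a ten-percent sway, not noise.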

In my opinion you are getting way ahead of yourself. You're talking about really fine subtleties but lack some fundamental, broader knowledge that is far more important. This is not something that can be explained in a single post, but I can try to give you a tip.

I have a few friends going through the Animation Mentor program which, as you may know, excels at teaching the animation qualities found at Pixar, DreamWorks, and other studios. Here's the thing: the first thing that surprised them was the lack of stigma against using video reference footage. In fact, all the mentors do it and advise it A LOT. So do that. Film yourself going through the actions and learn from it. Note every single detail: every weight shift, every small twitch of the hands, fingers, and eyes, every little thing. These kinds of details are not something you can achieve with automation, and certainly not with the random twitches of a noise function. Forget about that.

Believability of a character does not necessarily come from realism. So try to learn from the video reference you shoot and apply those same concepts. My friends say it sometimes even comes down to copying the movement almost frame by frame. So film yourself, try to be natural, and use that information directly in your character animation.

One thing people do a lot that often gets overlooked when animating a standing character is shifting weight from one foot to the other. Standing perfectly still gets uncomfortable.

Steve S