This has been sent to bf-committers mailing list and posted on blender.org, but I know some people hang out here and rarely there, so I’ll post it here, too.
With work about to begin on reconstructing Blender’s character animation, I have been working on an end-user character animation proposal. I’ve ploughed through a lot of the existing character animation code, and have a decent idea of what goes on under the hood. I believe the proposal is quite realistic, as it mostly makes use of what is already available in Blender at the structural level. This document can be found in .html format at: http://www.harkyman.com/animprop/caproposal.html
Comprehensive Blender Character Animation Proposal
I have been digging through character animation systems for different applications for a couple of weeks in order to help come up with some proposals and suggestions for the rework of Blender’s character animation system. One thing that impressed me was the foresight of the original coders. Blender’s philosophy is one of datablocks, with the GUI being a way of linking and visualizing those blocks. From my evaluations of other animation packages, it seems to me that the datablocks are almost all there. We just need some help with linking and visualization.
I’ve done a few drafts of this already going point by point on the problems of the current system and how others solve those problems, but it keeps coming out as a hodge-podge. What I’ll do instead is just describe what I think would be a superior character animation workflow to implement in Blender.
1. Rigging
Ideally, Blender should include a couple of pre-made biped rigs (and maybe some quadrupeds, if anyone has them) of varying complexity. So to begin animating, a user goes to Toolbox->Add->Armatures->Rigs->BiPed No Muscles (Ill. 1). Blender adds the armature at the cursor location. The user then enters Edit Mode to adjust the armature to fit their mesh (or better yet, adjusts it parametrically).
2. Animation Workflow
The user can begin to place keyframes for their poses. No Pose Mode. The user also calls up the animation timeline, which is a simple timeline like the audio track trick people use, except that it shows markers on the frames that hold keys for the current object. A keyboard command in the 3D window lets users jump forward and backward through keyed frames for the easy adjustment of existing keys.
If no Action is targeted in the Actions window, a new action is created to hold these keyframes, and this action is added as a new strip to the NLA chain for the armature.
In the current workflow, NLA is an afterthought. From looking at other animation systems, I think that it should be the bedrock. In the current armature evaluation code (do_all_actions(), specifically), there are a lot of if-then traps for the different states of NLA vs. Action-only animation. There should be no Action-only animation. Animation data is pulled strictly from the NLA. If the user doesn’t want to know about NLA, they don’t have to: the Action that is created when they begin animating is automatically appended to the NLA. They never have to see it, and Blender never has to worry about evaluating non-NLA Actions. There are some other advantages, too, but I’ll explain those as they become relevant later.
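To make the idea concrete, here’s a minimal sketch (plain Python, not actual Blender code — `Strip`, `evaluate_nla`, and the rest are hypothetical names) of what a strict NLA-only evaluator could look like for a single animation channel. The point is that there is one code path: every strip, including the auto-created default Action’s strip, is just folded into the stack.

```python
# Minimal sketch of NLA-only evaluation for one channel. Hypothetical names;
# not Blender code. Modes and Mute match the strip properties proposed below.

REPLACE, ADD = "replace", "add"

class Strip:
    def __init__(self, value, mode=REPLACE, influence=1.0):
        self.value = value          # the strip's pose value for this channel
        self.mode = mode            # Replace or Add, as in the proposal
        self.influence = influence  # the strip's blending IPO at this frame
        self.muted = False          # the proposed Mute toggle

def evaluate_nla(strips, rest_value=0.0):
    """Fold the strip stack bottom-to-top; no special Action-only path."""
    result = rest_value
    for s in strips:
        if s.muted:
            continue  # muted strips are skipped entirely
        if s.mode == REPLACE:
            result = result * (1.0 - s.influence) + s.value * s.influence
        else:  # ADD
            result += s.value * s.influence
    return result
```

Because the default Action is just the bottom strip, “basic” keyframing and “advanced” layering run through exactly the same loop.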
Back to the fact that there is no Pose Mode: how do you move the whole armature object as a single piece? We would require a special bone. Call it the BaseBone or something. Moving that bone translates (or rotates) the armature as a whole. One of the other side effects of not having a specific Pose Mode is that any object can be keyframed into an Action (or, to put it another way, any set of IPOs can be saved as an Action). The only difference between bone IPOs/poses and IPOs for mundane objects is that keys for bones go into Actions by default, whereas keys for other objects do not.
So to the basic user, nothing has really changed so far, except the elimination of two steps: creating your own rig from scratch, and entering/exiting Pose Mode. Much simpler from the user’s standpoint.
But for the power animator, this is where things get good:
3. Using NLA for animation layers
The Character Animation Toolkit (awesome demos - check them out - easy reg. required and worth it - thanks Tom) uses animation layers, which work a lot like Blender’s NLA could, with a few modifications.
So, I have my first Action created, just by keyframing the armature, and it’s been entered as the first strip in the NLA window. I, the advanced animator, pull up the NLA and add a new strip. Adding this new strip (note: I’m not appending an already-made Action) creates a new Action, which my subsequent keyframes will drop into. As I add my keyframes to this new strip, I see how they blend and interact with those of the first strip, because Blender evaluates character animation based only on the full NLA, not just on the currently selected Action. Of course, if you want, you can still pull in Actions that were created elsewhere.
Here’s where we start to see some power from NLA. Currently, you can adjust only BlendIn and BlendOut for each strip, which are evaluated within do_all_actions(). But now, with the new CA system, each strip has its own blending IPO. Bringing up the properties palette for an NLA strip not only gives you the original parameters, but some new ones as well, explained here:
Name: You can name or change the name of the referenced action from right here.
Mode: Replace or Add, as per the current system.
Mute: Prevents NLA system from evaluating this strip. In the illustration, this control appears directly on the strip’s name panel, as a slashed circle. In the example, the LegMove strip is Muted, and so has no effect on the final animation solution.
Solo: Shows only this strip’s animation, ignoring the others. Shown as the star in the illustration. In the picture below, of the NLA-baking strip, you can see that the last strip has been set to Solo, so only it will be evaluated.
Color: Lets you choose a color for the strip. Your armature appears in the color (or blended colors) of the current strips. (An all-blue strip with an IPO value of 1 would show a blue armature, whereas the same strip with an IPO value of .5 over a red strip would look purple.) It allows you to see at a glance which strips are affecting your animation.
MatchMove: A toggle button and a drop-down menu, listing all available bones. When it is activated, Blender transforms all keys in this strip so that the initial position of the indicated bone matches the position in time of the indicated bone in the previous strip. This allows you to, for example, keyframe a moving backflip that lands many units away from the BaseBone, then follow it up with a keyframed Action of the character sitting, MatchMoved to the character’s right foot, so that the keyframes of the sitting Action are transformed to begin at the ending position of the backflip. This is extremely useful for chaining together and building a library of Actions that do not all have to start and end in the same location relative to the BaseBone.
Additionally, it would be cool to allow each bone to have its own strip IPO, meaning that you could use just the influence of the Head’s IK solver from a certain strip, ignoring the rest, if you so chose. In that case, only the Head’s IK solver would appear in the color of that strip. In fact, that’s what is going on in the illustration above: IPOs for the lower body bones are set to 0, so the strip does not affect the lower body. You can see this at a glance by comparing the color of the armature bones to the colors of the strips.
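For the color feature, the blending arithmetic could be as simple as this sketch (plain Python, hypothetical names), which reproduces the blue-over-red-equals-purple example given above:

```python
# Sketch of per-strip armature tinting: each strip's color covers what is
# below it in proportion to its influence IPO, Replace-style. Hypothetical;
# not Blender code.

def blend_strip_colors(strips):
    """strips: list of ((r, g, b), influence), bottom strip first."""
    r = g = b = 0.0
    for (sr, sg, sb), influence in strips:
        r = r * (1 - influence) + sr * influence
        g = g * (1 - influence) + sg * influence
        b = b * (1 - influence) + sb * influence
    return (r, g, b)
```

With per-bone strip IPOs, the same blend would simply be run per bone instead of once for the whole armature.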
This way of using the NLA system would be extremely powerful, and the addition of optional colorization would make it much more accessible to users.
The final tool for inclusion in NLA is a Bake tool. If the user is happy with character animation being created in the NLA, he (or she) can Bake it into a new single strip Action, which is automatically set to IPO 1 across the board, and to Replace mode. Constraints could optionally be baked into straight IPO data, or be left live, at the user’s discretion. The user can also decide if he wants to retain the underlying strips (which won’t be evaluated anyway and therefore won’t cost any time, because of the presence of a top-level strip in replace mode with an IPO value of 1) or remove them from NLA. Once animation is finalized, this can be a big timesaver, eliminating a ton of on-the-fly calculations and replacing them with a single set of IPO transforms.
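Here’s a sketch of what baking might amount to, under the assumption that it just means sampling the fully evaluated NLA at every frame and storing the results as flat keyframes (the names here are hypothetical, and `evaluate_nla_at` stands in for whatever the real evaluator would be):

```python
# Sketch of the proposed Bake tool: sample the whole NLA solution per frame
# and capture it as plain keyframe data for a single Replace-mode,
# influence-1.0 strip. Hypothetical names; not Blender code.

def bake_nla(evaluate_nla_at, frame_start, frame_end):
    """Return {frame: value} keyframes capturing the full NLA result."""
    baked = {}
    for frame in range(frame_start, frame_end + 1):
        baked[frame] = evaluate_nla_at(frame)
    return baked
```

Since the baked strip sits on top in Replace mode at influence 1, everything beneath it can be skipped at evaluation time, which is where the speedup comes from.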
4. Keyframing Tools
In the first section, I said that the user keyframes the character animation. Here are some additions and enhancements to the current keyframing tools.
First, Influence IPOs for all constraints should be set to Auto Key as a default.
Second, reducing the Influence of a bone’s IK solver constraint to 0 should release the bones from the IK solution and allow FK keyframing to take over. How exactly does this take place? I’m not sure. Alias MotionBuilder lets you set up two full solutions, one IK and one FK, then see them superimposed and blend between the two. Here’s a thought…
When a user wants to switch between IK and FK, they don’t really want to destroy the current IK solution. What they really want to do is rotate the IK chain from the selected bone’s base to the solver, on a local axis defined by the line between the selected bone’s base and that same solver. So here’s how you do it: you don’t need to generate true FK keyframes. You also don’t use the IK solver’s influence to do it. Each bone in an IK chain has a button next to its name in the Edit Buttons, called FK Move. If that button is clicked, rotations on that bone also move the IK target, as though it were the (non-IK) child of the bone. You can download a simple example .blend file showing three armatures here: http://www.harkyman.com/IKFKDemo.blend The first is freeform IK, the second is IK with the IK target as the child of the shoulder bone, the third is IK with the IK target as the child of the forearm bone. So, when doing an “FK Move” rotation on an IK chain, you’re generating rotation keys for the actual bone you have chosen, plus rotation and translation keys for the IK target bone.
So that’s how you simulate FK motion while maintaining your IK solution, which is what most people want to do anyway. The FK Move button is nothing more than a keyframing tool, and not a true mode.
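For the geometrically inclined, here’s a rough 2D sketch of the FK Move idea (plain Python; the names are hypothetical). Rotating the chain rigidly from the selected bone downward is equivalent to rotating the IK target about that bone’s base and then letting the solver re-solve toward the moved target:

```python
import math

# Sketch of "FK Move": rotating a bone in an IK chain also rotates the IK
# target about that bone's base, as though the target were the bone's
# (non-IK) child. 2D for brevity; hypothetical names, not Blender code.

def fk_move(bone_base, ik_target, angle_rad):
    """Rotate ik_target about bone_base; return the target's new position.

    The IK solver then re-solves the chain toward the moved target, so the
    chain appears to rotate rigidly from the selected bone downward."""
    dx = ik_target[0] - bone_base[0]
    dy = ik_target[1] - bone_base[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (bone_base[0] + dx * c - dy * s,
            bone_base[1] + dx * s + dy * c)
```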
Third, the introduction of Driven Keys. Since that name’s already taken, let’s call them Action Sliders. You create a small action, say, the keyframed opening of a hand. First frame is closed (state 1), the second is opened (state 2). Bring up the NLA screen, select the Action strip you just made (remember, all Actions initially get appended to the NLA), perform the appropriate Make Slider key command, and the Action disappears from the NLA.
Where’d it go? Take a look at the Action Sliders window. There is now a Slider there, going from 0.0 to 1.0, set to auto key, that controls the pose of the keyed part of the armature. Keys generated with this tool are applied as IPO keyframes within the current Action, not as live slider data, so there is no confusion in the NLA. If you want to change positioning that is already set, you can use the previously mentioned key commands to bebop to the frame with a key on it and reset it to the value you prefer.
If you think about what I proposed earlier, you will notice that any object that can be keyframed can be added to the NLA, and not just object motions. Materials. Light settings. Anything with an IPO. That NLA strip can then be made into an Action Slider which can be used to dynamically set keys throughout an animation. You are also not limited to a two state toggle - you could have a full battery of linear character animation put into your Action Slider, which would proceed on your character over the 0.0 to 1.0 range. Obviously, RVKs could be used with this system as well.
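A sketch of how an Action Slider key might be written, under the assumption that the slider’s 0.0 to 1.0 value is simply mapped onto the source Action’s frame range, sampled there, and written as an ordinary keyframe (all names hypothetical):

```python
# Sketch of Action Slider keying: the slider value picks a point inside the
# source action's frame range, and the sampled pose is written into the
# current Action as a plain IPO keyframe - not live slider data - so the
# NLA stays unambiguous. Hypothetical names; not Blender code.

def set_slider_key(current_action, frame, slider_value,
                   sample_source_action, src_start, src_end):
    """Write a keyframe for `frame` into current_action (a {frame: pose} map)."""
    src_frame = src_start + slider_value * (src_end - src_start)
    current_action[frame] = sample_source_action(src_frame)
```

This is also why a slider isn’t limited to a two-state toggle: a longer source Action just stretches over the same 0.0 to 1.0 range.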
Fourth, the RVK creation interface. There isn’t one, really. When you make them they show up as lines in an IPO window, then show up again to be named in the NLA screen. Honestly, anything would be better than this.
5. Visualization Tools
First, onion skinning. When the user turns on the Onion Skinning button in the object’s animation buttons, you get two new sliders, Proceed and Trail. They set the number of frames forward and backward in the timeline to show ghosts of the current object. This allows you to see your motion’s flow at a glance, without actually going forward or backward in time. An immense timesaver.
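As a sketch, computing which ghost frames to draw is trivial (plain Python; the Proceed/Trail semantics are assumed as described above, and the names are hypothetical):

```python
# Sketch of onion-skin ghost selection: Trail frames behind the current
# frame and Proceed frames ahead of it, clamped to the start of the
# animation. Hypothetical names; not Blender code.

def onion_skin_frames(current_frame, proceed, trail, frame_start=1):
    """Return the frames at which to draw ghosts of the current object."""
    ghosts = [f for f in range(current_frame - trail, current_frame)
              if f >= frame_start]
    ghosts += list(range(current_frame + 1, current_frame + proceed + 1))
    return ghosts
```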
Second, animation paths. Turning on Show Path for either an object or a specific bone draws the entire animation path within the 3D window. Other packages have this because it is extremely useful for character (and normal) animation. There is no reason not to have it.
6. Wish List
Up to this point, Blender’s underlying animation structure is mostly in place to do all of this. The building blocks are there, and I’ve just proposed a new way of looking at, linking, and using some of them. Now, though, I have to address a couple of the drool-worthy things I saw in other packages.
First, footprints. CAT, which I mentioned before, uses these. Go to their website and watch the video. It’s about 5 minutes long, and amazing. I believe that CAT uses procedural motion for its walkcycles, as opposed to keyframed motion, allowing it to alter its walk on the fly. I’m not sure if this is even implementable in Blender’s current structure, as Blender doesn’t really know the difference between a foot, a hand, and a tailbone. For those of you not inclined to watch the video, here’s what happens: with a walkcycle defined, you link your skeleton to an animated empty (keyframed/path-following/etc.). CAT then moves the skeleton to the position of the empty, and calculates and displays the locations, everywhere along its animated path, where the character’s feet will fall. You can grab the footprint locations and move them. If you alter the path or keyframed animation of the empty object, you see the footprints rearrange in real time. In-freaking-credible, and a serious tool for animators.
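As a rough illustration of the footprint idea (this is not CAT’s actual algorithm, which I don’t know — just a guess at the simplest version), footfall locations could be sampled every stride-length along the empty’s path, alternating feet:

```python
# Sketch of footprint placement along a path: sample a footfall every
# `stride` units of arc length, alternating left/right. The path is given
# as a function from distance to position. Hypothetical; not CAT's method.

def footprint_positions(path_point_at, path_length, stride):
    """Return [((x, y), 'L' or 'R'), ...] footfalls along the path."""
    positions = []
    d = 0.0
    left = True
    while d <= path_length:
        positions.append((path_point_at(d), "L" if left else "R"))
        left = not left
        d += stride
    return positions
```

Moving a footprint, or editing the empty’s path, would then just mean recomputing or overriding entries in this list before the walkcycle is fitted to them.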
Second, rotation constraints. From what I’ve read this is not easy to do, especially with IK solvers. The best implementations I’ve seen allow the user to apply graphical widgets to joints, specifying what kind of constraint it is: conical, ball, etc. The user then sets the limits graphically.
Third, rag doll physics. This would only come after the physics engine somehow becomes directly applicable to the static side of Blender (please!). If you’re working on recoding bones and armatures, keep in mind that we greedy animators will want the physics to affect our skeletons as well someday. Create and bake a dynamically generated action from the physics and add it to your NLA timeline!
Conclusion
Those are my thoughts on the future of Blender’s character animation tools. As I said before, many of these suggestions are just new ways of visualizing and using the structures that are already present in Blender. The tools and workflow I have described would bring Blender on a par with many of the current commercially available animation packages. Hopefully, these analyses and suggestions will inspire the developers to do as good a job on Blender’s character animation tools as they have done on the renderer, mesh modeller and game engine in recent months. Thanks for reading!
Roland Hess
harkyman
You can give me feedback on this by emailing me (me at harkyman dot com) or by replying in this thread.