Face Rig? (was Multiple expressions in action strip?)

Some time back, in one of the ‘making of Sintel’ videos, I saw the animator putting together all of the facial expressions into a single action strip. I assume these are then attached to Action constraints with begin/end frames set up for the individual expressions, and then probably linked to sliders in the UI, which could also be attached to corrective shape keys.
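To make my assumption concrete, here is roughly what I picture, written as a script; every name in it (armature, bones, action, frame range) is a guess on my part:

```python
# Hypothetical sketch of an Action constraint driven by a slider bone.
# All object/bone/action names are invented for illustration.
import bpy

rig = bpy.data.objects["FaceRig"]              # armature object
jaw = rig.pose.bones["jaw_master"]             # bone that should follow the baked expression

con = jaw.constraints.new('ACTION')
con.target = rig
con.subtarget = "slider_smile"                 # slider/control bone the animator grabs
con.transform_channel = 'LOCATION_X'           # read the slider's local X location
con.target_space = 'LOCAL'
con.min = 0.0                                  # slider travel...
con.max = 1.0
con.action = bpy.data.actions["Expressions"]   # action strip holding the keyed expressions
con.frame_start = 1                            # ...mapped onto this frame range of the action
con.frame_end = 20
```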

It all makes pretty good sense to me, but I’m operating mostly on assumptions. Is this approach documented somewhere, or maybe a tutorial?

I’ve got a project that, I think, needs just this sort of setup, but I’m not really sure how to go about it without building myself a tripwire.

Pose libraries use an action strip. That is what you are referring to, I think, in the Sintel video that Angela made for the facial controls. When you add a pose, it increments the frame in the action strip. If you use bone drivers on shape keys, plus either a facial bone armature, or a lattice or Mesh Deform hooked to bones, you can accomplish what you saw. In addition, they added panel controls for the shapes. Very neat way to do it. I tried it, but it got really messy quickly; I think that was just me, though.
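For the “bone drivers on shape keys” part, the basic hook-up is just a driver on the shape key value that reads a bone transform channel. A minimal sketch, with made-up object, bone and shape key names:

```python
# Minimal sketch: a control bone's local X location drives a shape key value.
# Object, bone and shape key names are made up for illustration.
import bpy

face = bpy.data.objects["Face"]                # face mesh with the shape keys
rig = bpy.data.objects["FaceRig"]              # armature with the control bones

fcu = face.data.shape_keys.key_blocks["smile_L"].driver_add("value")
drv = fcu.driver
drv.type = 'AVERAGE'                           # one variable, so the driver just passes it through

var = drv.variables.new()
var.name = "slider"
var.type = 'TRANSFORMS'                        # read a bone transform channel directly
tgt = var.targets[0]
tgt.id = rig
tgt.bone_target = "slider_smile_L"             # control bone the animator moves
tgt.transform_type = 'LOC_X'
tgt.transform_space = 'LOCAL_SPACE'
```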

Thanks for the reply and the link, stilltrying. I had already dug up that video again, and I think I understood it a bit better than when I watched it last.

You say your attempt got messy? Of course it did! It wasn’t anything to do with you, it’s just the nature of the thing. You gotta build something like that a couple dozen times before it starts to look elegant. :slight_smile:

This is what I’ve got. It’s a DAZ Genesis3 import…


The eyes/eyelids and jaw were pretty straightforward using ideas I gathered from the Pitchipoy rig, but I’m still trying to wrap my head around the best method to tame the rest of this beast.

I like the idea of putting transform drivers with sliders on the individual bones for granular control, then keyframing those into an action strip of complex expressions. I think I know how and where it’s going, but the exact workflow to make it happen still hasn’t gelled in my mind yet.

At this point any kind of input is appreciated.

I don’t know if this will be helpful for you, but I produced this all-bones face rig back in V 2.5x or so, and have been using it since:


Othello face rig V2-CTRLS.blend (1.17 MB)


The rig is intended to mimic the major facial muscles and uses Transformation constraints for a great deal of synergistic control. I don’t know how it would fit into a workflow with Action Strips but it uses simple bone keyframing for its basic functionality. A lip-synch example is here: https://vimeo.com/10365672

WOOT! I just got in and turned on all the bone layers and ran the animation. I am so going to enjoy dissecting what you’ve done there chipmasque. Thank You!

Quite welcome, SkpFX, and feel free to make improvements you feel would be of benefit. The rig doesn’t have any highly “fine-grained” controls, all is pretty broad, but a lot of finer expressive detail is face-mesh dependent anyway.

BTW, I find the Transformation constraint incredibly useful; it drives all my muscle-emulation helper bones in my Universal Figures project.

I had never even seen the Transform constraint in the list. That looks like a powerful tool, and I see you’ve used it a lot in the Othello rig. Still gotta wrap my head around where and how to use it.

And your Universal Figures project looks intense. :slight_smile:

The Othello rig uses a lot of stacked Transformation Constraints, so it may not be perfectly obvious how it all works, but the gist of it is: Take one bone’s Transform value (say, a rotation) and use it to drive another bone’s transform (say, scaling). This way, for example, the rotation at an elbow joint can be used to scale a mesh for the bicep muscle, causing it to contract along one axis and expand along the other two, imitating true muscle response, but bass-ackwards – the bone drives the muscle rather than the muscle pulling the bone around. Some of the controls in the Othello rig cause two or three bones to respond like this at once, imitating the synergy of the facial muscles.
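If it helps to see it spelled out, here is a bare-bones version of that elbow/bicep example done in Python rather than through the constraint panel. The bone names and ranges are invented, and the property names follow the current Blender Python API:

```python
# Sketch: a forearm bone's X rotation drives scaling on a bicep helper bone,
# so bending the elbow makes the "muscle" contract along its length and
# bulge across it. Bone names and ranges are invented for illustration.
import bpy

rig = bpy.data.objects["Armature"]
bicep = rig.pose.bones["bicep_helper"]         # helper bone the muscle mesh is weighted to

con = bicep.constraints.new('TRANSFORM')
con.target = rig
con.subtarget = "forearm"                      # bone whose rotation is read
con.target_space = 'LOCAL'
con.owner_space = 'LOCAL'

con.map_from = 'ROTATION'
con.from_min_x_rot = 0.0                       # elbow straight...
con.from_max_x_rot = 2.4                       # ...to fully bent (radians)

con.map_to = 'SCALE'
con.map_to_y_from = 'X'                        # elbow X rotation feeds the Y (length) scale
con.to_min_y_scale = 1.0                       # relaxed
con.to_max_y_scale = 0.7                       # contracted along the bone
con.map_to_x_from = 'X'                        # the same rotation feeds the cross-axis scales
con.to_min_x_scale = 1.0
con.to_max_x_scale = 1.3                       # bulging across the bone
con.map_to_z_from = 'X'
con.to_min_z_scale = 1.0
con.to_max_z_scale = 1.3                       # expand on the other cross axis too
```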

Warning! A wall of text and a lot of rambling from an old dude…

The Sintel rig was completely driven with shape keys. Depending on the version you were looking at, it either had custom properties in the N panel driving the shape keys, or the shape keys driven by control bones with custom shapes and drivers sitting on the face.
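The custom-property flavour is just a driver whose variable points at a property on a control bone instead of a transform channel. A rough sketch, with every name invented:

```python
# Sketch: a custom property on a control bone drives a shape key value.
# Object, bone, property and shape key names are all invented.
import bpy

rig = bpy.data.objects["FaceRig"]
face = bpy.data.objects["Face"]

# a 0..1 property the animator scrubs in the N panel (or a rig UI script);
# min/max/soft limits can be set in the bone's Custom Properties panel
rig.pose.bones["head_ctrl"]["brow_up"] = 0.0

fcu = face.data.shape_keys.key_blocks["brow_up"].driver_add("value")
drv = fcu.driver
drv.type = 'SCRIPTED'
drv.expression = "prop"                        # just pass the property value through

var = drv.variables.new()
var.name = "prop"
var.type = 'SINGLE_PROP'
var.targets[0].id = rig
var.targets[0].data_path = 'pose.bones["head_ctrl"]["brow_up"]'
```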

The action I think you were looking at in the video was simply a set of keys going from shape key to shape key, to test whether the shapes would work well together. Because shape keys are linear, it can sometimes be a pain to “morph” from one to another and get a good deformation.

The upside to shapekey-driven facial rigs is that the final shapes will be very accurate, or at least as accurate as your modeling skills allow. Since you are using DAZ figures, there should be enough morphs available to create a complete shapekey-driven facial rig.

A downside to a shapekey-driven facial rig is that you cannot change the topology of the mesh after you start creating shape keys for it without possibly breaking them. Also, you cannot apply modifiers to a mesh with shape keys. You’ll need to be certain the mesh will not need topology changes before spending the time to add the shape keys.

Also, shapekey-driven rigs are not reusable. At least, the time invested in creating the shape keys is not reusable from one character to the next. Each mesh/character will need time invested to create shape keys specific to that character.

The Pitchipoy facial rig and the one chipmasque linked, his Othello rig, are bone-driven facial rigs. Bone-driven rigs can be good if you ever need to change the topology of the mesh, because they rely on bones and vertex weights to work. They can work very well and are often reusable for multiple characters.

Some downsides of a bone-driven rig: it can be very complex. The standard Pitchipoy facial rig has 296 bones in it. I did not count the ones in chipmasque’s rig, but I’m betting it’s pretty high up there as well. Bone-driven rigs also depend on good weight painting, and my experience tells me that is not a skill most people possess.

I prefer a combination of shape keys and bones for my facial rigs. It gives you the best of both worlds. The best example I can give is the Flex Rig by Nathan Vegdahl, Beorn Leonard and CGCookie.com.

http://www.blendswap.com/blends/view/61707

The jaw is based on this : https://www.youtube.com/watch?v=jEQoQ5DzPMI

At the core, there is a pretty simple (if you are comparing it to the Pitchipoy rig, at least) bone-driven rig for the jaw, eyes, eyelids and nose. Then shape keys and controls were added on top to give finer control over each area. The results are very good. Also, the number of shape keys needed is vastly reduced because most of the vertices are being moved by bones.

Transformation constraints! My second favorite constraint. Action Constraints are still number one in my book. :slight_smile:

Anyhoo…I’m just rambling.

I hope some of this helps you, SkpFX.


A somewhat historical note – at the time I developed the Othello rig, all-bone face rigs were (afaik) virtually unknown to Blender users. Shape Keys ruled all facial animation. I realized their limitations as I started building more and more of my own rigs and figures/faces. In “Kata” I used scripting to create interactive bone effects (SO very complex & finicky!). Then the Transformation constraint appeared and solved about 98.73 % of my difficulties. After adapting them to a body rig it seemed natural to attempt a face rig. Perhaps the most unique aspect of the Othello rig is that it is intended to mimic actual facial muscle actions and interactions, making it complex but at the same time very effective (imo).

“Bone driven rigs depend on good weight painting. My experience tells me that is not a skill that most people possess.” – Personally I find it an absolutely essential skill for any animator, even if they rely on TDs to produce the rigs they use, because it defines how the rig transforms are translated to the visible mesh’s deformations. This can be very useful in finessing motion details.

Thanks for the input DanPro. The DAZ Genesis3 face is (at least primarily) bone driven. They may have some corrective shapes hidden somewhere, but I haven’t found them yet. The image with the shotgun blast of bones I posted above is the Genesis3 rig.


Okay, I just went and dug into it further.
Here’s a pic of the face rig inside of DAZ Studio…


Generally speaking, the user doesn’t use the face bones for posing/animation, but rather uses dials (sliders) to control the rig. When exporting with FBX there is a class of morphs (“eCTRL” – Expression Controls?) that export the facial expressions as pure shape keys. Even exporting animations ignores the face bones and exports facial expressions as animated shape keys.

DAZ is using some kind of hybrid bone/morph setup internally, but it would appear that, for the highest fidelity to the DAZ character, it’s best to mostly ignore the bones and instead gather and expose control of the shape keys.
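A quick console check like the following lists what actually came across in the export; the imported object name here is just a guess, so adjust it to whatever the FBX importer produced:

```python
# List the "eCTRL" expression morphs that arrived as shape keys on the import.
# The object name is hypothetical; use the actual name of the imported face mesh.
import bpy

face = bpy.data.objects["Genesis3Female"]      # hypothetical imported mesh name
keys = face.data.shape_keys

if keys:
    ectrl = [kb.name for kb in keys.key_blocks if kb.name.startswith("eCTRL")]
    print(len(ectrl), "expression morphs found:")
    for name in ectrl:
        print("  ", name)
else:
    print("No shape keys on this object.")
```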

My approach to this point should be rethunk entirely. Welcome to my world. :slight_smile:

Just checked out your Kata link, Chip. Those are some great deformations! Often when I look at an animation, the horrible bends at the knees really make me cringe. Good stuff!

Thanks, DanPro. Those cringe-worthy deformations at knees and elbows, and the accompanying shoulder weirdness, are what drove me (pun intended) to develop all-bone solutions to deformation correction. I was aware of corrective shape keys, but as in other uses, they seemed too limiting for my projects. So Kata had correctives that were script-driven, and subsequent rigs (or rig adaptations actually, since I modified the base Sintel rig) did the same with (primarily) Transformation constraints. Now it’s moved into a whole new dimension with my Muscle System project. Since I deal primarily with figure animation, my goals have always been to use the tools to polish the deformations to a much more naturalistic finish. Armature bones and meshes are anything but natural, so they need lots of help to get things looking close to right.

@SkpFX: That Daz rig looks interesting. I’m sure that under the Controls layer there are a lot of Transformation-like constraints at work. I considered a deeper Controls layer for the Othello rig, but it seemed superfluous, and by now I’m completely familiar with it. That Daz exports its mesh deformation as shape keys is a pretty standard solution, as they are almost universally accepted by other software; vertex (morph) animation is the oldest digital animation technique beyond simple transformations, pre-dating skeletal animation by a few years at least. But rigs rarely shift app homes as effectively, especially if they are somewhat proprietary or use special native-app functions or the like.

I’m also working on a converted Daz rig using only bones. With a lot of trial and error, I’m fixing the facial bones to match the expressions, and hopefully even improving on them. My aim is to have a reusable rig for whatever character comes out of D|S via FBX export.

I’m using several types of constraints, and there may also be a secondary bone layer to tweak minor details after the main pose has been blocked out. Another bone layer can drive some Action constraints.