Facial rigging - What I've learned on Durian

Feelgoodcomics suggested that I start a thread here about the facial rigging issues I ran into on Durian. And I think that’s a good idea. So here I am. :slight_smile:

As sort of a preface, let me first say that unless you plan on doing some kind of tissue simulation, shape keys are going to have to be a part of your face rig. Bone deformations just aren’t enough to create the creases, bulges, and wrinkles of face deformations (unless you use thousands of tiny bones… at which point you’re almost just doing shape keys but with bones anyway).

Recognizing this is really important because it influences how you design the bone parts of the rig: you need to design the bone deformations in such a way that shape keys will work nicely on top of them.

Shape keys get applied before bone deformations, which effectively means that shapes are applied in the local space of the bones deforming them. For example, if you make a shape key that moves some vertices left, and then you deform those vertices with a bone that rotates 90 degrees, the shape key will move the vertices down. Normally this is a good thing. If you rotate the entire head of your character, of course you want a smile shape to rotate with it. But if you have multiple bones on a face all rotating in different ways, this can severely mess up shape keys. That’s not to say that rotating bones is completely off-limits in face rigging. But you have to be mindful of the “deformation space” it creates for the shape keys.
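To make that evaluation order concrete, here’s a toy sketch in plain Python (not Blender’s actual code; the numbers are made up): the shape key offset is added in rest space first, and the bone’s rotation is applied afterwards, so the rotation also rotates the direction the shape key moves.

```python
import math

def rotate_ccw(v, degrees):
    """Rotate a 2D point counter-clockwise around the origin
    (a stand-in for a bone's armature deformation)."""
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

rest_vertex = (0.0, 0.0)
shape_offset = (-1.0, 0.0)  # shape key moves the vertex to the left

# Blender's order of evaluation: shape key first, then armature deform.
keyed = (rest_vertex[0] + shape_offset[0],
         rest_vertex[1] + shape_offset[1])
deformed = rotate_ccw(keyed, 90)  # the deforming bone has rotated 90 degrees

print(deformed)  # the "left" offset now points down, roughly (0.0, -1.0)
```

With one bone (a whole head turning) this is exactly what you want; with many face bones rotating different amounts, each vertex of a shape key gets its offset rotated differently, which is the distortion described above.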

This was something that didn’t occur to me at all when I first started rigging Sintel’s face. In particular, I used a ring of stretch-to bones around the mouth to manage lip deformations (similar to my setup for big buck bunny). Unfortunately, this totally screwed up the shape keys when the mouth was open, because the lips (especially near the mouth corners) were rotating so extremely, creating a highly distorted deformation space for the shape keys.

And that’s probably the biggest lesson I learned: design the bones to create a good deformation space for the shape keys.

And this doesn’t mean “don’t rotate bones”. In fact, in some cases, rotating bones seems to create a better deformation space for shape keys. For example, a properly weighted jaw bone. And the same goes for scaling and translating bones: you have to think about how it will affect the deformation space for shape keys. Sometimes scaling a bone might be good, sometimes it might be bad. It depends.

But design the deformation space with that in mind.

At this point I don’t yet have any really good answers for what a good deformation space is for a face or how to construct it, because I haven’t had the chance to play around with different setups (which would take time, which is why we abandoned it for Durian). And I’m also pretty sure it depends on some of the larger design decisions of the face rig: if shape keys are just used for small details and corrections, then the ideal deformation space may be quite different than if shape keys are used for larger deformations.

But it would be really cool to explore that (among other things) in this thread.

Something else I would like to explore in this thread (which I have very little experience or inspiration about) is control schemes for faces. This is also something that is influenced by the nature of the particular face rig. I suspect the ideal control scheme for a very realistic face may be a bit different from the ideal scheme for a very cartoony face.
In many ways, I feel like the control scheme should be the starting point–not the ending point–of designing a face rig. It’s how I approach most other rigging: figure out how I want to interact with it, then figure out how to accomplish that. But I’m lacking good ideas on that at the moment, so any brainstorming would be appreciated, and any examples we can find and share would be awesome.

I noticed that your Sintel rig has each key in a single control. A lot of the keys could be combined into fewer controls, such as the mouth corner or an eyebrow. Those keys don’t need individual controls, as they’re mutually exclusive.

Hi Nathan :smiley: This is exciting!

After seeing your blog post last night, I started digging around for face rigging ideas. I found a multi-part tutorial for XSI by a user named Pooby that gave me an epiphany. I have never used XSI, but I understand the technique he uses. He basically creates a nurbs surface and constrains a large number of bones (pretty much one per vert) to that surface. Then he uses shape keys to move the surface into the general expressions (bulk massing type of thing). Once the basic shape is set he goes onto the mesh and creates a shapekey to refine the details and correct the shape a little.

What he then does is really interesting, and I hadn’t seen it before! He drives the ‘corrective’ shapekeys on the mesh itself with the proximity of the bones on the nurbs surface! So the nurbs surface manipulates the bones, and as the bones get closer (say the corner of the mouth to the middle of the cheek for a smile) the corrective shapekey kicks in automatically, adding wrinkles and subtle shapes. This seems far better to me than hooking the corrective shape up to a bone controller, since the shapes seem to blend together automatically/naturally.

Well as you know Blender can’t do this surface constraint type of thing without 2 armatures, and it’s a nightmare to do. I already had a facerig on the go (of Jim Carrey) in which I was experimenting with using a meshdeformer and realized that it could basically do the same thing!

Basically bulk massing is done with the meshdeformer and then corrective shapekeys are added onto the mesh. So the shapekeys are very subtle, and the movement throughout the jaw (because of the meshdeformer) is very smooth and refined. The corrective shapekeys are driven by the scale in Y of bones in a second armature which are constrained to points on the meshdeformer (they just measure the distance between points, basically).
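The distance-driven part can be sketched in a few lines of plain Python (the points, names, and ranges here are hypothetical, just to illustrate mapping a measured distance to a 0..1 shape key value):

```python
import math

def corrective_value(p1, p2, rest_dist, active_dist):
    """Map the distance between two tracked cage points to a shape key value.

    rest_dist:   distance between the points in the neutral pose (key off)
    active_dist: distance at which the corrective key is fully on
    """
    d = math.dist(p1, p2)
    t = (rest_dist - d) / (rest_dist - active_dist)
    return max(0.0, min(1.0, t))  # clamp to the shape key's 0..1 range

# Hypothetical mouth corner and mid-cheek points: 2.0 apart at rest,
# fully creased once a smile pulls them within 1.0 of each other.
print(corrective_value((0, 0, 0), (0, 2.0, 0), rest_dist=2.0, active_dist=1.0))  # 0.0
print(corrective_value((0, 0, 0), (0, 1.5, 0), rest_dist=2.0, active_dist=1.0))  # 0.5
```

Because the driver only looks at the measured distance, it doesn’t matter which control moved the points, which is what makes the corrective kick in “automatically.”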

It would be possible to create shapekeys for the meshdeformer, or use bone manipulation. I think the latter would be ideal if we can think of a design that would work well for massing without being too free-form for the animator (maybe the action constraint would be useful here?). I find direct bone-driven shapekeys incredibly rigid; they tend to ‘stop’ and ‘pop’ when they bump into each other or hit their limits, whereas bones can be moved smoothly on the fly, so long as they aren’t just bones floating in space that can be pulled anywhere without limitation. Rotation-based controls like the ones shown in this all-bone rig would likely be a good approach. Between bone/meshdeformer bulk massing and the corrective shapekeys driven by points on the meshdeform cage, I think the results would be fantastic!

The other nice thing about this approach is the transferability of it (especially if the meshdeformer is manipulated with bones). You could apply the same deform cage to multiple characters, and just need to create corrective keys for each of them :slight_smile:

I am working on a blend to demonstrate the technique clearly, I will post as soon as I have a working prototype (afternoon or evening). Hopefully you already see as much potential with this approach as I do :slight_smile:

Okay,

http://i.imgur.com/teiKl.gif

{(.blend file)}

Here is a rough example of the method I mentioned in the last post put to use on Jim Carrey. The model is incomplete, the weights are messy, the shapes were created quickly, the animation is very rough, etc, etc… :slight_smile: This was created in 2.49b so you will need to use that version of blender to view it (sorry). I find rigging in 2.5 to be unreliable and clumsy at this point in time I’m afraid, and I wanted to get this done quickly.

The controls are not ideal, but they were quick to put together. I really wouldn’t set up the GUI this way; as you can see, the smile control causes major popping and stopping problems. The nice thing about having used the action constraint, though, is that the bones can be controlled by the animator on the fly to correct the shapes as needed. And since the corrective shapes are tracking points of the deform cage, it doesn’t matter in which order the controls are manipulated. For example, the corner of the mouth can be pulled up and the smile shapekey creasing the cheek will kick in automatically (as opposed to driving the shape key with a main mouth control, for example). I was also thinking that displacement maps could be used to create things like wrinkles in the chin…

It’s only a few hours of work to this point, and would take a few days work at most to finish once the process is familiar. After it is complete you should be able to transfer it to other characters fairly easily - just re-shape the deform cage, re-arrange the bones, and then add in the corrective shapes on the new character (though I haven’t tried this yet, so it is still theory :)).
I have now tried this and it totally works! Only a rough test though, still need to build a better quality setup to work with - doing that next :slight_smile:

I will try setting this up on the latest model of sintel released on the durian blog to see what I can get working. I think if corrective shapes are made for each of the ‘Facial Action Coding System’ expressions then all the bases should be covered.

I will keep watching this thread for further ideas… :eyebrowlift:

Wow! I like the ideas about deform cages that feelgoodcomics brought up! Excellent! I was wondering though about Cessen’s earlier comment about the order in which face keys and bones are applied - How hard would it be to allow a bone to be applied before OR after a shape key? It seems it would require a little recoding but may be more flexible in the long run.

I really hope that feelgoodcomics will post a full blend of the finished version and a tutorial :slight_smile:

@feelgoodcomics
Thanks for sharing that (though i’m way not at a point of ability to really use it! YET). But i’ll download it and try to understand something out of it.

FeelGoodComics, try animating my expressions ha ha.:o

Interesting thread Cessen.

:RocknRoll:

@mrmowgli
If shapekeys were evaluated after the armature deform, the result would be very undesirable. As Nathan mentioned, evaluating the shapekeys first is what makes them turn with the head! If they didn’t, the shapekeys would maintain their original orientation (facing front) no matter what the armature does to the mesh, and that is no good.

The meshdeformer is weighted to the bones, and then influences the mesh, producing a much smoother ‘deform space for the shapekeys’ with far less weightpainting (since the deform cage has far fewer vertices than the actual mesh :)).

I will get to a tutorial eventually I’m sure, but I have many other tutorials to make first heh heh :slight_smile:

@Frank_robernson
Be careful what you wish for, I just might :wink: lol

@all Have you seen this? This is definitely the most accurate facial rig I have seen to date. Liam Kemp created it in 3ds Max but unfortunately I’ve not been able to find any pictures or explicit descriptions of how the rig was created. But I did stumble upon this interview in my search, which does reveal some interesting points which I will present here (emphasis my own):

He also mentions that, aside from scripting the behaviours of the muscles/shapekeys, the entire rig is made using only the built-in tools of 3ds Max.

Obviously Durian doesn’t need to go to this level of detail - nor is there time to (it took him 8 months to make!), but it’s interesting that his approach sounds very similar to the one I mentioned, using the meshdeformer and then tracking the vertices to create conditions for blending the shapekeys together. I think instead of creating a shapekey for each individual muscle though, that just sticking with some base expressions would work just fine and save time.

Just more food for thought… let me know what your thoughts are :slight_smile:

@feelgoodcomics Yes, I knew that is what he was talking about, however I was suggesting that there be an option for a bone to be applied after the shapekey, not that all bones should behave like that. Then you could use a combination of basic shape keys and then a larger deform via bones that get applied afterwards. The basic rig would still be applied before. It seems like that would be a relatively minor change and would be very flexible.

However, I think the same can easily be attained using your method with the lattice. It seems to me that the lattice was originally added to blender specifically for those kinds of rigs. (Didn’t Pixar drive the original lattice deform research?)

All bones do behave like that :slight_smile: That is the way it currently works. Which is why I was only addressing the problem of having it done the other way (bones first, then shapekeys), and why you wouldn’t want to do that. I don’t think the results would be at all what you are thinking…

The meshdeformer is essentially the same as a lattice, except a lattice can only be shaped as a box, whereas a meshdeformer can more closely resemble the object it is deforming, and creates a much more accurate and controllable deformation as a result :slight_smile: I believe the meshdeformer was created by Pixar as well :wink:
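For anyone unfamiliar with what a cage deformer does underneath, here’s a toy sketch in plain Python (a simplification with made-up weights, not the actual harmonic-coordinates binding the Mesh Deform modifier uses): each bound mesh vertex moves by a weighted sum of the cage vertices’ displacements.

```python
def deform_vertex(vertex, cage_rest, cage_posed, weights):
    """Move one bound vertex by the weighted sum of cage vertex offsets.

    weights maps cage vertex index -> influence (influences sum to 1.0).
    """
    x, y, z = vertex
    for i, w in weights.items():
        x += w * (cage_posed[i][0] - cage_rest[i][0])
        y += w * (cage_posed[i][1] - cage_rest[i][1])
        z += w * (cage_posed[i][2] - cage_rest[i][2])
    return (x, y, z)

# Two cage points; the second one is lifted by 1.0 in the pose.
cage_rest = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
cage_posed = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]

# A vertex bound halfway between them inherits half of that lift.
print(deform_vertex((0.5, 0.0, 0.0), cage_rest, cage_posed, {0: 0.5, 1: 0.5}))
# -> (0.5, 0.5, 0.0)
```

This is also why the cage gives a smooth “deform space” with little weightpainting: the smoothing comes from the binding weights, which are computed from the cage’s shape rather than painted by hand.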

What would be amazingly useful is if meshdeformers could affect coordinates outside their volumes, i.e. clothes… so you could use the body mesh (without even building a new mesh deform cage) and only go through the armature rigging once…

sorry to change the subject!

Most interesting thread! And already the first post answered my question on the blog :slight_smile:

A thing I think one has to decide upon when constructing a (facial) rig is the level of resolution/control one wants to give to the animator.

Obviously the animator should be able to produce a smile, and usually not to change the topology of the mesh or pull a single vertex to a weird place (while using the rig). So somewhere in between the two.

A very high-level rig might just have a shape key for each expression, and the resulting animation may look very repetitive. A very low-level rig may have a slider for each facial muscle’s contraction/relaxation, and the results could be as good as the animator, but might take very long to achieve.

Pose libraries and the like will probably always be used to bring more high level functionality into a low level rig. I expect Blender makes it easy to apply both a library pose and a few low level tweaks to any part of the animation(?).

Intertwined, I think, with the question of control is the question of controls. Should the animator be sliding a smile into existence or pulling muscles all over the face?

Thus I think you cannot really answer the question of “what is a good set of controls” until you have settled on the functionality of the rig (regardless of the implementation, actually).

So, I guess I end with a question of my own :slight_smile: Have you decided on the desired capability of the face rig for Sintel (and the other characters)? It looks like you have some generalized muscles at the moment, which could work really well. I am curious as to how well one can/should be able to control the shape of the mouth…

So if I understand:
1/ a low-def model, driven by bones, deforms the high-def mesh through the mesh deform modifier.
2/ moving the bones (or sliders) drives shape keys on the high-def mesh.
3/ with driven textures you can add normal maps for skin details.
Did I get it?

@dagobert
If you’re talking about the Jim Carrey rig I posted, then #2 should be:
drive the corrective shapekeys with the Y scale of bones in a second armature which track vertices on the meshdeformer.
Or, if you want to be more general, you could say: drive shapekeys with conditions, instead of direct drivers.

The slider bones in that file are ‘driving’ the action constraint of the lip control bones, which are used to create preset positions for the mouth.

Maybe there is a simple solution that I am missing, but when I tried a meshdeform on a face, I couldn’t get the mouth to open correctly. I even tried remodelling the face with the mouth partially open, but I still found that moving the deform mesh on the upper lip deformed the lower lip and vice versa. Anyone know what I was doing wrong?

Matt

While that example upthread is pretty incredible, the way he went about it was overkill. If you take every muscle in the face and every possible position they could be in, you end up with a gigantic set of poses that no real face would ever achieve. Those muscles work in concert, and only use a very small set of the total possible combinations of motions and positions available to them.

I’m going to try something:

  1. Build some shape keys for some standard deformations (cheek bunching on smile, brows down, etc.).
  2. Add a low res mesh over the face, and bind it with the surface method of the mesh deformer.

What this will (hopefully) do is to provide simple access to expressions (smile, angry, etc.), that can be further exaggerated, tweaked, and/or pushed by moving the verts of the low res mesh cage.

If that works, the next step would be to create and assign little dot-like bone controllers for each of the relevant vertices in the control mesh. That way, you end up with a two-level control system, all in one armature. The one level dials entire expressions, while the other pulls the face around free-form.
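That two-level layering could be sketched like this in plain Python (a toy model with hypothetical offsets, not Blender code): the expression dial applies a shape key offset in rest space, and the free-form cage pull is layered on top of the result.

```python
def two_level(vertex, shape_offset, dial, cage_tweak):
    """Layer an expression dial (shape key) and a free-form cage tweak.

    dial:       0..1 strength of the expression shape key
    cage_tweak: extra offset from pulling the low-res cage around
    """
    # Level one: dial the whole expression in rest space.
    keyed = tuple(v + dial * s for v, s in zip(vertex, shape_offset))
    # Level two: free-form push on top of the dialed expression.
    return tuple(k + c for k, c in zip(keyed, cage_tweak))

# A smile shape lifts a mouth-corner vertex by 0.2; the animator then
# exaggerates it with a 0.1 cage pull in the same direction.
print(two_level((1.0, 0.0, 0.0), (0.0, 0.2, 0.0), dial=1.0,
                cage_tweak=(0.0, 0.1, 0.0)))
```

The point of the layering is that either level works alone: dialing expressions with the cage at rest gives the stock shapes, and pulling the cage with the dial at zero gives pure free-form deformation.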

@ harkyman – I’ve done something similar for an (unfortunately) deep-sixed commercial job (specific work is under NDA or I’d show it) and had the same notions as you about applying it to a facial rig, though without the shape keys – I’d thought of a two-level control structure that is all bones. I’m too ensconced in Kata right now to pursue the idea but I can say that providing point-bone controls for the low-rez deformer mesh is a good way to go – it can provide excellent deformation control over both large and small areas depending on how it’s set up.

Yeah. I want to keep shape keys involved though so you can really control creasing and bunching. If you only use a deformer style, you get some bad effects. In my opinion, it’s imperative (for non-cartoon faces) to give the illusion of the underlying bone structure.

Also, I just tried the technique I mentioned and it failed. Or at least the Surface method of mesh binding failed. It’s not working as billed. Think I’ll post a bug.

Love these threads! :slight_smile:

I think that for animation, you need a combination of bones and shape keys.

You can’t just shapekey the animation, because the transition from expression to expression will not be realistic. It’s like watching the arm movement that was done by someone using IK animation: pose to pose, it is correct, but the software’s interpolation isn’t right, and you get something that the eye knows is not quite right.

You can always tell a morphed animation. It “feels” different.

BTW, not done in Max or Maya… but this is an example of morphs for facial animation done by Taron, years ago:

http://www.projectmessiah.com/x4/vids/gallery/taron_people.avi

I have the utmost regard for Taron, and this video is about as good as it gets… but even here there is something that feels off. If you stop the video and look at each frame, it is good. But when you play it back at human speed… it’s off.

I think you have to use bones to get into the basic pose, and then use shapekeys or lattice deform, or what have you.

Pure morphs get you trapped in the uncanny valley IMHO.

I think if I were to use shape keys extensively I’d do it the other way 'round – use a consistent set of shapes for the basic expressive configurations, but use bones to fine-tune these and make each instance of shape key influence more unique, thereby (it’s hoped, at least) avoiding some of the mechanical repetitiveness that makes an all-shape key solution look false.

With a mesh deformer managing fairly gross multi-muscle mass motion, shape keys defining the fundamentals of the muscle deformation, and “finessing” bones for fine-tuning, that’s three layers of control, which for the most part respects (I think) the order in which all these influences are evaluated. In theory, at least… I’ve yet to build anything of the sort :o.