Facial rigging - What I've learned on Durian

I am getting an ‘access denied’ error when I click that link… am I doing it wrong? :slight_smile:

Sorry, I dropped it in the wrong folder, try it now & see if it d/l’s. If not I’ll put it up on Vimeo for a while.

@chipmasque
Thanks, it worked :slight_smile: Looks really good! You’ve got some really subtle movements going on in the upper lip that I really like. I tried a realistic all-bone rig once, and didn’t get nearly as good results with it. Well done!

@all
One of the things I really prefer about the bone/deformer approach is the ease of setup. I’ve always struggled with getting shapekeys to look just right, and if there are 20-30 on the character, that’s a lot of work. Add in the fact that once you’ve done all that work on one character, it is not easy to just transfer it over to another one - you usually have to start from scratch!

Here is my proposal, which I put together over the weekend as a video and a .blend.

The rig isn’t as polished as it could be, but I hope it clearly demonstrates just how powerful the meshdeformer can be. The bone configuration is still a wip, and a lot of the weights could use touching up, but I’m fairly pleased with the results I got out of it.

I already have ideas on how it could be greatly improved… like increasing the resolution of the meshdeform cage, and adding extra edges to give more refined control. Especially on the nose, and around the eye sockets/eyebrows.

I’m also wondering if it would be a good idea to create a shapekey-based rig using the meshdeform cage… imagine if you could set up your shapekeys there just the one time, and then transfer the setup over from character to character.

With all the focus on auto-rigging in 2.5, I think this approach has potential, if only for quick setup on background characters. Though I’m very certain the results can be good enough for leading characters… this is just my first run through :slight_smile:

BTW the transfer shown in the video took me about 30 minutes to perform. It could have been less, but I ran into problems getting the binding to work correctly on the mouth. If you are going to use this rigging method, it is important to have the mouth of your character fairly wide open.
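
For anyone who wants to script the re-attach and bind step rather than do it through the UI, here is a minimal sketch using the current Python API. The object names “NewFace” and “FaceCage” are made up, and the UI route works just as well:

```python
import bpy

# Attach an existing deform cage to a new character and bind it.
face = bpy.data.objects["NewFace"]      # placeholder names
cage = bpy.data.objects["FaceCage"]

mod = face.modifiers.new(name="FaceMeshDeform", type='MESH_DEFORM')
mod.object = cage
mod.precision = 5                       # higher precision = slower but cleaner binding

# Bind with the mouth held fairly wide open, as noted above, otherwise the inner
# lips can end up bound to the wrong cage vertices.
bpy.context.view_layer.objects.active = face
bpy.ops.object.meshdeform_bind(modifier=mod.name)
```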

As Cessen stated, be careful using the Mesh Deform modifier. It has an inherent weakness: the closer you get to a 180-degree rotation from the rest position, the more it tends to “crush” the underlying model. Depending on exactly how you build your cage, it might not be too bad, but it’ll almost certainly screw up the way the eyeballs fit in their sockets.

The reason I like to use shape keys is that I create them with sculpting, not edit mode (unless I’m rotating). I find the sculpt interface a much more intuitive way to create a realistic expression. It also lets you easily scrub the expression from 0 to 100% as you work it, so you can see how it moves.

When it comes to expression portability across characters, I take my cues from The Incredibles. All of the non-hero characters in the whole movie were just morphs of a single character. That means that by and large they only had to do one master set of expressions for hundreds (thousands) of non-hero characters. For the hero characters, you should be creating their expressions (and by extension their personalities) by hand anyway I think. If we really want to worry about transferring expressions across diverse facial characteristics, a much better way to do it is with “facial tagging points,” where you define a set of reference points on the source face, again on the target face, and then interpolate the expression. No matter what system you use (bones, shape keys, etc.), you’re always going to have to tweak the areas around the eyes and teeth to fine tune when you transfer your rig to disparate geometry.
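
To make the “facial tagging points” idea concrete, here is a tiny standalone Python sketch of one possible interpretation: pick matching reference points on the source and target faces, record how each tag moves for an expression, and spread those offsets over the target vertices with a simple inverse-distance weighting. This is only an illustration of the concept, not anyone’s actual pipeline:

```python
# Minimal sketch: interpolate an expression onto a new face from tagged reference points.
# All names and the weighting scheme are assumptions for illustration only.

def distance(a, b):
    return sum((a[i] - b[i]) ** 2 for i in range(3)) ** 0.5

def transfer_expression(target_verts, target_tags, tag_offsets, power=2.0):
    """target_tags: tag positions on the target face (same order as tag_offsets).
    tag_offsets: how each tag moved on the source face for this expression."""
    moved = []
    for v in target_verts:
        # Closer tags get more say in how this vertex moves.
        weights = [1.0 / (distance(v, t) ** power + 1e-8) for t in target_tags]
        total = sum(weights)
        offset = [
            sum(w * off[i] for w, off in zip(weights, tag_offsets)) / total
            for i in range(3)
        ]
        moved.append(tuple(v[i] + offset[i] for i in range(3)))
    return moved
```

As the post says, no matter how the interpolation is done, the areas around the eyes and teeth would still need hand tweaking afterwards.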

Using the meshdeformer on just the face, I don’t see the 180-degree rotation problem coming up. The range of movement for each part is minimal, and the cage never really moves from its starting position. The eyelids are controlled by the bones, which keeps them from deforming oddly. How mushy the eye sockets get depends more on your cage resolution and weight painting, in my opinion; it shouldn’t be assumed that they will always deform incorrectly.

Sculpting sounds like a good way to make shapekeys, I have yet to learn it :slight_smile:

The idea of morphing multiple characters from one is not new to me, and is a good approach. It does require a certain workflow, though, and also requires that the characters are similar in appearance. I’m not really clear on how you are suggesting ‘facial tagging points’ should be done in Blender. It actually sounds like what I’m doing with the meshdeformer: the vertices are positioned in key areas on the first face, and then moved to the corresponding areas on the new face.

I have a question: does anyone know a good way to do sticky lips in Blender?

I think that might be a matter of preference. A complicated chain of bones driving a deform mesh driving the underlying mesh would be just as hard to transfer over, IMHO.

And, as Harkyman points out, if the underlying mesh is the same, then Blender 2.5 has a transfer-shapekey function that works really nicely. I agree that there are limitations (the underlying mesh must be the same), but honestly, a deform mesh such as the one you have on Jim Carrey won’t port over to just any mesh, either. And at least with shapekeys you have something. With bones, you are on your own.

So… I am not sure I buy the “It’s more portable” argument. I do buy the “I like bones better” argument, because I think everyone should use what they feel comfortable with… one of the great things about Blender is it really doesn’t force a workflow on you…

I agree with this… I love using the sculpt tool. Even if I am pushing points around, I prefer using sculpt’s grab brush.

However, having said that, the lack of falloff on a shapekey is not just a little problem. It is so easy to create a shapekey that just doesn’t play well with others.

If I had a falloff brush that allowed me to blend out the edges of my shapekey, I would probably never even consider using anything else… But shapekeys do have their issues as well…

Quick OT question:

What’s the proper way to combine two shapekeys? I expect there’s a better way than duplicating and combining the geometry itself.

Bunny: There’s a tool available in the Mesh specials menu (W-KEY in Edit mode) called Blend from Shape that might help you out.

bunny: In 2.5, you can just set both of the shape key sliders to get the blend between them that you want, then press the “+” to create a new key. It’ll be whatever was shown on the mesh, all combined into one.
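
For completeness, the same trick can be scripted: shape_key_add(from_mix=True) bakes whatever mix is currently showing on the mesh into a new key. Object and key names below are just placeholders:

```python
import bpy

# Bake the current shape key mix into a single new key.
obj = bpy.data.objects["Face"]               # placeholder object name
keys = obj.data.shape_keys.key_blocks

keys["SmileL"].value = 0.6                   # hypothetical asymmetrical keys
keys["SmileR"].value = 0.6

new_key = obj.shape_key_add(name="Smile", from_mix=True)   # combined result

keys["SmileL"].value = 0.0                   # reset the sources afterwards
keys["SmileR"].value = 0.0
```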

MarkJoel: Yeah – part of my point was that it was preference, not that I think the approach is perfect or the best for everyone. As for masking shape keys, you can do that. Create a new empty vertex group. Use that group as the “mask” for your shape key – the key’s effect on the mesh will seem to disappear. Then, go into weight paint mode and start to paint in that vertex group. As you do, the shape key will be unveiled.

So, you can use the vert group to mask any of your full-face keys. Also, you can bring up a key, use the vert group, then hit the “+” button to create a new unmasked key that resembles the previous masked one. By creating several well-blended vertex groups you can use this technique to break sculpted full-face expressions into a series of nicely overlapping individual keys for the different regions of the face.
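
Scripted, the masking setup looks roughly like this; the object, group, and key names are made up:

```python
import bpy

# Mask a full-face shape key with an (initially empty) vertex group.
obj = bpy.data.objects["Face"]                          # placeholder names

mask = obj.vertex_groups.new(name="brow_mask")          # empty group = key fully hidden
obj.data.shape_keys.key_blocks["FullFaceAngry"].vertex_group = mask.name

# Paint weights into "brow_mask" in Weight Paint mode to unveil just the brow area,
# then press "+" (or shape_key_add(from_mix=True)) to bake the masked result into
# a new, unmasked key.
```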

chipmasque, harkyman: Thanks! I needed to combine a bunch of asymmetrical shapes to get joysticks to work.

FGC: My all-bone test rig got so big (dozens of tiny bones) it might as well have been shapekeys. It took an Action constraint on each affected bone for each shape, so by the time I got to, like, 12 shapes, it became too much to keep track of without getting the whole thing plotted out and organized properly.

I jury-rigged sticky lips (with bones) by having two mouth shapes for the opened jaw, ‘Open’ and ‘Closed,’ and a control to blend between them.
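
In case it helps anyone, here is one way such a blend control could be wired up with an Action constraint and a driver. All the bone, action, and control names below are invented, not the actual rig:

```python
import bpy

# A "LipsSealed" action keeps the lips closed over the open jaw; an Action constraint
# applies it, and its influence (how "sticky" the lips are) is driven by a control bone.
rig = bpy.data.objects["Rig"]                      # placeholder names throughout
lip_bone = rig.pose.bones["lip_lower"]

con = lip_bone.constraints.new('ACTION')
con.target = rig
con.subtarget = "jaw_ctrl"                         # jaw control maps to a frame in the action
con.action = bpy.data.actions["LipsSealed"]
con.transform_channel = 'LOCATION_Y'
con.target_space = 'LOCAL'
con.min, con.max = 0.0, 0.1
con.frame_start, con.frame_end = 1, 20

# Drive the constraint's influence from the sticky control's local X location.
fcu = con.driver_add("influence")
drv = fcu.driver
drv.type = 'SCRIPTED'
drv.expression = "max(0.0, min(1.0, sticky * 10.0))"

var = drv.variables.new()
var.name = "sticky"
var.type = 'TRANSFORMS'
var.targets[0].id = rig
var.targets[0].bone_target = "sticky_ctrl"
var.targets[0].transform_type = 'LOC_X'
var.targets[0].transform_space = 'LOCAL_SPACE'
```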


http://vimeo.com/10075508
Testing joystick controllers using Sintel as a guinea pig.

Using the ‘Distance’ driver variable makes this much easier than in 2.4 (where you had to roll the bone in Edit mode and thereby couldn’t cleanly constrain the control within the square). Plus, setting up similar controls is just a matter of copying the driver and changing the target bone names!
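
Here is a rough scripted version of one such control, using the ‘Distance’ variable (type 'LOC_DIFF' in the Python API). The bone and shape key names are placeholders, not the actual Sintel rig:

```python
import bpy

# Drive a shape key from the distance between a joystick knob and its frame bone.
rig = bpy.data.objects["Rig"]                    # placeholder names
face = bpy.data.objects["Face"]

fcu = face.data.shape_keys.key_blocks["MouthWide"].driver_add("value")
drv = fcu.driver
drv.type = 'SCRIPTED'
drv.expression = "dist / 0.2"                    # full effect when the knob is 0.2 units away

var = drv.variables.new()
var.name = "dist"
var.type = 'LOC_DIFF'                            # shows up as 'Distance' in the driver UI
var.targets[0].id = rig
var.targets[0].bone_target = "mouth_stick"       # the joystick knob
var.targets[1].id = rig
var.targets[1].bone_target = "mouth_frame"       # the point it measures from

# Duplicating this for the other controls is just a matter of changing the key name
# and the two bone targets, as described above.
```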

Hey, I know it’s been like a year since anyone has posted on here, but I have a face rig I’d like to contribute. I’ve been looking at this forum for the past few months and it’s been a great help to me. There are a lot of great examples here; I love the Jim Carrey rig.

This is Cartoon Guy, he’s on blendswap here:
http://www.blendswap.com/3D-models/characters/cartoon-guy-v1-0/

I used bone-controlled lattices to move the mouth and then used Action constraints to more easily control those bones. Another cool thing: I used driven Copy Transforms constraints for the tongue so it sticks to the roof of the mouth while lip syncing. It’s kind of similar to the technique Nathan Vegdahl used for the jaw, which I also used. Let me know what you think.
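
For anyone curious how a driven Copy Transforms setup like that might be scripted, here is a rough sketch; every name in it is invented rather than taken from the actual Cartoon Guy rig:

```python
import bpy

# The tongue bone copies a helper bone placed near the roof of the mouth, and a
# custom property on a control bone drives how strongly it sticks.
rig = bpy.data.objects["CartoonGuyRig"]            # placeholder names throughout
tongue = rig.pose.bones["tongue_tip"]

con = tongue.constraints.new('COPY_TRANSFORMS')
con.target = rig
con.subtarget = "mouth_roof_helper"

fcu = con.driver_add("influence")
drv = fcu.driver
drv.type = 'AVERAGE'                               # pass the property value straight through

var = drv.variables.new()
var.name = "stick"
var.type = 'SINGLE_PROP'
var.targets[0].id = rig
var.targets[0].data_path = 'pose.bones["head_ctrl"]["tongue_stick"]'
```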