lipsync with shape keys or armature?

I have studied various rigs such as Ludwig, Gus, and others. They use IPO driven shape keys to control facial expressions and mouth phonemes. Other wiki literature suggests using shape key sliders to animate facial expressions and mouth phonemes. But I want to do both: I want armature bones with IPO driven shapes to control facial expressions, and I want shape sliders to control mouth phonemes so I can use programs like Papagayo and the BlenderLipSynchro script to do lip syncing. But as soon as I create an action with phoneme shape keys, the IPO driven facial expressions stop working. I think it’s because the shape key IPO curve is no longer active once the shape key action is created. Can someone please help me with this one?

If you are going to use drivers to drive various shape keys, you will need to animate the drivers, NOT the shape keys.

You will need to animate the drivers, NOT the shape keys.

Right, you mean I need to animate the bones on the face that drive the shape keys - like the smile.right bone or the eyeClose.left bone. I understand this. But I don’t have bone drivers for phonemes such as “O” or “MBP” or “AI”; I have to animate the sliders. But when I do that, it creates an action and an IPO curve, which messes up the bone driven shape keys. Is there a way to have both types of actions on a character? Can shape key actions coexist with bone driven shape key actions?

I know there was some sort of problem where it wasn’t possible to mix driven and non-driven animations (I can’t remember the details, but I think you’ve probably discovered it). I also thought I’d read that it had been fixed in the latest CVS? May be worth a look.

Alternatively, it might be possible to duplicate the visemes (phoneme shapes) and have one driven and one manual??? I can’t guarantee that’ll work since I don’t know the cause of the problem.

Personally, I just use bones for structural movements like jaw and tongue and manual shapes for mouth and subtle muscular changes. I haven’t tried driven shapes for mouth and am unsure where the benefit is (for normal/simple mouths) since it still requires the shape keys to be made. I can see where bones would be a benefit for unusually complex mouths such as those in Andrew Silke’s Cane Toads. I guess bones would also be useful for non-speech expressions since they are interactive rather than pre-set.

Ericsh6: I have been playing with the idea of having a separate armature for phonemes where shape drivers are related to each phoneme key. The only problem is I don’t think it would work with BlenderLipSynchro. But on the other hand it would make it easier to lip sync without having to eyeball each phoneme for each word.

I haven’t tried driven shapes for mouth and am unsure where the benefit is (for normal/simple mouths) since it still requires the shape keys to be made.

For me it is a more visual experience. I move the bone up and down and watch, for example, the eyelid go up and down. But you do have a good point.

I also thought I’d read that it had been fixed in the latest CVS?

The latest CVS is at zoo-blender (http://www.zoo-logique.org/3D.Blender/index.php3?zoo=com), right? I will check whether it has been fixed and report back.

I haven’t tried driven shapes for mouth and am unsure where the benefit is

There are three benefits to using bone driven RVKs for facial expressions:

  1. The interface is directly on the face, not on another window with sliders,
  2. When the bone is moved, the eyelid (or whatever it drives) moves with the bone - with a slider you don’t see the eyelid move until you stop moving the slider (see the sketch after this list), and
  3. You only need one armature action for all body and facial expression movement.
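
The mapping behind point 2 is really just a remap of one bone channel into the 0.0-1.0 range of a shape key. Here is a rough, Blender-independent Python sketch of that idea; the bone name, travel distance, and values are all made up for illustration:

```python
def eyelid_shape_value(bone_z, travel=0.2):
    """Map a face bone's local Z location onto a shape key influence.

    bone_z: current local Z of a hypothetical 'eyeClose.left' control bone.
    travel: how far the animator can slide that bone (illustrative value).
    Returns an influence clamped to the usual 0.0-1.0 shape key range.
    """
    value = bone_z / travel            # linear remap: full travel = full shape
    return max(0.0, min(1.0, value))   # clamp so overshooting the bone is safe

# Dragging the bone halfway closes the eyelid halfway:
print(eyelid_shape_value(0.1))  # -> 0.5
```

In Blender the same remap lives in the shape key’s driver IPO curve, which is why the feedback is immediate as you drag the bone.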

I have been playing with the idea of having a separate armature for phonemes where shape drivers are related to each phoneme key. The only problem is I don’t think it would work with BlenderLipSynchro

The lipSynchro script would have to be modified to interact with armature controlled phonemes. Having a second armature for the face means a separate action for lip sync again. It is nice to have a special “face” armature, though, that can be reused on other models.

Hmmm, I find the bones distracting for mouth shapes so I adjust them in one window and look at the result in a second window (when I’ve tried them on Emo, for example). The effect would differ per character and I guess it depends how the bones are displayed. I also tend to do lip-sync in camera view (how it appears is more important than how physically correct the shape really is) and selecting and manipulating bones in camera view can, at times, prove a little frustrating (I’m forever spinning the view to grab and move the right eyebrow or eyelid bone).

I use bones for eyelids, primarily because the motion of the lid is circular while shape keys are linear. With the mouth, I usually use combined keys to deliver a result so immediate feedback isn’t important (for me… so far). Maybe I need to look closer at driven shapes for lip-syncing but so far, to me, it seems like extra work to make simple shapes that are otherwise easily stored as shape keys.

The one-action principle has merit and is a good reason to keep fingers etc. on the one rig, but I tend to leave lip-syncing till the main animation is finished, so I treat it as a separate process anyway. Of course, if you want to shift the animation later, use it in the NLA, or just change a bunch of keys, having all keys together in one action has great benefits.

I’m not knocking driven shapes by any means, just interested in people’s reasoning. I’m more likely to knock scripted lip-sync solutions since I’m having trouble seeing how they can deliver satisfactory results that don’t require a lot of tweaking (but again, I haven’t used them so I could be completely misguided).

The lipSynchro script would have to be modified to interact with armature controlled phonemes. Having a second armature for the face means a separate action for lip sync again. It is nice to have a special “face” armature, though, that can be reused on other models.

As stated above, there are very good reasons for keeping your armature in one piece where practical. The new Bone Layers feature means you can effectively divide that one armature into various sections to keep the animation process clean and simple: major structural bones on layer 1, finger bones on layer 2, facial bones on layer 3, and so on. Since bone layers work like scene layers, you can view or hide as many as you want at one time. It’s certainly extended my sanity a little :)
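
Since bone layers behave like scene layers (a bone sits on one or more layers and you toggle which layers are visible), the logic is essentially a bitmask. A tiny standalone Python sketch of that idea, with made-up bone names:

```python
# Layers as bit flags, the same way scene layers work.
BODY, FINGERS, FACE = 1 << 0, 1 << 1, 1 << 2

bones = {
    "spine":      BODY,
    "hand.L":     BODY | FINGERS,   # a bone can live on more than one layer
    "finger1.L":  FINGERS,
    "jaw":        FACE,
    "eyeClose.L": FACE,
}

def visible_bones(active_layers):
    """Return the bones shown when the given layers are toggled on."""
    return [name for name, layers in bones.items() if layers & active_layers]

# Animating the face only: hide the clutter, keep one armature and one action.
print(visible_bones(FACE))            # ['jaw', 'eyeClose.L']
print(visible_bones(BODY | FINGERS))  # everything except the face bones
```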

I tried the October 18, 2006 CVS build for Windows XP, but the problem described above is still there.

Hmmm, I find the bones distracting for mouth shapes so I adjust them in one window and look at the result in a second window

I’m sure this is universally true - I have been so preoccupied with getting the mechanics of Blender correct that I haven’t actually made a serious lip sync yet, so I will probably end up wanting the face controls, and especially the mouth phoneme controls, in a separate window, or just use the RVK sliders.

The one-action principle has merit and is a good reason to keep fingers etc on the one rig but I tend to leave lip-syncing till the main animation is finished so I treat it as a separate process anyway.

I think it is time for me to start learning the Blender source code to see how things really work under the hood. I have an idea, pretty much borrowed from object-oriented programming, that I call “container interfaces”. An armature is like a container for all the body parts. Currently in Blender, the armature has its action interface, the body has its action interface (including shape keys), the clothes have a separate action interface, and a face armature that is parented to the main armature also has its separate action interface. Because of this, there end up being 4 or 5 actions for what I think should really be 1 single action for, let’s say, a jumping or falling action. The “container interface” would allow the action interfaces of each of the objects parented to the armature to be accessed through the armature’s action interface. This would keep the child objects autonomous and modular, but it would make it much easier for us humans to interface with them, and it would make the NLA window much cleaner.
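
Just to make the “container interface” idea concrete, here is a very rough Python sketch of what I mean. Nothing like this exists in Blender - all the class names, channels, and values are hypothetical - it only shows how one action could be keyed through a single entry point while the children keep their own interfaces:

```python
class ActionInterface:
    """One object's own animation channels (bones, shape keys, cloth, ...)."""
    def __init__(self, owner):
        self.owner = owner
        self.keys = []               # (frame, channel, value) triples

    def insert_key(self, frame, channel, value):
        self.keys.append((frame, channel, value))


class ArmatureContainer:
    """The armature as a container: children keep their own interfaces,
    but the animator keys everything through the container."""
    def __init__(self, name):
        self.name = name
        self.interface = ActionInterface(name)
        self.children = {}           # e.g. body mesh, clothes, face armature

    def add_child(self, name):
        self.children[name] = ActionInterface(name)

    def insert_key(self, frame, target, channel, value):
        # One call, one logical action - routed to whichever child owns it.
        iface = self.interface if target == self.name else self.children[target]
        iface.insert_key(frame, channel, value)


# One "jump" action keyed on the armature, the body mesh and the cloth at once:
rig = ArmatureContainer("Armature")
rig.add_child("Body")
rig.add_child("Cloth")
rig.insert_key(1,  "Armature", "legBone.rot",  0.0)
rig.insert_key(1,  "Body",     "shape.squash", 0.3)
rig.insert_key(10, "Cloth",    "softbody.goal", 0.8)
```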

I just wanted to reference this post:
http://blenderartists.org/forum/showthread.php?t=74015&highlight=include+action+ipo

It might be the solution, but I have to test it out.

What a shape driver gives you, that a slider can’t, is the ability to blend multiple shapes using a single control. I find that I work much faster using driven shapes than keying the shapes directly. So for me it is worth the extra effort to set up nice facial controls, because it makes the hard part (the actual animation) a little bit easier.
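
To illustrate what I mean by blending several shapes from one control, here is a rough standalone sketch - not Blender API, just the idea, and the shape names and curves are made up - showing one jaw-open value feeding three shape keys with different response curves, so a single bone produces a whole mouth pose:

```python
def mouth_pose(jaw_open):
    """One control value (0.0-1.0) blended into several shape key weights.

    In Blender the same mapping would live in the driver IPO curve of each
    shape key; here it is written out as plain functions of the control.
    """
    jaw_open = max(0.0, min(1.0, jaw_open))
    return {
        "mouth.open":    jaw_open,                         # follows the control directly
        "lips.stretch":  jaw_open * 0.4,                   # only partially follows
        "cheeks.hollow": max(0.0, jaw_open - 0.6) / 0.4,   # only kicks in late
    }

print(mouth_pose(0.8))
# {'mouth.open': 0.8, 'lips.stretch': 0.32, 'cheeks.hollow': 0.5}
```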

Here is the result of my lip sync, hair, soft body, repeatable actions, shape keys, IPO drivers, etc. experiments in Blender:

http://markopuff.com/animations/sober_draft.wmv

I used Papagayo (the mod version) to make the lipsync.dat file and brought it into Blender with BlenderLipSynchro (with a few Python code modifications of my own). I had to manually fix some of the phonemes, but I think I can make another change to the BlenderLipSynchro script to eliminate this manual step. I did figure out how to get IPO driven shape keys to coexist with shape key actions. It turned out not to be too difficult, though I think the user interface could be improved to make it more natural - but hey, at least the function is all there.

I know the hair looks crazy, but I think it is awesome that Blender can do dynamic hair with contact. The only problem is that each hair guide (with soft bodies activated) has to be baked separately - I wish there were a way to globally bake all soft bodies, or a group of soft bodies. I think the “container interface” idea I mentioned above would work here. Whatever container the hair guides are put in - maybe an empty or maybe a group - would sense the interfaces that all its child objects have in common. In the case of hair, they would all have a soft body interface in common, so when you select the container and go to the soft body buttons window, you could bake all of the hairs at once. For now, I have to unbake each hair guide, fix the hair dynamics and contact, and rebake each hair guide. Fortunately there are only three hair guides in my model, but the hair movement and hair contact with the scalp and shoulders would look and work way better if there were 10 hair guides. And that is when you would want to do a group bake.
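
For anyone curious what BlenderLipSynchro is actually reading: if I remember the .dat layout right, it is a plain-text Moho switch file, a “MohoSwitch1” header followed by one “frame phoneme” pair per line. A minimal parser sketch, independent of the script itself (the set_shape_key helper at the end is hypothetical, just to show where the pairs would go):

```python
def read_papagayo_dat(path):
    """Read a Papagayo/Moho switch file into (frame, phoneme) pairs.

    Assumes the usual layout: a 'MohoSwitch1' header line, then one
    'frame phoneme' pair per line (e.g. '12 MBP').
    """
    pairs = []
    with open(path) as f:
        header = f.readline().strip()
        if not header.startswith("MohoSwitch"):
            raise ValueError("not a Moho switch file: %s" % header)
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                pairs.append((int(parts[0]), parts[1]))
    return pairs

# Each pair becomes a key on the matching phoneme shape (or driver bone):
# for frame, phoneme in read_papagayo_dat("lipsync.dat"):
#     set_shape_key(phoneme, frame, value=1.0)   # hypothetical helper
```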

I discovered a big time saver with hair. I am fixing the hair of the gal in the video in my post above by adding more hair guides, but there are now too many to bake one at a time. I just learned you can select all the hair guides and press Ctrl+B to bake them all at once. Very, very, very cool. This changes everything.