grrr, really struggling with this lip sync exercise… I’ve read Stop Staring and I’ve read Keith Lango’s notes on lip sync, but it still hasn’t clicked yet. Any comments/crits to help me improve are very welcome. I’d especially like to know if you have any good methodologies, and what kind of rigs you prefer - shapes or bones. I’m relying heavily on bones at the moment for everything, as Push and Relax have become my new favourite Blender tools, and Alt-O in the graph editor doesn’t seem to smooth the way I’d like. Also, do any of you lip-syncers find being able to lip read helps at all? I have quite bad hearing, and I’ve started trying to learn just by watching videos of people speaking with the sound almost completely off, but yeah… that hasn’t clicked yet either!
This second piece is a one person acting exercise… a guy waiting for the bus. My animation usually ends up being too slidey-smooth so I tried to make this a lot snappier… how does it come across?
I spent ages going back over the arcs, but I don’t know if it really helps or just gives animation that’s too heavily stylised? Crits please!
Also, thank you feelgoodcomics for the Nathan rig.
I can say that I’ve been down a similar path before while trying to learn to animate, and I feel that only now are some things starting to make a bit of sense, though there are things I still don’t have a good handle on yet.
In terms of methodologies, there are several things that I’ve found that help a lot:
- Keep it simple - Recently I’ve been finding that I was trying to squeeze too many poses into too short a time, which ultimately backfires (removing many of those poses, or spreading them out over several times the original space, usually improves the situation). So, cut back on the number of poses you set initially while getting the timing sorted. Besides, real beings need time to actually move (a guideline I’ve read is about 5 frames for a pose/action to “read”). TBH, I’m still quite hit-and-miss on this… it seems to take a lot of experimentation to see whether it feels right.
- I currently follow the “Jaw-Width-Shape” method of lipsyncing. Basically, I start by clumping the words into spoken “chunks” (using markers; I’d recommend animating in the action editor, using “local action markers” for this). For each chunk, I then mouth the line of dialogue in time with the recording, with my fist under my chin. During this process, I pay attention to the following things:
- when (for which chunks) does the jaw move, and where does it open the most?
- where are the corners of the mouth? Are they stretching out, or snapping in, or just staying put?
- finally, for “shape”: do the lips pucker out for an “o” or “u” sound? Is there attitude to the speech which would suggest that perhaps the whole mouth should be skewed off to one side? (Remember that this includes the jaw rotation too if you go down this route.)
- After observing some reference video, I’ve come to the conclusion that, in general, mouth shapes are held for at least 2-3 frames (for fast speech). If things start flapping/flickering (unless you’re going down the Aardman route), repeat #2 and try lumping together more syllables (i.e. pay closer attention to when the jaw is moving). You’ll probably find that most syllables are actually more “internal” than external - i.e. tongue-related - which will simplify a lot of things if you don’t open the mouth that wide or don’t have an extreme close-up.
- Consider what the full body + whole face are doing on key words, and block those out first before touching the mouth. Probably a bit harder to do with a newsreader character, but I’m sure there’s still stuff you can do there…
- Supposedly snappy motion = heaps of “holds” with quick transitions between poses, and with extremes either side of the transition for higher energy. Well, at least that’s what Keith Lango says IIRC, though I’ve yet to make this fully work (refer to #1 again). But at least the idea that things aren’t in constant “fluffy” motion is useful to keep in mind.
- Regarding “holds”, just copying a keyframe is usually a bad idea - people then say you need “moving holds”, otherwise it looks stiff/dead. But the thing with moving holds is that they should be subtle movements, not really new poses. One tip I’ve seen and have been using is to first copy the keyframe to form a hold, then step one or two frames forward (assuming non-constant interpolation), insert a keyframe there, and move this new keyframe to the tail end of your hold. This “seems” to work, I guess.
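For point #1, a quick sanity check on pose spacing could look like this. Plain Python, nothing Blender-specific; the helper name is invented and the 5-frame threshold is just the “read” guideline mentioned above:

```python
# Hypothetical helper: given the frames where key poses land, flag any
# pose that gets fewer than `min_frames` before the next one arrives,
# since it probably won't have time to "read".
def unreadable_poses(pose_frames, min_frames=5):
    frames = sorted(pose_frames)
    flagged = []
    for cur, nxt in zip(frames, frames[1:]):
        if nxt - cur < min_frames:
            flagged.append(cur)
    return flagged

print(unreadable_poses([1, 4, 12, 14, 25]))  # → [1, 12]
```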
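The 2-3 frame minimum hold (and the “lump fast syllables together” advice) can be sketched the same way - again plain Python, with shape names and the threshold made up purely for illustration:

```python
def enforce_min_hold(keys, min_hold=2):
    """keys: sorted list of (frame, mouth_shape). Drop any key that
    lands before the previous kept key has held for `min_hold` frames,
    lumping fast syllables into the previous shape instead of flapping."""
    kept = []
    for frame, shape in keys:
        if kept and frame - kept[-1][0] < min_hold:
            continue  # too soon: fold this syllable into the previous shape
        if kept and kept[-1][1] == shape:
            continue  # same shape again: just extend the existing hold
        kept.append((frame, shape))
    return kept

# keeps (1, "AI"), (4, "O"), (8, "MBP")
print(enforce_min_hold([(1, "AI"), (2, "E"), (4, "O"), (5, "O"), (8, "MBP")]))
```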
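The “holds with quick transitions” shape from the snappy-motion point can also be written down as an ease curve. This is just one possible formula I’m using to illustrate the idea, not anything from Keith Lango: the value lingers near the extremes (the holds) and does most of its travel in the middle frames (the quick transition):

```python
def snappy(t, sharpness=3.0):
    """Ease curve over t in [0, 1] that lingers near 0 and 1 and does
    most of its travel in the middle. Higher sharpness = snappier."""
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    a = t ** sharpness
    return a / (a + (1.0 - t) ** sharpness)

# Sample a 10-frame transition: values cluster near the extremes,
# with the big change happening around the middle frames.
print([round(snappy(f / 10), 2) for f in range(11)])
```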
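And a rough sketch of the moving-hold tip. The channel names and numbers are invented, and I’m approximating the “subtle movement, not a new pose” rule as drifting a few percent toward the next pose, rather than the copy-and-nudge-keyframes workflow described above:

```python
def moving_hold(pose, hold_start, hold_end, next_pose, drift=0.05):
    """Return two keyframes forming a moving hold: the original pose at
    the start of the hold, and a copy at the end nudged a few percent
    toward the next pose, so the hold stays alive without reading as a
    new pose. pose/next_pose: dicts of channel name -> value."""
    drifted = {ch: v + drift * (next_pose[ch] - v) for ch, v in pose.items()}
    return [(hold_start, pose), (hold_end, drifted)]

keys = moving_hold({"jaw": 0.0, "head_tilt": 10.0},
                   hold_start=20, hold_end=32,
                   next_pose={"jaw": 1.0, "head_tilt": 0.0})
print(keys)
```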
Now, for the crits of the animation that you asked for:
- That last pose seems out of place, like it’s “unmotivated” by anything, as some might say. Perhaps it’s just the timing of it (too quick)?
- The staging just feels off. It seems like you’re trying to do a “lying down” shot, but it doesn’t look that clear.
- When the head moves around 0:02-0:03 and around 0:05, it looks like the neck should be involved as well
- Try to do something with those arms too. The shoulder raising or whatever near the end looks a bit weird
- Overall impression is that it all looks a bit “flat”, though perhaps the clip sounds a bit like that too…
- That first arm move around 0:02 looks a bit weird - too floaty perhaps?
- When I first viewed the clip, the standing around seemed too short (blink and you’d miss it), but checking it again now, it seems kind of OK.
- Perhaps you could build more tension by dragging out the “foot-tapping” part and the “standing and looking” parts?
- Hmm… now I want to go and try this out too
Thanks so much for all those tips ali!
Will definitely try the fist on the jaw approach and probably a thumb and a finger on either side of my mouth as well to try to find the width!
Managed to leave both of these blend files at college, so I’ll have to dwell on it for a bit. I think doing a lying-down lip sync shot was just a terrible idea, though the ‘famous last words’ I was given to lip sync sort of warranted it. I guess I should probably have put the camera next to the bedside, seeing the lip sync in side profile, but given I can’t manage lip sync straight on yet, I thought that might be too risky! Neither FK nor IK arms seemed to work at all lying down, and with the FK spine it kept feeling like he wasn’t really resting on the bed at all. I might redo this as a bent-over, looking-into-the-mirror, supporting-the-body-on-the-arms type shot, as there are probably limited times when I’ll have to animate people lying down!
With the second clip, I’ll definitely speed up the first arm motion and extend the intro so you get the idea he’s impatient. Maybe I should add a part where he’s standing but less upright, so his happy, expectant, bent-back standing pose quickly turns into a depressed stand which gets held for longer before he drops back down.
You may find
useful for your rig-wrangling issues
Specifically in Blender, a good hint would be to either scribe a few quick arcs using Grease Pencil or just by repositioning the 3D-cursor to keep track of some key points when you need to. Certainly, I’ve done the latter on a few occasions when dealing with foot placement issues (I put the 3d cursor at the tip of the toes, and then matched the foot up to that in a later frame, resulting in no foot slippage at all!).
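To put numbers on that, here’s a tiny sketch - plain Python rather than anything Blender-specific, and it assumes you’ve jotted down the toe-tip world position on each frame of a ground contact (e.g. by snapping the 3D cursor there and reading it off):

```python
import math

def max_foot_slip(toe_positions):
    """toe_positions: list of (x, y, z) world positions of the toe tip,
    sampled on each frame of a ground contact. Returns the largest
    distance any sample has drifted from the first (planted) sample -
    zero means no foot slippage at all."""
    px, py, pz = toe_positions[0]
    worst = 0.0
    for x, y, z in toe_positions[1:]:
        d = math.sqrt((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2)
        worst = max(worst, d)
    return worst

print(max_foot_slip([(0, 0, 0), (0.1, 0, 0), (0, 0, 0)]))  # → 0.1
```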
hey ali, yep I’ve read ‘put it there’ before! I hate it when that happens though - it feels like you’ve got all this technology around you and you just have to work against it in a heavy-handed way. Having a go at a new piece of lip sync now with your methods… using the ManCandy rig though, and slightly overwhelmed by the number of different ways/bones to move the lips around! I’m always tempted to do too much of the open-and-close movement on the lips and not enough on the jaw. It seems that the jaw generally makes much slower movements than the lips, hitting the big open and close per word, while the lips do much more at the syllable level.
Also, thanks for your Alt-O graph editor fix! I was just about to script it in Python, as I thought it was going to go onto the TODO! It makes smoothing over some of those jittery lip-flapping movements a bit easier.
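If anyone’s curious, the naive version I was about to script looks roughly like this - just a neighbour-averaging guess at the idea, and I don’t claim this is what the actual Smooth (Alt-O) operator does internally:

```python
def smooth_keys(values, passes=1):
    """Simple neighbour-average smoothing of a run of keyframe values:
    each interior key moves halfway toward the average of its two
    neighbours, which knocks down single-frame jitter. Endpoints are
    left alone; repeat `passes` times for stronger smoothing."""
    vals = list(values)
    for _ in range(passes):
        out = vals[:]
        for i in range(1, len(vals) - 1):
            neighbour_avg = (vals[i - 1] + vals[i + 1]) / 2.0
            out[i] = (vals[i] + neighbour_avg) / 2.0
        vals = out
    return vals

print(smooth_keys([0.0, 1.0, 0.0]))  # → [0.0, 0.5, 0.0]
```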
On another topic, I have a question about the best way to get bone matrix/loc-rot-scale evaluations on frames other than the current one, and to evaluate constraints from Python - what’s the best way to do it, with regards to this ( http://www.youtube.com/watch?v=jTgAcXe1f1A )? Not sure if you or Campbell would be the best one to ask, but it would be great to clear up some of the hackiness of my 3D f-curves script, and to find out what is and isn’t planned for future development.
I’m wondering because it takes a lot of work to lipsync using tools like Papagayo and other apps, etc.
Would it be possible to record audio in Blender with a microphone, and have a Blender internal addon make the mouth rig move in time as expected - you know, auto lip-syncing like in some machinima tools (iClone, Moviestorm) and many games (e.g. the Dragon Age tools)? I think if Blender is capable of this, it will take Blender to another level.
Blender is capable - actually, there is a proposal to add a Papagayo-like space in Blender, and it was somewhat half done, but I don’t know what happened to the code contributor - did he release the code, or just abandon it?
Really enlightening - both the thread and the links to the 11 Second Club.