Mythbusters spoof WIP

The Discovery Channel has eliminated the deadline for submitting spoof videos, so I’m reworking the Mythbusters spoof I did in October.
After playing around with driven shapes I’ve opted for using the Action Editor so I can also use Papagayo and BlenderLipSynchro.

BlenderLipSynchro always messes up the phoneme (viseme?) assignments. I had to assign the mesh shapes to the Papagayo phonemes (e.g. ai=ai, e=e, etc.), scrub through to figure out how BlenderLipSynchro rearranged them, then undo and run BLS again with the adjusted assignments (e.g. ai=w, u=o, etc=e).
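For anyone trying the same thing, the crib sheet boils down to a tiny lookup table. A plain-Python sketch (the pairings below are just the ones that worked for my shapes; yours will differ):

```python
# Crib sheet: which mesh shape to assign to each Papagayo phoneme so
# BLS ends up driving the right one. Pairings are illustrative; find
# yours by running BLS once, scrubbing the result, and noting where
# each shape actually landed.
CRIB_SHEET = {
    "ai": "w",   # the "ai=w" case from my second pass
    "u": "o",    # "u=o"
    "etc": "e",  # "etc=e"
}

def shape_for(phoneme):
    """Return the shape to assign for a phoneme, defaulting to the
    phoneme's own name (the straightforward ai=ai case)."""
    return CRIB_SHEET.get(phoneme, phoneme)
```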

The result is not as smooth as I hoped, but it’s pretty good for a semi-automatic process. The sharp transitions are most likely caused by my phoneme shapes. I used the Blair set found on this page as a reference:

C&C welcome.


Er… It’s… Way too jittery, it looks like someone stuck a pair of chattering teeth in his mouth. You’d probably be better off hand-lipsyncing and using less rigid shape keys.

It is too jittery, I agree. Hopefully there is a way to calm down the results of the BlenderLipSynchro script.

After looking through the script (which is mostly in French, with some English comments) I think I found a possible solution. The script creates a point on an IPO curve for each phoneme at full strength (1), then adds a 0-strength point 2 frames before and 2 frames after. I’m going to try lowering the strength of the phoneme point and pushing the 0 points out to 6 frames before and after. I’ll post the result after it’s rendered.
Has anyone gotten BLS to work well? With a few tweaks it could be a real time saver.

for doublet in liste_frame:
    if (doublet_old == ""):
        pass  # first phoneme in the list: nothing to compare against yet (body trimmed)
    if (doublet_old != ''):
        # only report when the mapped viseme actually changes
        if (dico_correspondance[dico_phoneme_export[str(doublet)]] != dico_correspondance[dico_phoneme_export[doublet_old]]):
            print "doublet:" + str(doublet)
            print "doublet old:" + doublet_old
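
In plain Python (no Blender API, just the keyframe math), the change I’m testing amounts to this; the 2-frame offset at full strength is what the script does by default, and the wider, weaker curve is my experiment:

```python
def viseme_keys(frame, strength=1.0, falloff=2):
    """IPO points for one phoneme: a zero 'falloff' frames before,
    the peak at the phoneme frame, and a zero 'falloff' frames after."""
    return [(frame - falloff, 0.0), (frame, strength), (frame + falloff, 0.0)]

# BLS default: hard 2-frame attack/decay at full strength
print(viseme_keys(40))          # [(38, 0.0), (40, 1.0), (42, 0.0)]
# My tweak: lower peak, gentler 6-frame ramp
print(viseme_keys(40, 0.8, 6))  # [(34, 0.0), (40, 0.8), (46, 0.0)]
```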

<edit> I think my visemes are too exaggerated. After tweaking the settings for BLS a few times I’m still not getting the results I expected. The shapes will have to be reworked.

Adjusting the frame displacement and strength didn’t work. I’ve scaled back the mouth shapes, remade the rest position and erased overlapping shape key frames produced by Papagayo. The result is still a little jerky, but overall I’m happy with it. On to Adam.

I forgot to mention, by the way, that I do actually like the model, the animation just caught me off guard.

BlackBoe: Thanks. No offence taken on the animation; the chattering teeth simile was a good one! Papagayo tries to put in every letter of an entered sentence and that’s too much. Next time I may try just entering first letters and o’s.

I’m going to rewatch parts of “Barnyard” and “Ice Age” for inspiration; the mouth movements match the audio well without being too busy.

The lips don’t appear to close for the M and P sounds.

Yes, it’s a very nice character. How did you do the beard?

There’s a tradition, when animating characters with moustaches, of lip-syncing the moustache rather than the mouth. Think back to all those cartoons where you couldn’t see the guy’s mouth, but his moustache wiggled as he talked. However, having got so far with the mouth, it would seem a shame to stop animating that.

The B & P sounds require the mouth to shut, but you hear the sound as the mouth opens.

Your tests confirm a concern I’ve had for some time about automated lip-sync. The trick with lip-sync is to make the obvious sounds visible and to almost ignore the less obvious ones, but the scripts don’t work that way by default.

Your second test is slightly better but mainly because of the softening of the result. The mouth is still moving a lot though.

Just try speaking the line yourself, with Jamie’s accent if you can, and see how little your lips really move. Sometimes a word needs only one viseme in one piece of speech but the same word elsewhere in the speech might need two or more visemes - it all depends on what other words surround it. We don’t actually make the shape of every letter or every word - we just make a bunch of sounds that are perceived as words.

Keith Lango makes this point very well in his notes on lip-sync (yes, that’s a link to his site) when he observes that the visemes for the following two sentences are virtually identical:
1: “I love you” and 2: “Elephant Shoes”

Try speaking them yourself. Keith suggests using a mirror but I find speaking them without making any sound proves the point quite easily. Lip-sync scripts would probably animate these sentences quite differently.

I like the head and eye movements you’ve used, and I generally argue that these are far more important in selling the speech than the visemes are, so if the other moves are right, the lip-sync can often be quite subtle.

yogyog: Thanks. I’ll check the m’s and p’s. The python script doesn’t assign the shapes correctly so you have to make a crib sheet, e.g. w shape = l phoneme, and I may have messed up.

I did the beard and moustache separately. The beard is static particle hair emitted by the head mesh and limited to a vertex group. On a separate layer are two elongated cubes on either side of the nose that emit the moustache. The shape of the moustache halves is controlled by curve guides (that’s why the moustache is on a different layer, so the guides won’t affect the beard). The moustache and the guides are parented to the same head bone as the head mesh. See below. For fun I weighted the lower ends of the guides and applied softbodies to make the hair swish around a little.


AndyD: Wow! Comments and a little praise from the master himself. Thanks!

I should take a lesson here; the head and eye movements were done manually and they’re better than the “automatic” lip-sync. Papagayo might be best used to create a text-file map for lip-syncing, e.g. m at frame 32, o at frame 38, etc., then use that as a guide to start manually lip-syncing.
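If I remember right, Papagayo’s Moho-style export is just a header line followed by one “frame phoneme” pair per line; assuming that format, turning it into a map is only a few lines of Python (the sample frames below are made up):

```python
def parse_papagayo_dat(text):
    """Parse Papagayo's Moho switch export into (frame, phoneme) pairs,
    skipping the header and anything that isn't a 'frame phoneme' line."""
    pairs = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0].isdigit():
            pairs.append((int(parts[0]), parts[1]))
    return pairs

sample = """MohoSwitch1
32 MBP
38 O
45 rest"""
print(parse_papagayo_dat(sample))  # [(32, 'MBP'), (38, 'O'), (45, 'rest')]
```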

I’ve also learned not to alter the resting mouth shape too much; small changes in shape can convey the letters/sounds.

Hah, in college I used to sign letters to my wife (then girlfriend), “Elephant shoes.”

Below is a link to the manual lip-sync version: no Papagayo and no Python, just scrubbing through and setting the sliders in the Action Editor. Granted, I was familiar with the audio clip, but it took less time for me to set the shapes manually than it did to make them using Papagayo and messing around getting BlenderLipSynchro to work. <sigh>

You need to do one of Adam saying “I reject your reality and substitute my own!”:smiley:

[Edit] Just an idea for the movie: First have Adam and Jamie saying “Remember: Don’t try what you’re about to see at home. We’re what you call ‘experts’.” Then cut to them making some stupid mistake, like one episode I saw where they fed hydrogen gas straight into a car engine’s carburetor. It worked fine for a bit, but then there was a sizeable bang. Then they decided it wasn’t such a good idea.:smiley:

Or just Jamie movin’ his moustache :smiley:

it’s a good likeness.

LORJ: I did the disclaimer in the original and will definitely include it in this version, too. I’m thinking of cutting something really outrageous; it is animation. Maybe I can use Blenderist’s idea and have Jamie trying to fly around by flapping his moustache <grin>!
I was already working on Adam. That quote is one of my favorites, but every example I can find has theme music playing over his voice. I chose one from the Bloopers show, “Ping pong balls plus otter…”

Booga: Thank ya’, thank ya’ verra much.


Ok, what about, “Am I missing an eyebrow?!” Also for the disclaimer thing, I believe Adam once got his lip sucked into a vacuum cleaner motor. Homemade hovercraft episode I think.

[Edit] Just thought of another gold quote for Jamie: “Quack, damn you!”

LOTRJ: I found a pretty clean clip of the famous “reject your reality” quote and did a test with it. I also tested his hat and put some lenses in his glasses.