Best low-cost lipsync option for Blender?

I searched the forums and decided to use the BlenderLipSynchro script (included in v2.42) and Papagayo to do lipsync. However, I discovered that BlenderLipSynchro won’t work unless the mesh has only phoneme shapes. In other words, if I also have, for example, a smile shape, an eye-close shape, etc., then BlenderLipSynchro gives the error message:

“Please select the mesh object and create all the IPO curves for your shape”, and the console window says “not the good number of IPO curve”. Does this mean I have to remove the other shapes?

Am I using this correctly or is there another preferred way to create the lipsync?

“Preferred” is up to you. I choose to lip synch directly in Blender.

I load the speech track in the sequence editor and then use it to match the shape key animation.

(I keep a timeline pane in my animation screen layout. It has a speaker button that allows the sound to be heard as I scrub* the animation bar)

*Scrub: Dragging the current frame indicator in the timeline (or IPO or Action) pane backwards and forwards to see the effect of an animation in the 3D window.

AndyD is the reigning expert on lip-sync using only Blender. Check out his excellent Lip Sync tutorial in the wiki.

I read AndyD’s tutorial in the Wiki book - very good. And I have read that the best lip syncing is done manually by scrubbing and using the shape sliders, but I also want some quick and dirty lip sync. The BlenderLipSynchro/Papagayo solution does work nicely if you only have phoneme shapes. I have also seen the Magpie Blender script, but Magpie is not free I guess. Am I using the BlenderLipSynchro/Papagayo script wrong?

BlenderLipSynchro could be useful if you have a lot of dialogue to do. I had the same problem. I believe I got it to work (some weeks ago) after several tries, by creating a new action first. (ADD NEW)
You’ll probably want to tweak the result when you see it, as AndyD explains in his tutorial.
Good luck and let us know if it works for you!
Harry

One day I really should look at these software/scripting options and see what I’m missing. I’ve assumed (without bothering to find out) that you still have to tell Papagayo or Magpie or (damn, I can never remember the name of the Java one… Jlipsync or something like that) where to place the shapes and it produces a timed text file that you then feed into Blender as a script. OR are these softwares more intelligent than that and actually work out where to put the shapes for you while you go and have coffee? Do they just “read” a text file and determine phonemes from that? Are they able to match these shapes with the audio in order to set the timing?

If my first assumption is basically correct, then I’m lost as to where the time saving is??? In the same time you sit and tell Papagayo or whatever where to put the shapes, you could set a slider in Blender and have the shape set there and then - couldn’t you? You still have to make the basic phoneme shapes either way - but in Blender you have complete freedom as to how far to push each shape as you go - after all, one “O” is not always the exact same shape as every other “O”; it all depends what comes before and after. Plus, for some characters you’ll get by with four or five basic shapes that you can combine on-the-fly to create a whole range of useful shapes, whereas with an automated solution such combinations are presumably not possible and you’d be required to provide shapes for each phoneme you need.

Of course, if my assumptions about the third party helpers are way off the mark then feel free to ignore my ignorance :wink: I just find the lip-sync process so straight-forward that I’m finding it hard to believe there’s an easier way :eek: Maybe I just don’t want to believe it.

(“reigning expert”??? Wow! :slight_smile: )

I’ve assumed (without bothering to find out) that you still have to tell Papagayo or Magpie or (damn, I can never remember the name of the Java one… Jlipsync or something like that) where to place the shapes and it produces a timed text file that you then feed into Blender as a script

This is how Papagayo and Jlipsync work - you have to manually move the phrases, words, and then phonemes into place. I have never made a good looking lip sync yet with automated tools - the phoneme transitions are always too sudden so the mouth flaps too much.
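For anyone curious, the .dat file Papagayo exports is just plain text. From memory (so treat the exact format as an assumption), it’s a “MohoSwitch1” header line followed by frame/phoneme pairs, and a few lines of Python are enough to read it back:

```python
# Minimal sketch of reading a Papagayo/Moho switch .dat file.
# Assumed format: a "MohoSwitch1" header line, then one
# "frame phoneme" pair per line.

def read_papagayo_dat(path):
    """Return a list of (frame, phoneme) tuples from a .dat export."""
    tokens = []
    for line in open(path):
        parts = line.split()
        if len(parts) != 2:
            continue  # skips the header and blank lines
        frame, phoneme = parts
        if frame.isdigit():
            tokens.append((int(frame), phoneme))
    return tokens

# e.g. [(1, 'rest'), (10, 'MBP'), (14, 'AI'), ...]
print(read_papagayo_dat('hello_world.dat'))
```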

I got it to work (some weeks ago) after several tries, by creating a new action first. (ADD NEW)

Ok, I will try this out. When I get this to work, I will post an animation here of my best Papagayo/BlenderLipSynchro attempt and maybe we can see how MagPie, JLipSync, and manual methods compare?

Andy & Eric,

I have preferred Magpie with an import script, the benefit being that the script can set the prior phoneme to zero when it reads and sets the next point (or the script can be set to a pre-determined lag, blending overlapping sounds). True, I then review the output and tweak as desired. I’ve never had great luck with scrubbing and getting a ‘normal’ sound.
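In rough Python, the keying logic I mean is something like this (the function and names are made up for illustration, not the actual import script):

```python
# When a new phoneme starts, key it to 1.0; key the previous phoneme
# down to 0.0 a few frames later (the "lag") so the sounds blend.

def tokens_to_keys(tokens, lag=2):
    """tokens: (frame, phoneme) pairs. Returns {phoneme: [(frame, value)]}."""
    keys = {}
    prev = None
    for frame, phoneme in tokens:
        keys.setdefault(phoneme, []).append((frame, 1.0))
        if prev and prev != phoneme:
            keys.setdefault(prev, []).append((frame + lag, 0.0))
        prev = phoneme
    return keys

print(tokens_to_keys([(1, 'rest'), (10, 'MBP'), (14, 'AI')]))
```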

Do you use a lighter model during lip sync? I try to use just the mouth, with most everything else moved to another layer.

Regards,
Mike

I believe I got it to work (some weeks ago) after several tries, by creating a new action first. (ADD NEW)

Harry,
did you have more than just the phoneme shape keys? I can also get it to work if I have only phoneme keys. But if I have other shapes like smiles, then it says to select the mesh and create the phoneme keys. I looked at the BlenderLipSynchro script and it checks whether the number of shapes is equal to the number of shape IPO curves created. So if I create IPO curves on all my shapes, including the smile, then the BlenderLipSynchro interface comes up and reads in the Papagayo .dat file, but it assigns phonemes to the smile shape, which is incorrect. I am thinking that the script may need some alteration. I wonder if it is still being worked on.
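For reference, the check seems to boil down to something like this (I’m going from memory of the 2.4x Python API, so treat the attribute names as assumptions and check the docs):

```python
# Rough reconstruction of the test that fails, not the actual script.
import Blender

ob = Blender.Object.GetSelected()[0]
key = ob.getData(mesh=True).key   # the shape key block on the mesh
num_shapes = len(key.blocks)      # counts every shape, phoneme or not
num_curves = 0
if key.ipo:
    num_curves = len(key.ipo.curves)
if num_shapes != num_curves:
    print("not the good number of IPO curve")  # the error I get
```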

Hello Eric

I did actually work on a complex model with many other shape keys. I was initially concerned when the program worked flawlessly with only the phonemes, but not when I added the others (happy, sad, etc.). The keys would end up in the wrong channels.

I made a backup for testing purposes and I somehow got it to work by starting with a new action and the “Hello world” clip. Only the phonemes got keyed this time. I haven’t needed to do an actual production scene yet, but now I am convinced it is possible to make it work. I’d show you the test scene but it’s not ready for prime time yet. :wink: It wouldn’t be of much benefit anyway.

If it gets too hard to use BlenderLipSynchro, the manual way is probably the best, and as Andy says it doesn’t take too long. I was just determined to have the option to use BlenderLipSynchro and it really does work. The result might still require some tweaks, and I may later choose not to use this tool if I don’t like the results.

Don’t be afraid to experiment!

Harry

I did actually work on a complex model with many other shape keys.

Hmmm, maybe creating the new action creates a container, and keyframing just the phonemes makes the script think it is only seeing those shapes, so #shapes = #IPO curves. Then all the lipsync phoneme keyframes will be stored in the new action so it can be made into an NLA strip. I will try this out.

http://www.lostmarble.com/forum/viewtopic.php?t=5201


I tried to download the papagayoMod_1.2.rar file but can’t get past the web page that says:

Download the file!
You have requested the file: papagayoMod_1.2.rar
Size of file: 4.5MB
File has been downloaded: 6 times
Description: PapagayoMod v1.2

I set up an account with the download service but it still doesn’t download.

Never mind - at the bottom of the page I clicked a download link, and it looked like nothing would happen, but then it said it would take 30 seconds to start downloading the file. I got it.

OK. I have made a few changes to the BlenderLipSynchro script and attached it here. Now the script allows your model to have shape keys for smiles, raised eyebrows, frowns, etc. and still work with lipsync. To use the script, make a backup of your current BlenderLipSynchro.py script and copy this new one into the scripts directory. Select your character and create a keyframe for each of the phoneme shapes at time 0. Then run the new script. I am not a Python or Blender expert, so use at your own risk. Play with it first. I have only tested it on a .dat file generated by Papagayo and only 10 phonemes (Preston Blair set).

http://markopuff.com/animations/blenderLipSynchroEH.zip
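For anyone curious, the core idea of the change is roughly this (a sketch with made-up names, not the actual diff): match the phonemes from the .dat file against the shape key names and skip anything that isn’t a phoneme, instead of assuming every shape key is one.

```python
# The Preston Blair set Papagayo uses by default (assumption: yours match):
PHONEMES = ['AI', 'E', 'etc', 'FV', 'L', 'MBP', 'O', 'rest', 'U', 'WQ']

def assign_tokens(shape_names, tokens):
    """shape_names: all shape key names on the mesh;
    tokens: (frame, phoneme) pairs from the .dat file."""
    usable = [name for name in shape_names if name in PHONEMES]
    for frame, phoneme in tokens:
        if phoneme in usable:
            print('key %s = 1.0 at frame %d' % (phoneme, frame))
        else:
            print('no shape key named %s - skipping' % phoneme)

assign_tokens(['AI', 'O', 'MBP', 'rest', 'Smile', 'EyeClose'],
              [(1, 'rest'), (10, 'MBP'), (14, 'AI')])
```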

Lip synch still seems too manual… we could automate this even more with steps 2 and 3 below…

  1. create the character’s usual 6-8 mouth shapes for the different letters of speech

  2. use existing “voice recognition software” (and they exist with 99% accuracy, which will definitely be good enough for most lipsync) to generate “tokens” (with time info) from the audio track. These can be stored simply as an array of the shapes from point 1 and the times at which they occur

  3. simply convert these tokens into Blender “NLA phoneme strips” automatically (see the sketch after this list)
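For step 3, a minimal sketch, assuming a hypothetical recognizer from step 2 that emits (seconds, phoneme) pairs - converting those to frames at the scene’s frame rate gives exactly the kind of token list the scripts above already know how to key:

```python
FPS = 25  # assumption: match your scene's frame rate

def tokens_to_frames(recognizer_output):
    """recognizer_output: (seconds, phoneme) pairs from step 2."""
    return [(int(round(seconds * FPS)), phoneme)
            for seconds, phoneme in recognizer_output]

# e.g. [(1, 'rest'), (10, 'MBP'), (14, 'AI')]
print(tokens_to_frames([(0.04, 'rest'), (0.40, 'MBP'), (0.56, 'AI')]))
```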

Have you checked out how to make “false” lip sync in DAZ3D using Puppeteer?



Perhaps something to implement in Blender?
:evilgrin:

^
So, Toldi, can you give a more specific explanation of what you said? I want to know it. Better yet, can you give me a tutorial on how you do step 1 and step 2? I’m really interested.