Even though I still feel like a Blender first grader, I recently dove into lip-syncing a MakeHuman figure, imported into Blender, to an audio track. At first I tried manually animating the mouth, and found it very tedious, especially since it doesn't seem possible to line up the audio file with the Dope Sheet keyframes. And scrubbing didn't seem helpful in any way. After fumbling around for a few days, I did some research and found there are quite a few lip-syncing techniques and apps. I tried Papagayo, carefully following the bare-bones instructions I found, but the results were ridiculous, not even close to acceptable.
Does anyone know if there's a complete tutorial somewhere for using Papagayo with a MakeHuman character in Blender? There is a lengthy series on YouTube created by VscorpianC, but it is outdated.
Alternatively, does anyone have any opinions about Papagayo? Is it an effective app? If not, are there other automated solutions that might work better? Or perhaps there's a simpler method altogether, besides using an automated app? I've read about "baking," but I have no feel for it; I'd need to study a tutorial (or three). Any suggestions would be most appreciated, including tutorial links.
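For what it's worth, one thing I did figure out: Papagayo exports its timing as a Moho "switch" .dat file, which (as far as I can tell) is just a "MohoSwitch1" header followed by frame/phoneme pairs. Here's a little standalone Python sketch I put together to read that file into a list, in case the keyframes have to be applied by hand or by script. This is just my own guess at the format from poking at the export, so treat it as a sketch, not a reference parser:

```python
def parse_moho_dat(text):
    """Parse a Papagayo/Moho switch export into (frame, phoneme) pairs.

    Assumed format: a header line starting with "MohoSwitch",
    then one "frame phoneme" pair per line.
    """
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith("MohoSwitch"):
        raise ValueError("not a Moho switch file")
    pairs = []
    for ln in lines[1:]:
        frame, phoneme = ln.split()
        pairs.append((int(frame), phoneme))
    return pairs


# Example with a made-up three-phoneme export:
sample = "MohoSwitch1\n1 rest\n5 MBP\n9 AI\n"
print(parse_moho_dat(sample))  # [(1, 'rest'), (5, 'MBP'), (9, 'AI')]
```

My thought was that each pair could then drive a shape-key keyframe on the MakeHuman mesh at that frame, but I haven't gotten that far yet.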
Thanks for the help!