I’m looking for a quick-and-dirty, fully automated lip-sync solution that I can use for animation in Blender.
What I’d like to do is use a .wav / sound file as input and get something I can work with (e.g. a text file with phoneme/timing information, or animation curve data).
I’ve tried importing a .wav into Blender and baking it to an F-Curve, then mixing that with some randomly animated shape keys, but I was wondering if there isn’t a slightly better solution.
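For what it’s worth, here is a minimal sketch of the “amplitude drives the mouth” idea outside of Blender: it reads a .wav with Python’s standard `wave` module and produces one normalised mouth-open value per animation frame. It assumes 16-bit PCM audio, and the function name `mouth_envelope` and the smoothing factor are my own invention, not from any existing add-on.

```python
import wave, struct, math

def mouth_envelope(path, fps=24, smooth=0.5):
    """Return one mouth-open value in 0..1 per animation frame,
    computed as a normalised RMS amplitude of the .wav file."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        nchan = w.getnchannels()
        width = w.getsampwidth()
        data = w.readframes(w.getnframes())
    assert width == 2, "this sketch assumes 16-bit PCM"
    samples = struct.unpack("<%dh" % (len(data) // 2), data)
    samples = samples[::nchan]  # keep only the first channel
    per_frame = rate // fps     # audio samples per animation frame
    values = []
    for i in range(0, len(samples), per_frame):
        chunk = samples[i:i + per_frame]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        values.append(rms)
    peak = max(values) or 1.0
    values = [v / peak for v in values]
    # exponential smoothing so the mouth doesn't flicker every frame
    for i in range(1, len(values)):
        values[i] = smooth * values[i - 1] + (1 - smooth) * values[i]
    return values
```

Inside Blender, each value could then be keyframed onto a jaw/mouth shape key, something like `key.value = v; key.keyframe_insert("value", frame=f)` in a loop — crude, but about the level of accuracy I’m after.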
My project is a cartoony / basic-style animated series, so not-so-accurate motion can work; it’s quantity over quality here.
I’ve found a piece of software called SmartBody, but it’s maybe a bit overkill, and I haven’t managed to compile it anyway. Maybe I should look into it more.
There’s also FaceFX, but it’s a bit too expensive/overkill for what I’m planning.
Papagayo / JLipSync: I’ve tried them, but I’m looking for something where I don’t have to type in the text and tweak phonemes by hand.
So, do you have any clues about software or a Python library that could be used? Or a tutorial, or some general information about the subject? How would you do it?
I can do some Python scripting, so if I manage to make something that works, I can release it as an add-on or share my results.