Lip sync plans for Blender...

Only words of support from me.

Keep up the good work.

Blender needs this sort of thing!

Koba

Yes the same from me.

I’ve been using Papagayo some and I love that sort of simple approach, even though their system feels like it’s half finished. This sort of thing can provide a really nice starting point for an animation.

I’m not too great at lip sync. Unless you’re autistic or checking lip sync, you don’t tend to stare at someone’s mouth as they talk. Using something like this for lip sync, especially if you can define your own face shapes - as many as you like - would be great. You can then touch up, add other keys for the rest of the face, and keyframe hand and body movement. THAT’s the important bit!

Yes - this lip sync thing is amazing - great idea.

+1 to that!

There is an old saying in the CG world: “Technical directors build rigs and animators break them.” Automation may sound like a good thing in animation, and it is when it comes to background characters and such, but when it comes to animating the “head characters”, all the automated animation will need to be modified and “dirtied up” to make the movements look natural and non-robotic. It doesn’t matter if it’s automated lip sync or walk cycles.
For example, when talking, every time you say an “o” your lips won’t be o-shaped to the same extent. It depends on where in the word the o is, how intensely the word is spoken, etc. Going back after the automated animation to fix things like this takes a whole lot of time, sometimes more time than if it was animated from scratch. And with some automated things (I don’t know if this applies to this lip sync plug-in) it isn’t possible to fix things like this without totally breaking it (this is the main reason why IK is seldom used for animating things like arms in a real production environment - it just takes too much time to correct the IK animation).
For these automated things in animation to be useful, they need to be easily combined with manual animation, otherwise the end results will look amateurish - and even if one is an amateur, amateurish results should never be a goal.

Good to hear you’re making progress. Any idea when we might see a working version or a patch?

Anyhoo, keep up the good work.

What happened to this project? Updates seem to have stopped?
It still seems too manual… we could automate this even more with steps 2 and 3 below…

  1. create the character’s usual 6-8 mouth shapes for the different sounds of speech

  2. use existing “voice recognition software” (software exists with 99% accuracy, which will definitely be good enough for most lip sync) to generate “tokens” (with time info) from the audio track. These can be stored simply as an array of the shapes of point 1) and the times at which they occur

  3. simply convert these tokens into Blender “NLA phoneme strips” automatically.
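Steps 2 and 3 could be sketched roughly like this: take timed phoneme tokens from a recognizer, collapse them down to the small mouth-shape set from step 1, and emit (shape, frame) pairs that a Blender script could then turn into shape-key keyframes or NLA strips. This is only a hedged sketch - the phoneme names, the shape set, and the token format are all assumptions, not the output of any particular recognizer or the plugin’s actual design.

```python
# Hypothetical sketch: map recognizer tokens to mouth-shape keyframes.
# Phoneme labels and the reduced shape set are assumptions for
# illustration, not a real recognizer's output format.

# A reduced mouth-shape set (the "6-8 shapes" from step 1).
PHONEME_TO_SHAPE = {
    "AA": "open", "AE": "open", "AH": "open",
    "IY": "wide", "EH": "wide",
    "OW": "round", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "f_v", "V": "f_v",
    "L": "l", "TH": "l",
}
REST = "rest"  # fallback for anything not in the table

def tokens_to_keyframes(tokens, fps=24):
    """tokens: list of (phoneme, start_time_seconds) from step 2.
    Returns (shape_name, frame_number) pairs, ready to be converted
    into shape-key keyframes or NLA strips (step 3)."""
    keys = []
    for phoneme, start in tokens:
        shape = PHONEME_TO_SHAPE.get(phoneme, REST)
        frame = round(start * fps)
        # Collapse runs of the same shape to avoid redundant keys.
        if keys and keys[-1][0] == shape:
            continue
        keys.append((shape, frame))
    return keys

# Example: tokens roughly spelling "hello"
print(tokens_to_keyframes([("HH", 0.0), ("EH", 0.1),
                           ("L", 0.2), ("OW", 0.35)]))
# → [('rest', 0), ('wide', 2), ('l', 5), ('round', 8)]
```

Inside Blender, each resulting pair would become a keyframe on the corresponding shape key (or a strip in the NLA editor) at the given frame; the collapse step keeps repeated identical shapes from stacking redundant keys.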

Lip sync proposal

What happened to this project?
Something to download - to use
Lip sync in Blender - how to do it?
:cool:
Thanks in advance!

Is the project still active?

This is a priceless addition to Blender, I certainly hope it’s still progressing.