Real-time Lip Sync in Blender

Hi,

Is it possible to make a 3D model in Blender that will lip sync to the spoken words through a mic attached to the PC? I don't mind if the lip movement only happens in the unrendered 3D view.

Any idea how to achieve this, or whether it can be achieved at all (a rough approximation is also OK)? Is there a Python script for this?
I know about Magpie etc., but those don't give real-time output… or do they?

that will lip sync to the spoken words through a mic

You mean the model is animated by the .wav file? Is this even available commercially?

For any lip sync you do in Blender, you are going to have to learn the program and do some work.

%<

MotionBuilder does that in real time with a mic.

Hello fligh %,
Yes, I want the model to be animated from a .wav file.

Hello tin2tin,
MotionBuilder is expensive, I guess.
Is there any other freeware/shareware that can achieve a similar task?

http://www.lostmarble.com/papagayo/index.shtml

Really though, if you do a Google search for freeware lip sync stuff, you'll find something that works for you.

Hello,

For real time, I think it's really hard. You would have to do speech analysis in real time, then turn the words into phonemes, then animate your model.
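
But for the rough version the original poster mentions (just opening the mouth based on loudness, with no phoneme analysis at all), something like this little Python sketch could be a starting point. It assumes PyAudio is available for reading the mic, and the object and shape key names are only placeholders:

```python
# Rough sketch: drive a "mouth open" value from microphone loudness only
# (no phoneme analysis). Assumes PyAudio is installed; the Blender line
# near the bottom assumes a mesh object named "Head" with a shape key
# named "MouthOpen" -- both names are made up for this example.
import audioop

import pyaudio

RATE = 16000      # samples per second
CHUNK = 1024      # frames read per loop iteration (about 64 ms at 16 kHz)
MAX_RMS = 8000    # loudness that counts as "mouth fully open"; tune by ear

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

try:
    while True:
        data = stream.read(CHUNK)
        rms = audioop.rms(data, 2)        # 2 bytes per sample (16-bit audio)
        mouth = min(rms / MAX_RMS, 1.0)   # map loudness to a 0..1 value
        print(f"mouth open: {mouth:.2f}")
        # Inside Blender you would set a shape key instead of printing, e.g.
        # (hypothetical object and key names):
        # bpy.data.objects["Head"].data.shape_keys.key_blocks["MouthOpen"].value = mouth
        # A blocking loop like this freezes Blender's UI, so in practice the
        # read would have to go in a timer or modal operator callback.
except KeyboardInterrupt:
    pass
finally:
    stream.stop_stream()
    stream.close()
    pa.terminate()
```

Of course, a single loudness-driven mouth shape won't look like real speech, but it does react in real time.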

For non-real-time, you could use Papagayo and BlenderlipSynchro (https://blenderartists.org/forum/viewtopic.php?t=53904), a Python script that imports Papagayo's work directly onto your shapes.
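
If I remember right, the file Papagayo exports is just a header line followed by one "frame phoneme" pair per line, so the import step is roughly what this simplified sketch does (the object name, shape key names, and file path are placeholders; the real script does more than this):

```python
# Simplified sketch of a Papagayo import: read the exported switch file
# (one "frame phoneme" pair per line after the header) and keyframe the
# shape keys with matching names. Assumes a mesh object named "Head"
# whose shape keys are named after the Papagayo phonemes (e.g. "MBP",
# "AI", "rest") -- all names and the path are placeholders.
import bpy

DAT_FILE = "/path/to/lipsync.dat"   # file exported from Papagayo

obj = bpy.data.objects["Head"]
keys = obj.data.shape_keys.key_blocks

# Parse "frame phoneme" lines, skipping the header and anything malformed.
timeline = []
with open(DAT_FILE) as f:
    for line in f:
        parts = line.split()
        if len(parts) == 2 and parts[0].isdigit():
            timeline.append((int(parts[0]), parts[1]))

# At each switch frame, key the active phoneme shape to 1 and the others
# to 0, so the mouth snaps from shape to shape as the phonemes change.
phonemes = {p for _, p in timeline}
for frame, active in timeline:
    for name in phonemes:
        if name not in keys:
            continue                  # no shape for this phoneme; skip it
        keys[name].value = 1.0 if name == active else 0.0
        keys[name].keyframe_insert("value", frame=frame)
```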

Dienben