No lipsync python scripts..?

I’ve been searching all through Blender forums, looking for a way to automate the lip sync process. It seems that lip sync must be applied manually, using shape keys.

If shape keys are all predefined, using a consistent naming convention, should it not be possible to automatically generate lip sync from an audio file, a text file, or a combination of the two?
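Just to illustrate the idea (this is a hypothetical sketch, not an existing script): with shape keys named after a simple phoneme set (A/I, E, O, etc., names assumed here), even a naive letter-by-letter pass over the dialogue text could produce a mouth-shape sequence. A real solution would need a pronunciation dictionary and timing from the audio.

```python
# Very naive letter-to-shape-key mapping; real text-to-phoneme conversion
# needs a pronunciation dictionary. All names here are illustrative.
LETTER_TO_SHAPE = {
    "a": "AI", "i": "AI",
    "e": "E",
    "o": "O",
    "u": "U",
    "m": "MBP", "b": "MBP", "p": "MBP",
    "f": "FV", "v": "FV",
}

def naive_phonemes(text):
    """Return the sequence of mouth shape-key names for a line of text,
    collapsing adjacent repeats so the mouth doesn't re-key needlessly."""
    shapes = []
    for ch in text.lower():
        shape = LETTER_TO_SHAPE.get(ch)
        if shape and (not shapes or shapes[-1] != shape):
            shapes.append(shape)
    return shapes
```

From there, each shape name in the sequence would become a shape-key keyframe on the character.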

This would be really useful for making movies with Blender 3D. I would really appreciate it if one of you programming geniuses would take it on… :stuck_out_tongue:

Look for Papagayo (there was an importer for the data that program generates).

Thanks for the link.

However, it is still a manual lip sync program like Blender; one has to manually fit the phonemes to the audio track.

I’m talking about a script that automates this process. I was testing it out myself about a week ago; I’ve got to get back to testing it again!

edit : here’s an update on this thread by ericsh6

Thanks for the link & input.

It is still manual lip sync though, isn’t it? One has to move individual phonemes into place to match the speech. However, there’s lots of software where one can just import the audio file, and the software reads it & processes the whole audio clip into phonemes, like The Movies, Moviestorm, etc.

Scripts already exist that analyze music audio files, converting them to IPO data that can be used to create synchronized animation. Why not a script that analyzes a text file, or a combo of text & spoken-word audio, and processes the whole thing?
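Those audio-analysis scripts mostly boil down to chunking the samples into one bin per animation frame and turning loudness into a curve value. A minimal sketch of that idea (plain Python, no Blender API; the function name is my own):

```python
import math

def frame_amplitudes(samples, sample_rate, fps=25):
    """Split raw mono samples into animation frames and return one
    normalised 0..1 amplitude (RMS) per frame, which could then be
    written to an IPO curve or a shape-key value inside Blender."""
    step = int(sample_rate / fps)
    frames = []
    for start in range(0, len(samples), step):
        chunk = samples[start:start + step]
        # Root-mean-square loudness of this frame's worth of audio
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        frames.append(rms)
    peak = max(frames) or 1.0  # avoid dividing by zero on silence
    return [f / peak for f in frames]
```

An open-mouth shape key driven by those values already gives rough "jaw flap" sync; phoneme detection is the genuinely hard part.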

I hope some python programmer(s) will take this up, it would be SO useful.

It is not manual apart from the initial setup. First you do need to set up the phonemes (A, E, O, and so on) like in this example link:

Enter your wav into the sequencer.
Set the shape keys for the first frame to zero (so they show up in the Action Editor under shape keys).
With your object/mesh still selected, run the script.
Select your .dat file from the Papagayo program.
Hit the import button, then set each shape key for each phoneme (that’s why it’s easier to name each shape key A, E, O and such).
Hit GO and… you’ll see nothing… wow, yeah. Just move one frame and oh! There we go, all the shape key animation is set for you. A little adjusting and fine tuning will be needed, yeah, no program is perfect, but you don’t have to do it manually.
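For anyone curious what the importer is actually reading: the .dat file Papagayo writes is a plain-text Moho switch file, a "MohoSwitch1" header followed by one "frame phoneme" pair per line. A minimal parser sketch (my own code, not the importer's):

```python
def load_papagayo_dat(path):
    """Parse a Papagayo .dat export into a list of (frame, phoneme)
    pairs, e.g. [(1, 'rest'), (5, 'AI'), ...]."""
    keys = []
    with open(path) as f:
        header = f.readline().strip()
        if not header.startswith("MohoSwitch"):
            raise ValueError("not a Papagayo/Moho switch file: %r" % header)
        for line in f:
            parts = line.split()
            if len(parts) == 2:  # skip blank or malformed lines
                frame, phoneme = parts
                keys.append((int(frame), phoneme))
    return keys
```

Each pair then maps straight onto a shape-key keyframe at that frame number, which is why naming the shape keys after the phonemes makes the setup so painless.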

Hope this helps

Oh Wow thanks nicktechyguy I get it, finally! :spin: :slight_smile:

Thumbs up. Yeah, it took me a bit too, but it turns out it is a wonderful script. I wish it were still under development.

Edit: Think I should set up a tutorial on how to do it, with images and such, on a site? I was thinking of making a site with tutorials for Photoshop/Blender/film type of thing.

Good idea, I think more people will be making movies with Blender in time. That’s why I use it…:yes::RocknRoll:

I’m actually working on a movie project with live action/3D character animation, a “Who Framed Roger Rabbit”-style movie. I’ve asked before if anyone wanted to help but didn’t get much of a response. Maybe after I post some footage more people would like to get on the bandwagon. Once I get the lip sync working with my latest character “Carl” for the movie, I might show some 3D animation footage with him.

Looks cool, great shading. Is that a cigar in his mouth? I do a lot of machinima and use Blender for a lot of modelling conversions & sfx, but I’d like to do a full movie in Blender someday. I’ve got a long way to go in Blender, but I’ve done VOs & music for quite a few short machinima flicks if you need help in those areas.

I’m using Papagayo to export the wav into a .dat file. My character in Blender now has the 8 mouth shapes for the phonemes that Blendersynchro v2.0 needs, but the script still only recognizes one mouth shape, the first one after the basis…? Is this script working or not? The versions I’ve tried it on are 2.48a and 2.49b…
If it is not working, does someone have a better solution for automating lip sync…?