One day I really should look at these software/scripting options and see what I’m missing. I’ve assumed (without bothering to find out) that you still have to tell Papagayo or Magpie or (damn, I can never remember the name of the Java one… JLipSync or something like that) where to place the shapes, and that it produces a timed text file that you then feed into Blender as a script. OR are these programs more intelligent than that, actually working out where to put the shapes for you while you go and have coffee? Do they just “read” a text file and determine phonemes from that? Are they able to match those shapes with the audio in order to set the timing?
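For what it’s worth, here’s roughly what I picture that route looking like - a minimal sketch only, assuming Papagayo’s Moho-style .dat export (a header line followed by frame/phoneme pairs) and a mesh whose shape keys are named after the phonemes. The file path and key names are made up:

```python
# Minimal sketch of the "timed text file -> Blender script" route.
# Assumes a Moho-style .dat export: a "MohoSwitch1" header line,
# then "frame phoneme" pairs, one per line. Also assumes the active
# object has a shape key named after each phoneme ("O", "MBP", etc.).
# The file path is hypothetical.
import bpy

DAT_PATH = "/tmp/dialogue.dat"  # hypothetical export from Papagayo

obj = bpy.context.active_object
key_blocks = obj.data.shape_keys.key_blocks

with open(DAT_PATH) as f:
    lines = f.read().splitlines()

# Parse the "frame phoneme" pairs, skipping the header line.
events = []
for line in lines[1:]:
    parts = line.split()
    if len(parts) == 2:
        events.append((int(parts[0]), parts[1]))

for i, (frame, phoneme) in enumerate(events):
    kb = key_blocks.get(phoneme)
    if kb is None:
        continue  # no shape key modelled for this phoneme
    # Key the shape fully on where the sound starts...
    kb.value = 1.0
    kb.keyframe_insert(data_path="value", frame=frame)
    # ...and back to zero when the next (different) phoneme begins.
    if i + 1 < len(events) and events[i + 1][1] != phoneme:
        kb.value = 0.0
        kb.keyframe_insert(data_path="value", frame=events[i + 1][0])
```

Notice that every phoneme gets snapped to the same full-strength value on its frame - which is part of what I’m getting at below.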
If my first assumption is basically correct, then I’m lost as to where the time saving is??? In the time it takes to sit and tell Papagayo or whatever where to put the shapes, you could set a slider in Blender and have the shape set there and then - couldn’t you? You still have to make the basic phoneme shapes either way - but in Blender you have complete freedom as to how far to push each shape as you go - after all, one “O” is not always the exact same shape as every other “O”; it all depends on what comes before and after. Plus, for some characters you’ll get by with four or five basic shapes that you can combine on-the-fly to create a whole range of useful shapes, whereas with an automated solution such combinations are presumably not possible and you’d be required to provide a shape for each phoneme you need - see the sketch below for the kind of combining I mean.
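A sketch only, with the object and shape-key names (“Head”, “Open”, “Wide”) made up for illustration:

```python
# Sketch: blend two basic shape keys to get a softer "O" instead of
# modelling a dedicated key for it. "Head", "Open" and "Wide" are
# made-up names for illustration.
import bpy

obj = bpy.data.objects["Head"]
keys = obj.data.shape_keys.key_blocks

frame = 48  # wherever this particular syllable lands in the audio

# Push each basic shape only as far as this one sound needs.
for name, value in (("Open", 0.7), ("Wide", 0.3)):
    kb = keys[name]
    kb.value = value
    kb.keyframe_insert(data_path="value", frame=frame)
```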
Of course, if my assumptions about the third-party helpers are way off the mark then feel free to ignore my ignorance. I just find the lip-sync process so straightforward that I’m finding it hard to believe there’s an easier way :eek: Maybe I just don’t want to believe it.
(“reigning expert”??? Wow! )