Hi AndyD, thanks for the response.
Well, okay. That’s kind of what I thought. For me, the sequencer playback isn’t producing sound during animations, for one thing. It does play sound when I press “play” (and for pete’s sake, why is there no “stop”?), but not during animations with sync, scrub, or both pressed. For another thing (related, I dunno), my animation is a bit big and heavy, so “realtime” playback in the 3D window is very slow in any case, and seems even slower with the sound turned on (though no actual sound comes out regardless).
As for Papagayo and Magpie, I am still working out what they do. Papagayo, and maybe Magpie Pro, seem to basically try to tell you where certain phonemes occur. You could call this non-manual, but Papagayo wouldn’t respond at all when I tried to load a .wav file, so Papagayo was a wash. JLipSync (I think that’s the name; it’s mentioned here and there on Elysiun) I did manage to get working. It opened a .wav and attempted to map the phonemes from my text onto the wave. It was laughably wrong: it mapped the entire text onto the first second or so of the soundwave. Maybe it was the size of the text I fed it, but neither program mentioned a limit, and in any case, if I have to parcel the text up into bite-size pieces, I don’t know what the point of “automatic” phoneme recognition is. It’s not that hard to eyeball it.
So what I’m using is Magpie, the shareware version, and what I’m doing is certainly manual. Magpie plays the sound file and lets me play sections and associate each one with a mouth, by hand, then play back the sequence of mouths to see how it syncs up. Basically the same thing as would maybe be doable in Blender itself, if the animation weren’t slowing things down and the sound were coming out. Magpie then lets me create a frame-numbered list of mouths. I use that as a reference (Magpie Shareware doesn’t seem to allow export to any normal format like txt, and the “copy to clipboard” option mentioned in a tutorial didn’t put anything on my clipboard, so when I say “use it as a reference” I mean “keep it open and look at it while I’m lip syncing”). Of course, with Blender shape keys you need to set basically three keys for every mouth position (0, then 1, then back to 0), so even the Magpie list doesn’t give you anything exact, just a ballpark for where you’ll want your mouth positions.
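Since the Magpie list is really just frame→mouth pairs, the bookkeeping of turning each entry into a 0-1-0 key triple can be scripted instead of eyeballed. Here’s a minimal sketch in plain Python; the input format, the helper name `mouth_keys`, and the 2-frame ease-in/out spacing are all my own assumptions, not anything Magpie actually exports:

```python
# Turn a frame-numbered mouth list (as you'd read it off a Magpie
# exposure sheet) into 0-1-0 shape-key triples: each mouth ramps up
# from 0, hits 1 on its frame, and ramps back down to 0.

def mouth_keys(mouth_list, lead=2):
    """mouth_list: list of (frame, mouth_name) pairs, in frame order.
    lead: how many frames before/after the hit to place the 0 keys
          (an assumption -- tune to taste).
    Returns {mouth_name: [(frame, value), ...]}."""
    keys = {}
    for frame, mouth in mouth_list:
        keys.setdefault(mouth, []).extend([
            (frame - lead, 0.0),  # ease in from neutral
            (frame, 1.0),         # full mouth shape on the hit frame
            (frame + lead, 0.0),  # ease back out
        ])
    return keys

# Example: three mouths from a hypothetical Magpie list
sync = [(10, "AI"), (14, "MBP"), (18, "O")]
print(mouth_keys(sync)["MBP"])  # -> [(12, 0.0), (14, 1.0), (16, 0.0)]
```

You’d still set the actual keys on the shape-key sliders in Blender by hand, but a printout like this at least tells you exactly which frames to park the sliders at.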
Anyway, after posting here previously I rendered my 3D window, stuck it in the sequence editor with the wav, and took a look at it. The lip syncing looks pretty darn slick, actually, much smoother than the mockup in Magpie had looked (this is due to the sliders on the positions, obviously). So I think I’m on the right track. The timing seemed pretty much right on.
My point is that if you’re interested in doing lip syncing, Magpie seems to be the tool that will help you do it properly. You’re not going to wish the process were any more manual than this, I guarantee. I was disappointed that no open source software I could find did what I needed, and that there isn’t a better solution for this in Blender. If anybody knows of a simple open source program for this, one that does not try to do automatic phoneme recognition, please let me know.
As for that sync-sound block. Anybody? Because it seems to have added a lot to the size of my file, and it’s really bugging me, since I can’t use it. Also, I’m always looking to add to my stock of knowledge about the arcane art of deleting things in Blender.