Sound Block Delete


How do I remove a .wav file after I’ve loaded it into a sound block? That is to say, how can I delete a sound block? I can’t find anything like it in the datablock browser, there’s nothing in the “Wave” directory.

Also, can somebody confirm for me that the sound block is only for game stuff? For things like lip syncing for ordinary animations, this seems to be pretty much useless, it seems that it’s necessary to use the sequence editor for any sound-related work. All the tutorials or threads I can find on lip syncing refer to Magpie or other similar software. This doesn’t seem to be something that’s very well handled using Blender alone.



Audio sync animation is done using the sequencer. Just ADD the audio file and position it. Then F10 and open the sound block module. Press SYNC and SCRUB in the sequencer panel that you see there.

Sync controls playback in the 3D window so it syncs with the audio. Scrub plays back bursts of audio as you change frames or drag a timeline in a window.

(Unfortunately, for me, it generally results in crash after crash in 2.4x, so it looks like audio animation is off limits for me until someone can offer a solution or even an explanation - but that’s another story…)

I haven’t used Magpie or Papagayo (I d/l’d Papagayo but haven’t unpacked it yet). I feel it would be better to come to grips with manual syncing before relying on software to do it for you.

Buggered if I know how to delete a sound block once it’s added though :o

Hi AndyD, thanks for the response.

Well, okay. That’s kind of what I thought. For me, the sequencer playback isn’t producing sound during animations, for one thing. It is playing sound when I press “play” (oh, and for pete’s sake why is there no “stop”?), but not during the animations with sync, scrub, or both pressed. For another thing (related, I dunno), my animation is a bit big and heavy, so “realtime” playback in the 3D window is very slow in any case, and seems to be even slower when I run it with the sound on (note that no actual sound comes out regardless).

As for Papagayo and Magpie, I am still working out what they do. Papagayo and maybe Magpie Pro seem to basically try to tell you where certain phonemes occur. You could call this non-manual, but, well, Papagayo wouldn’t respond at all when I tried to load a .wav file, so Papagayo was a wash. JLipSync (I think it’s called; it’s mentioned here and there on Elysiun) I managed to get working. It opened a .wav and attempted to map the phonemes from my text onto the wave. It was laughably wrong: it basically mapped the entire text onto the first second or so of the soundwave. Maybe it was the size of the text I tried to read in, but neither program mentioned a limit, and in any case, if I have to parcel it up in bite-size pieces, I don’t know what the point of “automatic” phoneme recognition is. It’s not that hard to eyeball it.
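For what it’s worth, the “eyeball it” starting point is simple enough to sketch in a few lines of plain Python: spread the phonemes evenly over the clip’s frame range at your project fps, then adjust by ear. To be clear, this is just my illustration of the rough-placement idea (the phoneme names, fps, and clip length are made-up examples), not what any of these programs actually do:

```python
# Rough, evenly spaced phoneme placement -- a first-pass guess to
# nudge by ear afterwards, not real phoneme recognition.
# All names and numbers here are illustrative assumptions.

def spread_phonemes(phonemes, duration_secs, fps=25):
    """Map each phoneme to a frame, spaced evenly over the clip."""
    if not phonemes:
        return []
    step = (duration_secs * fps) / len(phonemes)
    # int() truncates, so frames stay within the clip; frames are 1-based
    return [(int(i * step) + 1, p) for i, p in enumerate(phonemes)]

# Example: four mouth shapes over a 2-second clip at 25 fps.
print(spread_phonemes(["M", "AI", "O", "M"], 2.0))
# -> [(1, 'M'), (13, 'AI'), (26, 'O'), (38, 'M')]
```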

So what I’m using is Magpie, the shareware version, and what I’m doing is certainly manual. Magpie basically plays the sound file and allows me to play sections and associate them with a mouth. By hand. And then play back the sequence of mouths to see how it syncs up. Basically the same thing as would maybe be doable in Blender itself if the animation wasn’t slowing things down and the sound was coming out.

Then Magpie allows me to create a frame-numbered list of mouths. I then use that as a reference (Magpie Shareware doesn’t seem to allow export to any normal formats like txt, and the “copy to clipboard” option mentioned in a tutorial didn’t put anything on my clipboard, so when I say “use it as a reference” I mean “keep it open and look at it while I’m lip syncing”). Of course, with Blender shape keys you need to set basically three keys for every mouth position (0-1-0), so even this Magpie list doesn’t give you anything exact, just a ballpark for where you will want your mouth positions.
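That 0-1-0 pattern is mechanical enough to sketch in plain Python: each entry in the frame-numbered mouth list becomes three slider keys, ramping the shape up to 1.0 at the marked frame and back down after. This doesn’t drive Blender itself, it’s just the bookkeeping, and the two-frame ramp is an arbitrary assumption you’d tweak by eye:

```python
# Expand a (frame, mouth) list into the three-key 0-1-0 pattern
# described above.  The two-frame ramp is an arbitrary choice;
# adjust it (and any resulting overlaps) by eye in Blender.

def expand_to_keys(mouth_list, ramp=2):
    """Return (frame, shape_name, value) triples for each mouth."""
    keys = []
    for frame, mouth in mouth_list:
        keys.append((frame - ramp, mouth, 0.0))  # slider off before
        keys.append((frame, mouth, 1.0))         # fully on at the mark
        keys.append((frame + ramp, mouth, 0.0))  # slider off after
    return keys

for key in expand_to_keys([(10, "M"), (14, "O")]):
    print(key)
```

With the two example mouths above this prints six keys, from (8, 'M', 0.0) up to (16, 'O', 0.0) — note the “M” ramp-down and “O” ramp-up land on the same frame 12, which is exactly the kind of overlap you end up smoothing with the sliders.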

Anyway, after posting here previously I rendered my 3D window, stuck it in the sequence editor with the wav, and took a look at it. The lip syncing looks pretty darn slick, actually, much smoother than the mockup in Magpie had looked (this is due to the sliders on the positions, obviously). So I think I’m on the right track. The timing seemed pretty much right on.

My point is that if you’re interested in doing lip syncing, Magpie seems to be the tool that will help you do it properly. You’re not going to wish the process was any more manual than this, I guarantee. I was disappointed that no open source software I could find did what I needed, and that there’s not a better solution for this in Blender. If anybody knows of a simple open source piece of software for this, which does not try to do automatic phoneme recognition, please let me know.

As for that sync-sound block. Anybody? Because it seems to have added a lot to the size of my file, and it’s really bugging me, since I can’t use it. Also, I’m always looking to add to my stock of knowledge about the arcane art of deleting things in Blender.

Hmmm, audio should play back - unless the soundblock thing is preventing it maybe? I could say “just delete the sound block file” but you’d probably shoot me :wink:

Try a simple test in a new file with a small .wav and a cube with scrub and sync and see how that goes. If that doesn’t work then you’ve got a new question to ask I guess.

The only tute I’ve seen on Magpie was on Blenderchar, which is gone now, but I think that tute was mirrored in the docs so it’s probably the one you used anyway.