I’ve been trying to edit/animate in sync with audio on and off for YEARS in Blender without any luck. I’ve tried on at least 7 different machines with different soundcards, using versions of Blender as far back as 2.4.2 and as recent as today’s SVN.
Still, I can’t seem to get Blender to consistently sync sound in the VSE.
Right now, I’ve got a 44.1 kHz WAV and a simple Blender scene (1 plane with a texture). I’m trying to sync the animation with the audio, but the sync is always out. I tried timing the animation to the sound output when I push Alt+A, but what audio plays at frame x seems to shift regularly. I tried timing it to the visual waveform in the VSE instead, but the file was out of sync when I rendered.
I’ve tried using both OpenAL and SDL. I’ve tried caching the audio in RAM (of which I have plenty). I’ve tried using frame dropping and lowering the scene proxy to 25%. I have a brand new beast of a machine.
I really need to get this job done and I’m going more than a little crazy. :eek:
Does anyone have any ideas on getting audio to sync in Blender?
Perfect sync in Blender’s UI is a grail hunt. You can come close using the AV-sync button (which will drop animation frames), but even then you’ll get some slippage, even with relatively simple scenes. That being said, I’ve edited a number of animations to very tight sync using only Blender and an external playback app like VirtualDub or VLC. The trick is not to depend on UI playback.
If you set your WAV file in the VSE it should play back consistently. Where any one sound occurs during UI playback (Alt+A) will depend on the output frame rate AND on the ability of the UI to keep up. If playback drags below the set frame rate, the UI drops animation frames to catch up, which can make it appear that the sound is falling at different frames, but that’s just a transient effect of the frame dropping. Using the visual waveform for roughing in action to sound peaks is OK, but it isn’t accurate enough for final sync, imo.
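To see why the sound seems to land on different frames each time, here’s a tiny pure-Python sketch of the frame-dropping mechanism (my own simplified model, not Blender code): the audio runs on a real-time clock, and each time the UI finishes drawing a frame it jumps to whatever frame the audio clock has reached, skipping any it couldn’t draw.

```python
def drawn_frames(duration_s, fps, draw_time_s):
    """Simulate AV-sync playback: the audio clock runs in real time,
    and the UI shows whichever frame that clock is on each time it
    finishes drawing, dropping the frames it never gets to."""
    frames, t = [], 0.0
    while t < duration_s:
        frames.append(int(t * fps) + 1)  # frame the audio clock is on
        t += draw_time_s                 # time spent drawing one frame
    return frames

# At 25 fps the per-frame budget is 40 ms; if drawing takes 70 ms,
# some frames are skipped entirely:
print(drawn_frames(0.2, 25, 0.07))  # → [1, 2, 4] (frame 3 dropped)
```

The audio stays on time; it’s the *pictures* that jump, which is why the same sound can seem to fall near different frames from one Alt+A run to the next.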
Since the UI playback can rarely keep 100% sync, I work around it by rendering test sequences to movie format using the OpenGL rendering option. OpenGL is relatively fast, can be used for either User views or the Active Camera view, and if set up properly can write a synced audio track along with the OpenGL visuals. If you enable the frame Stamp option you can use the resulting “pre-audio-viz” movies to check your sync down to the frame if need be, and determine where and by how much to shift the action to match the sound.
Writing OpenGL “pre-audio-viz” sequences is an extra step in the workflow, but the resulting sync is just as accurate as a final movie rendered from Blender would be.
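The “where and by how much to shift” step is just frame/time arithmetic, so here’s a small sketch of it (the function names are my own, not anything in Blender):

```python
def time_to_frame(seconds, fps=25.0):
    """Convert an audio timestamp to the nearest frame (1-based,
    matching Blender's default start frame of 1)."""
    return round(seconds * fps) + 1

def frame_offset(audio_s, action_frame, fps=25.0):
    """How many frames to shift the action so its key moment lands
    on the audio event heard at audio_s seconds."""
    return time_to_frame(audio_s, fps) - action_frame

# Example: a drum hit at 3.48 s in a 25 fps scene falls on frame 88.
# If the stamped pre-viz shows the action peaking at frame 85,
# shift the keys forward by frame_offset(3.48, 85) = 3 frames.
```

Read the audio event’s frame straight off the stamped pre-viz movie in VLC or VirtualDub, then nudge the keyframes by the computed offset and re-render the OpenGL test to confirm.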