How to sync Animation with music

Hello fellow Blenderers,
I’m working on a project, and I want a part where lights and such change or move with the music. Is this possible to do easily? If so, how?
Or: how can I import music so that it plays along as the timeline runs while I animate?
Thanks.

Blender supports WAV files, so you need to either convert the music that you have into that format or download some WAV files.
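If you already have ffmpeg installed, one way to do the conversion is from a short Python script. This is just a sketch: the file names (song.mp3 / song.wav) and the 44.1 kHz sample rate are placeholder choices, not requirements.

```python
# Convert a compressed track to WAV for Blender's sequencer.
# Assumes the ffmpeg command-line tool is installed and on your PATH;
# the file names and sample rate below are only placeholders.
import subprocess

subprocess.check_call([
    "ffmpeg",
    "-i", "song.mp3",   # input track (MP3, OGG, etc.)
    "-ar", "44100",     # resample the audio to 44.1 kHz
    "song.wav",         # output WAV to load into Blender
])
```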

This might be an interesting topic: Midi driven animations

I’ve been doing this on Kata, and have found a few tips that have helped a lot:

On a PC using 2.49b or earlier (Macs do it differently, I believe)

First, have all your music mixed to its final form, as a single track, saved as a WAV file. If there is a separate and distinct beat track (drums or other percussion) that you are going to be animating/cutting to, make a separate WAV file for that as well.

Place the music/beat track(s) in the Video Sequence Editor, using the “RAM” option – this makes the audio waveforms visible in the VSE track, which can be helpful when planning moves/edits.

In the Timeline, use the Marker tool to set visible event marks along the track, assuming you have at least a basic idea of the moves you’ll be animating. These can and likely will change frequently, but they can really help establish “landmarks” along the way.
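If the list of event points gets long, you can also drop the markers from a short script in Blender’s Text Editor rather than setting each one by hand. This is only a rough sketch: the getTimeLine()/add()/setName() calls are how I remember the 2.4x Python API working, so treat them as an assumption and check the 2.49 API docs, and the frames and labels are made-up examples.

```python
# Rough sketch for Blender 2.49 (run from the Text Editor with Alt+P).
# Assumption: Scene.getTimeLine(), TimeLine.add() and TimeLine.setName()
# behave as remembered -- verify against the 2.49 Python API docs.
import Blender

# Placeholder event points: frame -> marker label.
event_frames = {
    1:   "intro",
    121: "first hit",
    241: "chorus",
}

scn = Blender.Scene.GetCurrent()
timeline = scn.getTimeLine()          # the scene's timeline markers

for frame, label in event_frames.items():
    timeline.add(frame)               # create a marker at this frame
    timeline.setName(frame, label)    # give the marker a readable name

Blender.Redraw()                      # refresh the UI so the markers show
```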

If you have a storyboard, you can make still images of the various key actions and place them as single-frame image sequences in the VSE, to help establish an initial timing plan.

Once you start animating, it’s unlikely you will be able to get good audio sync with the Timeline playback, or with ALT+A, even when using the sync options – frame dropping doesn’t make for very good evaluation playbacks. So instead set up to make “playblast” animations using the OpenGL rendering/animation option. This is very fast, can be done from any view angle in a 3D Window (User or Camera view), and can also be done with audio multiplexing – both audio and video recorded in sync.

Use the FFMPEG format option in the Scene/Format panel. This creates 2 new panels in that section – Video and Audio. In Video you choose the video codec (I use Xvid a lot, but that’s up to you) and in the Audio panel, enable the “Multiplex audio” button and choose an audio codec (I use MP3 but again that’s a choice you should make). Codec choices have to be made based on what’s available on your system, because your media player(s) also use them for playback.

Now when you do an OpenGL animation (“playblast”), the resulting AVI file will have both video and sound, so you can check both animation quality and synchronization with the audio. Be sure to enable the “Do Sequence” button when making these multiplexed playblasts to ensure the audio is included in the movie.

In everyday practice, I do it slightly differently for Kata, because I use a lot of camera views with fast-cut editing between them. I use playblasts with no audio to get the animation into “first-draft” status, evaluating as I go along by using the PLAY button in Blender. Once an animated sequence starts to look close, I make “silent” AVIs from each camera view and place them in the VSE in a separate .blend file, then start rough-cut editing those to the placed audio track. This helps fine-tune the timing of the animation moves, and also the choices for camera animations. If you use the frame Stamp option when making your playblasts, you can more easily identify places where changes need to be made.

Working back and forth between the two .blend files (the animation file and the VSE file), I can swiftly tailor the animation blocking to the music track, and get my “shots” established before I start tweaking the animation to a totally finished state. This helps prevent spending a lot of time getting the animation visually perfect only to find that the timing is just a little off and needs to be revised to sync properly.

Once the finalized playblast versions are placed in the VSE and test outputs (using FFMPEG) show the timing is right, you can then use the VSE edit (with frame-stamped video strips) to determine exactly which frames need to be final-rendered for your edited piece – this can save a lot of rendering time.

Some notes: Using compressed AVIs in the VSE can sometimes lead to odd glitches when the VSE writes out the edited video file. To prevent this, I write the “final” animation playblasts out as PNG image sequences and place those in the VSE to match the earlier AVI files, using the frame stamps to sync them.

If you’re writing AVI files using Blender’s FFMPEG format with its Xvid codec option, the resulting movies may not play in some viewers because FFMPEG writes the wrong FourCC code for Xvid to the AVI header. There’s a little utility app that comes with the Xvid codec distribution that can fix this.

Obviously, this doesn’t really qualify as doing things “easily,” but then there’s not much in Blender that can be done that way.

Another thing I find useful is to determine the BPM (beats per minute) of the music and the target frame rate of your piece. This, of course, assumes that the music stays at the same tempo for the entire length of the track.

After you know the BPM, you can determine the FPB (frames per beat): FPB = (FPS × 60) / BPM.
http://vjforums.com/showthread.php?t=18898

It is great to have this number handy while you are animating. Then, if your waveform display is kind of complicated, or the music goes vague for a few measures, you can still place keyframes at this regular interval and your animations will still line up.

I calculated a few common FPB just the other day.

tempo = 110
fps = 29.97
fpb = 16.3

tempo = 120
fps = 29.97
fpb = 15 (14.985)
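
The arithmetic behind those numbers is just frame rate times sixty divided by the tempo, so it’s easy to recompute for whatever frame rate you’re rendering at. Here’s a plain Python check (nothing Blender-specific):

```python
# Frames per beat = (frames per second * 60 seconds) / beats per minute.
def frames_per_beat(fps, bpm):
    return fps * 60.0 / bpm

for bpm in (110, 120):
    print("%d BPM at 29.97 fps -> %.1f frames per beat"
          % (bpm, frames_per_beat(29.97, bpm)))

# Gives roughly 16.3 frames per beat at 110 BPM and 15.0 at 120 BPM,
# matching the figures above.
```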