Using audio to drive animation

Hi there,

Does anyone know if it’s possible to use audio to drive animation in Blender? For example, is it possible to use the amplitude to control the position of an object? Or, even better, to be able to do a frequency analysis and use only the bass to control an object? I’m looking for a “high tech” sound-to-light type system… the sort of thing that might be useful for doing music videos, especially for the stranger electronic music out there.

Ultimately, what I need to do is try to “reverse engineer” the music as much as possible and use these reverse-engineered elements to drive the animation.
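To make concrete what I mean by “using the amplitude”: here is a rough sketch in plain Python (the 44100 Hz sample rate and 25 fps frame rate are just assumptions) that turns a list of audio samples into one RMS amplitude value per animation frame. Each value could then be keyed to an object’s position.

```python
import math

SAMPLE_RATE = 44100  # assumed audio sample rate (Hz)
FPS = 25             # assumed animation frame rate

def amplitude_per_frame(samples, sample_rate=SAMPLE_RATE, fps=FPS):
    """RMS amplitude of each animation-frame-sized chunk of audio."""
    chunk = sample_rate // fps  # audio samples per animation frame
    envelope = []
    for start in range(0, len(samples) - chunk + 1, chunk):
        window = samples[start:start + chunk]
        rms = math.sqrt(sum(s * s for s in window) / len(window))
        envelope.append(rms)
    return envelope

# synthetic test signal: one quiet second, then one loud second of a 440 Hz tone
quiet = [0.1 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]
loud  = [0.9 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]
env = amplitude_per_frame(quiet + loud)
```

Real audio would come from a WAV file (e.g. via Python’s `wave` module) rather than a generated sine, but the per-frame chunking would be the same.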

Many thanks,


I would suggest using audio software to analyse the timing, then adjusting the IPO curves to match. But perhaps someone else has a better solution? If you’re even interested at all.

Thanks for the reply.

Yeah… at the moment I do exactly that - I study the waveform in an audio package and then manually enter animation parameters… but I’m wondering if some clever person has written some code to automatically pull information out of an audio file?


You could use MatLab. There’s an FFT command you can use to get data out of the frequencies, and that can be stored in a text file for input to Blender via Python. But exactly how, I can’t say - I don’t fully know Python.

That’s a good idea, thanks - I know a bit of MatLab.

(and its free equivalents: Scilab and something else)
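As a rough sketch of the FFT idea in plain Python rather than MatLab (the sample rate, frame rate, 200 Hz bass cutoff, and output filename are all made-up assumptions): this computes the energy in the low-frequency bins of each frame-sized chunk, using a direct DFT over just the bass bins so no FFT library is needed, and writes one value per animation frame to a text file that a Blender Python script could read back.

```python
import cmath
import math

SAMPLE_RATE = 44100   # assumed audio sample rate (Hz)
FPS = 25              # assumed animation frame rate
BASS_CUTOFF = 200.0   # Hz; everything below this counts as "bass" here

def bass_energy_per_frame(samples, sample_rate=SAMPLE_RATE, fps=FPS, cutoff=BASS_CUTOFF):
    """Energy in the sub-cutoff DFT bins of each frame-sized chunk of audio."""
    chunk = sample_rate // fps                   # audio samples per animation frame
    n_bins = int(cutoff * chunk / sample_rate)   # number of DFT bins below the cutoff
    energies = []
    for start in range(0, len(samples) - chunk + 1, chunk):
        window = samples[start:start + chunk]
        e = 0.0
        for k in range(1, n_bins + 1):           # skip bin 0 (DC offset)
            coeff = sum(window[n] * cmath.exp(-2j * math.pi * k * n / chunk)
                        for n in range(chunk))
            e += abs(coeff) ** 2
        energies.append(e / chunk)
    return energies

# synthetic test: one second of 50 Hz "bass", then one second of 1000 Hz "treble"
bass   = [math.sin(2 * math.pi * 50 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]
treble = [math.sin(2 * math.pi * 1000 * t / SAMPLE_RATE) for t in range(SAMPLE_RATE)]
energies = bass_energy_per_frame(bass + treble)

# one value per animation frame, for a Blender Python script to read back
with open("bass_energy.txt", "w") as f:
    for e in energies:
        f.write("%f\n" % e)
```

The bass-only frames should come out with large energies and the treble-only frames with near-zero ones, which is exactly the separation you’d want for driving an object from the bass line.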


Oooh… I just had a thought…

…does anyone have any experience of coding the visualisations in media players like XMMS / Windows Media Player? Where do those little visualisations get their “animation cues” from? Does XMMS do some audio analysis to drive the visualisations?