[AddOn] Sound Drivers (formerly speaker tools)

Drive animations with sound. Copy the sound_drivers folder into one of your addon folders.

GITHUB: https://github.com/batFINGER/sound-bake-drivers
ZIP: https://github.com/batFINGER/sound-bake-drivers/archive/master.zip

For an addon I’m writing I needed to play around with bake sound to f-curves… here is a little test script that creates an n-channel graphic EQ, baked to custom properties Channel0, Channel1, … on the context object. These can then be used as targets for drivers.

Does anyone know if a glob/filetype can be set using the FILE_PATH subtype?
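As far as I can tell the subtype itself doesn’t filter; the pattern I’ve seen used (an assumption — check it against your build) is a hidden filter_glob property on an operator that opens the file selector. The operator name here is made up for illustration:

```python
import bpy
from bpy.props import StringProperty

class PickSound(bpy.types.Operator):
    """Hypothetical operator showing the filter_glob pattern."""
    bl_idname = "sound.pick_file"  # illustrative name, not in the addon
    bl_label = "Pick Sound File"

    filepath = StringProperty(subtype='FILE_PATH')
    # hidden property the file browser reads to filter the files it shows
    filter_glob = StringProperty(default="*.wav;*.mp3", options={'HIDDEN'})

    def invoke(self, context, event):
        context.window_manager.fileselect_add(self)
        return {'RUNNING_MODAL'}

    def execute(self, context):
        context.scene.SoundFile = self.filepath
        return {'FINISHED'}
```

This only runs inside Blender, of course; outside that context it is just a sketch.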

import bpy
from bpy.props import *
from math import log10

def fetch(collection, name, create=False, **newargs):
    # Returns the named item from collection, or None if not there.
    # If create is True, makes it with collection.new(), passing newargs.
    if name in collection:
        return collection[name]
    # maybe put in code for replace too.
    if create:
        if newargs:
            newitem = collection.new(name=name, **newargs)
        else:
            newitem = collection.new(name=name)  # for things like actions
        return newitem
    return None

class SoundTestPanel(bpy.types.Panel):
    bl_label = "Sound Test"
    bl_space_type = "GRAPH_EDITOR"
    bl_region_type = "UI"

    @classmethod
    def poll(cls, context):
        return context.object is not None

    def draw(self, context):
        sce = context.scene
        layout = self.layout
        layout.prop(sce, "SoundFile")
        if sce.SoundFile != "":
            layout.operator("sound.test")

class SoundTestOp(bpy.types.Operator):
    '''Sound Test'''
    bl_idname = "sound.test"
    bl_label = "Sound Test"
    channels = IntProperty(name="Channels",default=3,description="Number of frequencies to split",min=1,max=24)
    minf = FloatProperty(name="Min Freq",default=0.0,description="Minimum Freq",min=0,max=10000.0)
    maxf = FloatProperty(name="Max Freq",default=10000.0,description="Maximum Freq",min=100.0,max=1000000.0)
    use_log = BoolProperty(name="Log Scale",default=False,description="Use Log scale for channels")
    def invoke(self, context, event):
        #create a Sounds action

        wm = context.window_manager
        return wm.invoke_props_dialog(self)

    def draw(self, context):
        layout = self.layout
        for prop in ("channels", "minf", "maxf", "use_log"):
            layout.prop(self, prop)

    def execute(self, context):
        # set to frame 1
        ob = context.object
        soundaction = bpy.data.actions.new("%dChannelEQ" % self.channels)
        #soundaction = fetch(bpy.data.actions,"Sounds",True)  #need to delete baked curves to use
        if not soundaction:
            print("NO ACTION")
            return {'CANCELLED'}
        if not ob.animation_data:
            ob.animation_data_create()
        ob.animation_data.action = soundaction
        use_log = self.use_log
        if use_log:
            if self.minf == 0:
                self.minf = 1
            LOW = log10(self.minf)
            HIGH = log10(self.maxf)
        else:
            LOW = self.minf
            HIGH = self.maxf
        RANGE = HIGH - LOW
        file = context.scene["SoundFile"] # do some proper file check here.
        n = self.channels
        for i in range(n):
            freq = 'Channel%d' % i
            ob[freq] = 0.0
            low = LOW + i * RANGE / n
            high = LOW + (i + 1) * RANGE / n
            if use_log:
                low = 10 ** low
                high = 10 ** high
            # the bake op works on selected curves, so make one per channel prop
            fcurve = soundaction.fcurves.new(data_path='["%s"]' % freq)
            fcurve.select = True
            bpy.ops.graph.sound_bake(filepath=file, low=low, high=high, attack=0.005, release=0.2, threshold=0, accumulate=False, use_additive=False, square=False, sthreshold=0.1)
            fcurve.select = False
        return {'FINISHED'}


def register():
    bpy.types.Scene.SoundFile = StringProperty(name="SoundFile",
        description="Sound File for Equaliser Input",
        subtype='FILE_PATH',
        maxlen= 1024,
        default= "")
    bpy.utils.register_class(SoundTestPanel)
    bpy.utils.register_class(SoundTestOp)

def unregister():
    bpy.utils.unregister_class(SoundTestPanel)
    bpy.utils.unregister_class(SoundTestOp)
    del bpy.types.Scene.SoundFile

if __name__ == "__main__":
    register()

# example invocation of the bake op with the file-browser filter settings:
#bpy.ops.graph.sound_bake(filepath="C:\\Documents and Settings\\batFinger\\My Documents\\Downloads\\burnout.wav", filter_blender=False, filter_image=False, filter_movie=True, filter_python=False, filter_font=False, filter_sound=True, filter_text=False, filter_btx=False, filter_collada=False, filter_folder=True, filemode=9, low=0, high=100000, attack=0.005, release=0.2, threshold=0, accumulate=False, use_additive=False, square=False, sthreshold=0.1)


Added log scale.
To do… file checking and setting up other props to pass to the sound_bake op. Unbake option.
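The linear/log band split in execute() can be sanity-checked outside Blender; here is a minimal pure-Python sketch of the same logic (the function name is mine):

```python
from math import log10

def band_edges(n, minf, maxf, use_log=False):
    """Split [minf, maxf] into n channel bands, linearly or on a log10 scale."""
    if use_log:
        minf = max(minf, 1.0)  # log10(0) is undefined, clamp as the script does
        low, high = log10(minf), log10(maxf)
    else:
        low, high = minf, maxf
    step = (high - low) / n
    edges = []
    for i in range(n):
        lo, hi = low + i * step, low + (i + 1) * step
        if use_log:
            lo, hi = 10 ** lo, 10 ** hi  # back to Hz
        edges.append((lo, hi))
    return edges
```

With use_log, three channels over 1–1000 Hz give the decade bands (1, 10), (10, 100), (100, 1000), which is the point of the log option.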


speaker_tools_2.63_r50555.zip (27.2 KB)

Did you get anywhere with this?
I tried force feeding it a valid local file path and I still get an error.

Could you elaborate on the error?.. works OK for me. To run: run the script, go to the props panel in the graph editor, select an object, select an audio file, hit the op and it bakes the f-curves to ID props.

The error I am getting is ‘Unsupported audio format’ on line #114.

bpy.ops.graph.sound_bake(filepath=file, low=low, high=high, attack=0.005, release=0.2, threshold=0, accumulate=False, use_additive=False, square=False, sthreshold=0.1)

I have tried both WAV and MP3 file formats and they both do the same thing. Maybe this SVN version 43096 I am running does not read WAV or MP3 files…?

Dunno Atom,

works fine for me with both wav and mp3 files in r43053 on XP w32. If someone else could check it would be most appreciated.

Hoping to see some bopping bleebles soon.

The next thing I want to do with this is add a “select channel” menu to the right-click menu on a property, to drive the prop.

You might want to rename it from ‘test’ in the panel. I was not sure if I was actually generating something or a check was being made. I am on XP64 maybe I will pull down a 32 bit version. I do hear the sound via the VSE, however.

Ok found some audio files that don’t work… the error comes from the “bake sound to fcurve op”. Not much I can do about that. Most of the time I’ve done some filtering and exported to wav format using audacity and haven’t come across this error before.

ok so for me your EQ.blend file works great.

but I haven’t got your script working for my own audio file yet… I’m working on that now.
It does not seem to like .mp3 files.

The script works for me with a wave file, but it does not actually make the selected object move.

It bakes the f-curves and adds custom properties. After that I had to create
the z-axis scale driver and assign the custom property’s RNA data path to it by hand…

Is that how its intended to work?

cool script!

Yes that’s the way it’s intended to work. Looking into a way of using a menu to add a driver from the right click menu over the property.
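For reference, the by-hand step can also be scripted; a minimal sketch, assuming the custom property is named Channel0 as the bake script creates (this only runs inside Blender):

```python
import bpy

ob = bpy.context.object
# add a driver on the Z component of scale (index 2)
fcu = ob.driver_add("scale", 2)
drv = fcu.driver
drv.type = 'AVERAGE'  # driver value = average of its variables
var = drv.variables.new()
var.name = "ch0"
var.targets[0].id = ob
var.targets[0].data_path = '["Channel0"]'  # RNA path to the baked custom prop
```

That is essentially what the right-click menu idea would automate.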

I just pulled down this your BLEND file to try it out.
I open the file and run the script.
Then I switch to Default layout mode and press Play.
I hear the sound and press Rewind.
Blender 2.6.1 r43008 crashes.

Is there a certain minimum Blender version I should use with this script?

The file is intended more as a holder for the scripts than a sample… There is a sound file in there I used to check it… I think it may not have packed correctly. Doesn’t crash for me in r43420.

Try the scripts in a new file; add a speaker and the “visualiser” should become available in the speaker data panel.

Also, I forgot to mention: you have to hit “On” to make that the sound that is used to add a driver.

I tried it on r43224 and no crash so yeah!

I am still trying to understand how to actually use this.
Here is my workflow…
Create a Speaker object.
Switch to the Data TAB for the speaker object.
Browse to a wave file. (I notice only mono is supported by Blender.)
Click Add New Spectrum and accept the default setting by clicking OK.
Your script changes my properties window into a graph editor. Is this on purpose?

Now I am at a loss at what to do.
I have no f-curves in the window.
I can still hear my WAV file when I click Play but nothing happens in the scene…
Argh! I am checking the console and I still see the dreaded Unsupported Audio Format. I guess I am going to have to report this bug in Blender. I have supplied various stereo and mono WAV files at 44.1 kHz and 22 kHz, uncompressed. I have tried files output from Sonar, Ableton, After Effects and Quicktime.

How do you make your WAV files?
On a side note, I extracted your WAV file and opened it up in VLC, and VLC cannot read the file (on Windows). Quicktime reports it as an 11 kHz 16-bit mono WAV. I converted my audio to 11 kHz 16-bit mono WAV and it still crashes Blender.

It sounds like it is falling over at the bake sound to f-curve step. I was hoping using the speaker’s sound import would fix that. I’ve put in the trusty ol’ try… except to stop it getting stuck in the graph editor. It needs to be in the graph editor to get context for the bake op.

Without using my script go to the graph editor add a keyframe then select “bake sound to fcurve” op from the key menu. Try loading your sound files from there. This is the step where you are getting the unsupported audio format…

It makes sense for the speaker object to be only mono. If you have stereo for instance to get the effect you need two speakers in your scene, just like you have two on your stereo headset or hi-fi.

Mostly the WAV files I use have been saved to one-channel WAVs from other formats using good ol’ Audacity, with IIRC no particular changes to the default settings.

Made a few fixes/updates

Wired in all the bake sound to fcurve properties
Made the slider mins and max the min and max value of the sound
Added the min and max value of the sound to a custom prop on the action (note to self add to the speaker data instead) so the values can be normalized. (min 0 max 1)
Added a try/except in case the bake sound op spits on the file.
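The normalization amounts to a linear remap; a sketch (the function name and the amplify argument are my illustration — the script stores the min/max as custom props):

```python
def normalize(value, vmin, vmax, amplify=1.0):
    """Remap a baked channel value into [0, 1] using the sound's stored
    min/max, then scale by an optional amplify factor."""
    if vmax <= vmin:
        return 0.0  # degenerate range: silent or constant channel
    return (value - vmin) / (vmax - vmin) * amplify
```

With min 0 and max 1 the remap is the identity, which is why storing them on the action makes the sliders behave consistently across sounds.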

There is a sample wav file in the zip that works for me.

To run.

Run equalizer.py from the text editor.
Add a speaker object to the scene
In the speaker data panel add a soundfile to the speaker.
Click the add sound spectrum button
Choose your settings and click Ok … if all goes well you will have a bunch of sliders
Click the “On”… (oops, forgot to fix this bit properly) this makes this sound the “context” sound

Next bit… I’ve kept this part separate because it overwrites the View Docs operator. Using the right-click context menu seems to me the simplest place to add custom drivers from. Overwriting the op is “hackish”; I’ll put in a feature request to have a menu available to Python in the toolbox, till (if?) then, hack hack I suppose.

Run eqmenu.py from the text editor.

Right click over any property of any object
Choose “View Docs” from the menu
If all has gone well you will see a menu of the channels
Click on one and it will add that channel’s value as a driver.

Still very much a WIP. Any feedback would be most appreciated. I’m working on merging this with some NLA lipsync code hence the NLA track stuff.

@batfinger: I gave this script a try on OSX and it does work, no crash like the Windows build. I have generated 12 channels from a mono wave file.

The next step I get an error, however. When I right-click on a channel and choose View DOCs I get another panel that pops up. When I click on one of those channel buttons I get an error.

1/25/12 12:22:43 PM, from Console:
Traceback (most recent call last):
  File "/EqMenu.py", line 35, in invoke
  File "/EqMenu.py", line 8, in main
IndexError: bpy_prop_collection[-1]: out of range.

Ah ok… yeah hmmm…

Not sure how to fix that at the moment. I’m adding drivers using the op but then looking for them with animation_data.drivers[-1], which is what throws the IndexError when there are none.

So some props will work, some won’t. For instance world exposure won’t work; cube loc/rot/scale and most things on the materials panel like intensity and colour will work… I’ll try and get my head around how to work it out for non-object types…

The Equalizer.py portion seems to work fine. I can link the z-scale of the cube to one of the channels and it moves to the beat.

Fixed a few things. Took out some bugs mostly.
More success with adding drivers from context menu

Using this code to find where the new driver was created

def finddriver(self, context):
    channel = self.channel
    driver = None
    # tempted to just use bpy.data.objects / meshes ...
    search_list = [context.scene]
    if context.world is not None:
        search_list.append(context.world)
    if context.object is not None:
        search_list.append(context.object)
        if context.object.data is not None:
            search_list.append(context.object.data)
        for slot in context.object.material_slots:
            if slot.material is not None:
                search_list.append(slot.material)
    for search in search_list:
        # search list could be extended for texture_slots[...].texture etc.
        if search.animation_data is None or not len(search.animation_data.drivers):
            continue
        driver = search.animation_data.drivers[-1]
        dprint("DRIVER TYPE %s %s %s" % (driver.id_data, driver.data_path, context.window_manager.clipboard))
        if driver.select and driver.data_path.find(context.window_manager.clipboard) > -1:
            print("return %s" % driver.data_path)
            return driver
    self.report({'WARNING'}, "No Driver found ... REMOVED")
    print("No driver found")
    return None

Feel there is a much simpler way to do this. Wired in materials not textures.
All drivers in the sample file were added with the menu.

Few more fixes.

Put an amplify and normalize var in the driver method.
General clean up.

The sample has some drivers on a particle system / force field; these were added with paste driver. The search code in the previous post could be extended to cover these.


EQ2.blend (1.29 MB)