3D audio equalizer particle field

Recently I wrote a script that bakes an audio file's amplitude to the scale of a long line of equalizer bars. The script makes each bar responsive to a certain range of frequencies.
Then the script makes each bar a particle emitter, and the result is a beautiful 3D reinterpretation of the audio waveform.

To enhance the feel of the whole thing, each bar's scale is also scripted to drive the size and translucency of its own particles.

So far I have some wonderful results:

However, after a lot of tweaking I can't seem to get the frequency resolution of the bars very high. If I sing a G flat I'd like the G flat bar ALONE to move… but instead every bar in that octave responds, giving a very rounded topography in the particle field.

When I bake the audio data to the scale I'm only using the frequency range filter for each bar.
I see that there are other attributes, like attack, but I don't really understand how to use them.

Can anyone help?

Interesting, so you are creating a particle system for each band?

It is hard to help without seeing the script.

Correct… it's a pretty large script, but here is the part that bakes the audio:


def BakeAudio(i, step):
    bpy.ops.anim.keyframe_insert_menu(type='Scaling')  # insert scale keyframes for the active object
    bpy.context.active_object.animation_data.action.fcurves[0].lock = True  # lock the X scale curve
    bpy.context.active_object.animation_data.action.fcurves[1].lock = True  # lock the Y scale curve so it can't be changed

    bpy.context.area.type = 'GRAPH_EDITOR'  # switch the current window pane to the graph editor

    # map the amplitude of the audio file onto the scale f-curve of our object,
    # using the frequency band for bar i: [i * step, (i + 1) * step]
    bpy.ops.graph.sound_bake(filepath="filepath", low=i * step, high=(i + 1) * step)

    bpy.context.active_object.animation_data.action.fcurves[2].lock = True  # lock the Z scale so it can't be tampered with

    bpy.context.area.type = 'TEXT_EDITOR'  # switch the window pane back to the text editor

And the portion of code I'm asking about is:
bpy.ops.graph.sound_bake(filepath="filepath", low=i * step, high=(i + 1) * step)

Where I have defined the high and low frequency cutoffs, and could define many more things, found here:
http://www.blender.org/documentation/blender_python_api_2_61_release/bpy.ops.graph.html?highlight=bpy.ops.graph.sound_bake#bpy.ops.graph.sound_bake

but do not really understand how they work.
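As far as I can tell from the docs, attack and release are rise and fall times for the baked curve. My rough mental model is an envelope follower that chases the amplitude, like the sketch below. This is only my guess at the behaviour, not Blender's actual implementation:

```python
import math

def envelope(samples, sample_rate, attack=0.005, release=0.2):
    """Rough model of sound_bake's attack/release: the output rises
    toward the amplitude at the attack rate and falls back at the
    release rate (my assumption about what Blender does internally)."""
    a = math.exp(-1.0 / (attack * sample_rate))   # fast-rise coefficient
    r = math.exp(-1.0 / (release * sample_rate))  # slow-fall coefficient
    env, out = 0.0, []
    for s in samples:
        x = abs(s)
        coef = a if x > env else r  # attack when rising, release when falling
        env = coef * env + (1.0 - coef) * x
        out.append(env)
    return out

# A short burst followed by silence: the curve jumps up quickly,
# then decays slowly instead of dropping straight to zero.
curve = envelope([1.0] * 50 + [0.0] * 50, sample_rate=1000)
```

If that model is right, a short attack makes a bar snap to the beat and a long release makes it fall back gently, so these options change how smooth or jittery the bars look over time, not which frequencies they respond to.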

Hey wow, your code is so different from mine! I'm new to Python and I can barely understand it!
Very, very cool, thanks so much for the reply.

I'm using linear; my variable "Step" is simply defined like this:
Step = MaxFrequency / NumberOfBars
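For example, with just 10 bars my linear split looks like this (a quick sketch; MAX_FREQ and NUM_BARS here are example values, not the ones in my script):

```python
# Linear banding: every band is the same width in Hz.
MAX_FREQ = 4186.0  # roughly the top piano key, C8
NUM_BARS = 10

step = MAX_FREQ / NUM_BARS  # 418.6 Hz per band
bands = [(i * step, (i + 1) * step) for i in range(NUM_BARS)]
print(bands[0])  # (0.0, 418.6)
```

So the bottom bar alone covers everything from the lowest notes up past middle C, which matches the rounded topography I'm seeing.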

I see that in your code you have an option to use log:

LOW = log10(self.minf)
HIGH = log10(self.maxf)

I know that log creates a curve on a graph, but I don't quite understand how it would affect things in this case.

As can be seen here http://en.wikipedia.org/wiki/Piano_key_frequencies or http://www.techlib.com/reference/musical_note_frequencies.htm, the frequencies increase exponentially. When that is the case, it is often more appropriate to use a log scale to divide the range, like the Richter scale for earthquakes.

If you look at the chart on the wiki, the notes on a piano go from around 27.5 Hz to 4.2 kHz. If I split that up into 10 equal chunks, i.e. roughly 420 Hz per band, the first band alone already reaches from the bottom of the keyboard past middle C to around the A above it.
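To get single-note resolution you want equal ratios, not equal widths: 12 log-spaced bands per octave gives one band per semitone. A sketch (27.5 Hz is A0, the lowest piano key; the key numbering is my own convention, not from your script):

```python
import math

# Log-spaced bands: each band's edges differ by a constant RATIO,
# so 12 bands per octave line up exactly with the semitones.
LOW, HIGH = 27.5, 4186.0  # piano range in Hz, A0 to C8
SEMITONES = int(round(12 * math.log2(HIGH / LOW)))  # 87 steps = 88 keys

def band(n):
    """Frequency edges of semitone band n, counting up from A0."""
    lo = LOW * 2 ** (n / 12.0)
    hi = LOW * 2 ** ((n + 1) / 12.0)
    return lo, hi

# G flat above middle C (F#4) is 45 semitones above A0:
lo, hi = band(45)
print(round(lo, 2), round(hi, 2))  # roughly 369.99 to 392.0
```

With bands like these, a sung G flat falls inside exactly one band, so only that bar should move.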

PS: were you able to run my equalizer script?

Please, can you send me this? I would love to be able to use it when it is done…