Why I LOVE AudaSpace

Hello everyone.
I was bored, so I decided to post here just to share this and chat with someone.
Well, this is about AudaSpace. NeXyon has done a great job with his C++ code, but unfortunately only a small subset of AudaSpace is available through the Python API.

Let’s face it: AudaSpace is a proper sound engine, nuts and bolts included; despite this, NeXyon says it needs a full refactor.

I know very well this is not really within Blender’s scope, but I think it’s fun to play with, partly because it’s like a simplified PureData (VERY simplified).

By the way, it already lets you do some crazy/cool things as is; imagine what it could do with a refactor and a better Python API.

surprise.blend (470 KB)

Anyhow, I’m attaching a .blend file showcasing my simple, fun experiment.
Make sure you have NeXyon’s AudioNodes addon (you can download it here: https://github.com/neXyon/audionodes )

I hope you like this and, if you’re a sound person too, maybe post your own work here.


P.S.: Also have a look here: https://wiki.blender.org/index.php/User:NeXyon/GSoC2010/Audaspace
What would AudaSpace be with the ability to automate oscillators, etc.?

More to the point, what would we have if we could move all this stuff into Animation Nodes instead of its own separate node structure?

I looked at the chap’s GitHub; it seems he doesn’t want to do anything more with it, which is a pity. But how is your Python? Do you think between us we could move this into AN?

On a small technical note, I notice some crackling when I play longer .aif files, and .mp3 files are really bad and crackly - but there may be something in this. I have not yet worked out where he has put the AudaSpace C++ library; something for me to work on.

Cheers, Clock.

Having said all that, I did this VERY quickly:

And it plays the file when I advance the timeline; the problem is that it plays from the start on every frame… Work needs to be done, I think… :stuck_out_tongue_winking_eye:

Cheers, Clock. :cocktail:

And work got done, here is where I am at :mage::

The new node (the disgusting pink one) takes an input note name, converts it into a frequency through a lookup function in an external .py file, then sends this to AudaSpace, along with the required duration, sample size and volume.
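The note-to-frequency lookup can be done with the standard equal-temperament formula (A4 = 440 Hz). Here is a minimal sketch; the function name and the note-name format are my own illustration, not the actual contents of the external .py file:

```python
import re

# Semitone offsets of the note letters relative to C
NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_to_freq(name):
    """Convert a note name like 'A4' or 'C#3' into a frequency in Hz,
    using equal temperament with A4 = 440 Hz."""
    m = re.fullmatch(r"([A-G])([#b]?)(-?\d+)", name)
    if not m:
        raise ValueError(f"bad note name: {name}")
    letter, accidental, octave = m.groups()
    semitone = NOTE_OFFSETS[letter] + {"#": 1, "b": -1, "": 0}[accidental]
    midi = (int(octave) + 1) * 12 + semitone   # MIDI convention: C4 = 60, A4 = 69
    return 440.0 * 2 ** ((midi - 69) / 12)
```

So `note_to_freq("A4")` gives 440.0 and each octave up doubles the frequency.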

I am also squaring the wave as an option, through the Square checkbox and Square Trip value. I have built the node so that it will not execute again until the note has finished playing; I calculate this from the note’s duration and start frame.
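Both ideas can be sketched in plain Python. The function names, the fps default, and the reading of “Square Trip” as a clipping threshold on the sine are my assumptions, not the node’s actual code:

```python
import math

def square_sample(t, freq, trip=0.0):
    """Square a sine wave by clipping: +1 when the sine exceeds the trip
    threshold, else -1. A non-zero trip skews the duty cycle away from 50%."""
    return 1.0 if math.sin(2 * math.pi * freq * t) > trip else -1.0

def note_finished(current_frame, start_frame, duration_s, fps=24):
    """True once the note's duration (in seconds) has elapsed since its
    start frame, i.e. when the node may be allowed to fire again."""
    return current_frame >= start_frame + duration_s * fps
```

With a note of 1 second started on frame 12 at 24 fps, `note_finished` first returns True on frame 36.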

At the moment I am just driving this from my own “Periodic Trigger” node, which sends a True each time the cycle length is reached, offset by the phase. So a cycle length of 20 and a phase of 12 sends a True on frames 12, 32, 52, etc. Next I need to drive this from an animation. Having four of these nodes has no impact on Blender’s performance; it might be interesting to try 20 of them. :flushed:
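The Periodic Trigger logic as described boils down to a one-line modulo test (a sketch of the idea, not the node’s actual code):

```python
def periodic_trigger(frame, cycle, phase):
    """True on frames phase, phase + cycle, phase + 2*cycle, ..."""
    return frame >= phase and (frame - phase) % cycle == 0
```

For cycle=20 and phase=12, this fires on frames 12, 32, 52, matching the example above.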

I have also looked at playing sound files; this works well, so a “Sampler Synth” is possible, but probably a lot of work… If you know how we can apply more filters to the sound, for example to add distortion, let me know. I am also working on a Delay (Echo) addition using the AudaSpace functions.
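For the Delay (Echo), the real node would presumably lean on AudaSpace’s own functions, but the underlying principle is just mixing a delayed, attenuated copy of the signal back in. A single-tap sketch over a raw sample list (illustration only, not the AudaSpace API):

```python
def add_echo(samples, delay_samples, feedback=0.5):
    """Single-tap echo: mix a copy of the signal, delayed by delay_samples
    and scaled by feedback, into an output buffer padded to fit the tail."""
    out = list(samples) + [0.0] * delay_samples
    for i, s in enumerate(samples):
        out[i + delay_samples] += s * feedback
    return out
```

A multi-tap delay or true feedback echo repeats this, feeding the output back in at decreasing gain.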

Perhaps we should consider getting this thread moved somewhere else, to get more people involved in the testing and development.

Cheers, Clock. :musical_keyboard: :beers: