[Addon] [Feedback wanted] Real time audio synthesis with Blender nodes.

Hi.

We decided to make a node-based audio synthesizer. The Blender node system seemed the best way to implement this, so here is an early alpha of Audionodes. We only support Linux and macOS right now. See here for installation and usage instructions: https://github.com/nomelif/Audionodes

Eventually this should grow to be usable by DJs in live performances and for musicians to define digital instruments compatible with a MIDI keyboard.

Do not expect a finished product. Any and all feedback is greatly welcome.

EDIT:

We support Windows too, now. Also, MIDI on Windows and Linux should work.

Very interesting thing. Good luck!

Cool idea! Would love to try but I’m on Windows…

There is a sound synthesizer script I put into the group’s section you should check out, as it will help you mix multiple frequency samples together; worth a quick study at least. I also have some other code to contribute to your project (graphing techniques, the Windows API for audio, and implementing a version in the BGE). Let me know if there are any other features you want help with, too. I am also looking for a partner to distinguish sampled acoustic properties to get tablature from many musical instruments.

Thanks to urkokul and mbbmbbmm. BrockyL: Could you post a link? Also, the biggest problem right now is to distribute Python libs. If you happen to be knowledgeable in that, could you please help? On paper (read: untested) our code should run on Windows too, but we have not found a straightforward enough way to install the libraries. We would like to package NumPy and PyAudio with the thing. That would also make life easier on macOS and Linux.

I think (without having consulted the other person coding the thing) that the BGE is of little interest right now. The whole point is to modify the node setup in real time, and entering the BGE freezes that part of Blender.

Also, help with graphing would be nice. An oscilloscope node comparable to the Viewer node in the compositing tree was what we had in mind. That is, however, a very low priority.

EDIT: Apparently Blender comes with NumPy. If you know how to play a NumPy array through aud, that would do it. Then no shipping of extra modules would be needed.
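To make the goal concrete: whatever backend ends up playing the audio (aud, PyAudio, PyGame), the synthesized signal ultimately has to become a buffer of samples, typically signed 16-bit PCM. Here is a minimal stdlib-only sketch of producing such a buffer — not the addon’s code, just an illustration of the kind of data being handed to a playback library:

```python
import math
import struct

def sine_pcm16(freq, duration, sample_rate=44100, amplitude=0.5):
    """Render a sine wave as a signed 16-bit little-endian PCM byte buffer."""
    n = int(sample_rate * duration)
    samples = (
        int(amplitude * 32767 * math.sin(2 * math.pi * freq * i / sample_rate))
        for i in range(n)
    )
    return struct.pack("<%dh" % n, *samples)

buf = sine_pcm16(440.0, 0.1)  # 0.1 s of A4, 8820 bytes at 44.1 kHz
```

Any of the candidate backends can consume a buffer like this (or the NumPy equivalent of it), which is why the choice of playback library is mostly a packaging question rather than a synthesis one.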

RE-EDIT: We decided to fight the fearsome PyGame, as we found a more-or-less meaningful way to bundle it.

Hello.

I’m the other person working on the project nomelif has mentioned. You could say I came up with the idea.

Anyways, I made a little demo video showing off what our project can currently do (coupled with my imagination). Hopefully those of you who can’t use it yet can also get a taste.

hah, kind of like BlenderCollider (SoundPetal) https://github.com/zeffii/SoundPetal

good luck with this guys! great fun will be had.

The OSS example I have is here: Groups → Uncategorized → Composer Group → “synthesized and not sounding good”. The win32api module implementation for all input/output USB and audio devices is on my website, sites.google.com/site/abstractind; it even shows you the device name to look for in your registry. Edit: Sorry, still early for me. I meant to point out that if you use the win32api, either by linking through the preset %PATH% variable or by directly targeting an MSVC version using the ctypes module, then you can have portability and drop the pyaudio/NumPy setup completely (NumPy does not come with any Blender release I have seen). I am working on Wine, so it’s a little different for me, but I will have a Windows machine just as soon as I reinstall it with my hardware IDs (evidently, if you change three pieces of hardware you have to reactivate it).

We greatly appreciate the gesture. We will surely go through those. Win32 won’t be ideal: we need NumPy anyway for the math, and win32 would tie us to Windows (if I am wrong, please tell me). Neither of us actually uses Windows, so we would be more than glad to have some library take care of that for us. We ended up trying to port the whole thing over to PyGame: it installs in a few seconds via pip (which we will have to ship ourselves, but that won’t be such a problem). NumPy is apparently bundled with Blender. PyGame also provides us with a MIDI interface for future use.

It must’ve required superhuman effort on the part of previous posters not to joke about the ‘feedback wanted’ in here!

Really cool project, actually. Getting even rudimentary audio synthesis into the nodes could make for some interesting animation where audio and visuals are linked!
How far are you expecting to take this? And would using the nodes to drive something like Pure Data behind the scenes be a good idea?

must’ve required superhuman effort on the part of previous posters not to joke about the ‘feedback wanted’ in here!

Or a truly formidable lack of perspective. :slight_smile: My point was rather to get feedback on the idea itself. I also thought shipping our dependencies wouldn’t be akin to driving a bus through the eye of a needle and thus we would have had something testable earlier.

How far are you expecting to take this?

Well, right now we are hunting the fearsome pip package manager to get PyGame shipped on Windows. Then we will attempt to debug our sound system with PyGame itself. Our top two concerns can be summarised thusly: actually shipping something.

After that, we thought that hooking up a MIDI device would be our next goal along with getting playback of audio files. Then filters along the lines of pitch bending.

At the end we thought of this thing as an instrument for a live performer. That differentiates it (to my understanding) from SoundPetal. Correct me if I am mistaken. Yesterday we were throwing around the idea of driving lights (read: spots on a stage) with the thing.

Getting even rudimentary audio synthesis into the nodes could make for some interesting animation where audio and visuals are linked!

This isn’t really on the roadmap (or at least we didn’t think of it) right now, but by virtue of being node properties everything is indeed animatable already.

And would using the nodes to drive something like Pure Data behind the scenes be a good idea?

Considering the somewhat comical contortions we are having with PyGame, shipping this doesn’t really sound realistic right now.

Really cool project, actually.

Thanks a lot!

I thought I should document the current state of the dependency hell.

macOS and Linux are manageable, but Windows causes trouble. As Blender is our only Python interpreter, we have to run the get-pip.py installer inside our own script using whatever sorcery Blender happens to ship with at any given time. We also have to separately download the package and then attempt to install it. However, the last step fails. Fixing this would be akin to a minor miracle.

Here is the code if anyone happens to know more: https://gist.github.com/nomelif/c92e4011bd84bd33a210939d380f87f6

Would it be OK if, on macOS and Linux, we do our job using pip (that is: a few seconds’ lapse before the addon becomes usable, no user interaction required), and on Windows we launch PyGame’s interactive installer? This would be easy to implement.

Maybe I should comment on some of the visions (while nomelif is working on actually getting things functioning).
My vision is for it to be a somewhat full-fledged modular synthesizer. It can clearly be applied to a much broader use case (microphone inputs, for example, would take it in a completely different direction), so this might not be the only aspect of it. This is where the original inspiration came from, however.

The next big things to allow for a broader soundscape would be filters and effects. For filters, mainly equalizers, low/high-cut and such. A delay effect would be nice. Another important feature I need to work on is trigger signals. Right now, if you want to make a cyclic envelope, you’d probably use a saw-wave oscillator with a negative frequency (this essentially flips the waveform of the output). This could be done much more nicely with a clock and an envelope node. A trigger signal could also be sent out by any physical input.
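The negative-frequency trick can be illustrated with a plain phase-accumulator sawtooth (a sketch, not the addon’s actual oscillator): because the phase is wrapped into [0, 1), negating the frequency mirrors the output, turning a rising ramp into a falling one — which is exactly what makes it usable as a decaying envelope.

```python
def saw(freq, t):
    """Naive sawtooth oscillator: ramps from -1 to 1 once per period."""
    phase = (freq * t) % 1.0  # Python's % always lands in [0, 1), even for freq < 0
    return 2.0 * phase - 1.0

# With freq > 0 the ramp rises; with freq < 0 the same formula yields
# the vertically flipped waveform: saw(-f, t) == -saw(f, t) away from the wrap points.
```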

A small status update:

  • We got the pygame backend working on a variety of devices that Ollpu owns.
  • The patch was committed to git.
  • We have actually devised a plan to install stuff on Windows. The user first needs to install Python 3.5 itself; using pip from Blender’s Python interpreter is truly too sorcerous. Then the addon can manage its own packages. This has not yet been implemented. It would be nicer than having pygame installed interactively, as pip would pick the right pygame build. Also, I would deem it more probable that people have Python 3.5 installed than any specific module. Finally, having a generally more complete Python environment does make life easier.
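The plan in the last point — letting the addon drive the user’s system Python rather than Blender’s embedded one — might look roughly like this. The helper names here are hypothetical, not the addon’s actual code; the one solid piece is that `python -m pip` avoids depending on a `pip` executable being on the PATH:

```python
import subprocess
import sys

def pip_install_cmd(python_exe, package):
    """Build the command line for installing `package` with a given interpreter.

    `python -m pip` works even when no standalone `pip` script is on PATH,
    and `--user` avoids needing administrator rights on Windows.
    """
    return [python_exe, "-m", "pip", "install", "--user", package]

def install_dependency(python_exe, package):
    # Hypothetical helper: run pip in a subprocess and report success.
    result = subprocess.run(pip_install_cmd(python_exe, package))
    return result.returncode == 0
```

The addon would then only need to locate the system interpreter (e.g. from the registry on Windows) and add its site-packages to `sys.path` inside Blender.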

It was implemented to allow a node system to generate SuperCollider synthdefs and send them to a SuperCollider server. The server would be responsible for receiving OSC/MIDI input and outputting the audio. Triggering from Blender’s node view to SuperCollider was possible. Sequencing triggers via the timeline in real time is possible, but not likely to be very accurate in the time domain.

Anyway, for various reasons it didn’t get to any mature state. I haven’t stopped wanting to do some sound design with the Blender nodeview, hence my interest in this project.

Installing PyAudio on Windows is almost trivial; I used PyAudio for a small example node for Sverchok a while back:

https://github.com/nortikin/sverchok/pull/815

– so it works if you can get users to fulfil the dependencies. I’d be happy to test on Windows :slight_smile:

I’ll leave the rest of my feedback on the GitHub issue tracker, if you don’t mind.

OK, saw that just now.

Ok, here is a late update on the project:

1. We support Windows. We have only tested it on one machine, but it should work. The instructions are here with the repository itself: https://github.com/nomelif/Audionodes/

The installation is as simple as it could be made to be. You install Python, run a script and add the addon to Blender.

2. We support MIDI keyboards on Linux. EDIT: And on Windows too, now!
We are trying to find ways to support them on other platforms too. For now you get the frequency of the tone, the start time of the tone (e.g. for volume fades) and the velocity (how hard the key was struck). You can really construct a keyboard-controlled synthesizer now.
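For reference, the frequency exposed by the keyboard node comes from the standard equal-temperament mapping of MIDI note numbers (A4 = note 69 = 440 Hz). A sketch of the conversion — not necessarily the addon’s exact code:

```python
def midi_note_to_freq(note, a4=440.0):
    """Convert a MIDI note number (0-127) to its frequency in Hz.

    Each semitone is a factor of 2**(1/12); note 69 is A4.
    """
    return a4 * 2.0 ** ((note - 69) / 12.0)

# midi_note_to_freq(69) -> 440.0 (A4), midi_note_to_freq(81) -> 880.0 (A5)
```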

3. We revamped the documentation. It should be enough for people to install the thing on their own.

4. We have made a few demos. See here. They include the sound and an image of the network for future replication.

5. Misc. changes.
We have added half a dozen new nodes. As probably said before, we use PyGame. As not said before, our audio sockets now support multiple parallel channels. (Think multiple notes in a chord playing at the same time.)
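Mixing the parallel channels described above boils down to summing the sample streams and scaling to stay within range. A minimal sketch under that assumption (hypothetical helper names, not the addon’s socket code):

```python
import math

def sine(freq, n, sample_rate=44100):
    """One channel: n samples of a sine wave in [-1, 1]."""
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

def mix(*channels):
    """Sum parallel channels sample by sample, scaled so the result can't clip."""
    scale = 1.0 / len(channels)
    return [scale * sum(samples) for samples in zip(*channels)]

# Three notes of an A-major chord sounding at once through one socket:
chord = mix(sine(440.0, 1024), sine(554.37, 1024), sine(659.25, 1024))
```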

Hopefully people can install it now. It has been a much tougher job to actually port this to Windows than we ever imagined.

I made a screencast telling how to install it on Windows.

Quick bump: we should now support everything on Windows. That is: both basic synthesis and playing the nodetree with a USB MIDI keyboard.