We're In The MIDI & Audio & DAW Real-Time!

The automatic Arpeggio generator is now in place; it generates climbing or falling arpeggios based on the input note:

Works in 3, 4 & 5 note flavours. :musical_note:
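In Python terms, the note maths could be sketched like this (a sketch only; the intervals here are stacked major-chord tones and are my assumption, not the node's actual code):

```python
# Hypothetical intervals: root, 3rd, 5th, octave, 10th (in semitones).
INTERVALS = [0, 4, 7, 12, 16]

def arpeggio(root, size=3, climbing=True):
    """Return `size` MIDI note numbers climbing or falling from `root`."""
    if not 3 <= size <= 5:
        raise ValueError("size must be 3, 4 or 5")
    notes = [root + i for i in INTERVALS[:size]]
    return notes if climbing else notes[::-1]
```

So `arpeggio(60, 3)` climbs from middle C, and `climbing=False` simply reverses the run.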

Cheers, Clock. :beers:

The Arpeggio now plays up to 9 notes, so a full chord up and down again; each magenta note in the sequencer plays an Arpeggio sequence. I have also added a Chord player, 3 to 5 notes based on the input note. For “Fancy” chords, one would have to construct them from notes in the normal way.
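The 9-note up-and-down run and the chord player could be sketched like this (the stacked-thirds intervals are my guess, not the node's internals):

```python
# Hypothetical chord intervals (semitones above the root).
INTERVALS = (0, 4, 7, 12, 16)

def up_and_down(root):
    """Full chord up, then down again: 5 notes up + 4 back = 9 notes."""
    up = [root + i for i in INTERVALS]
    return up + up[-2::-1]  # descend without repeating the top note

def chord(root, size=3):
    """3 to 5 simultaneous notes built on the input note."""
    return [root + i for i in INTERVALS[:size]]
```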

Cheers, Clock. :drum: :musical_note: :musical_keyboard:

PS. Thanks to NeXyon we will soon have a modulate() function in AudaSpace to play with. :yum:

Now we have a node (on the left) to load sound clips into VSE:

And modified Sequencer interface:

All the labels and volume controls keep up with the camera as the song is played.

cheers, Clock. :cocktail:

With the latest build of Blender 2.8, we now have a Square Oscillator to add to the Sine, Sawtooth, Triangle and Silence, and a new modulate() function that allows me to modulate one sound with another, so I can now do daft things, like this:
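Conceptually, modulating one sound with another is sample-by-sample multiplication (ring modulation). This numpy sketch shows only the maths; it is not the AudaSpace API itself:

```python
import numpy as np

def modulate(a, b):
    """Multiply two signals sample by sample (ring modulation)."""
    n = min(len(a), len(b))
    return a[:n] * b[:n]

rate = 48000
t = np.arange(rate) / rate
carrier = np.sin(2 * np.pi * 440 * t)  # audible sine
lfo = np.sin(2 * np.pi * 5 * t)        # slow modulator -> tremolo effect
out = modulate(carrier, lfo)
```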


I haven’t had so much fun since, well I can’t remember since when.

Next I need to make a serious 4-Oscillator FM Synth; I need to think about this for a while…

Cheers, Clock. :cocktail:

Now I have built a “proper” FM Synthesiser: four separate oscillators, each with its own waveform and its own start delay, so I can phase them, set them as rising or falling harmonics, and vary the levels. I can then add any number of filters to each oscillator.
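The core FM maths is y(t) = sin(2πf_c·t + I·sin(2πf_m·t)). My node chains four oscillators; this sketch shows a single modulator/carrier pair with made-up parameter values:

```python
import numpy as np

def fm(fc, fmod, index, seconds=1.0, rate=48000):
    """One carrier (fc) frequency-modulated by one modulator (fmod)."""
    t = np.arange(int(seconds * rate)) / rate
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fmod * t))

tone = fm(fc=220.0, fmod=110.0, index=2.0)
```

A 2:1 carrier-to-modulator ratio like this gives harmonic sidebands; the index controls how bright the result is.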

Here it is working with the sequencer; it’s just a single-note operation at the moment, I need to look at making it play up to 10 notes simultaneously:

Synthesiser Nodes:

Initial sound analysis for the setup shown above:


Seems very versatile, I am so pleased with AudaSpace and its new functions in the latest Blender 2.8 (5th April 2019 flavour).

Cheers, Clock. :musical_keyboard: :musical_note: :yum: :cocktail:

Now I have a VoCoder:

This modulates an OSC-generated sound with a sound file; you would normally use a voice file here to get a “Robot”-like voice, but in reality you can modulate with any sound file. The important thing here is to match the sample rates of both the OSC and the sound clip. I also added an LFO Modulator to make the sound even weirder (if that’s a word, probably I should say “more weird” - that’s today’s English lesson).

I just looked at the Info in Finder to get the sound clip’s sample rate:


Cheers, Clock. :cocktail:



Just a small, simple test to show the system working. The control empties are animated automatically by my MIDI Bake node; all the rest is done with my new DAW & SOUND nodes.

Cheers, Clock.

Some shots of the Sequencer, now with controls (these control Volume and LFO Frequencies):

This is the full node tree:

I think I now have most of the functions I need, so I will go through all my code (there are 35 new nodes in the suite and one functions file), make sure it all looks good, then upload it to GitHub.

Cheers, Clock. :cocktail::cocktail::cocktail::cocktail::cocktail::cocktail::toilet::face_vomiting: - :rofl:


First go at a Sampler Synth, such a lot more to do, but I now have a framework to work to:

The note objects determine the note played by their position: the actual note is calculated from the Y value, and the timings from the length and X value. These are then fed into the control node and on to the big yellow one, which works out which sound clip to play. I downloaded a full set of piano note clips, which are stored by note name as their file name.
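The Y-to-note lookup might look something like this (the 0.1-unit-per-semitone step and base note are hypothetical values for illustration, not what my node actually uses):

```python
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(midi):
    """MIDI 60 -> 'C4' (middle C), matching typical sample-pack naming."""
    return NAMES[midi % 12] + str(midi // 12 - 1)

def clip_for_y(y, step=0.1, base=36):
    """Assume each 0.1 Blender unit in Y is one semitone above MIDI 36."""
    midi = base + round(y / step)
    return note_name(midi) + ".wav"
```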

The only issue with this set is that they all start to produce sound at different times from the start of the file, so if I play C4, E4 & G4 I find the sound start times vary by up to 0.73 of a second. I need to edit the bloody lot in some programme, like Blender, using the “Cut at Frame” methods, or similar. :angry:
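One way round editing every file by hand would be trimming the lead-in silence programmatically; a minimal numpy sketch (the threshold value is a guess and would need tuning per sample set):

```python
import numpy as np

def trim_lead_in(samples, threshold=1e-3):
    """Drop everything before the first sample louder than `threshold`."""
    loud = np.flatnonzero(np.abs(samples) > threshold)
    return samples[loud[0]:] if loud.size else samples

# Fake clip: 1000 silent samples, then a tone.
clip = np.concatenate([np.zeros(1000), np.sin(np.linspace(0, 20, 4000))])
trimmed = trim_lead_in(clip)
```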

I really want to get my sound files from Reason, but I can’t, yet!

Next on the ToDo list will be a BeatSlicer, so I can cut a sound into slices then re-assemble it in a different order, pitch-bend the slices, etc. It’s just a time thing; I already have the pitch bender and slice routines in other nodes.

Cheers, Clock. :cocktail: :musical_keyboard: :musical_note:

Prototype Sound Slicer and Re-Joiner:


  1. Load Sound File.
  2. Cut Sound File to a set length at a set position.
  3. Slice Sound File into divisions (input top left).
  4. Re-join Sliced Sounds as per order in the “ReSlice Order” text box.

Note that in the example you can cut into 10 slices and then reassemble using 17 slices. Next I will investigate reversing various slices, or maybe even changing the pitch… Here I clipped the sound at 30 seconds with a duration of 10 seconds.
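The slice/re-join steps reduce to something like this (a sketch on a numpy buffer; the real node works on sound clips, and repeating indices is what lets 10 slices become 17):

```python
import numpy as np

def reslice(samples, divisions, order):
    """Slice a buffer into `divisions` pieces and re-join per `order`."""
    slices = np.array_split(samples, divisions)
    return np.concatenate([slices[i] for i in order])

sound = np.arange(100.0)                       # stand-in for sample data
out = reslice(sound, 10, [0, 0, 3, 2, 9, 9, 1])  # 10 slices, 7 used
```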

Cheers, Clock. :cocktail: :yum:

The node has grown a little and now acts like a loop-splitter-player (a bit like a Dr REX, if you know what that is…):

So, each level plays the next slice from the sequence. I also made this node to work out all the frequencies for any semitone above, or below, a given note, or frequency:


Saves me having to remember the formulae. :brain:
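The formula in question is the equal-temperament relation f(n) = f0 · 2^(n/12), for n semitones above (negative = below) a reference frequency:

```python
def shift(f0, semitones):
    """Frequency `semitones` above (or below, if negative) `f0`."""
    return f0 * 2 ** (semitones / 12)

a4 = 440.0
c_minus_2 = shift(a4, -81)  # C-2 is 81 semitones below A4 -> ~4.088 Hz
```

That ~4.088 Hz figure is the same C-2 value I use later as an LFO frequency.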

Cheers, Clock. :cocktail: :crazy_face:

Here’s the Loop Slicer working in the Sequencer and Song Editor:

Cheers, Clock. :cocktail:

It’s been a while, but now I am using Numpy Arrays to store the Piano Rolls and PyPianoRoll Python lib, along with MatPlotLib to plot the roll out to an image:
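The underlying array is the usual piano-roll layout: time steps down one axis, the 128 MIDI pitches across the other, velocities as values. A minimal numpy sketch (the resolution and helper are illustrative, not pypianoroll's API):

```python
import numpy as np

steps, pitches = 96, 128                     # e.g. one bar at 24 steps/beat
roll = np.zeros((steps, pitches), dtype=np.uint8)

def add_note(roll, pitch, start, length, velocity=100):
    """Mark a note as held for `length` time steps."""
    roll[start:start + length, pitch] = velocity

add_note(roll, 60, 0, 24)    # middle C for the first beat
add_note(roll, 64, 24, 24)   # E4 for the second
```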

Then I have modified the Generator to process all the notes from a Track into one mixed sound, with delays used to offset the individual sounds, so all the sound for the track is now a single sound file that can be added to the VSE. The Generation System is switched off for normal play, so you can stop, start and rewind the animation.

Cheers, Clock. :cocktail:

Update, now I have a White Noise Generator, generating white noise to specific settings, the one second burst can then be looped, modulated and pitch-bent as in the node tree below:

Here is a sample sound output: white_noise_6.flac.zip (152.3 KB)

The Green node at the top outputs the waveform image, when connected.

I am using my Calc node to get the LFO frequency, currently set to C-2, or about 4.0879Hz.

The white noise is generated from a Numpy Array of mixed and random frequencies, phased with sine and cosine operators. There are 24000 rows in the 1 second Array, giving a large number of possible frequencies to make the noise from. I have tried up to 96000 samples and it still works very fast!
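The recipe described above could be sketched like this (random frequencies summed with sine/cosine phasing into a one-second buffer; the number of components here is my choice, not the node's):

```python
import numpy as np

rate = 24000                       # 24000 rows = 1 second, as in the node
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
freqs = rng.uniform(20, rate / 2, size=200)     # random audible frequencies
phases = rng.uniform(0, 2 * np.pi, size=200)    # random phase per component
noise = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
noise /= np.abs(noise).max()       # normalise to -1..1
```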

Cheers, Clock. :cocktail:


I rebuilt the Generate node so it now has a White Noise Oscillator mode:


And I now have a Flanger:


This disassembles the sound and then applies a sinusoidal delay shift to a copy of itself to get the right effect. You can set the Phase Length, Resolution and Offset in the node:
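A flanger in one function, for the curious (a sketch on raw samples; the parameter names loosely mirror the node's Phase Length and Offset, and the values are typical guesses):

```python
import numpy as np

def flange(x, rate=48000, sweep_hz=0.5, max_delay=0.003, offset=0.001):
    """Mix x with a copy of itself whose delay sweeps sinusoidally."""
    n = np.arange(len(x))
    delay = offset + max_delay * (1 + np.sin(2 * np.pi * sweep_hz * n / rate)) / 2
    idx = np.clip(n - (delay * rate).astype(int), 0, len(x) - 1)
    return 0.5 * (x + x[idx])

dry = np.sin(2 * np.pi * 220 * np.arange(48000) / 48000)
wet = flange(dry)
```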

And the Player node now adds saved sounds to the VSE:

In this shot, the system is not running (nodes are green, therefore idle) as the saved file has been loaded to VSE and this now plays the sound, rather than the generator nodes.

Cheers, Clock. :cocktail::cocktail:

I have also made a “Doppler” node, this pitches the sound up by a value at the start and pitches it down by the same value at the end according to a cosine variable:

Here I am pitching up three semitones at the start and down 3 at the end. Using a negative value for semitones in my Calculator node produces the opposite effect: pitched down at the start, then pitched up at the end.
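The cosine pitch curve works out like this (a sketch of the maths only): +3 semitones at the start, unison half-way, -3 at the end, and a negative semitone value flips the whole curve.

```python
import numpy as np

def doppler_factor(progress, semitones=3):
    """Pitch multiplier at `progress` (0..1 over the sound's length)."""
    return 2 ** (semitones * np.cos(np.pi * progress) / 12)
```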

Cheers, Clock. :beers:

I have now used my new-found knowledge to create a node which records any sound output from anywhere on the node tree and for any frame range, to a file and then adds this to the VSE:

In this example I have recorded the entire Drum track. The Switch node inputs either the drum objects or nothing; if nothing, the drum objects are not used to make a sound, but the track in the VSE will still play. I have tried this for the entire project’s sound and it seems to work OK - I will test this further though.

Now I can output the entire project to a sound file. Yippeeee! :grin:

Cheers, Clock. :beers::beers::beers::beers::beers: :toilet: :face_vomiting:


So here are three test files; from the bottom: drums, synth/bass, and combined. There are 44 synth notes and 99 drum notes; bpm is 36 and the time sig is 4:4, which equates to around 18.3 seconds. It records in real time and takes around 56ms to write the combined output file.

Once in the VSE it plays fine, although I did notice some distortion & clipping if I have the volumes too high; levels that don’t sound bad in Blender do in the output files.


Complete Node Tree:


Morning Clock!

Oh hello there Clock!

I thought I would tell you that these DAW nodes are on my GitHub if you want to play with them.

Thanks for that!

My children have threatened me with the “Old People’s Home” if I continue to show signs of senility; oh well, maybe I will revise my will…

I have made changes to the MIDI Bake node, as I have been unable to get the 2.79 method of storing FCurves as Collections in the blend file to work properly in 2.8. I have not had any help on this from the AN devs, so if anyone knows enough about Python and Blender keyframeable (is that a word?) collections, please let me know. The new version is on the Old-Git-Hub also, in the MIDI folder:

It no longer bakes the FCurves, just creates the control objects.

Cheers, Clock. :cocktail:


I should greatly appreciate any serious Python developer reviewing these DAW nodes to comment on any improvements to the code.

This code might be useful to anyone who might help me with this, this is the collection definition:

import bpy
from bpy.props import StringProperty, IntProperty, FloatProperty

class MidiNoteData(bpy.types.PropertyGroup):
    noteName  : StringProperty()
    noteIndex : IntProperty()
    # This value can be keyframed.
    value     : FloatProperty()

This bit populates it:

            # This function creates an abstraction for the somewhat complicated stuff
            # that is needed to insert the keyframes. It is needed because in Blender
            # custom node trees don't work well with fcurves yet.
            def createNote(name):
                dataPath = "nodes[\"{}\"].notes[{}].value".format(self.name, len(self.notes))
                item = self.notes.add()
                item.noteName = name

                def insertKeyframe(value, noteIndex, frame):
                    item.value = value
                    item.noteIndex = noteIndex
                    self.id_data.keyframe_insert(dataPath, frame = round(frame,2))

                return insertKeyframe

            # Process EventD to make F-curves.
            for rec in eventD.keys():
                ind = getIndex(str(rec))
                addKeyframe = createNote(str(rec))
                indV = True
                for i in eventD[rec]:
                    frame = i[0]
                    val = i[1]
                    if indV:
                        addKeyframe(value = 0.0, noteIndex = ind, frame = (frame-self.easing))
                        indV = False
                    addKeyframe(value = val, noteIndex = ind, frame = frame)

The problem is that it always returns 0, not the keyframed value…


I have modified the system so the piano roll scrolls through a reader:

And added a function to my MIDI Bake node that writes all the notes to the piano roll directly from the MIDI CSV file:

I need to make the function add a material to the notes, but that can wait for now.

I am now off on holiday for two weeks of rest and relaxation, drinking and eating to excess. :laughing:

Cheers, Clock. :cocktail::cocktail:

PS. Nobody knows how to sort my little coding problem then… :cry:


OK, I have given up with Animation Nodes for this: it is too complicated now, not geared up for Music, has no Sound Generation, and I can’t get it to install on macOS reliably, blah blah blah…

So I am writing my own Node System, details here:

Still a little way to go for a release, but I am getting there!

Cheers, Clock.
