OSL Livecoding with audio

I’ve been away from Blender for a few years, and I just discovered the whole world of OSL and OSL in Blender!
I have always wanted to build a livecoding environment similar to fluxus or Shadertoy (also Overtone and Shadertone).
I found a great Python DSP library, PYO, and I was wondering how hard it would be to livecode in the following way:

Open a 3D view, text editor, node editor, and Python console in Blender, and import PYO. Livecode some OSL goodness with constant updates like fluxus (using time in OSL, of course, to animate the shader while coding), then jump over to the Python console and livecode some audio via PYO, AND have the OSL shader react to one or two of the audio parameters (amplitude or frequency, for example).
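The bridging step I have in mind is pretty small. Here is a rough pure-Python sketch of mapping a block of audio samples to a shader parameter; it deliberately sidesteps both PYO and the Blender API, and the function names are my own invention:

```python
import math

def rms_amplitude(samples):
    """Root-mean-square amplitude of a block of samples in the -1..1 range."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def map_to_param(amp, lo=0.0, hi=1.0):
    """Clamp an amplitude to 0..1 and scale it into a shader parameter range."""
    amp = max(0.0, min(1.0, amp))
    return lo + amp * (hi - lo)

# e.g. one block of a 440 Hz test tone at 44.1 kHz
block = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(512)]
scale = map_to_param(rms_amplitude(block))  # roughly 0.707 for a full-scale sine
```

The idea is that PYO (or whatever does the analysis) produces the amplitude, and `scale` is what gets poked into a shader input each frame.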

Does anybody have any thoughts or foresee any issues? Has this been done in some way? I saw some audio visualizers, but I am really looking for something like Extempore, where Andrew Sorensen livecodes some OpenGL and audio together.


You won’t be able to use OSL in real time. But if you just want to render an animation, OSL can access XML structures, and you can analyse the audio and write your own XML with the data for each frame, using Python, Java, or any other language.
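For example, here is a minimal Python sketch of writing one frame’s analysis out as XML (the element and attribute names are arbitrary, my own choice); OSL’s dictionary lookup functions (`dict_find()`, `dict_value()`) can then query that XML from inside the shader:

```python
import xml.etree.ElementTree as ET

def write_frame_xml(path, frame, amplitude, frequency):
    """Write one frame's audio analysis as a tiny XML document, e.g.
    <audio><frame num="1" amp="0.7071" freq="440.0" /></audio>"""
    root = ET.Element("audio")
    ET.SubElement(root, "frame", num=str(frame),
                  amp="%.4f" % amplitude, freq="%.1f" % frequency)
    ET.ElementTree(root).write(path)

# one file per frame, e.g. write_frame_xml("frame_0001.xml", 1, 0.7071, 440.0)
```

Then the render just needs the frame number to know which file (or which node in one big file) to look up.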

Thanks. I was looking at the Thomas Dinges video on converting GLSL to OSL here: https://www.youtube.com/watch?v=4LQXjIDWtz0
At around 8:20 he slides the time variable and the shader animates. Is there a way to automate that, or write the OSL so that it updates automatically every second or frame? Can you pull data from the shader’s output? I may just have to write what I want from within Blender. Hopefully the API hasn’t changed too much since 2009 for the basics.
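In case it helps anyone, here is the kind of thing I am imagining for the "update every frame" part — only a sketch, assuming a material named "OSLMat" with a script node "Script" that exposes a float input "Time" (all of those names are made up for the example, and I have not verified this against the current API):

```python
def time_for_frame(frame, fps=24.0):
    """Seconds elapsed at a given frame -- the value fed to the shader."""
    return frame / fps

try:
    import bpy

    def update_shader_time(scene):
        # Push the current time into the OSL script node's "Time" input.
        node = bpy.data.materials["OSLMat"].node_tree.nodes["Script"]
        node.inputs["Time"].default_value = time_for_frame(
            scene.frame_current, scene.render.fps)

    # Re-run on every frame change so the shader animates on its own.
    bpy.app.handlers.frame_change_pre.append(update_shader_time)
except ImportError:
    pass  # not running inside Blender
```

The same handler could also read the audio analysis values and set other node inputs at the same time.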