I’ve been away from Blender for a few years, and I just discovered the whole world of OSL and its integration in Blender!
I have always wanted to duplicate a livecoding environment similar to fluxus or shadertoy (also Overtone and Shadertone).
I found a great Python DSP library, pyo, and I was wondering how hard it would be to livecode in the following way:
Open a 3D View, Text Editor, Node Editor, and Python Console in Blender and import pyo. Livecode some OSL goodness with constant updates like fluxus (using time in the OSL shader, of course, to animate it while coding), then jump to the Python Console and livecode some audio via pyo, AND have the OSL shader react to one or two of the audio parameters (amplitude or frequency, for example).
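To make the idea concrete, here's a minimal sketch of the audio-to-shader link I have in mind. It assumes pyo is importable from Blender's bundled Python, and it uses hypothetical names throughout: a material "LiveMat" with an OSL Script node named "Script" that exposes an "Amp" input. The pure mapping function runs anywhere; the glue function only works inside Blender.

```python
# Sketch: drive an OSL shader input from live audio amplitude.
# Assumptions (hypothetical names): material "LiveMat", Script node
# "Script", shader input "Amp"; pyo installed in Blender's Python.

def amp_to_strength(amp, floor=0.0, ceil=10.0):
    """Map a 0..1 amplitude reading onto a shader parameter range,
    clamping out-of-range readings."""
    return floor + max(0.0, min(1.0, amp)) * (ceil - floor)

def start_audio_link():
    # Only runs inside Blender with pyo available, so imports live here.
    import bpy
    from pyo import Server, Input, Follower

    s = Server().boot()
    s.start()
    mic = Input()                   # live audio input
    env = Follower(mic, freq=20)    # amplitude envelope follower

    node = bpy.data.materials["LiveMat"].node_tree.nodes["Script"]

    def update():
        # Push the current envelope value into the OSL shader input.
        node.inputs["Amp"].default_value = amp_to_strength(env.get())
        return 0.03                 # re-run this timer every ~30 ms

    bpy.app.timers.register(update)
```

The timer-based polling is just one approach; a modal operator or frame-change handler could do the same job. Frequency tracking would swap `Follower` for a pitch-detection object on the pyo side.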
Does anybody have any thoughts, or foresee any issues? Has this been done in some way? I've seen some audio visualizers, but I'm really looking for something like Extempore, where Andrew Sorensen livecodes OpenGL and audio together.