Livecoding in Blender - fluxus, animata, etc...

I have recently been turned on to livecoding (search for fluxus, SuperCollider, Animata, and others). I worked with Processing in the past to create some 2D animations driven by sound input, but after seeing some videos generated with fluxus and Animata, I started wondering how hard it would be to drive 3D armatures as puppets from live inputs. I don't know much about the hooks into PyDrivers, whether CUDA could be used for real-time rendering, or what sound input support is available from a Python script, so I thought I'd put it out there. My goal would be to do what Animata ( http://animata.kibu.hu/ ) does, except in 3D and with Blender. Any thoughts?
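To make the "puppet driven by an input" idea a bit more concrete, here is a minimal sketch using Blender's Python driver API (the newer bpy equivalent of PyDrivers). It assumes a hypothetical armature object named "Puppet" with a bone named "Arm.L", and an external script that updates a custom property each frame:

```python
import bpy

# Assumptions: an armature object named "Puppet" with a bone "Arm.L",
# and some external input pushed into a custom property each frame.
arm = bpy.data.objects["Puppet"]      # hypothetical armature name
arm["input_level"] = 0.0              # custom property an external script can update

# Add a scripted driver on the bone's Z rotation that reads the custom property.
fcurve = arm.driver_add('pose.bones["Arm.L"].rotation_euler', 2)
driver = fcurve.driver
driver.type = 'SCRIPTED'

var = driver.variables.new()
var.name = "level"
var.targets[0].id = arm
var.targets[0].data_path = '["input_level"]'

# Bone swings up to roughly 90 degrees as the input goes from 0 to 1.
driver.expression = "level * 1.5708"
```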

Rob

There is a Google Summer of Code Blender project to update Blender inputs; perhaps that is one piece of the puzzle worth looking at.

I thought there was real-time capturing of mouse input in Blender for animation.
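Something along those lines can be scripted with a modal operator in the newer bpy API; a rough sketch (hypothetical operator name, and it just maps screen-space mouse coordinates onto the active object and keyframes it) might look like this:

```python
import bpy

class CaptureMouseMotion(bpy.types.Operator):
    """Hypothetical sketch: move the active object with the mouse and keyframe it."""
    bl_idname = "object.capture_mouse_motion"
    bl_label = "Capture Mouse Motion"

    def modal(self, context, event):
        obj = context.active_object
        if event.type == 'MOUSEMOVE':
            # Map screen-space mouse position to object X/Y (arbitrary scale factor).
            obj.location.x = (event.mouse_x - 500) * 0.01
            obj.location.y = (event.mouse_y - 500) * 0.01
            obj.keyframe_insert(data_path="location", frame=context.scene.frame_current)
            context.scene.frame_current += 1
        elif event.type in {'RIGHTMOUSE', 'ESC'}:
            return {'FINISHED'}
        return {'RUNNING_MODAL'}

    def invoke(self, context, event):
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

bpy.utils.register_class(CaptureMouseMotion)
```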

Another option is external software driving some sort of proxy real-time animation system with visual feedback, one that also captures the movement to a file. Once you are happy with the movement in that system, import the file into Blender and assign the movements to objects.
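The import side of that workflow is simple to script. A minimal sketch, assuming the external tool writes a CSV of frame, x, y, z values and that the scene contains a hypothetical target object named "Puppet_Head":

```python
import csv
import bpy

# Assumptions: a CSV recorded by the external tool with rows of "frame, x, y, z"
# and a target object named "Puppet_Head" in the scene.
obj = bpy.data.objects["Puppet_Head"]          # hypothetical object name

with open("/tmp/motion_capture.csv", newline="") as f:   # hypothetical path
    for row in csv.reader(f):
        frame = int(row[0])
        x, y, z = float(row[1]), float(row[2]), float(row[3])
        obj.location = (x, y, z)
        obj.keyframe_insert(data_path="location", frame=frame)
```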

Regardless, it is going to need some knowledge of Python and the Blender animation system.

I would think it would need to use the BGE for real-time rendering on the fly. There are different types of performances, but from what I gather, the coded part mainly deals with inputs and/or algorithms that tie video to sound, camera input, or whatever else you can think of. For instance, Pure Data uses GEM, an OpenGL library hook, so that Pure Data can algorithmically generate geometry to go along with the music.
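In the BGE that kind of per-frame input handling would live in a Python controller. A small sketch, assuming the bge module from the 2.5x/2.6x game engine (in 2.49 it was GameLogic) and using a time-based oscillation as a stand-in for a real live signal:

```python
# Attach to a Python controller wired to an Always sensor (true pulse) in the BGE.
from bge import logic
import math

cont = logic.getCurrentController()
own = cont.owner

# Hypothetical input: a time-based oscillation standing in for a live signal.
t = logic.getRealTime()
level = 0.5 * (1.0 + math.sin(t * 2.0))

# Drive the object's scale from the input each logic tick.
own.localScale = (1.0 + level, 1.0 + level, 1.0 + level)
```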

My plan would be to get real-time rendering where data from the sound input (volume, frequencies, etc.) could be fed into IPO curves, or into vertex or object locations and scaling.
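As an offline starting point for the "volume drives scaling" part, here is a rough sketch that reads a mono 16-bit WAV file with Python's standard wave module and keyframes an object's scale from the per-frame amplitude (the file path and object name are hypothetical):

```python
import bpy
import wave
import struct

# Assumptions: a mono 16-bit WAV file at a hypothetical path, and an object
# named "Speaker_Mesh" whose scale should follow the per-frame volume.
obj = bpy.data.objects["Speaker_Mesh"]
fps = bpy.context.scene.render.fps

with wave.open("/tmp/input_sound.wav", "rb") as wav:
    rate = wav.getframerate()
    samples_per_frame = rate // fps
    n_video_frames = wav.getnframes() // samples_per_frame

    for frame in range(1, n_video_frames + 1):
        raw = wav.readframes(samples_per_frame)
        values = struct.unpack("<%dh" % (len(raw) // 2), raw)
        # Crude "volume": mean absolute amplitude, normalised to 0..1.
        level = sum(abs(v) for v in values) / (len(values) * 32768.0)

        obj.scale = (1.0 + level, 1.0 + level, 1.0 + level)
        obj.keyframe_insert(data_path="scale", frame=frame)
```

Doing the same thing live rather than baked to curves would mean feeding the analysis values in continuously (for example through a custom property and drivers, as in the earlier sketch).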

Rob