blendertronics... just a thought

long rant time…

After trying (without any luck, even with Google's help) to find a way to use device inputs in Blender, I'm writing here to find out whether there is any interest in this type of thing, how easy it would be to do in Blender, and whether it's worth me learning Python and whatnot in order to make this addon.

The main building blocks of this addon would be called "input units" (a name I made up just because I needed one).

So… what is an input unit? Effectively, an input unit is just like a keyframe driver, only it takes its input from a connected device such as a control pad or joystick.

In the user preferences (perhaps a new tab) you would be able to assign a controller device, as detected by the operating system, and set up options for each device input, such as stick axes, button presses, and throttle sliders. Each of these would have its own settings such as range, sensitivity curve, falloff and deadzone, plus a "name" field so the user can name the input manually; otherwise Blender would just take the input names directly from the device. These names become the names of the "input units", the system used to assign the controls.
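
Something like the following rough sketch is how those per-input settings could be stored in add-on preferences so they save with your defaults. All the class and property names here (InputUnitSettings, axis_range, deadzone, etc.) are made up for illustration; only the standard bpy property/preferences types are real:

import bpy

class InputUnitSettings(bpy.types.PropertyGroup):
    # one entry per device input (stick axis, button, slider)
    name = bpy.props.StringProperty(name="Input Unit Name", default="Stick X")
    axis_range = bpy.props.FloatProperty(name="Range", default=1.0)
    deadzone = bpy.props.FloatProperty(name="Deadzone", default=0.05, min=0.0, max=1.0)
    sensitivity = bpy.props.FloatProperty(name="Sensitivity", default=1.0)

class BlendertronicsPreferences(bpy.types.AddonPreferences):
    bl_idname = __name__  # would be the add-on's module name
    input_units = bpy.props.CollectionProperty(type=InputUnitSettings)

    def draw(self, context):
        for unit in self.input_units:
            row = self.layout.row()
            row.prop(unit, "name")
            row.prop(unit, "deadzone")
            row.prop(unit, "sensitivity")

bpy.utils.register_class(InputUnitSettings)
bpy.utils.register_class(BlendertronicsPreferences)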

now we have a simple way of setting up an input device, we can do many things with them…

Most importantly, we could press "I" over any keyable value in Blender and assign that value to an input unit, so alongside the options for "Insert Keyframe" and "Add Driver" there would be an "Input Unit" option… now, how do we use this to animate?
First we turn on auto keyframing in the timeline or user preferences, and play the animation in the viewport from the timeline or with Alt+A.
Now, as the animation plays, IF an input unit is active, a keyframe is written for its value on every frame.
When the last frame is reached the input unit is deactivated, so the second (looped) playthrough shows what you keyed with the input units on the first pass; while an input unit is active you are MAKING animation, afterwards you are watching it back… I hope this seems simple enough.

So to animate, you simply activate the input unit, press play, then use the control device to write the keyframes directly. When you reach the last frame the input unit is deactivated and playback shows the previously written keys (just as the dope sheet editor does).
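
Under the hood that boils down to something like this rough sketch: on every frame change during playback, read the controller and write a key. The object and bone names ("Armature", "jaw") and read_controller_axis() are placeholders I made up, and it assumes the bone uses Euler rotation:

import bpy

def read_controller_axis():
    # placeholder: would return the current stick position from the device
    return 0.0

def key_input_unit(scene):
    obj = scene.objects.get("Armature")   # hypothetical rig
    if obj is None:
        return
    bone = obj.pose.bones.get("jaw")      # hypothetical face bone
    if bone is None:
        return
    bone.rotation_euler[0] = read_controller_axis()
    bone.keyframe_insert(data_path="rotation_euler", frame=scene.frame_current)

bpy.app.handlers.frame_change_post.append(key_input_unit)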

Another cool thing we could do with input units, once implemented, is camera manipulation in the viewport. It could make for very interesting sculpting or mesh editing if we could rotate the view with one hand on the control pad and use the mouse with the other; the ability to map keypresses to controller buttons or stick axes might even let you use the Shift and Ctrl keys during sculpting.
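
Just to illustrate what "driving the viewport" could mean in script terms: the 3D view's rotation is exposed to Python, so a stick value could nudge it each update. A minimal sketch, where the 0.05 step stands in for a real stick reading:

import bpy
from mathutils import Quaternion

# rotate the first 3D viewport a little around the Z axis
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        rv3d = area.spaces.active.region_3d
        rv3d.view_rotation = Quaternion((0.0, 0.0, 1.0), 0.05) * rv3d.view_rotation
        break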

The point of this, I hope, would be to make things like facial animation more intuitive and faster. For example, with a face rig set up using bones, you could use the input units to rotate and transform those bones, and with a little practice on YOUR particular setup, facial animation would feel much like controlling an animatronic puppet; things like breathing keyframes could take just a few minutes to set up, and you could effectively key in realtime.

Some questions, hopefully answered.

Why use a new tab in the user preferences?

Because it would mean your controller setup can be saved easily as part of the default settings on your system, so once you set it up it can be used without any effort. You could then simply tweak that setup to match your needs, like adjusting input range and falloff.

What's wrong with keying by hand?

Nothing at all, it's the best way to do it. This is not intended to replace keyframing by hand, but for some things this system could be very useful and very fast to set up.
Think: assign one control stick to head movement and the other to eye movement, press play… make them do stuff… release the controls to stop recording keys… press stop, fine-tune, done.

Would it be complicated to get this into Blender?

I honestly don't know enough about Blender to answer that, but I can't see how it would affect TOO much besides viewport manipulation and the ability to assign device inputs to things like viewport navigation. The main part, keying animation the way a driver does, is pretty much there already, and Blender can already use mouse movement to add keys in the viewport during playback.

What about using multiple controller devices?

I don't see why not, if the operating system lets an app use more than one.

Isn't that just drivers with a different type of input?

No. One subtle difference is that these would only be used to bake keyframes: once auto-keying is turned off, or the input unit in question is deactivated, the keys are played back as normal from the dope sheet / action editor. That effectively disables the input unit altogether and lets the "underlying" animation play back correctly. Think of it as baking a driver's output into a set of keyframes, turning the driver on and off as we need to.
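
The baking side at least seems doable from Python already: step through the frame range and key whatever value the input (or driver) gives you. A rough sketch, assuming the driven object is the active one and we just key its location:

import bpy

scene = bpy.context.scene
obj = bpy.context.object  # assumes the object being driven is active

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)                    # evaluates drivers/inputs at this frame
    obj.keyframe_insert(data_path="location", frame=frame)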

What could this be used for?

Realtime keying of almost any property in Blender: materials, textures, physics attributes, particle settings, transforms (loc/rot/scale), shape keys, NLA blending… pretty much anything that can be keyed could be recorded in realtime.
Think animatronic-style rig manipulation, car steering and movement, aircraft-style flight animation… in fact anything where you need a high level of control but want to animate faster than key-by-key animation allows.

When will this be released?
When someone who knows how to make it happen reads this and decides it would be useful to them. Personally I have no experience making addons in Blender, so it would take me a long time to learn enough Python to pull this off, if it's even possible.

This is all just daydreaming, imho; I have no real hope of such an addon being developed, but maybe one of our more seasoned developers/coders could whip something functional together in a weekend.

Another thought would be to allow the input to directly refresh the viewport, making it a basic tool for posing or manipulating objects: use the sticks to pose a face-bone setup, then press "A" on the controller to make a regular keyframe.
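
For that live-posing idea, a modal operator running on a timer is probably the mechanism Blender already offers: poll the device on the timer, update the pose, and key when a button is pressed. A sketch, where read_controller() is a made-up placeholder and the keyboard "A" key stands in for the controller button:

import bpy

def read_controller():
    return 0.0  # placeholder for real joystick polling

class VIEW3D_OT_input_unit_live(bpy.types.Operator):
    """Hypothetical live-posing operator driven by a controller"""
    bl_idname = "view3d.input_unit_live"
    bl_label = "Input Unit Live Pose"

    def modal(self, context, event):
        if event.type == 'TIMER':
            # pose the active object from the controller value
            context.object.rotation_euler[2] = read_controller()
        elif event.type == 'A' and event.value == 'PRESS':
            # write a regular keyframe for the current pose
            context.object.keyframe_insert(data_path="rotation_euler")
        elif event.type == 'ESC':
            context.window_manager.event_timer_remove(self._timer)
            return {'FINISHED'}
        return {'PASS_THROUGH'}

    def execute(self, context):
        self._timer = context.window_manager.event_timer_add(0.02, context.window)
        context.window_manager.modal_handler_add(self)
        return {'RUNNING_MODAL'}

bpy.utils.register_class(VIEW3D_OT_input_unit_live)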

These are just ideas… if a system like this were in place, it wouldn't take long for people to learn how best to make use of it.

I just skim-read, so I don't know for sure, but I think this is what you want.
Python is probably the best way to implement this.

"device input, such as stick axes, button presses, and throttle sliders"
I don't know if this works, but it probably does:
http://www.blender.org/documentation/blender_python_api_2_66_release/bge.types.SCA_JoystickSensor.html#bge.types.SCA_JoystickSensor
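
From what I can tell that sensor is read from inside the game engine, with a Joystick sensor wired to a Python controller, something like this (the sensor name "Joystick" is just whatever you call it in the logic bricks):

# runs inside the BGE, as a script controller attached alongside a Joystick sensor
from bge import logic

cont = logic.getCurrentController()
joy = cont.sensors["Joystick"]    # the Joystick sensor on the same object
print(joy.axisValues)             # raw positions of all stick axes
print(joy.getButtonActiveList())  # indices of buttons currently pressed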

" and press play, then use the control device to manipulate the keyframes directly."
“another cool thing we could do with input units once implemented is things like camera manipulation in the viewport, it might make for very interesting sculpting or mesh editing if we can rotate the camera with one hand on the control pad and use the mouse with the other, maybe the ability to map keypresses to controller buttons or stick axis would allow for the use of the shift and control keys during sculpting.”
import bpy
bpy.data.objects['Armature'].pose.bones['test_bone'].rotation_euler[2] = 50.0  # assumes the bone uses Euler rotation mode; value is in radians

"Another thought would be to allow the input to directly refresh the viewport"

import bpy

def pre_render(scene):
    # read joystick values here
    # modify them (range, deadzone, etc.)
    # set bone, camera, or light properties
    pass

bpy.app.handlers.scene_update_post.append(pre_render)

^ If that works, it's somewhere to start.
Maybe there's more joystick sensor info in the Python forum.

Sounds kind of like this: http://wiki.blender.org/index.php/User:Aligorith/Record_Tool
I’m not sure what happened to this though. I found it by accident when searching for something I wanted to make an addon for. :stuck_out_tongue:

This with many small Bluetooth position sensors = puppets for Blender :slight_smile:

Imagine if you could scatter 1000s of sensors, each smaller than a pixel on your screen, in clay,
and somehow get the data back in realtime; you could "model" the old-school way.

just a thought.

Dirty-Stick: I'm clueless about this kind of thing, and I may be wrong, but I think that link points to BGE uses for inputs… would it work for animation? I checked the Python forum and it seems a few people have been looking for this feature or something similar, and their posts often go unanswered unless it's a BGE technique… no one seems to talk about controller inputs in Blender without the game engine involved… but thanks for the info, it gives me a place to start from :slight_smile: … time to learn how to make addons…

Cyaoeu: that's exactly the kind of thing I think would be a really useful feature for fast animation, just like using mouse recording but with a stackload of mice… and hands… lol
Whatever happened to it? Did it just kinda fade into obscurity, or were there technical reasons why it wasn't pursued further?

BluePrintPhantom: Maybe a point cloud would be easier on the wallet than thousands of position sensors :smiley: I think I saw a video of someone who wrote a script and, using an Xbox Kinect, had point clouds running in Blender in realtime, then used MeshLab to turn them into usable meshes… this was a while back though, and I'm not sure how usable it was.

Ahhh… alas, it seems it's something beyond my scope and abilities then… BGE is one thing, but, correct me if I'm wrong, I think this is something Blender would need shoehorned in at a lower level… maybe someone with more skills could figure it out. I could see it being a really useful feature; Maya, 3ds Max and a lot of the others have similar addons/plugins :frowning: