I’m using the game engine to animate a character for a theater performance. The game engine is controlled remotely through OSC (network) via Python. I control different kinds of things in the GE via OSC:
- start character and object animations (each action the character performs during the show, like talking, picking up an object, or anything he plays, is triggered by a human operator)
- control camera and character position (used for travelling through the spaces where the character moves around).
- control the character’s head. At any time we can correct the head orientation on top of the animation via a custom Python script (still a bit buggy, but I haven’t lost hope of finishing it! ^^)
- facial expressions (shape keys) can also be corrected in real time via Python.
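For the realtime corrections, one thing that helps is filtering the incoming control values before writing them to the shape key, since raw network input tends to jitter. Here is a minimal sketch of that idea; the class name and the `alpha` parameter are my own, and the BGE-specific write (a property read by a Shape Action actuator) is only shown as a comment, since the filter itself is plain Python:

```python
class SmoothedValue:
    """Exponential smoothing + clamping for a live control value
    (e.g. a shape-key weight driven over OSC).  Filtering the raw
    input before it reaches the shape key keeps the face from
    twitching.  The [0, 1] clamp matches the usual shape-key
    value range."""

    def __init__(self, alpha=0.3, lo=0.0, hi=1.0):
        self.alpha = alpha        # 0 < alpha <= 1; higher = snappier response
        self.lo, self.hi = lo, hi
        self.value = lo

    def update(self, raw):
        raw = max(self.lo, min(self.hi, raw))           # clamp out-of-range input
        self.value += self.alpha * (raw - self.value)   # simple low-pass filter
        return self.value

# Inside the BGE, the result would be written each logic tick, e.g.:
#   own["smile"] = smile_filter.update(latest_osc_value)
# where "smile" is the property your Shape Action actuator reads.
```

A higher `alpha` reacts faster but lets more jitter through; something around 0.2–0.4 is a reasonable starting point at 60 ticks per second.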
I have a few questions.
- I noticed that if I send too much information per second via OSC (more than the fps), Blender seems to build up a kind of queue, even though I drop messages while one is being processed (controlled by a flag system). Enabling “All Frames” lets me push more information through, and it’s the only workaround I’ve found so far, but I don’t like that solution much. Any other idea?
- currently, time is remotely controlled, but I’m not sure it’s the best way to go. Right now all my timing logic lives in Blender. Two solutions: write a Python time controller and keep my Action actuators in Property mode, or switch the Action actuators to Play or Loop mode. I like the second solution, but with “All Frames” enabled the animation plays far too fast. Any idea?
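For the first solution, the time controller can derive the frame from real elapsed time instead of from tick count, which makes playback speed independent of “All Frames”. A sketch of that idea; the class and its parameters are my own invention, and the clock is injectable only so it can be tested outside the engine:

```python
import time

class TimeController:
    """Drive the frame property of an Action actuator (Property
    mode) from real elapsed time, so playback speed stays correct
    whatever the logic tick rate is."""

    def __init__(self, fps=24.0, start=1, end=250, clock=time.monotonic):
        self.fps = fps          # action playback rate in frames per second
        self.start = start      # first frame of the action
        self.end = end          # last frame of the action
        self.clock = clock
        self.t0 = clock()       # wall-clock time at which playback started

    def frame(self):
        """Current frame, clamped to the action's range."""
        f = self.start + (self.clock() - self.t0) * self.fps
        return min(f, self.end)

# Inside Blender this would run every logic tick, e.g.:
#   own = bge.logic.getCurrentController().owner
#   own["frame"] = ctl.frame()   # the property your Action actuator reads
```

Since the frame is recomputed from the clock each tick, running more ticks per second just samples the same curve more finely instead of speeding the animation up.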
- if I keep my Action actuators in Property mode, I get a bug in the blend between two actions: it seems the blend is computed between the first frames of the current action and the first frames of the previous action, not the last ones. Switching to Play or Loop mode fixes this, but then I run into the problem from point 2).
I think that’s all for the moment.