Digital Puppetry / Machinima using the Game Engine

For the past several months I’ve been working on a digital puppetry/machinima application called Panda Puppet. I had been using the Panda3D engine, but since then I’ve been using Blender more and more, and so far I’m very impressed with the latest version of the game engine, so I’m thinking of using it as the basis for this program instead.

The part I’m having trouble wrapping my head around is whether it would be possible to write a Python script that records a game session and then exports or saves that data into a .blend file that could later be tweaked and rendered as a regular piece of animation.

Has anyone ever attempted anything like this?

OK, so I did some more digging and I think I’ve answered my own question…I came across Oliver Blinn’s Game2IPO script, which seems to be what I need, but I would still love to hear from anyone who’s played around with generating animation in real time using the game engine and/or used it for Machinima purposes.

Blender can actually save its physics calculations internally as IPOs, but I don’t think that includes player-controlled motion or armatures.

As far as machinima, you could use a video-capture program to film the output of the game engine.

OK. If you are using the latest 2.41, you can select “Record Game Physics to IPO” from the “Game” menu. This will record the actions of dynamic actors to an IPO, so in your game buttons window select “Actor” and “Dynamic” for your object.

What you want to do then is parent your armature to this actor. When you set up your logic bricks for control, they are applied to this actor as well, not the armature. Then simply press “P” in the 3D window and use the keyboard and/or mouse/joystick to control your character; the locations and rotations of the actor will be recorded as an IPO when you quit the game engine.
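For the control side, a Python controller can stand in for plain logic bricks. Here is a minimal sketch using the 2.4x game engine API; it assumes a keyboard sensor named “forward” and a motion actuator named “move_fwd” are wired to the controller on the dynamic actor (those names are just examples):

    # Python controller on the dynamic actor (2.4x API).
    import GameLogic

    cont = GameLogic.getCurrentController()
    forward = cont.getSensor("forward")   # keyboard sensor
    move = cont.getActuator("move_fwd")   # motion actuator

    # Fire the motion actuator only while the key is held down;
    # the resulting location/rotation of the actor is what gets
    # baked to the IPO when you quit the game engine.
    GameLogic.addActiveActuator(move, forward.isPositive())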

Now, as PlantPerson said, your armature motions such as a walk cycle will not show in the animation. What you need to do is use the NLA to set the actions up to coincide with when and where you would like things to happen. Then simply render and enjoy.

I have only used this technique to have a character walk around a room and to have an ATV drive around a dirt course. Both are pretty basic animations requiring little more than locations and rotations with a walk cycle.

Don’t forget to give your materials a texture and then apply the same image used to UV map your characters to this texture.

regards,

honeycomb

Hmm. That’s really interesting. I’ve wondered why Blender isn’t being used for machinima. It seems like the perfect tool. Maybe this will get it started.

Thanks guys, that’s very helpful. I’m going to download some game assets and do some experiments over the next few days. I think I’ll first work out a good method for doing this with a minimum of Python scripting, and then try to write a script that will make the whole process a little easier, since a lot of people find Blender intimidating.

Is it true that the game engine does not currently support facial animation or network play?

I don’t know about network play. I think it’s being worked on.
I think you have to do facial animation with bones, which isn’t as bad as you might think. I have a character that blinks, frowns, talks, and looks around, all with bones. I have about six bones for the mouth, one for the pupils, one for the eyelids, and one for the head. Setting up the bone vertex groups is a bit of a pain. Something cool you can do if you don’t save bone positions is to mix animations, like blink and talk.
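As a rough illustration of mixing actions that way, here is a sketch of a Python controller on the armature using the 2.4x API. It assumes two Action actuators named “blink” and “talk” and two sensors named “key_blink” and “key_talk” wired to it; because the two actions drive different bones, both can play at the same time:

    # Python controller on the armature (2.4x API); names are examples.
    import GameLogic

    cont = GameLogic.getCurrentController()

    # Each Action actuator is switched on while its sensor is positive,
    # so a blink can layer on top of an ongoing talk action.
    GameLogic.addActiveActuator(cont.getActuator("blink"),
                                cont.getSensor("key_blink").isPositive())
    GameLogic.addActiveActuator(cont.getActuator("talk"),
                                cont.getSensor("key_talk").isPositive())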

Oh that’s not so bad, I think that will work quite well for what I have in mind.

I am currently thinking of using shape keys along with a faked softbody demo I posted to do facial animation in the game engine. Blending heads and faces is something I need to spend more time on.

It simply switches between objects using the Add Object actuator and a simple Python script. I have not tested this on more than one instance, though. I wonder what the performance would be comparing multiple armatures to multiple switched objects.
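For anyone curious, the switching could look something like this as a Python controller (2.4x API). The actuator and property names are only illustrative: two Edit Object actuators set to Add Object (“add_face_smile”, “add_face_frown”) that pull replacement meshes from a hidden layer, an Edit Object actuator set to End Object (“end_self”), and an integer property “face” on the owner choosing the expression:

    # Python controller for swapping expression meshes (2.4x API).
    import GameLogic

    cont = GameLogic.getCurrentController()
    own = cont.getOwner()

    faces = {0: "add_face_smile", 1: "add_face_frown"}
    add = cont.getActuator(faces.get(own.face, "add_face_smile"))

    # Spawn the replacement mesh, then remove the current object.
    GameLogic.addActiveActuator(add, True)
    GameLogic.addActiveActuator(cont.getActuator("end_self"), True)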

My thinking is that you need the shape keys for your animations and cut scenes anyway. Pre-made animation might seem contrary to machinima, but really it only adds value, perhaps as a bridge from this art form to hardcore gamers, or vice versa.

Here is a link to an animation using the method described in my previous post. I would like to apply the object-switching technique to Secretary’s skirt.

honeycomb

You’re probably on to something there…what I think is really needed for high quality Machinima/digital puppetry using the game engine is having a library of pre-determined expressions and actions - just like any game character has - that the player/puppeteer can trigger via a keyboard or some other type of input device. Sort of the way a musician plays an instrument.

…what I think is really needed…is having a library of pre-determined expressions and actions - just like any game character has

Yes, exactly. I have a Short Clip of actions that I plan to trigger with the game engine. These are also some of my first real animations using Blender.

It would still be necessary to redo the armatures in the NLA and animate, unless you specifically want a physical simulation, which leads to some sort of scripted AI. There is a nice demo in the Blender GameKit of a fight sequence with a short logic brick AI loop.

I believe Blender plus machinima is a strong tool for developing content, both for Blender itself and for games.

What codec does that video use? I haven’t been able to play it back yet unfortunately.

I used FFmpeg’s libavcodec without specifying an option, sorry. :expressionless: I updated the link; it should be MPEG-2 now. Is it safe to say my other clip failed as well?

Sorry, still can’t get either of those to play unfortunately.

Try the VLC player.

Thanks for the tip, the VLC Player did the trick.

Nice animation, Honeycomb, that’s exactly the kind of actions I was thinking would need to be triggered. One thing I have been thinking a lot about is how a user would get the greatest degree of control over a character. I think the best way to do this would be to use two separate input devices.

For example, you could use a multi-button joystick with one hand to control the movement of a character throughout a scene and use the joystick buttons to trigger different actions, which would be assigned to different buttons in advance. Your other hand could control a device like the P5 glove, which would be used for lip sync and facial control, just as if you were using a sock puppet.

The Jim Henson Company uses a similar type of interface as part of their animatronics/real-time 3d control system.

But the key thing with user control would be having controls that are completely customizable, so they could be as simple or as complex as needed. A gamer could set everything up to work off a game controller, a puppeteer could set it up to use a dataglove, and so on.

Does the GE currently support peripheral devices like game pads? I know that for anything unique like the P5 there would probably have to be some kind of custom script written.

I can recommend a couple of demo files you might be interested in.

Go to the Blender download page and grab the regression files on that page.

For scripting AI check out “RvoFighter-24.blend” from the 2.25 game demos.

For facial animation look at “controller2.blend” from 2.40 regression archive.

There is even a file around for custom keyboard mapping somewhere.

They are a bit complex and I do not fully understand them yet, but that is the direction I am headed.

There should be a Python module that can provide access to serial or other ports, so I do not see why one could not attach whatever type of controller and run from that. Something like an animation synthesizer. I have even heard talk of a MIDI interface to Blender. After all, a controller is a controller no matter what style you prefer.
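As a very rough sketch of that idea, assuming the pyserial module is installed (and with the port name, baud rate, and “mouth” property all made up for illustration), a script controller run off an Always sensor could read the device each logic tick and feed a game property:

    # Feed serial data (e.g. a glove or MIDI-to-serial box) into a
    # game property each logic tick (2.4x API + pyserial).
    import GameLogic
    import serial

    cont = GameLogic.getCurrentController()
    own = cont.getOwner()

    # Open the port once and keep the handle on the GameLogic module.
    if not hasattr(GameLogic, "glove_port"):
        GameLogic.glove_port = serial.Serial("/dev/ttyUSB0", 9600, timeout=0)

    line = GameLogic.glove_port.readline()
    if line:
        try:
            # Drive a property that an Action actuator set to
            # "Property" playback can use for mouth open/close.
            own.mouth = int(line)
        except ValueError:
            pass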

regards

honeycomb

Does the GE currently support peripheral devices like game pads?

I think it just has joystick support at the moment, but it looks like it allows an almost unlimited number of buttons. I can see how a data glove would be more useful for some things, but I don’t quite understand a gamepad. Good ideas though. I’m using random mouth openings to match text myself; I got the timing to match sentence length. :smiley: I don’t know if this relates or not, but calling actions by script is cool when you’re doing long dialogues. Like “Are you sure?”, frown, blink, hands on hips, “I don’t believe you.” Action blending is something else I want to experiment with. What happens if I blend a talk with a frown or a smile?
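One way to call actions by script for a long dialogue like that, sketched with the 2.4x API: assume a Timer property named “t” on the armature, an Always sensor in pulse mode driving the controller, and Action actuators named “frown”, “blink”, and “hands_on_hips” (the cue times below are made up):

    # Python controller that fires dialogue cues off a timer (2.4x API).
    import GameLogic

    cont = GameLogic.getCurrentController()
    own = cont.getOwner()

    cues = [(2.0, "frown"), (2.5, "blink"), (4.0, "hands_on_hips")]

    for start, name in cues:
        act = cont.getActuator(name)
        # Keep each actuator active for half a second after its cue time.
        GameLogic.addActiveActuator(act, start <= own.t < start + 0.5)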

Thanks for the demo suggestions Honeycomb, I’m going to check those out as soon as I can.

Good to know there’s multi-button joystick support. I think a script would have to be written to accommodate a dataglove, unless the GE supported motion capture data.

I think dataglove support has been added for a psychology prof’s research, but via Python.

GHOST might get dataglove support as part of the refactor.

LetterRip