Status: Siberia Complex, BUV

(Cognis) #1

There has been a lot of work done on Siberia Complex and AMPS, but since August, there has been a feeling of running in circles. I thought I’d upload the last test animation (made in late August) to see if the concepts we have been working on will spark some interest, hopefully feeding back some inspiration.

There are two problems that have held the project back, and they make the original hope of getting some sort of finished work out in 2007 impossible. For one, our financiers suddenly began ‘forgetting’ to pay their end of the deal, and in July, they apparently folded :eek: That means current work is done out of my pocket (mainly) and out of the pockets of a few participants and supporters. I still have control over all the rights for the research produced, thank god!

The second problem is that some walls in Blender have been run into. This is not so much because Blender is lacking anything; the same walls would be hit in any 3D application. It is more a testimony to the unusual nature of Siberia Complex and AMPS. For those who do not know, AMPS (Accelerated Movie Production System) is a project to produce tools that allow very fast movie work in Blender, and Siberia Complex is the guinea pig for the work.

The late-August test animation can be found here. It is a BUV (Butt-Ugly Version), full of flaws in both modelling and animation. What is unique about it is that nobody actually animated it; the models were created, and AMPS scripted the entire animation side, based on a simple manuscript! Thus, what you see is the result of a manuscript with lines like “Max ascend stairs”, “Max look at Xesi” and “Xesi stand up, walk to door”, etc. (Max and Xesi are the main characters). The program does the rest. Also, there is no sound, since the voice generator has no phonetic dictionary yet, so the dialogue would just be gobbledygook :slight_smile:
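To give a feel for what driving animation from a manuscript involves, here is a minimal Python sketch of turning one manuscript line into action commands. All names and the parsing logic are purely illustrative; the actual AMPS code is more involved and is not shown in this thread.

```python
# Hypothetical sketch of parsing a manuscript line into action commands;
# names and logic are illustrative, not the real AMPS internals.

from typing import NamedTuple, Optional

class Command(NamedTuple):
    actor: str             # e.g. "Max"
    action: str            # e.g. "look", "walk"
    target: Optional[str]  # e.g. "Xesi", "door"

def parse_line(line: str) -> list[Command]:
    """Split 'Xesi stand up, walk to door' into one Command per clause."""
    commands = []
    actor, rest = line.split(" ", 1)
    for clause in rest.split(","):
        words = clause.split()
        # Treat the last word as a target when a preposition precedes it.
        if len(words) > 1 and words[-2] in ("to", "at"):
            commands.append(Command(actor, " ".join(words[:-2]), words[-1]))
        else:
            commands.append(Command(actor, " ".join(words), None))
    return commands

print(parse_line("Max look at Xesi"))
# [Command(actor='Max', action='look', target='Xesi')]
print(parse_line("Xesi stand up, walk to door"))
# [Command(actor='Xesi', action='stand up', target=None),
#  Command(actor='Xesi', action='walk', target='door')]
```

Each resulting command is then handed to the animation side, which is where the real difficulty lies, as described below.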

The problem (besides our frustrating ex-financiers) is that the limits of what passive scripting can do have apparently been reached. All actions performed by the characters are based on fixed input standards. For example, when the characters sit down on the bench to talk, they are conforming to pre-constructed rigging for sitting on that particular bench; make the bench lower, and they will be sitting on air! Characters can grab things, look at things, point at things and so on, aiming at wherever things are (an early test had a box hanging in a room, and the character would go to it and pick it up, even if you moved it a little). But making them more aware of their surroundings, so that they might avoid obstacles and figure out how to lean back lazily on a bench without pre-generated poses, has turned out impossible, or at least very difficult / impractical!
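To make that difference concrete, here is a rough Python sketch contrasting ‘aiming’ behaviour, which reads a target’s current position, with a baked pose, which is just stored numbers. Everything here is illustrative, not actual AMPS code:

```python
# Sketch of 'aiming' (dynamic) versus a baked pose (static).
# Names and numbers are illustrative only.

import math

def look_at_rotation(head_pos, target_pos):
    """Yaw/pitch (radians) for a head at head_pos to face target_pos."""
    dx = target_pos[0] - head_pos[0]
    dy = target_pos[1] - head_pos[1]
    dz = target_pos[2] - head_pos[2]
    yaw = math.atan2(dy, dx)
    pitch = math.atan2(dz, math.hypot(dx, dy))
    return yaw, pitch

# Move the box and the aim updates automatically, since it reads the
# target's current position:
print(look_at_rotation((0.0, 0.0, 1.7), (2.0, 1.0, 1.0)))

# ...whereas a baked 'sit on bench' pose is just stored numbers:
SIT_POSE = {"hip_height": 0.45, "knee_bend": 1.4}  # tuned to one bench
# Lower the bench and the pose still puts the hips at 0.45 m -- in mid-air.
```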

I have been testing the game engine for use in this. It seems to allow the solution of many problems with character-environment interaction, but it has its own peculiarities. I am currently working on realistic walking, with ground sensitivity (see the related thread). The Record Game To IPO function would make it possible to do really impressive things this way. If it can be made to work, that is…
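The ground-sensitivity idea, in rough Python form: before each step, find the ground height under the planned foot position and clamp the foot to it. The terrain function here is a placeholder; in the game engine this would be done with a ray cast against the actual scene geometry.

```python
# Rough sketch of ground-sensitive stepping. ground_height() stands in
# for a real ray cast downwards from the planned foot position.

def ground_height(x, y):
    # Placeholder terrain: a gentle slope. A real version would ray-cast
    # against the scene geometry instead.
    return 0.1 * x

def place_foot(step_x, step_y, ankle_offset=0.12):
    """Return the z at which the foot should land for this step."""
    return ground_height(step_x, step_y) + ankle_offset

# Each step lands on the terrain instead of on a fixed floor plane:
for step in range(4):
    x = step * 0.7  # stride length
    print(f"step {step}: foot z = {place_foot(x, 0.0):.3f}")
```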

I am hoping that the idea of having functions incorporated into Blender like “Character grab Item” and “Character stand up, move away from table and walk to door” will spark some ideas out there. Whatever your thoughts on the matter, please let me know; I’d rather drown in inspiration than have mine dry up :wink:
