BGE + coding workflow

First of all I’d like to apologise if this is answered in other posts, but I’ve searched the forum for days now and I can’t seem to find the relevant information.

Some unimportant (and uninteresting) background: so yeah, I’d like to start building up my uber world famous most awesome game ever but I have no experience with game engines (and game building for that matter…). I do have programming experience and know a thing or two about algorithms (at least their names :smiley: and read a lot of related articles). However, before I can even begin to think about my uber world famous project… I find it very difficult to figure out how to get my ideas inside the engine… in other words, how things fit together. This wouldn’t be that big of a deal if I were to work with code only because I’m used to working with code and I would probably figure out my style along the way… but:

Would any of you experienced BGE guys be inclined to help us noobs find our coding workflow (please)?

In short, how would you use programming patterns in BGE? Where do they go? Do you make a “game empty” type of manager object and add some “management” code in there? How would this master “game empty” interact with other objects? Where would the AI code go? Is the observer pattern useful? How would you keep your code decoupled (would you even have to, think about the command pattern)? Do you have a “master” script / module that initializes everything at the start or do you have an initializer script / module per object? etc.

I would like to hear all about your coding workflow / style / management. I think it will be really instructive for us noobs to get past the basic “character moved by keyboard pushes fancy box things around” and into more advanced implementations.

Thank you.

Think ‘Object’ and ‘module’

e.g. a tank called Abram. This tank will have its own properties such as speed, weight, etc.
Abram will have movement code; the tank’s head will have its own movement code, and so will the turret. It is also going to have AI.

Other tanks can have the same methods, using the same code with different individual properties (variables). Ships and towers can use the same turret and targeting code too…
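The actual code examples seem to have gone missing from the post above, so here is a minimal sketch of the idea in plain Python. Plain dicts stand in for BGE game objects (whose properties are also read with `obj["prop"]`); the `drive` function and the property names are illustrative, not any real API.

```python
# One shared movement function; each vehicle brings its own properties.

def drive(obj, forward):
    """Move any vehicle using its own 'speed' property.
    In the BGE this would call obj.applyMovement(...); here we just
    accumulate a position so the sharing is visible."""
    obj["position"] += obj["speed"] * forward

abram = {"speed": 2.0, "weight": 60.0, "position": 0.0}
light_tank = {"speed": 5.0, "weight": 20.0, "position": 0.0}

drive(abram, 1)       # Abram advances by 2.0
drive(light_tank, 1)  # the same code advances the light tank by 5.0
```

The same function serves every vehicle; only the per-object properties differ.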

Instead of keeping a sequencing/flow programmer’s mindset, try to think that everything is executed at once (even though it is actually executed in sequence).

So basically what you’re saying, if I understand correctly, is: python modules + logic bricks (per object) = object + behaviour. And instead of thinking OOP and general programming style, let the BGE take over and think about the interactions. But then, do all of those “cool” programming paradigms fly out the window?

Taking the AI as an example: this would mean creating the “brain”, then attaching it to the object(s) and letting the BGE manage the rest, so that no other programming patterns are necessary (except for the AI algorithm itself, of course). Is this interpretation correct?

One other (unrelated to this subject) question: is there a way to access properties from actuators / sensors (without python)? Say, for example, I would like to have an “Edit Object” actuator, but instead of “hard-coding” the name of the replacement mesh, have it look for some object property and use that instead. I guess this is the other thing I find difficult. Logic bricks are attractive because they are implemented in C++ (correct me if I’m wrong), so they are a lot faster than Python modules, but then you have to “hard-code” them, which makes them difficult to manage, so ultimately they are very restrictive and not so useful for complex projects.
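With Python this is straightforward: a controller can read the mesh name from a game property and pass it to `KX_GameObject.replaceMesh`. A hedged sketch follows; since it can’t run outside Blender, a tiny stand-in class mimics the two pieces of the game-object interface it uses, and the property name `replacement_mesh` is an invented example.

```python
# Read the replacement mesh name from a property instead of
# hard-coding it in an Edit Object actuator.

class FakeObject:
    """Minimal stand-in for a KX_GameObject: property lookup + replaceMesh."""
    def __init__(self, props):
        self.props = props
        self.mesh = "original"
    def __getitem__(self, key):
        return self.props[key]
    def replaceMesh(self, name):      # same name as KX_GameObject.replaceMesh
        self.mesh = name

def swap_mesh(own):
    # The mesh name comes from the object's own property, so the same
    # controller works for every object that carries the property.
    own.replaceMesh(own["replacement_mesh"])

obj = FakeObject({"replacement_mesh": "DamagedHull"})
swap_mesh(obj)
print(obj.mesh)  # -> DamagedHull
```

In a real scene, `own` would be `controller.owner` inside a Python controller hooked to whatever sensor should trigger the swap.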

Another example: letting the user change the key mapping… if you use the “Keyboard” sensor with specific keys, then the mapping is basically “hard-coded” with no way to change it (again, without python). It seems to me it would be really beneficial to have these sensors and actuators be able to use properties directly. Or is there a way to do it and I’m rambling pointlessly? :slight_smile:

It’s all about … abstraction. What is meant by this?

Well, in a game, things are often fairly similar. You have a couple of types of car. They may handle differently, but they’re all cars, and so it makes sense that the code should be reusable. However, many people unfamiliar with programming copy-paste a script and tweak values in it: one script for each car. Thanks to Object Oriented Programming, this is actually unnecessary. We can ‘abstract’ the Porsche, Ferrari, Toyota… into generic “cars.”
Thanks to subclassing, this can be taken hierarchically. Be warned, some people take OOP too far. I.e. a car isn’t tires, and thus shouldn’t subclass tires. If you want to go into that level of detail, a car has tires, so they should be attributes of the car.
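A tiny sketch of that distinction, with invented class names: subclassing for the is-a relationship (a sports car is a car), attributes for the has-a relationship (a car has tires).

```python
class Tire:
    def __init__(self, grip):
        self.grip = grip

class Car:
    """A car *has* tires (composition); it does not subclass Tire."""
    def __init__(self, top_speed, tire_grip):
        self.top_speed = top_speed
        self.tires = [Tire(tire_grip) for _ in range(4)]

class SportsCar(Car):
    """Subclassing is for is-a relationships: a sports car *is* a car."""
    def __init__(self):
        super().__init__(top_speed=300, tire_grip=0.9)
```

Every concrete car now reuses the generic `Car` code instead of getting its own copy-pasted script.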

Another important thing is modularization. Many, many things across a game require a simple “container”: health, energy, ammunition… So if you code a class for handling input and output from a container, you can stick it in attributes everywhere, and presto, things get easier.
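A minimal sketch of such a container class (the names and method set are my own, not from the post): one bounded quantity that works equally well as health, energy, or ammo.

```python
class Container:
    """A generic bounded quantity: health, energy, ammunition..."""
    def __init__(self, amount, maximum):
        self.amount = amount
        self.maximum = maximum

    def add(self, n):
        """Add up to n units, clamped at maximum."""
        self.amount = min(self.amount + n, self.maximum)

    def remove(self, n):
        """Take up to n units; returns how much was actually taken."""
        taken = min(n, self.amount)
        self.amount -= taken
        return taken

    @property
    def empty(self):
        return self.amount <= 0

health = Container(100, 100)
ammo = Container(30, 120)
health.remove(35)            # took some damage
print(health.amount)         # -> 65
```

The same class then becomes an attribute of players, enemies, vehicles, pickups… wherever a bounded quantity is needed.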

Obviously not everything can be handled by these objects. Things like loading a game: it’s not an object. How about keypresses, or the game-win criteria? How do I do these?
Well, I do have a master object: an empty sitting in the middle of the scene that runs a bunch of functions every frame. This handles things like loading the game, score, keyboard, and all the game elements that don’t have inherent objects to attach them to.
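One way such a master-object module might look (a sketch under my own assumptions, not the poster’s actual code): an Always sensor with true-pulse on the empty would call `main` every frame, and `main` simply runs a list of frame tasks. The task names and the `state` dict are illustrative.

```python
# Frame tasks for the "master empty". In the BGE, an Always sensor
# (true pulse) would call main(state) once per frame.

def update_score(state):
    state["score"] += state.get("score_delta", 0)

def poll_keyboard(state):
    pass  # read bge.logic.keyboard here in a real game

def check_win(state):
    state["won"] = state["score"] >= state["target"]

FRAME_TASKS = [update_score, poll_keyboard, check_win]

def main(state):
    for task in FRAME_TASKS:
        task(state)

game = {"score": 0, "score_delta": 10, "target": 30}
for _ in range(3):        # simulate three frames
    main(game)
print(game["won"])        # -> True
```

Adding a new game-level responsibility is then just appending another function to `FRAME_TASKS`.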

An interesting exercise is a keymapper. Many people hardcode keyboard layouts into their project, which is, to many people, the only way they can see to do it. I prefer to have a dictionary that translates keys into human-readable English, i.e. “Forwards” instead of “W.” I wrote a script that does this and dumps the result into bge.logic.activeKeys. You run it every frame to update it, and suddenly keymapping, as well as using a key, becomes elementary.
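The poster’s script isn’t shown, so here is a minimal sketch of the idea with made-up key names standing in for the `bge.events` key codes: a dict maps readable action names to keys, and each frame the pressed keys are translated into active actions.

```python
# Action name -> key. In the BGE the values would be bge.events.WKEY
# etc., and pressed_keys would come from bge.logic.keyboard.

KEYMAP = {"Forwards": "W", "Back": "S", "Shoot": "SPACE"}

def active_actions(pressed_keys, keymap):
    """Translate raw pressed keys into human-readable actions."""
    return {action for action, key in keymap.items() if key in pressed_keys}

# Remapping is now a single dict assignment; no game logic changes:
KEYMAP["Forwards"] = "UPARROW"
print(sorted(active_actions({"UPARROW", "SPACE"}, KEYMAP)))
# -> ['Forwards', 'Shoot']
```

Game code then asks “is ‘Forwards’ active?” and never mentions a physical key, so a key-remapping menu only has to edit the dict.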

Programming is not hard to learn; the syntax will only take a month or two. The concepts of high-level programming take a lot longer.

I don’t have problems programming (that is… general programming), but inside the BGE (and I guess game engines in general), things get managed for you: physics, interactions, messages, etc. I am in the habit of managing everything “by hand”, that is, designing the MVC, the interactions and other low-level stuff, so a system that does the managing for you takes some getting used to. And that’s the point of all this: to better understand how to organize code and where it’s supposed to go.

I’ll have to think about your suggestions, thanks.

Oh yeah, and hope I don’t get penalized by moderators because I started this thread in the resources part of the forum… that’s cause I’d like it to grow into a “coding guidelines” type of thing. Hope that’s OK with you :).

Everything is managed for you

Not really. The only things I consider fully managed for me are graphics and physics. Everything else I put my own wrappers on top of, even things like lighting. In the MVC model, Blender handles the V (graphics), most of the M (physics), and very little of the C. I find the line between M and C relatively blurry. A PID controller is a controller, right? But if I code in gravitational attraction between bodies, that’s a model. And yet both can control force as a function of distance. Guess it’s just… more abstraction!

So, code organization. How do I do it? Well, it all depends. I tend to separate things out into:

  • Game level functionality (loading, saving, win criteria, keymapper)
  • Player code (movement, etc)
  • AI code (movement, health)
  • Vehicles/objects (these can then have attributes of either AI or player)
  • Support functionality (health modules, PID controllers, base level classes mainly)

The only modules the BGE uses directly are the vehicles/objects and the game-level one.

I did report this thread to the moderators to suggest moving it to the support and discussion. Generally you only put things in the resources thread if the solution has already been found! So when you have a good understanding of workflow, then is the time to write up about it in the resources forum.

I think I have an idea of how to implement some of this stuff. I’ll do some experiments and post here. Meanwhile would be helpful if others would throw their voices around and share how they manage their coding. (specific) Examples are always welcomed :).

** moderation **

The resource sub-forum is supposed to contain finished resources. Even if you use this thread to collect ideas for a coding guideline, it is better to create a new thread where you present your conclusions.
** end of moderation **

The questions are very valid. But if you think of a project in terms of “programming”, it is most likely doomed right from the beginning. Typically you start with a general design (top-down approach). Define your general goals. This way you can later check if the game is really doing what it is supposed to do (this applies to nearly all applications).

Indeed you can choose the bottom-up approach and start with a basic prototype evolving into something bigger. I recommend this way right at the beginning. It will give quite a large experience boost, but most likely it will not result in a great game. So start small.


These “cool” programming paradigms are, as the name says, programming paradigms. In a game, the behavior (= logic = programming) is just a small part of all the necessary parts. When you describe the behavior, you can apply these paradigms. Overall, you can look for design paradigms.

For example, the MVC paradigm fits pretty well into the architecture of a game. You can develop the game model, which does not need the BGE at all. The controller and view implement the interaction with the player via the BGE (or any other platform).
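To illustrate the point (my own minimal example, not from the post): the model below needs no `bge` import at all, so it can be developed and tested anywhere; only a view/controller layer in the BGE would touch the engine.

```python
# Model layer: pure game rules, no engine dependency.

class Turret:
    """A turret that can only aim within its firing arc (degrees)."""
    def __init__(self, arc=90):
        self.arc = arc
        self.angle = 0.0

    def aim(self, target_angle):
        # Clamp the requested angle to the allowed arc.
        self.angle = max(-self.arc, min(self.arc, target_angle))

# A BGE "view" would copy model state onto a KX_GameObject each frame,
# e.g. setting own.worldOrientation from t.angle (not shown here).

t = Turret(arc=45)
t.aim(120)          # the model clamps the request to its arc
print(t.angle)      # -> 45
```

Because the model is engine-free, the same `Turret` rules could drive a BGE scene, a unit test, or a different platform entirely.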

But you do not need it at that level. You can use it for specific aspects of your game, e.g. the menu, object selection, or even your example, the “character’s brain”.

The point is, the BGE is already there. It allows you to think in “objects”.
It provides an event system (sensors) and “pseudo-parallel” processing (actuators).

It is not highly dynamic, but dynamic enough to fit most requirements.

Oh yes, they are faster. But “faster” always depends on the situation. There are some benchmarks in the resource section.

If you come from programming, the logic bricks feel a bit static. The point is… they are static. In 90% (rough guess) of the cases this is exactly what you need. This means all of these cases are very simple to set up.

When you need more dynamic behavior… you can create your own custom logic brick which adds it (e.g. configuring another logic brick) or implements custom operations. The programming paradigms will fit in here very well.

Btw. there is nothing that prevents you from creating multiple custom bricks and distributing them across several objects ;). [I recommend avoiding a “god brick” which tries to do everything. This would bypass the existing event system. There are other game engines that support that option much better.]

I developed an “input mapper” some years ago. It is pretty nice. You can re-configure all input (mouse, joystick, keyboard) in any combination. You can save and load the mapping. Unfortunately it is not that easy to use: the whole game gets a dependency on this system.

Finally I discovered a much simpler method, which can be applied to any game that does not bypass the individual keyboard sensors. You scan all objects for keyboard sensors and take the sensor name as its “purpose”, e.g. <space> -> “shoot”.
This way you know what keys are necessary and what they are mapped to. You can even identify non-unique mappings.
The drawback is that you can’t exchange a keyboard press for a mouse-button press. But this is a very small limitation.
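A hedged sketch of that scan (my own stand-in classes replace the scene and `SCA_KeyboardSensor` objects you would get from the BGE, and the sensor/object names are invented): walk every object’s sensors, keep the keyboard ones, and build a key-to-purpose table.

```python
# Build a keymap by scanning objects' sensors; the sensor *name*
# serves as the action's purpose.

class Sensor:
    def __init__(self, name, key=None):
        self.name = name
        self.key = key          # None for non-keyboard sensors

class Obj:
    def __init__(self, sensors):
        self.sensors = sensors

def keymap_from_scene(objects):
    """key -> list of purposes; more than one purpose = non-unique mapping."""
    mapping = {}
    for obj in objects:
        for sens in obj.sensors:
            if sens.key is not None:   # in the BGE: check for SCA_KeyboardSensor
                mapping.setdefault(sens.key, []).append(sens.name)
    return mapping

scene = [Obj([Sensor("shoot", key="SPACE"), Sensor("near_enemy")]),
         Obj([Sensor("jump", key="SPACE")])]
print(keymap_from_scene(scene))  # -> {'SPACE': ['shoot', 'jump']}
```

In a real scene you would iterate `bge.logic.getCurrentScene().objects` instead of the stand-in list; the double-mapped SPACE key above is exactly the kind of conflict the scan makes visible.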

You can show the results of the analysis to the user. With a nifty GUI the user can reconfigure the mapping on the fly. (Btw. this is a good example of MVC ;)).

My solution just analyzes the current keyboard (see: Analyzer - analyze your scene on the fly (2.49-2.5+)). I never finished a GUI.

When you create your own game you can decouple input from purpose, e.g. by allowing <WASD>, <arrows> and <mouse> at the same time, or in any combination, even on the fly. In that situation I create separate objects that send a message on specific input.
E.g. <A> -> “turn left” and/or <left arrow> -> “turn left” and/or <mouse over button + click> -> “turn left”.
The operation listens to the message “turn left” rather than the specific input sensor.
Example: WALL-E
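A real BGE setup would use Message actuators and sensors for this; the trivial message bus below (my own sketch, invented names) shows the decoupling idea outside the engine: several inputs all publish “turn left”, and the operation subscribes only to that subject.

```python
# Minimal publish/subscribe bus standing in for BGE message
# actuators/sensors.

subscribers = {}

def listen(subject, fn):
    subscribers.setdefault(subject, []).append(fn)

def send(subject):
    for fn in subscribers.get(subject, []):
        fn()

state = {"heading": 0}

def turn_left():
    state["heading"] -= 15

listen("turn left", turn_left)

# <A>, <left arrow>, and a clicked button can all send the same message:
send("turn left")
send("turn left")
print(state["heading"])  # -> -30
```

The turning code never knows which key, button, or click produced the message, so inputs can be added, removed, or remapped without touching it.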

You need to decide beforehand if you want to use the built-in mechanics or not. You do not need to use the physics engine, but if you don’t, you need to create the physics behavior on your own. This is a high-level design decision.

Anyway, I suggest you read some of the guides in my signature. They should help you get a feeling for the BGE. It is a framework; you get the best results if you understand how to use it.

…from my experience, mistakes I’ve made with BGE + Python:

  • Unfriending logic bricks!
    I tried to avoid logic bricks, executing everything at the beginning and with ‘Always’. I should have made friends with them.
  • Going as low-level as I could.
    I tried to reinvent everything, even controlling the loop cycle. If I want to reinvent everything, it would be better to use C or ASM.
  • Dumping all code into one file, thinking one file is nicer/easier.
    Called once, executed ‘always’. Once Blender got updated, most of the code no longer worked. I had to trace and fix it almost line by line, and lost track, especially with thousands of lines of code. Hard to track.
  • Hating multiple files (improper management).
    What I should do to ease development is separate one file into multiple objects and actions, so I can use and reuse objects[n].blend and actions[n].py in other projects while optimizing the existing library of objects[n].blend and actions[n].py, minimizing global variables, and adding lots of individual properties (values) per object.
  • Putting (subjective) perfection over the objective.
    I wasn’t able to get it done. What I should do is use what I have (ready-made: find out how to get it done) to meet the objective, then polish it later, .blend by .blend and .py by .py.


Create an idea.

Define events that initiate parts of the idea: these are your logic bricks.

Hook them to actuators or python or both.

Rinse and repeat.

also, there are things you can do with Run Armature, and in armature pose mode: Copy Scale, Copy Position, or Copy Rotation,

also -

object based systems


take a default cube, scale it down by .5 (so it is 1 unit long), and move the origin to the middle of any face.

Now if you scale it, the scale value will be its length.



Logic is powerful, use it to your advantage.

keypress 1 ----------------------and------------ value + 1
property interval (min: 0, max: 99) --/

keypress 2 ----------------------and------------ value + (-1)
property interval (min: 1, max: 100) -/

property “value” changed ---------- python



offsetParent.blend (437 KB)
JumpLogic.blend (434 KB)
PlatformLogicAndPython (2).blend (473 KB)

Thank you for all your help guys. I think it makes a huge difference.

I do not have your patience to respond to all that you wrote, so I’ll just say thanks, and also: great job on your guides! I hope to achieve a better understanding and give back to the community with all of your help. I did look through your guides (not the library yet, though… probably I should have before opening this topic). The guides are great, but I get the feeling they are very “specialized”, as in: this is “the simple way” to do it, but not necessarily the “complex-project-friendly way”. That said, next on the list is to go through the more complex “library” examples and steal ideas from there, as those are more evolved examples. Thanks.

I am always trying to “follow” some guidelines to make my life easier. I only want to comment on point 2. That’s my problem: because I’m used to building stuff from the bottom up and I have no experience with game engines, I guess I’m a bit lost. I want to let the engine do the work for me, but then I’m faced with not knowing where things should go exactly :).

I’ll be sure to take a look at your example files.

Little by little I’ll get there. For the time being I’m gonna do a bunch of experimentation and see where it gets me.