Proposal for bypassing logic bricks for sensors

In the thread discussing nodal logic it was suggested that Python-only logic should be improved instead of (or as well as) nodes.
I don't see these as mutually exclusive, and while looking into the current event system I thought of a way we could have Python-only events.

http://wiki.blender.org/index.php/User:Ideasman42#BGE_Python_Logic

This isn't even that hard to implement, and it can coexist with existing logic brick setups nicely.

I'm not a heavy programmer/Python type… but I like the idea of being able to bypass the nodes etc. and just use Python… I mean, I like the idea of having both options… like using nodes when prototyping, then moving on to Python for the complex/final version etc. Thanks for suggesting this proposal.

Hi ideasman,

This sounds cool. There is a potential loss of overview, as the user could not immediately see that a script might react to events that are not displayed within the UI (no sensors to see). But it would provide more powerful scripting capabilities.

Would that event queue store all events (also from other objects)?

Here are some ideas that crossed my mind:
How about an “event sensor” or “event queue sensor”? That could fire as soon as an event is in the event queue. As a side effect it would show the user that a script is reading the event queue.

As an option it could also provide the possibility to filter the events (see the sketch after this list). I'm thinking of:
“all events”,
“global events” (object-independent events like keyboard, mouse),
“local events” (events on the current object, like ray).
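
A minimal sketch of how reading such an “event queue sensor” from a controller might look; the sensor type and its events attribute are assumptions for illustration, not existing BGE API:

import GameLogic

cont = GameLogic.getCurrentController()

# Hypothetical "event queue" sensor, configured in the UI to filter
# on "global events" only (keyboard, mouse).
queue = cont.sensors["EventQueue"]

# Assumed behaviour: the sensor exposes whatever events arrived this tick.
for evt in queue.events:
    print("event type:", evt.type, "source:", evt.source)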

I like the callback idea.

@Monster, good point. While thinking of this I was looking at event managers, and not thinking about sensors that are local to a certain object and have their own settings.

Global events are simple and I covered those, but local events are not :confused:

I can't think of a way to do this without using logic bricks, since the events are based on logic brick settings (ray cast direction, for example).

As well as having a real logic brick, you'd need to use the sensor->controller connection so the BGE's logic manager would know the sensor is active and needs to be evaluated.

You could have a command on the Python controller that allows you to register a Python function that gets called directly when a sensor is true. But this seems a bit clunky and not that different from having a sensor call a Python script directly.
Hints on how to improve this are welcome :slight_smile:
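
For illustration, the registration command might look like this minimal sketch; registerCallback() is a made-up name, not an existing BGE call:

import GameLogic

def on_sensor(sensor):
    # Would be called directly whenever a linked sensor turns positive.
    print(sensor.name, "fired")

cont = GameLogic.getCurrentController()
cont.registerCallback(on_sensor)  # hypothetical registration command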

Take 2… this is an imaginary API, but maybe not SO far from being possible (the subclassing stuff is there).


class MyObject(GameLogic.Types.KX_GameObject):
    def onMouseDown(self, event, buttons=[0, 2]):
        # the keyword default doubles as the sensor setting: which buttons to listen for
        print("which button was pressed?", event.button)
        self.loc *= 0.1

    def onRayCast(self, event, rays=((0, 1, 0), (0, 0, -1))):
        print("which ray was hit?", event.ray)

    def onKeyDown(self, event, keys=['UP', 'DOWN', 'LEFT', 'RIGHT']):
        print(event.key)

    def onCollision(self, event, properties=['ground', 'kill']):
        sensor = event.sensor

        for ob in sensor.hitObjects:
            print("HitObject", ob)

For this to work the class would be inspected, and internally sensors could be added to match it, but users would not need to create the logic bricks themselves.

One thing that is odd about this is that each class function might have to map to multiple sensors in the BGE. Having the sensor settings as keyword arguments on the class methods is also a little odd.

Oddness aside, I really like having one class that describes all the logic for an object without mixing in logic bricks.
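
To make the inspection step concrete, here is a minimal sketch of how the introspection might work; the SENSOR_FACTORIES table and the on* naming convention are assumptions for illustration, not real BGE machinery:

import inspect

# Hypothetical mapping from handler names to BGE sensor types.
SENSOR_FACTORIES = {
    "onMouseDown": "Mouse",
    "onRayCast": "Ray",
    "onKeyDown": "Keyboard",
    "onCollision": "Collision",
}

def sensors_for(cls):
    """Describe the sensors a logic class like MyObject would need."""
    for name, method in inspect.getmembers(cls, inspect.isfunction):
        if name not in SENSOR_FACTORIES:
            continue
        # The keyword defaults double as the sensor settings.
        sig = inspect.signature(method)
        settings = {pname: param.default
                    for pname, param in sig.parameters.items()
                    if param.default is not inspect.Parameter.empty}
        yield SENSOR_FACTORIES[name], settings

for sensor_type, settings in sensors_for(MyObject):
    print(sensor_type, settings)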

I like the idea of bypassing sensors, but I'm more interested in bypassing the Python controller: if somehow you could register a .py file in the world tab (2.5), it would run, calling a certain function such as init() the first time and main() every logic tick. Something like a script brain.
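
Such a registered module might look like this minimal sketch; the init()/main() entry points are just the convention proposed above, and the body is placeholder:

# brain.py -- hypothetical "script brain" registered in the world tab
import GameLogic

def init():
    # Runs once, on the first logic tick: set up scene-wide state.
    GameLogic.globalDict["score"] = 0

def main():
    # Runs every logic tick, with no sensors or controllers involved.
    scene = GameLogic.getCurrentScene()
    for ob in scene.objects:
        pass  # per-frame logic for each object goes here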

Also, is subclassing available in the 2.5 builds?

I like the eventRegister part… it's really what I miss. But attaching a sensor to a Python controller is what caught me in the first place when I met the Blender GE. I think that visualizing logic in bricks is what separates the Blender GE from other GEs and made it so popular.

If you really want to get rid of the UI sensors, you could create sensors with Python. Sensors could be added and removed dynamically (similar to properties). The parameters for these sensors could be provided within the constructor.

The controller already provides a list of sensors. So it should be easy to get a reference to a sensor and add a listener to it (rather than to the KX_GameObject).
Each sensor class could have its own listener class.
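
A rough sketch of what that might look like; addSensor() and addListener() are made-up names, not existing BGE calls:

import GameLogic

class RayListener:
    # Hypothetical listener class paired with the Ray sensor type.
    def onPositive(self, sensor):
        print("ray hit", sensor.hitObject)

cont = GameLogic.getCurrentController()
own = cont.owner

# Hypothetical: construct a sensor dynamically, with its parameters
# passed to the constructor instead of set in the UI.
ray = own.addSensor("Ray", axis=(0, 1, 0), range=10.0)
ray.addListener(RayListener())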

Maybe that is an idea?

@Monster, this is possible, yes.
If Python can dynamically create and remove sensors, it opens up a lot of opportunity for new systems to be written/tested in pure Python, where the conversion is done behind the scenes from class + function introspection or whatever.

The way it could work in an all-Python setup is to have a class attached to each object (this could be a .py file on each object; yes, bypassing Python controllers).

Then you could perhaps attach a class to the scene too, for global events?

It's tempting for me to have something very similar to Macromedia/Adobe Director, which is what I first learnt programming with. It has the concept of behaviors you can attach to objects (sprites), but you can have global behaviors too.

Use an event handler that the Python programmer can bind callbacks to.

For example, something like:


from bge import event

def my_keyboard_handler(evt):
    # 'evt' is the event being handled (named to avoid shadowing the module)
    if evt.keycode == event.KEY_W:
        go_forward()

event.bind_handler((event.KEY_PRESS, event.KEY_RELEASE), my_keyboard_handler)

It could also be implemented as a decorator.
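
Perhaps something like this; event.handler() is just as imaginary as bind_handler() above:

from bge import event

@event.handler(event.KEY_PRESS, event.KEY_RELEASE)  # hypothetical decorator
def my_keyboard_handler(evt):
    if evt.keycode == event.KEY_W:
        go_forward()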

The idea of subclassing an object sounds cool. Using a system like that there would be no need for logic bricks, and you could keep all the functionality in nicely organized Python code :eyebrowlift:.

I prefer callbacks.

Actually, I already recommended such a system in a similarly themed thread:

http://blenderartists.org/forum/showthread.php?p=1332752

The example code, reposted for convenience:


# Blender should generate a base class for each 3D object in the scene
# containing mesh, ipo, texture and other relevant "object data".

class My_Cube(Cube):
    def __init__(self, posList):
        self.setPosition(posList)

        self.health = 100

        # I would recommend AS3 style registration:

        self.addEventListener(KeyboardEvent.KEY_DOWN, self.keyDownHandler)
        self.addEventListener(KeyboardEvent.KEY_UP, self.keyUpHandler)

        self.addEventListener(CollisionEvent.DYNAMIC, self.collisionHandler)

    def keyDownHandler(self, key_event):  # event information passes to handler
        if key_event.KEY == GameKeys.W:
            # If values other than 0 are applied, it should just activate
            self.applyLinearVelocity(0, 1, 0, 1)
        elif key_event.KEY == GameKeys.S:
            self.applyLinearVelocity(0, -1, 0, 1)

    def keyUpHandler(self, key_event):
        if key_event.KEY == GameKeys.W or key_event.KEY == GameKeys.S:
            self.applyLinearVelocity(0, 0, 0, 0)  # zeros -> deactivate

    def collisionHandler(self, collision_event):
        if collision_event.object.getName() == "Monster":
            self.health -= 10
            if self.health <= 0:
                self.endObject()

Note the comments in particular; I don't simply want to subclass a general “KX_GameObject”. Have Blender generate ready-made classes that already contain all the specific mesh, animation, and texture data, and then we can subclass those.

PS: Thank you for considering this, at the very least.

Another thought:

If you think about it, controllers are nothing other than listeners attached to sensors.

To go ahead with a full Python setup, the controllers and actuators could be dynamic as well. So you could have one or more scripts that add/remove/disable/enable the sensors, controllers and actuators of any object.

Some of that could be made available to the logic bricks as well, by adding controls to enable/disable sensors.
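
Sketching that idea with the same made-up addSensor()/addController()/addActuator() calls as before (none of these exist in the BGE API):

import GameLogic

cont = GameLogic.getCurrentController()
own = cont.owner

# Hypothetical: build a whole brick network from a script.
always = own.addSensor("Always", pulse=True)
ctrl = own.addController("Python", module="player.update")
motion = own.addActuator("Motion", linV=(0.0, 5.0, 0.0))

always.link(ctrl)   # sensor -> controller
ctrl.link(motion)   # controller -> actuator

# Bricks could be enabled/disabled at runtime too.
motion.enabled = False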

I agree with Monster. I don't mind how logic bricks work at all. My only problem with them is that we cannot dynamically add/remove them in real time. If we could do this for all logic bricks, plus be able to set global bricks for each scene as Ideasman suggested, then the BGE logic would be perfect.

Because then we could have all our code scripted without having to worry about the visual interface at all. At the start you simply link one script to your scene, and you build all the logic for all objects from there, without having to click on each one in the viewport. This would let us organize our code much better.

You could even choose whether you prefer an event queue or event callbacks: link all sensors to the same Python controller if you want an event queue, or link each sensor to a different controller if you want event callbacks. Since we'd be doing all this linkage via Python, we wouldn't need to worry about the mess of crossing lines this would cause in the visual interface. We could still have clean, easy-to-follow code doing this.

Especially with subclassing in 2.5, where we could simply add the sensors to the object in the init function. It would look similar to the callback example code that Social and Ideasman just posted, except that instead of setting callback functions we would be setting up sensors and controllers that do the same thing (see the sketch below).
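
A rough sketch of the two linkage styles, reusing the made-up addSensor()/addController()/link() calls from earlier in the thread:

class Player(GameLogic.Types.KX_GameObject):
    def __init__(self):
        # Callback style: each sensor gets its own controller.
        key = self.addSensor("Keyboard", keys=['W', 'A', 'S', 'D'])
        key.link(self.addController("Python", function=self.onKey))

        # Queue style: many sensors feed one controller that drains
        # them all each tick.
        shared = self.addController("Python", function=self.onEvent)
        for stype in ("Collision", "Ray", "Near"):
            self.addSensor(stype).link(shared)

    def onKey(self, sensor):
        print("key sensor fired")

    def onEvent(self, sensor):
        print("queued event from", sensor.name)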

Another thing that would help a lot is losing the dependency on actuators. That “cannot activate actuators of non-active controllers” error message is a pain to handle. It would be great if every controller could use every actuator. Or even better, if we had an alternative function for every actuator. But I suppose this is already on the to-do list, right?

Will this break our existing Python scripts, or will it be more that we'll be able to add these new functions to our existing scripts, so we can remove the rest of the logic bricks from our logic setups?

This sounds cool. There is a potential loss of overview, as the user could not immediately see that a script might react to events that are not displayed within the UI

You can always fill up your script with comments, and section up a large main script if you have one, or you can just have multiple Python scripts and allow execution of another script from within a script.

I like the idea of a class attached to any object (with a .py file, for example). Choosing to use it or not could be a simple button in the UI, or a specific sensor (plus a field to designate the .py file!).
This way global events can be attached to the scene or to a null object, which can simplify reusability a lot.

The sample code Social and cyborg_ar are talking about looks like Flex/AS3 syntax, based on event callbacks. That way it's simple to add/remove callbacks at any time.

Personally I prefer callbacks, as I've practiced Flex a bit, but I don't know the technical reality of implementing this in the BGE…