New Logic Design Discussion

Does anyone have any good ideas for game logic?

I have been thinking more and more about using drawers / containers as a node graph hierarchy…

deeper nested logic = later execution

Each ‘function’ (Python or logic node) could be edited once selected, and could contain hyperlinks to the nodes it connects to.
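The nesting-depth idea could be sketched like this. This is a hedged illustration with invented names (`Node`, `run_order` are not part of any existing BGE API): containers are plain tree nodes, and "deeper nesting = later execution" falls out of a breadth-first walk.

```python
# Sketch of "deeper nesting = later execution" with plain Python
# objects. Node, add_child and run_order are all hypothetical names.

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        return child

def run_order(root):
    """Return node names breadth-first: shallower containers run
    first, deeper nested logic runs later."""
    order, queue = [], [root]
    while queue:
        node = queue.pop(0)
        order.append(node.name)
        queue.extend(node.children)
    return order

root = Node("scene")
move = root.add_child(Node("move"))
move.add_child(Node("animate"))
root.add_child(Node("hud"))
print(run_order(root))  # ['scene', 'move', 'hud', 'animate']
```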

Hmm, I don’t know about that.

Would be good if we didn’t need to use cubes for characters and so on. That is very annoying.

You can use actual character setups in BGE games, but you might face the issue that the engine does not have an optimal workflow for creating reusable assets separated into multiple files (you can use group instances in the same .blend file, but it still means the engine is optimized for having the game in one large file).

For example, months ago (and even now) I didn’t know how to get a robot working, so I had to use a cube. I only realised that when somebody helped me with a mini game relating to the project I had cancelled at the time.

I found that annoying.

You can have the physics object be a small cube at the actor’s center of mass, use empties parented to each bone to move and update each object’s position, and parent them to a root object each frame using compound parenting.

This makes an actor with accurate physics.

There is an example I made in Resources.
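The per-frame sync at the heart of that setup can be sketched in pure Python. This is only an illustration: `Bone`, `HitBox`, and `sync_hitboxes` are invented names, and in the actual BGE you would also parent each box to the root with `setParent(root, compound=True)` so the physics engine treats them as one compound shape.

```python
# Pure-Python sketch of the technique above: one small physics box
# per bone, re-synced to the bone's world position each logic tic.
# All class and function names are invented for illustration.

class Bone:
    def __init__(self, name, world_pos):
        self.name = name
        self.world_pos = list(world_pos)

class HitBox:
    def __init__(self, bone):
        self.bone = bone
        self.world_pos = list(bone.world_pos)

def sync_hitboxes(hitboxes):
    """Run once per logic tic: snap every box to its bone."""
    for box in hitboxes:
        box.world_pos = list(box.bone.world_pos)

bones = [Bone("spine", (0, 0, 1)), Bone("head", (0, 0, 2))]
boxes = [HitBox(b) for b in bones]
bones[1].world_pos = [0, 0.5, 2.1]  # the animation moved the head
sync_hitboxes(boxes)
print(boxes[1].world_pos)  # [0, 0.5, 2.1]
```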

Back to topic: it seems the UE4 Blueprint system is a good start for ideas. :slight_smile:

If they are going to go that route, I hope there is good documentation, because I could not figure it out in Unreal Engine. The part I was having problems with was getting an actual door to open and close. I like the BGE logic bricks for that reason.

The node logic editor may be the way to go. You can actually make the same things as with logic bricks but with far more order in your logic, so I think that would be a good starting point. I’ve been trying it for the last few weeks and it’s pretty good, and when logic gets dense it’s a relief not to have logic brick saturation in the logic editor window. It actually works kind of like the UE Blueprint (as far as I can tell; I’ve never tried UE, this is from the photos I’ve seen and what I’ve read). It looks pretty close to the concept. The thing is, maybe the logic node system could be expanded further to create a custom script output on top of the logic provided by logic bricks. Maybe that’s the plus the UE Blueprint has over logic bricks and the Logic Node Editor.

From what I’ve read, the problem with Unreal’s Blueprint system is that it tries too hard to be programming (but with blocks instead of text).

A node system in the BGE would only work if it doesn’t actually try to be a low-level programming solution, much like what it already has with the bricks (but you would also have a ‘script’ node if you need finer and lower-level control). In short, they would be higher level and present a complete game mechanic via a GUI (much like what the bricks have now).

I think that sensors that evaluate using properties (and can set properties if true), combined with actuators that can use properties set by a sensor, would cover many situations.

Ray(Health) set prop Target = hitObject -----------and----------- add -5 to Target[Health]

Radar (set prop Target = hitObject) -----------and----------- Steer to Target

without complicating things too much.
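The two brick chains above might look like this in code form. This is a hedged sketch with invented function names; the shared `props` dict stands in for an object’s game properties.

```python
# Sketch of "sensor sets a property, actuator reads it", mirroring
# the Ray/Radar chains above. All names are invented for illustration.

def ray_sensor(props, hit_object):
    """Like Ray(Health): store whatever was hit under 'Target'."""
    if hit_object is not None:
        props["Target"] = hit_object
        return True
    return False

def damage_actuator(props, amount):
    """Like 'add -5 to Target[Health]': modify the stored target."""
    target = props.get("Target")
    if target is not None:
        target["Health"] = target.get("Health", 0) + amount

player_props = {}
enemy = {"Health": 100}
if ray_sensor(player_props, enemy):    # the ray hits the enemy...
    damage_actuator(player_props, -5)  # ...so the AND fires the actuator
print(enemy["Health"])  # 95
```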

In my opinion, it would be better to just drop the idea of a visual scripting solution (including logic bricks or nodes). It would be better to implement a component system for scripts, like Unity’s behaviours or nodes in Godot. UPBGE also has components.

In my opinion, visual scripting does not deserve all of the negative press that it receives.

The summary of my position is that expressing a programme in its primitives is less compact with visual scripting than writing the same in code, but that is perhaps the same class of problem that pure-code programmes face when it comes to reusability and constraints.

Visual programming usually requires design to an interface. This approach encourages the programmer to think in terms of interfaces. It also encourages programmers to think in a more functionally oriented manner rather than falling into the pitfalls of EIAO (everything is an object). In many ways, visual programming languages are conceptually equivalent to language compilers, and inspire many of the “assembly is best” adages.

Indeed, visual programming is not the final solution to programming pitfalls, nor is it (as aforementioned) the best means of writing particular types of code. However, it is very good for establishing relationships, and can prove particularly useful for expressing event-driven programmes. In the realm of game engines this strength is most apparent, given that logic bricks are on many occasions useful for developing quick (and simple, as a consequence of their limited grammar) programmes.

The best aspect of HIVE for me has always been the interoperability of pure python code and the Hive objects that represent a visual node. I can choose how fine-grained my programmes are, and employ the power of the Python language model.

From this experience, I would strongly emphasise how important the Python API is to these sorts of developments. I think developing a standalone tool in C++, like logic bricks, is largely a wasted enterprise. Better to either provide first-class C++ access (and then develop this tool) or improve the Python API and build the tool upon that base. Certainly, the API comes before visual scripting, simply because the former permits the latter’s development at a later date.

That’s not actually true. Groups can be linked from external files.
Paired with the blend library addon you can (almost) compete with the workflow of Unity/etc. I say almost because you have to write a script to read groupObject properties and the designer has no idea which properties are valid without external documentation.

Thing is… it’s not in the manual. And nobody really exercises this workflow in tutorials. Thus it remains unknown to most users.

Visual scripting
I couldn’t care less, so long as there’s some sort of component system for designers to work with.

You can use external files, but the workflow is pretty clunky if you have a need to jump back and forth between the main game file and the various asset files (there’s currently no one-click solution through an asset browser and there’s no ability on Blender’s side to have multiple .blend files loaded at once so as to allow rapid switching with something like tabs).

To tweak an asset you have to open the file browser, search for the file, and open it (closing the main file in the process). Then you make your changes and you have to go back through the file browser to open the main file in hopes the changes still fit in well. If they don’t fit in well, repeat step 1 to tweak further.

Now of course we have Bastien working towards a proper asset management system that will likely make this type of multi-file structure far easier and faster to work with, but until then the BGE is still optimized for single-file games.

You can have multiple copies of Blender open and Alt-Tab between them.

In my experiments, building a logic system for a game using Python is not advisable. First I have to establish that by a logic system I mean a system that allows the user to control the behavior of the rendered objects, and that ranges from a UI text field to a particle system.
While Python brilliantly solves the problem of having to face the abysmal C++ development stack (I’m looking at you, CMake & Friends), it lacks the properties needed for the “particles extreme”, that is, non-green multithreading and escape analysis (so that the VM can strip out the indirect calls that we humans love but our computers do not).
Fortunately, doing that in C++ is no problem at all, and I can say that because even a monkey like me can do it.
This is a reference implementation that I wrote as a pet project using the upbge codebase:

That thing offers support (offers because while it works, it’s empty) for anything that is not related to rendering, from asynchronous scene loading to multi-threaded animation. For the curious, it works in the context of a main-thread engine like bge because it allows a computation to specify a preferred thread of execution, so one can compute the location of a bunch of particles in a background thread-pool then pass the result to an updater unit executed by the renderer thread to update the rendered objects.
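The handoff pattern described here — compute in a background thread, then apply the result on the renderer thread — can be shown with the Python standard library, even though the reference implementation is C++. This is only a pattern sketch; the function names are illustrative and not taken from that codebase.

```python
# Minimal sketch of the thread-handoff pattern described above:
# a worker computes particle positions, and the "renderer thread"
# (here, the main thread) drains a queue and applies the results.
# Names are illustrative, not from the referenced implementation.

import queue
import threading

results = queue.Queue()

def compute_particles(count):
    """Background work: produce new particle positions."""
    positions = [(i * 0.1, 0.0, 0.0) for i in range(count)]
    results.put(positions)

worker = threading.Thread(target=compute_particles, args=(4,))
worker.start()
worker.join()  # a real engine would poll per frame instead of blocking

# "Updater unit" on the renderer thread: apply any queued results.
scene_particles = []
while not results.empty():
    scene_particles = results.get()
print(len(scene_particles))  # 4
```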
What’s the problem? The current Python API becomes an unnecessary burden, as in “strip out of the bge codebase anything that is related to Python”. After that, one could re-write a Python interface to the new system, even one with the same namespaces as the current API. I don’t see that happening, just as I don’t see the codebase starting to use modern C++ instead of “±C”.

Indeed Python does not make for the best performance, and writing a particle system using it would be ludicrous.

In general it is advisable to have a symmetric C(++) binding generator, which allows relative ease of transposition between different languages. There are other benefits of using Python besides its syntax and rapidity with which a prototype may be realised. There is also its standard library, which in many cases is leveraged inside of an interactive project.

@pgi, the bigger task at hand is producing a thread-safe, structured application where threading might be used to reduce task load.
The BGE does need to remove its python bindings, in the same way that blender ought to, and move to something like a binding generator. Hand rolling the Python C API is just ugly, even when aided with macros. (Yes such a generator uses macros, but language agnostic).

I’ve gotta ask: is there a genuine threat of losing the full Python machine in the BGE’s future? Having unabashed access to a powerful and readable scripting engine has been a major point of distinction for me, and losing that would essentially lose the engine for me as well.

I’m a teacher, so I know a bit about introducing people to new systems and making it understandable.
For a lot of people code is a wall. They need some help to get over it. Visual programming is useful as a ladder, so that you can learn the basics of game design (there’s actually a lot to learn without adding code to the problem), and then once you know what you’re doing you can start doing it more efficiently with code.

One of the best things about the BGE is that you can learn how to move a character around in an afternoon using logic bricks. After a week you can have a rigged character walking and jumping. Getting beyond that stage takes a long time, because logic bricks are not very good with dealing with instances. They are no use for passing data like targets or distances. You spend more time trying to work around the limitations of the system than you would just bypassing it and getting on with learning code.

I think any visual programming system for blender game engine either needs to be deliberately bare bones (just enough to get you in to moving a character around, playing animations, registering collisions etc…) or it has to have better ability to pass data between the various elements of the system.

In the first situation, you’d be forced to move on to scripting and not waste your time with convoluted work-arounds. In the second situation, there wouldn’t be a need for it since the system would be better able to do the things you need.

Your character picks up a sword and the sword goes in to the inventory.
You drop the sword and it falls on the floor.

Easy in logic bricks, you have an overlay which gets a message from the player when the sword is picked up and deleted. Clicking on an icon in the overlay sends a message back to the player and the sword is deposited back in to the game world.

However, this requires a dedicated set of logic bricks for one item, in this case a sword. What if you have 20 different kinds of swords? What if your game has over 100 different inventory items? You’d need a set of logic bricks for every one.
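One way around “a set of bricks per item” is to make the logic data-driven: a single handler plus a table of item definitions, so adding a 21st sword means adding a table row, not more bricks. A sketch with invented names:

```python
# Sketch of a single, data-driven pickup handler replacing one set
# of logic bricks per item. ITEM_TABLE and the function names are
# invented for illustration.

ITEM_TABLE = {
    "sword_iron":  {"weight": 5},
    "sword_steel": {"weight": 6},
    "potion":      {"weight": 1},
}

def pick_up(inventory, item_name):
    """One handler covers every item defined in the table."""
    if item_name in ITEM_TABLE:
        inventory.append(item_name)
        return True
    return False

def drop(inventory, item_name):
    """Remove the item so it can be added back to the game world."""
    inventory.remove(item_name)
    return item_name

bag = []
pick_up(bag, "sword_steel")
pick_up(bag, "potion")
drop(bag, "potion")
print(bag)  # ['sword_steel']
```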

The visual scripting system needs to be able to receive a message and get data from it. Then it needs to use that data to set the element which handles the data.

In a kind of pseudo code:

if self.message_received:
    if self.message_received.contents in valid_objects:
        self.add_object.object = self.message_received.contents

So there needs to be a way for actuators to use the result of sensors. In other words, sensors need to return something other than True/False, and actuators need to take arguments beyond just activate().
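Making that pseudo code concrete: a sensor whose poll returns a payload rather than a bare boolean, and an actuator whose activate() accepts that payload as an argument. The class names here are invented for illustration; nothing like them exists in the current brick system.

```python
# Runnable version of the idea above: the sensor returns a message
# payload (not just True/False) and the actuator takes it as an
# argument. MessageSensor / AddObjectActuator are invented names.

VALID_OBJECTS = {"sword", "shield"}

class MessageSensor:
    def __init__(self):
        self.message_received = None

    def poll(self):
        """Return the payload itself instead of a bare boolean."""
        return self.message_received

class AddObjectActuator:
    def __init__(self):
        self.object = None

    def activate(self, payload):
        """Accept an argument instead of a parameterless trigger."""
        if payload in VALID_OBJECTS:
            self.object = payload

sensor = MessageSensor()
actuator = AddObjectActuator()
sensor.message_received = "sword"

payload = sensor.poll()
if payload:
    actuator.activate(payload)
print(actuator.object)  # sword
```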

I guess in a logic nodes type system nodes need more than just one socket.

Why can’t we get rid of the wires in visual programming completely and have the sensors and actuators just work?
People have been complaining about the wires. You could have the name of the character, NPC, or other thing on the sensor and actuator. It should also ask if you want to name groups of sensors and actuators, and then let you; it could ask about any groups you have not named yet.