Can anyone tell me if 30% game logic is bad or not? When I start the game it sits at 10%, then I move my mouse to the center of the map and it goes to 30%. Also, I have a “main.py” script that’s on an always-pulsing sensor. It makes it a lot easier to program since I don’t have to update scripts; they’re just always updated. I’m not sure if this is the best way to do it though. I like the idea of not having any true pulse on any scripts, but then I have to do a lot more work programming and updating scripts as needed.
Total logic ms is what you need to worry about;
in a full game, to hold 60 fps it all needs to happen in 16 ms (1000 ms / 60 frames ≈ 16.7 ms).
Design efficiently the first time but don’t go too overboard.
Wow. 10% to 30% is a substantial increase in game logic. I think you ought to check your logic, as well as [Window -> Toggle System Console] in the top-left menu, to see if there are any errors.
Profiling
As BluePrintRandom said, the overall time spent is what matters. The percentage is used to see where to look to reduce the processing time. You need to do that only if the used time exceeds the available time (1 s / 60).
The used time depends on what is currently needed to process a frame. It depends on the events (logic), the view (render), the number of objects (render, physics), their relation to each other (scene graph, physics) and many more things.
When the overall processing hits the limits, it makes the most sense to look at the aspect that eats the most time, as there is much more potential time to save there than in aspects that need less time already.
Efficient logic
An Always sensor with True Level Triggering enabled:
There are situations when this is necessary. In general it is not.
The SCA (sensor-controller-actuator) architecture lets you easily describe the “when this happens, do that” statements of your game’s behavior (logic).
It is an event system. The sensors measure the events. The controllers and actuators act on these events.
In general it is efficient to act on events only. This means:
A) you do something when it is necessary to do something.
B) you do nothing if there is nothing to do.
Detecting A) or B) is the job of the sensor configuration.
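In Python terms, an event-driven controller checks its sensor’s state first and only does work on the positive edge. A minimal sketch (the sensor name “Near” and the property “alerted” are just examples, not from the thread):

```python
from bge import logic

cont = logic.getCurrentController()
sens = cont.sensors["Near"]  # example: any event-measuring sensor
own = cont.owner

if sens.positive:
    # A) the event happened - do the work now
    own["alerted"] = True
else:
    # B) sensor turned negative - nothing to compute this frame
    pass
```

With True Level Triggering disabled, this controller only runs when the sensor changes state, so the “nothing to do” case costs nothing at all.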
God objects
Your approach of a “main.py” indicates you do not know how to express what your code is supposed to do. This can be because you do not know it, because you did not think about it, or because you want to decide later.
I recommend you do it early - very early.
When you do not know what the code is supposed to do, how do you know it does what it should do?
When your code is supposed to do so many things that a description would be way too long -> it is most likely doing too much. When it is doing too much, it is hard to maintain. Often it creates unnecessary dependencies between aspects that do not need dependencies.
This often leads to “god objects”. God objects try to do everything. It is possible, but really hard to handle.
Your example is such a god object:
Always -> main.py
translates into:
All the time, do everything.
Isn’t that what all games do?
Does this sound efficient?
Does it sound like you know what this “everything” is?
Doesn’t this transfer the event processing from the native-code sensors to interpreted Python code?
Don’t you need to connect all possible sensors to this single controller, just in case you need an event somewhere?
Don’t you need to split your “main.py” code to deal with different aspects of your logic?
When you need to split your code anyway, isn’t it better to split the configurable logic too?
Isn’t it more efficient (in development and/or processing) to focus on small aspects and isolate them from other aspects?
…
God objects do work.
… and create a lot of work too ;).
Just something to think about.
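For illustration only (the file, sensor, and helper names here are made up, not the asker’s actual setup): with the controller in Module mode, each aspect can live in its own small script, wired only to the sensor that concerns it, instead of one Always -> main.py:

```python
# doors.py - controller in Module mode ("doors.update"),
# wired to a Near sensor on each door object
def update(cont):
    near = cont.sensors["Near"]
    door = cont.owner
    # runs only when the Near sensor changes state, not every frame
    door["open"] = near.positive


# hud.py - controller wired to a Property sensor watching "health"
def update(cont):
    changed = cont.sensors["Property"]
    if changed.positive:
        refresh_health_bar(cont.owner)  # hypothetical helper
```

Each script stays short enough to describe in one sentence, and no script receives events it does not care about.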
As BluePrintRandom said, the overall time spent is what matters. The percentage is used to see where to look to reduce the processing time. You need to do that only if the used time exceeds the available time (1 s / 60).
I’m sorry, I’m going to have to (partially) disagree with this. Percentage is typically a good indicator of where to reduce processing time, as Monster suggests, but this is exactly why you should check where percentages rise the highest (with the exception of elements you can’t really control, such as overhead): in many cases, those rises are where the biggest ms increases are happening. However, it is also true that in the end, the ms stat is what matters.
I have all my logic loop scripts (not startup scripts, obviously) running all the time. Period. Each object runs logic relevant to itself with minimal interaction, and where objects do interact, it is very transparent.
I just have true/false properties dictate whether the idle logic or the interactive logic runs. Then I have some objects added and removed on command, either from the object itself or from the master logic (on the player). This method rarely exceeds 10 percent, usually 5, at a solid 60 fps. I also run ~98 percent Python.
This may not be the best way, but my setup requires lots of interaction between player and objects, and needs to be updated every frame to prevent loopholes and bugs.
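A minimal sketch of that property-gated pattern (the property and helper names are illustrative, not from the post):

```python
# Runs every frame from an Always sensor; a boolean property decides
# which branch does real work, so the idle path stays cheap.
def update(cont):
    own = cont.owner

    if own.get("interactive", False):
        run_interactive_logic(own)  # hypothetical heavy path
    else:
        run_idle_logic(own)         # hypothetical cheap path
```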
One last note: mouse over sensors, Python for loops, and physics updates are really expensive.
Objection! [to the “main is bad” idea]
given that the set of available logic bricks cannot be extended without breaking the runtime portability of a BGE application (that is, you can extend it, but then you have to ship your own version of the Blender executable along with the game);
given the limited number of statements that can be expressed using the available logic bricks - even more so if we consider the convenience of using bricks to express them;
considering the wide range of statements needed by a game;
the “main is bad” request should be rejected, to the point where we should actually say “main is the only way”.
I have experimental evidence to support this. In fact I wrote an add-on just to test it: the “don’t use it, bge logic add-on”. The set of statements that can be expressed with it is a superset of the embedded brick system, yet no matter how many new nodes I add, I always find a corner case where either a statement cannot be expressed by combining existing nodes, or doing so results in an unfathomable mess.
At its root, I think it is a domain issue. The logic of a game is not a domain-specific problem, and as such it cannot be represented with a domain-specific language: you need a Turing-complete language. To my knowledge, there is no practical way to use a language like that with a non-text-based user interface.
For those reasons, I think we should really avoid bricks as much as possible. Unless someone comes up with a super cool new visual scripting system. I’m all for shiny things.
Percentages alone will not necessarily tell you where attention is most needed. Percentage rises, as I’ve stated, will in many cases.
I have a game project, complete with several hundred objects, and a class implementation structure which I import as a module to almost all of these objects to extend functionality.
Ex. 1) Video processing. I run a video image texture on a sphere object, refreshed by an Always sensor with 0 delay, and the rise in the actual logic turns out to be +20% (from 10%). The increase in ms is 2 to 10 ms. The fps drops from 60 fps to 30 fps.
Ex. 2) Particle systems. I have an easyEmit emitter in-game. Whenever I use it, the logic bar increases from around 5% to 80% (or around 25% for just one system enabled). Guess where the most processing time sits. In the logic.
Ex. 3) Physics. I have various objects with varying shapes. If my player character hits those objects, you can see sharp rises in the physics percentages, and that is actually the factor that contributes to the drop from 50 fps (on every collision with those objects). Box collisions make it so much easier.
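For context, the usual bge.texture setup being measured in Ex. 1 looks roughly like this (a sketch; the material name “Screen” and the file path are placeholders):

```python
from bge import logic, texture

cont = logic.getCurrentController()
obj = cont.owner

# one-time setup; afterwards refresh() runs every frame via the
# Always sensor (0 delay) mentioned above
if not hasattr(logic, "video"):
    mat_id = texture.materialID(obj, "MAScreen")  # "MA" + material name
    logic.video = texture.Texture(obj, mat_id)
    logic.video.source = texture.VideoFFmpeg(
        logic.expandPath("//video.mp4"))
    logic.video.source.play()

logic.video.refresh(True)  # the per-frame refresh is where the ms go
```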
Please understand that I am trying to communicate my experience to other users, not simply going off of mere conjecture.
Stop worrying about percentages. Percentages are predominantly useful for directing you to the most expensive aspect of your frame step. Simply put, they tell you where to start optimising; they don’t tell you whether you need to optimise.
You have 1/60 of a second ~ 16 ms to step a single frame. This means that if your rendering takes practically no time, you have 16 ms to step the logic and physics etc.
Time is crucial. Though performance will change across different platforms, the general rule is that the entire game logic shouldn’t be anywhere near the 7 ms range. That’s ludicrous. High-end games are maxing out their calculations per frame; a few animated cubes shouldn’t demand 5 ms of logic control time.
If you see spikes in the profile times at distinct events (e.g. collisions, keypresses), then you can deduce that the logic (or physics processing) triggered by these events is expensive. If it spikes 50% but you’re at 0.1 ms, there is nothing to worry about. If it spikes by a delta of around N ms… that’s a big problem (usually).
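One quick way to catch such spikes is to time the suspect section directly, in milliseconds, rather than reading percentages. A sketch (update_ai and the 1 ms threshold are placeholders, not part of any real setup):

```python
import time
from bge import logic

cont = logic.getCurrentController()
own = cont.owner

t0 = time.perf_counter()

update_ai(own)  # hypothetical: the code you suspect is expensive

dt_ms = (time.perf_counter() - t0) * 1000.0
if dt_ms > 1.0:  # pick a threshold that fits your 16 ms frame budget
    print("logic spike: {:.2f} ms on {}".format(dt_ms, own.name))
```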
However, don’t worry about performance at this stage. Start dealing with it when you’re not hitting the intended framerate. As developers become more experienced, they write more performant code and designs naturally anyway.
Time is crucial. Though performance will change across different platforms, the general rule is that the entire game logic shouldn’t be anywhere near the 7 ms range. That’s ludicrous. High-end games are maxing out their calculations per frame; a few animated cubes shouldn’t demand 5 ms of logic control time.
Mhm, I agree.
Stop worrying about percentages. Percentages are predominantly useful for directing you to the most expensive aspect of your frame step. Simply put, they tell you where to start optimising; they don’t tell you whether you need to optimise.
Actually, for what I’m working on, I do need to worry about percentages, because I have constantly seen rises in the percentage where I have implemented complex things (logic too complex, physics too complex, etc.). It is like when you say “percentages are predominantly useful for directing you to the most expensive aspect of your frame step” - this is why you need to check what you are doing with your logic/physics/etc. I am not saying percentage rises mean you have to change your code/configuration. But also keep in mind, ms can pile up later, so it’s a good idea to check for rises early and fix early. Just because your implementation doesn’t drop your fps doesn’t necessarily mean that it’s an entirely good thing.
The asker’s problem: just moving the mouse to the center of the map increases logic from 10% to 30%. “What is happening with my logic?” you should ask. Why does it increase from 10% to 30%? Why doesn’t it just stay around 10%?
That being said, yes, there are instances where percentages can be totally misleading as well. In my game project, the rasterizer shoots to 50% when I include a reflective material. This doesn’t slow down the game, which runs at around 60 fps. When I don’t use the reflective material, this percentage drops, and the game still runs at 60 fps. But in this case, the percentage doesn’t rise in-game; it starts and stays at around 50%.
Like I said, it’s a good idea to check when percentages rise.
You have 1/60 of a second ~ 16 ms to step a single frame. This means that if your rendering takes practically no time, you have 16 ms to step the logic and physics etc.
I was not aware of this, thank you.
Be careful of the complexity of physics meshes.
A mouse over sensor on high-detail triangle-mesh physics objects is very heavy.
Use approximations made of many objects with simple physics types that are compound-parented to a root object (this way the simple-bound child shapes ignore collisions with each other).
If the ground is 1 large mesh, you may want to break it up into a grid (for physics, LOD and even dynamic loading).
There aren’t any errors. I open the console every time I open Blender. Most everything I do is code. The only logic bricks I use are mouse sensors and Always sensors. It jumps when my mouse is over an object, but it shouldn’t, because the mouse sensor isn’t pulsing.
That was very informative, Monster, and I will be reconfiguring. The main.py is one of 7 scripts at the moment.
“pulsing” - nice word.
But this does not fit here. Sensors are ALWAYS evaluated, regardless of whether you think you need them. Therefore you want as few sensors as possible (you can still have hundreds of them in your scene at the same time).
The mouse over sensor is a ray measure (it calculates a line [under the mouse cursor] hitting physics mesh faces). This is additional work for the physics engine. Each mouse over sensor adds up, despite the fact that it is always the same line, as the mouse cursor has just one position within a frame.
The sensor settings do not matter for the sensor processing time. They matter for the controller processing time, especially when it is a Python controller. Each time any sensor triggers the controller, the controller will be executed = eats processing time regardless of its output.
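If you need hover detection on many objects, one way to avoid stacking up those ray casts is a single Mouse Over Any sensor on one object, reading hitObject. A sketch (the sensor and property names are examples):

```python
from bge import logic

cont = logic.getCurrentController()
over = cont.sensors["MouseOverAny"]  # one Mouse Over Any sensor = one ray

if over.positive:
    hit = over.hitObject             # whatever is under the cursor
    if hit is not None and "clickable" in hit:
        hit["hovered"] = True
```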
I am struggling with really high Logic, Physics and Rasterizer frametimes in my game, and it’s hard for me to determine where this is coming from.
When I hide the whole level and keep just the player, his particle effects and the skybox, the average frametime is about 16 ms (the frametime for Physics and Logic around 3-5 ms).
In the game level, at some parts of the level, the frametime shoots up to 45 ms. I am using a lot of LOD objects, a water shader, and node materials for the terrain.
I always thought that the complex logic of the player would be causing this lag, but it seems that it is the level itself that is causing this.
Therefore some questions:
- Do you guys have an idea how to reduce this?
- Does using too much LOD cause lag in Logic?
For optimisation I tried this:
- I just use simple boxes or cylinders for collisions; the objects themselves have no collision.
- I already reduced the camera view distance to 300 m.
- Baked all materials and their textures into one texture per object, in order to avoid multiple materials.
- Every object has LOD.
- Small texture sizes (256 or 512).
- Reduced emitted particles.
- And some more I don’t remember.
Here are some screenshots with the frametimes:
Hope someone can help me out a bit.
Thank you
Can you test the game in UPBGE? My guess is that you have a custom shader?
Oh, I tried to run the game with newer Blender versions and with UPBGE, but it doesn’t work with them. In newer official versions things get messed up, and in UPBGE Blender crashes right away when I start the game by pressing P.
That’s why I need to stick with version 2.76b, where everything works fine.
What about the latest, 0.1.2? It’s stable.
Yes, with that latest version Blender also crashes completely (no error message or anything; Blender just closes).
That’s no good! Do you have lib-loaded files or linked files? Meshes with a single vertex? What OS and video card are you using? Try to disassemble the game; that way you could find what’s causing the crash.