It would be faster, in the same "faster" range as the vertex update, but the difference would come from batching the calls (there's no computation involved in hiding the object; it's a call to the engine layer in both systems). Instead of having to call setVisible 50.000 times per frame, I'd call it once, passing a buffer that holds the 50.000 ids of the objects to hide or show, and the native code would do the loop.
By the way, this kind of batch update is possible even with the current python system; there's no need for a new virtual machine, just a different "bge to python and back" api.
How would one make a bge to python and back api? I asked about this before. I would love to see a demonstration.
So that I can turn lots of objects from visible to invisible and back again without a framerate drop. I wanted to display cubes one row at a time over time for cubed ground, like in the block easy demo.
He is saying that you can modify the current BGE API to accept a list of objects instead of only accepting one object, as it does currently.
This way you would avoid calling setVisible hundreds of times per frame: you could call it only once and increase the performance.
I think you should read something about “api”.
I’m not a “professional programmer” but I’ll give it a try.
API=Application Programming Interface
Basically in this case the api is an interface between the Application (the engine) and the user scripts.
When you write python scripts for the bge you already use this api to tell the engine what it should do.
Now, what pgi said means that it would sometimes be faster to send the engine one command per frame with some data to process (for example, a bunch of objects that the engine should hide) instead of sending the engine the same single api command x times (for example, to hide ONE object), because every api call costs some “translation” time.
So you as a user can only do what the api gives you. Currently you can’t use this kind of batch update, as long as it’s not in the api.
The above mentioned thread is about a fork that has a different API within its build.
As Maujoe already mentioned, it is not in the BGE API.
In general, you are dealing with thousands of objects. It does not matter if you make them invisible; I’m pretty sure you’d be better off reducing the number of objects, because each single object present in the scene eats processing time. Maybe not much, but it does. An object not present eats no processing time. Performance-wise, no object is better than one object.
As far as I can tell you are going for some sort of Minecraft-like environment (lots of tiles). I think you need a different structure in your game than just placing every possible object in the scene at the same time.
Btw. you can even use LOD, with an empty mesh as the lowest detail level. The object is still present, but making the object invisible this way is much easier (as it is processed in native code).
That “bge to python and back” was my colloquial description of what is technically a native binding, which is how you interface a native program (bge) with a foreign execution platform (like python).
In python, the standard native binding api is the Python C API, which is what the bge uses and what can be used to add the new function we’re talking about.
It is still something that you would have to manually add to the bge codebase, compile, and distribute alongside the official blender.
I might be confused about this then. The game engine still needs to iterate through all the objects in a scene regardless. I don’t know how batching will help since you still need to manually choose the objects to perform the tasks on.
It’s all about the cost of moving from the foreign environment (the python script) to the native layer (the C++ code), which involves a dynamic lookup to find the native function and the checked conversion of values from the scripting type to the native types.
When you call setVisible from python, python looks up the setVisible function implementation and calls it, passing the referenced game object and the two arguments, effectively starting the execution of the native counterpart of that function.
Now we are in C++/native code.
The native function checks that the game object being referenced is valid, tries to convert the values coming from the python layer into what the game engine’s setVisible function expects (that is, one KX_GameObject and two bools) and, if it succeeds, finally calls the setVisible function.
All considered, the overhead (jumping to the native function, checking parameters and doing the conversions) is not big.
But the overhead is cumulative: if you do the same exact operation for 10.000 objects, all those little overheads sum up and actually start to become noticeable.
You can remove them: instead of calling the setVisible function 10.000 times per frame, with one game object and two booleans each time, you can pack all the data needed into buffers and call one native function, once per frame.
The native side, instead of receiving one pointer and two booleans, has now three arrays, it has to check that the first is an array of KX_GameObject pointers and the other two are arrays of booleans (chars).
The C++ code still has to individually call setVisible for each KX_GameObject, but you have removed the 10.000 lookups of the native counterpart of the setVisible function and the 10.000 x 3 individual parameter checks.
Thanks pgi, that’s really interesting. Have you done any performance tests on this? Like when would you consider this worth doing for BGE? (i.e. how much faster is changing visibility on 10000 objects? Or how many objects do you need for there to be 1 second lag?)
Also, this seems pretty intense on memory allocations, which in Python need to be GCed (that can cause some lag anyway).
I haven’t been keeping up with your thread in WIP, so you can just direct me to your posts there if you’ve already answered these questions haha
I didn’t test the batching scheme in the python environment, I did it for the experimental script system replacement, in the context of mesh geometry manipulation - which also benefits from batching. Using it to update 60k vertices of a mesh resulted in a ~300% speedup (versus the non batched update, in the same script system). I think the same would happen for the existing python scripting.
Regarding memory, you definitely don’t want to allocate the buffers in the game loop, you have to pre-allocate them in an initializer.
I don’t have a precise metric for when batching starts to be useful. In my experiments I noted that updating 10.000 vertices one by one could be done at 60fps, 60.000 dropped the framerate to ~20, and batching the calls made the system hit the 60fps vsync cap.
It might be worth pointing out that there aren’t many situations where you might want to do 60k calls per frame in a script. Certainly not in the current bge design, which lacks a well defined (or even practical) “logic” stage.
Particles are an example of when you may need to and they were the main reason I tested this thing. LOD terrain is another area where you might want to be able to handle that many calls.
These, though, are all fields where optimizing the number of native calls is only half of the picture, because you also need to do math computations in the logic layer, which is a bit of a struggle in python unless you use a dedicated api.
Just to avoid spreading even more buzz (like what happened with the “python and back api”, which was a short description that makes absolutely zero sense out of the context in which it was used), let’s start by saying that “batching” exactly refers to the act of packing N multiple calls to a function that accepts M parameters into a single call to a different function that accepts a buffer of MxN values and produces the same results (be it a side effect or a return value).
So there’s no “terrain batching”; there might be batching techniques to handle a geometry that represents a terrain.
That said, the short answer is “I don’t want to” :D.
The problem is that what you would like to see - which is a perfectly legitimate curiosity, and even an interesting experiment to do - requires some time.
First I have to create a destructible environment and test it. Then I have to find out if and how to take advantage of batching: it might not even be necessary, since the way I see it we’re talking about changing some indices in a mesh. Then I have to modify the bge scripting api to implement the batched calls. This is horrible to do: one might naively think it’s all about adding one function, but this is bge, so I’ll have to write stuff all over the place. After that, I’ll have to write the actual script and test it.
And in all this, we’re still talking about a test for a thing that changes another thing (bge) that we still don’t know whether it is going to be here three months from now.
You’ll have to trust me on this: if what you have in mind requires a lot of calls each frame, on a data set that can be buffered (like indices or pointers to the vertices of a mesh), it will be faster to reduce the number of python calls into native functions.