I’m not sure what you mean by “each plane” of an object.
Sorry, I really meant faces or polygons of the mesh. I have edited the post.
That same object, divided into smaller parts, will yield better rendering performance because it is only partially rendered (of course, in practice the partial rendering just means we render only those smaller objects which make up the greater one).
Thank you for this, that’s about what I was expecting. (This is getting added to my bag of tricks to be sure.)
The object has an AABB associated with it, and when it leaves the frustum of the camera (composed of separate planes) it is culled. A large object will be culled only if its large bounding box leaves the frustum.
Yeah, I’m pretty sure I at least get this part now. Basically, a simplified representation of the object (the AABB) is tested against the frustum. If there is overlap, then that object needs to be rendered. Pretty straightforward really.
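For my own notes, here is a minimal sketch of the kind of test I mean, assuming the frustum is handed to us as six inward-facing planes (the function names are mine, not anything from the BGE):

```python
# Hypothetical sketch of an AABB-vs-frustum test (not the BGE's actual code).
# Each frustum plane is (normal, d) with the normal pointing into the frustum,
# so a point p is inside that half-space when dot(normal, p) + d >= 0.

def aabb_outside_plane(aabb_min, aabb_max, normal, d):
    # Pick the AABB corner furthest along the plane normal (the "positive vertex").
    # If even that corner is behind the plane, the whole box is outside.
    px = aabb_max[0] if normal[0] >= 0 else aabb_min[0]
    py = aabb_max[1] if normal[1] >= 0 else aabb_min[1]
    pz = aabb_max[2] if normal[2] >= 0 else aabb_min[2]
    return normal[0] * px + normal[1] * py + normal[2] * pz + d < 0

def aabb_overlaps_frustum(aabb_min, aabb_max, planes):
    # planes: list of (normal, d) tuples for the six frustum planes.
    # If the box is fully outside any single plane, the object can be culled;
    # otherwise treat it as (potentially) visible and render it.
    for normal, d in planes:
        if aabb_outside_plane(aabb_min, aabb_max, normal, d):
            return False
    return True
```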
No, the single faces are not culled; it is the complete mesh.
This means a large mesh like a terrain gets into the render pipeline even when only a small part is seen.
Given the above, I’m sure culling is done on complete meshes. However, I am still seeing single polygons of a render object disappear when they pass beyond the far plane of the view frustum.
You can see this yourself. Make a long cylinder and give it many faces along its length. Set up a camera with basic motion. Run the BGE and move the camera along the axis of the cylinder. When the far end of the cylinder gets far enough from the camera, its polygons will start to disappear, but the object as a whole will remain.
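If anyone wants to reproduce this quickly, the camera motion can be scripted with something like the following (a minimal sketch, assuming an Always sensor with true-level triggering wired to a Python controller on the camera):

```python
# Minimal BGE controller script: nudge the camera forward every logic tick.
# Assumes an Always sensor (true pulse enabled) wired to this Python
# controller on the camera object.
import bge

def main():
    cont = bge.logic.getCurrentController()
    camera = cont.owner
    # Cameras look down their local -Z axis, so move along it each tick.
    camera.applyMovement((0.0, 0.0, -0.1), True)

main()
```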
Surely this is some form of polygon-level culling? (And it’s not related to backface culling; these polygons still face the camera.) Culling at the near and far planes of the frustum may be as simple as a Z-depth test, but I highly suspect that the BGE is culling faces past the side planes of the view frustum too.
On the other hand, a lot of small objects add to the render time too, so you need to find a balance between mesh size and the number of objects.
Agreed. A balance must be struck.
I figure a fair rule of thumb is that, poly count being equal, no one object should fall outside… say… 80-120% of the view frustum’s volume. Any smaller and you begin to needlessly increase the object count inside the frustum; any larger and you risk having many of an object’s polygons outside the frustum. Of course, a pragmatic approach would need to account for polygon density somewhere in there too.
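As a back-of-the-envelope version of that rule, the frustum volume can be computed from the camera settings and compared to an object’s AABB volume. The 0.8-1.2 band below is just my guess from above, and none of these functions come from the BGE:

```python
import math

def frustum_volume(fov_y_deg, aspect, near, far):
    # Volume of a perspective frustum: integrate the cross-section area
    # 4 * d^2 * tan(fov_x/2) * tan(fov_y/2) from the near to the far plane.
    tan_y = math.tan(math.radians(fov_y_deg) / 2.0)
    tan_x = tan_y * aspect
    return 4.0 * tan_x * tan_y * (far ** 3 - near ** 3) / 3.0

def aabb_volume(aabb_min, aabb_max):
    return ((aabb_max[0] - aabb_min[0]) *
            (aabb_max[1] - aabb_min[1]) *
            (aabb_max[2] - aabb_min[2]))

def within_rule_of_thumb(aabb_min, aabb_max, fov_y_deg, aspect, near, far):
    # The 0.8-1.2 band is the guessed 80-120% target from above.
    ratio = aabb_volume(aabb_min, aabb_max) / frustum_volume(fov_y_deg, aspect, near, far)
    return 0.8 <= ratio <= 1.2
```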
. . .
In any case, I’m guessing that there are maybe two culling phases going on in the BGE. The first is the less expensive bounding-box phase. This phase would quickly cut down on huge swaths of polygons via their association with the parent mesh. The second phase is some sort of expensive polygon-based culling, probably involving some kind of mesh traversal, polygon sort, or what have you. This second phase would (in theory) benefit the fragment shading step by not filling fragments that aren’t even going to be displayed.
But this is just my baseless speculation. The mesh representation and the amount of texturing/shading no doubt affect the practicality of the second step. Also, it might amount to saving the GPU by burning the CPU, which would be selling the car for gas money.
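For what it’s worth, here is how I imagine that hypothetical two-phase scheme, sketched in Python and reusing aabb_overlaps_frustum from my earlier snippet. Every name here is invented; this is not the BGE’s actual pipeline:

```python
# Purely speculative two-phase culling loop (not the BGE's real internals).

def polygon_outside_plane(vertices, normal, d):
    # A polygon is fully behind a plane only if every vertex is behind it.
    return all(normal[0] * x + normal[1] * y + normal[2] * z + d < 0
               for (x, y, z) in vertices)

def polygon_overlaps_frustum(vertices, planes):
    # Conservative test: keep the polygon unless one plane rejects it entirely.
    return not any(polygon_outside_plane(vertices, n, d) for n, d in planes)

def cull_scene(objects, frustum_planes):
    # Phase 1: cheap per-object AABB test throws out whole meshes at once.
    # Phase 2: per-polygon test for the survivors; this is the expensive part
    # and would probably only pay off for very large meshes like terrain.
    visible_polygons = []
    for obj in objects:
        if not aabb_overlaps_frustum(obj["aabb_min"], obj["aabb_max"], frustum_planes):
            continue
        for poly in obj["polygons"]:
            if polygon_overlaps_frustum(poly, frustum_planes):
                visible_polygons.append(poly)
    return visible_polygons
```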