The best method is to build group instances of objects made of reusable sub-objects that are each under 128 vertices, IF there are thousands of them and there is a long view distance, like trees, buildings made of modules, and so on.
BGE does not currently support instancing, but UPBGE has support for it.
If all the sub-objects use one material, and there are only a smallish number of geometries, it can really save on draw calls, per material and per instance.
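A minimal sketch of the kind of setup meant here (the object names "TreeModule" and "Spawner" are made up, and this assumes the module sits on an inactive layer):

import random
from bge import logic

def spawn_modules(count=1000):
    # Spawn many copies of one low-poly module; all copies share the same
    # mesh and material, so the renderer can batch them per material.
    # (True GPU instancing still needs UPBGE.)
    scene = logic.getCurrentScene()
    spawner = scene.objects["Spawner"]  # reference object for position/orientation
    for _ in range(count):
        obj = scene.addObject("TreeModule", spawner, 0)  # 0 = never auto-remove
        obj.worldPosition = [random.uniform(-200, 200),
                             random.uniform(-200, 200),
                             0.0]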
You can have 1000 objects all using the same very large texture cut into chunks, with UV offsets unique to each copy. One larger file, if the card is big enough to hold it, is preferable from what I understand.
“5) It’s good to atlas all of the bits of texture used by one object so that the entire object can be rendered in one draw call. Within reason, it’s good to merge statically related objects and atlas their textures together so that they all can be rendered in one call. After that, it’s good to atlas the textures of objects that use the same shader to give the renderer the opportunity to skip changing textures between draws.”
Just remember you need room in the VRAM for all that, but a newer card should have it; that is why AAA titles run so fast, from my understanding (one big material used over and over in clever ways).
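As a rough sketch of the UV-offset idea (the 4x4 atlas layout is an assumption, and the mesh should be a private copy made with bge.logic.LibNew, since duplicated objects otherwise share mesh data):

ATLAS_TILES = 4  # the big texture is assumed to be cut into a 4x4 grid of chunks

def offset_uvs(obj, tile_index):
    # Shift this object's UVs into its own tile of the shared atlas texture.
    mesh = obj.meshes[0]
    tile = 1.0 / ATLAS_TILES
    off_u = (tile_index % ATLAS_TILES) * tile
    off_v = (tile_index // ATLAS_TILES) * tile
    for mat_id in range(len(mesh.materials)):
        for i in range(mesh.getVertexArrayLength(mat_id)):
            vert = mesh.getVertex(mat_id, i)
            # The original UVs are assumed to cover 0..1 of a single tile.
            vert.setUV([vert.u * tile + off_u, vert.v * tile + off_v])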
What you're making sounds like maybe a terrain editor?
There is this thing called a bachelor thesis; it's very angry and not writing itself…
I tried, but couldn't get videotexture ImageBuff to work as expected (image size, color parameter, plot function, nothing works).
But my quick test showed that a 16k texture on a plane ran at 60 fps.
Whether it was actually working I couldn't tell, since the ImageBuff stayed black.
Also I don't know if I can create textures (not just overwrite them) during runtime.
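For reference, this is how the bge.texture docs suggest ImageBuff should be used (a sketch, not a guaranteed fix; it assumes the object running the controller has a material with a texture slot to replace):

from bge import texture

def make_buffer_texture(cont):
    own = cont.owner
    if "tex" in own:
        return  # only create it once

    w, h = 64, 64
    img = texture.ImageBuff(w, h, 0)  # black RGBA buffer
    # Paint a white 8x8 patch at (0, 0) from a raw RGBA byte string.
    patch = bytes([255, 255, 255, 255]) * 8 * 8
    img.plot(patch, 8, 8, 0, 0)

    tex = texture.Texture(own, 0, 0)  # material slot 0, texture slot 0
    tex.source = img
    tex.refresh(False)
    own["tex"] = tex  # keep a reference alive or the texture is freed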
As it's only a hunch, I didn't really know if it was reasonable or flawed.
I had an idea of how to reduce meshing for voxels to an absolute minimum.
Each voxel chunk has 3*(axis size + 1) planes where faces can be (slices).
So if at least one face exists on a plane, there has to be at least one polygon.
Now if I could map all the voxels on a plane with a single polygon, that would be the smallest number of polygons possible.
So I think that by mapping a voxel's face to just a pixel on a polygon, I can increase performance.
(The voxel faces would be a single color, or with shader tricks even textured.)
I think the worst case calculation would be:
CHUNK_SIZE = 32
PLANES = 3*(CHUNK_SIZE+1) = 99
PIXELS = 99 * CHUNK_SIZE ** 2 = 101376
DIMENSION = 101376 ** 0.5 = ~320x320 # per chunk
Best case would be only 6 planes with 1 pixel each = a 3x2 image.
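The same back-of-the-envelope numbers as runnable Python (CHUNK_SIZE = 32 as above):

CHUNK_SIZE = 32
planes = 3 * (CHUNK_SIZE + 1)      # 99 slice planes per chunk
pixels = planes * CHUNK_SIZE ** 2  # 101376 possible voxel faces -> pixels
side = int(pixels ** 0.5) + 1      # 319, i.e. roughly a 320x320 texture per chunk
best_case_pixels = 6               # 6 planes with 1 pixel each fits in a 3x2 image
print(planes, pixels, side)        # 99 101376 319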
A current mid-to-high-end card has around 4 GB of VRAM.
I also vaguely remember that system memory is shared with integrated GPUs, so…
Maybe possible?
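A rough sanity check of the footprint, assuming uncompressed RGBA at 4 bytes per pixel and no mipmaps (both assumptions):

BYTES_PER_PIXEL = 4
worst_chunk = 320 * 320 * BYTES_PER_PIXEL  # ~400 KiB per worst-case chunk texture
vram = 4 * 1024 ** 3                       # 4 GiB card
print(vram // worst_chunk)                 # ~10485 chunk textures fit, in theory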
I wasn't clear enough. Yeah, I meant to stack planes along all 3 axes, with alpha.
And as I said, most of the time I won't even need all planes or pixels :D.
Concerning the example, isn't it because of animation? Pretty sure you turned it off, though.
Maybe I should test the alpha texture limit first, then.
Would spawning a lot of planes with the same alpha texture suffice?
Or should I procedurally generate them…
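A rough way to test it by spawning, if that route makes sense ("AlphaPlane" is a made-up name for a plane with the alpha texture on an inactive layer; hook the script to an Always sensor in pulse mode):

import random
from bge import logic

def stress_test(cont, count=2000):
    # Spawn many copies of the alpha-textured plane once, then report the frame rate.
    own = cont.owner
    scene = logic.getCurrentScene()
    if not own.get("spawned"):
        for _ in range(count):
            plane = scene.addObject("AlphaPlane", own, 0)
            plane.worldPosition = [random.uniform(-50, 50),
                                   random.uniform(-50, 50),
                                   random.uniform(0, 20)]
        own["spawned"] = True
    print("fps:", logic.getAverageFrameRate())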
Also, thanks for the insight; I'm slowly getting things into perspective now.