For objects duplicated in Blender using "instance copy", can UPBGE take advantage of hardware instancing?
When you target modern graphics, you target at minimum a DX11-class 3D card like the GTX 960.
Out of curiosity, do regular armature animations get GPU skinning in UPBGE 0.3? I read somewhere that it was still done on the CPU.
Also, does the UPBGE particles addon (EasyEmit) use the GPU?
Or is there a plan to get GPU-integrated particles into UPBGE 0.3?
On our side, the plan was to reuse as much Blender code as we can, to avoid having to duplicate code…
So we use the EEVEE renderer… and the Blender animation system. As far as I understood, one of Blender's targets is to improve animation playback first, and then maybe improve animations "when no cache is desired" (I guess this means for realtime). I have no idea which techniques will be used, but it's possible that the GPU will be used to do part of the animation work…
Perhaps it's better for compatibility, and it means fewer code changes with each Blender upgrade.
But it would be possible for UPBGE to have custom addons or DLLs that play animations, do object instancing, or display particles using the GPU.
When Blender gets new changes, those addons would only need to check whether some data formats have changed, but all the processing and display would be handled by UPBGE's custom code (like in other game engines).
This way, with some engine features handled by its own code, UPBGE could open up new possibilities, greatly improve performance, or solve existing issues.
For example, parenting of animated objects in the editor is buggy; it would be fixed and stable if UPBGE had its own code to manage object parenting instead of relying on Blender's code.
For example, you create a tree model with textures and animations, then duplicate it in the Blender editor to build your scene. When the game starts, UPBGE would be able to use GPU instancing to display the tree models, so the scene could have thousands of detailed trees without any frame rate slowdown.
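To illustrate why instancing helps here, below is a minimal sketch (pure Python, no real UPBGE or OpenGL API; all function names are hypothetical) of the difference between drawing each duplicated tree with its own draw call and packing all per-instance transforms into one buffer for a single instanced draw (the equivalent of `glDrawElementsInstanced`):

```python
# Hypothetical sketch: why hardware instancing keeps the frame rate stable
# when a scene contains thousands of copies of the same tree mesh.

def translation_matrix(x, y, z):
    """Row-major 4x4 translation matrix for one tree instance."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def draw_naive(tree_positions):
    """One draw call per duplicate: CPU cost grows with the number of trees."""
    draw_calls = 0
    for pos in tree_positions:
        _ = translation_matrix(*pos)  # per-object uniform upload + draw
        draw_calls += 1
    return draw_calls

def draw_instanced(tree_positions):
    """Pack every model matrix into one instance buffer, then submit a
    single instanced draw; the GPU replays the mesh once per instance."""
    instance_buffer = [translation_matrix(*p) for p in tree_positions]
    draw_calls = 1
    return draw_calls, len(instance_buffer)

positions = [(float(i), 0.0, 0.0) for i in range(1000)]
print(draw_naive(positions))      # 1000 draw calls
print(draw_instanced(positions))  # 1 draw call covering 1000 instances
```

The CPU-side cost of the instanced path is essentially constant, which is what makes "thousands of detailed trees" feasible.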
Is this already available in UPBGE?
It's a very, very complex topic. The animations themselves (the bone transforms) are done on the CPU; it's deforming the vertices that we need to happen on the GPU.
But there is the concept of modifiers, so the GPU skinning modifier would have to be the last node in the stack, I think.
What would be best, I think, is to add a new armature modifier, to keep everything compatible and make merges easy,
and output the skinned vertex positions/normals. I can't remember whether tangents and bitangents would be calculated at this stage, but you get the general idea. Skinning in modern OpenGL is pretty straightforward in and of itself.
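For reference, the per-vertex math that such a skinning stage runs is linear blend skinning: each vertex position is blended by its weighted bone matrices. A minimal sketch in pure Python follows (names and data are hypothetical; on the GPU the same computation would run per vertex in a vertex shader):

```python
# Minimal sketch of linear blend skinning (LBS), the vertex-deformation step
# discussed above. A GPU implementation would do this in a vertex shader,
# with bone matrices in a uniform buffer and indices/weights as attributes.

def mat_vec(m, v):
    """Multiply a row-major 4x4 matrix (nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def skin_vertex(position, bone_indices, bone_weights, bone_matrices):
    """Blend the rest-pose vertex position by its weighted bone transforms."""
    p = [position[0], position[1], position[2], 1.0]  # homogeneous coords
    out = [0.0, 0.0, 0.0, 0.0]
    for idx, w in zip(bone_indices, bone_weights):
        transformed = mat_vec(bone_matrices[idx], p)
        out = [o + w * t for o, t in zip(out, transformed)]
    return out[:3]

# Two bones: one at rest (identity), one translated +1 on X.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
shift_x  = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# A vertex weighted 50/50 between the two bones moves halfway: +0.5 on X.
print(skin_vertex([0.0, 0.0, 0.0], [0, 1], [0.5, 0.5], [identity, shift_x]))
# → [0.5, 0.0, 0.0]
```

Normals would be blended the same way, but with the inverse-transpose of each bone matrix rather than the matrix itself.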
So the question is: how would you implement this in Blender? I have no idea; the inner workings are double Dutch to me. Reading through Moguri's shader code, it's not too far off from what my engine did when animation was still skeletal, but the Blender-specific parts confuse me a lot.