Anatomy application: File size limit and re-creating basic Blender functions

Hi folks,

I am trying to create an open source 3D atlas, similar to ‘BioDigital Human’ or ‘Visible Body’, within Blender.
Here is the project:

Now that UPBGE supports Eevee and the collection system, it could be worth trying to transform all this content into a ready-made anatomy viewer, using UPBGE.

My first question is:
Does it make sense to try if my file is about 190 MB and will not get much smaller?
RAM usage is already an issue in plain Blender, and when I tried to run the game without any logic, it froze with memory staying around 15 GB, which is even higher than in Blender itself (about 13 GB on average). The exported .exe jumps from 191 MB to 387 MB, as if it did not preserve the instancing of the symmetrical objects.

Second question:
Would it be complicated (I have a good grasp of the BGE) to recreate an outliner with eye icons, and Blender's selection and hide/show functions, through logic bricks?

I do not mind recreating the whole hierarchy and investing time in the logic, but I have to be sure it can generate a usable .exe, ideally running smoothly on most devices.

You mean PCs; Blender does not run on anything else.

Yes, on most PCs; I mean not only the professional ones.

Sounds like you could do with an LOD (Level of Detail) system that swaps out parts with higher quality parts as you zoom in. The trick is not to have all of your massive models showing at the same time. Even AAA games need to be careful with that too.
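A minimal sketch of that idea, assuming you drive the swap yourself from Python rather than using UPBGE's built-in LOD panel: pick a mesh variant from distance thresholds, and only swap when the level actually changes. The function below is plain Python so it can run outside Blender; the mesh names and thresholds are hypothetical, and in-game you would feed it `camera.getDistanceTo(obj)` and apply the result with `obj.replaceMesh(name)`.

```python
# Hypothetical distance-based LOD picker. In UPBGE you would call this
# every frame with the camera-to-object distance and, when the returned
# mesh name changes, swap it in with KX_GameObject.replaceMesh().

# (threshold, mesh_name) pairs sorted by ascending distance; the entry
# with threshold None is used beyond the final threshold.
LOD_TABLE = [
    (5.0,  "Heart_high"),   # close-up: full-detail mesh
    (20.0, "Heart_mid"),    # mid-range
    (None, "Heart_low"),    # far away: decimated proxy
]

def pick_lod(distance, table=LOD_TABLE):
    """Return the mesh name to show for a given camera distance."""
    for threshold, mesh_name in table:
        if threshold is None or distance < threshold:
            return mesh_name
    return table[-1][1]
```

Because `replaceMesh` is comparatively cheap but not free, you would cache the last returned name per object and only swap when it changes.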

What’s taking up most of the space in the RAM? Massive textures? Massive vertex count? Both?

Can you include a picture from edit mode that shows the vertices for one of the parts?
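To get a feel for where the RAM goes, a rough back-of-envelope estimate can help. This is only a sketch with assumed costs: 32 bytes per vertex (float32 position + normal + one UV layer) and uncompressed RGBA8 textures with roughly a one-third mipmap overhead; the engine's actual buffers will differ.

```python
def estimate_mesh_bytes(vertex_count, bytes_per_vertex=32):
    # 32 bytes assumes float32 position (12) + normal (12) + one UV (8);
    # extra UV layers, vertex colors, etc. add more per vertex.
    return vertex_count * bytes_per_vertex

def estimate_texture_bytes(width, height, mipmaps=True):
    # Uncompressed RGBA8 is 4 bytes per pixel; a full mip chain adds ~1/3.
    base = width * height * 4
    return base * 4 // 3 if mipmaps else base

# Example: one million vertices plus a single 4096x4096 RGBA texture.
total = estimate_mesh_bytes(1_000_000) + estimate_texture_bytes(4096, 4096)
```

If the textures dominate, compression and atlasing will buy you the most; if the vertex count dominates, decimation and LODs will.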

The exported EXE combines your blend with the blenderplayer, so have a look and see if your increase in file size matches the size of the executable it was merged with.

You need to review the structure of your project first. Some suggestions:

- Where possible, don't store everything in the main executable's .blend; load assets at runtime via LibLoad (path + asset.blend, “Scene”, load_actions=True, load_scripts=True, asynchronous=True).
- To avoid unnecessary texture calls when textures are high resolution, use texture atlases: the main textures (diffuse, normal map, specular, environment, occlusion) share the same image but use different UV layers.
- Use the DDS format for textures; it will save you about 1–2.5 MB of size per texture.
- If your models are sufficiently detailed, exclude physics from them, or where physics is needed, replace it with simplified collision meshes to reduce physics calculations.
- Use node-based GLSL materials. This gives a slight boost when rendering those textures, and it gives you effects that you could otherwise only get through scripts: for example halos, fading material transparency with distance from the camera, or replacing and mixing textures by distance from the camera or a light source.
- Use levels of detail for high-polygon models, and don't forget to apply modifiers after creating the LODs. Try not to use models with modifiers other than the Armature modifier.

I think that covers the basics of optimizing the project; if I remember anything else, I will write it. Good luck!
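The LibLoad idea can be wrapped so each asset .blend is only merged once. A sketch under the assumption that `bge.logic.LibLoad` accepts the keyword arguments quoted above (UPBGE renamed `async` to `asynchronous`); the loading function is injected so the bookkeeping can be tested outside Blender, and the paths used are hypothetical.

```python
# Hypothetical wrapper around bge.logic.LibLoad. In-game you would build
# it with make_lib_loader(bge.logic.LibLoad); here load_fn is injected so
# the de-duplication logic can run outside Blender.

def make_lib_loader(load_fn):
    loaded = set()  # paths of .blend libraries already merged

    def load(path):
        if path in loaded:
            return False  # already merged; skip the duplicate call
        load_fn(path, "Scene",
                load_actions=True, load_scripts=True, asynchronous=True)
        loaded.add(path)
        return True

    return load
```

From a Python controller this would look something like `load = make_lib_loader(bge.logic.LibLoad)` followed by `load(bge.logic.expandPath("//assets/skeleton.blend"))`, so an outliner entry can trigger loading only the first time its group is shown.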
