Texture maps, draw calls, and memory use

Hello all,

I mostly make still images with a lot of Daz characters.
That means many texture maps, and quickly saturated memory.
I already share materials between items when it’s possible.
Sometimes I resize textures, but it’s really time-consuming.
The “Simplify” option is great, but it treats them all as a whole.
(I dream of a “Simplify” node we could put when and where needed.)

My question is about how Cycles handles textures in memory.

If the same texture is called multiple times, but in different materials,
is it loaded only once?
Or does it count each time?

And to go further, is there a description of Cycles more detailed than the one in the Blender manual,
or a “good practices” manual?
Somewhere we could find (just an example) the weight (memory use) of each of these little things we use to make a great picture. :wink:
Sure, some nodes are bigger “memory eaters” than others.
But how much, and why, stays a little “blurry” for me.
So, any info will be welcome.
Thanks in advance.



I don’t know if there is a description of best practices anywhere, but I can maybe help.

If an image texture is used in 2 different materials, it will still be stored only once in memory.

Image textures will use memory very quickly if you use large resolutions, because each time you double the resolution, you actually multiply the number of pixels (and the memory size) by 4.
Also, be wary of 32-bit textures, like HDRIs and high-quality displacement maps: they are much heavier than standard 8-bit textures.
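To put numbers on this, here is a quick back-of-the-envelope sketch in Python (my own illustration, assuming uncompressed 8-bit-per-channel RGBA, which is roughly how an image sits in memory once loaded):

```python
def texture_bytes(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed in-memory size of an image texture."""
    return width * height * channels * bytes_per_channel

MB = 1024 * 1024

# Doubling the resolution quadruples the pixel count and the memory.
print(texture_bytes(2048, 2048) // MB)  # 2K RGBA, 8-bit -> 16 MB
print(texture_bytes(4096, 4096) // MB)  # 4K RGBA, 8-bit -> 64 MB

# 32-bit float images (HDRIs, displacement maps) cost 4x more per channel.
print(texture_bytes(4096, 4096, bytes_per_channel=4) // MB)  # -> 256 MB
```

So a single 4K HDRI already weighs as much as four 4K color maps.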

Procedural textures (noise, voronoi, etc.) use almost no memory, because they are just maths, but they render slower than image textures.

A lot of the memory use in a scene will come from 3D models. There are a few features that make it easy to use lots of memory without realizing. Take a look at the modifiers in your scene:

-Are there lots of objects with subdivision surface, or maybe a single object that uses a high subdivision level? Each level multiplies the polygon count by about 4, so memory use grows exponentially; see if you can go lower.

-Other modifiers to be wary of: array, multiresolution, remesh, screw, ocean, fluid. Basically, the ones that add lots of data. Be especially careful if you have them disabled in the viewport: they will all come back at render time and take lots of resources.

-Beware of smoke simulations and .vdb volume files: they can be heavy, since they are basically 3D textures.

Something else to check is instancing. If you have multiple objects that are identical and will always stay that way, you can duplicate them using alt+D instead of shift+D. This will create a linked duplicate (also called an instance).

Instances are objects that share the same mesh. If you go into edit mode and change one of them, they will all change. Even better, they will be counted in memory only once (but this only works if they don’t have modifiers on them). You will know you have done it if you duplicate an object and the scene’s polygon count doesn’t change.

If you already have multiple identical objects placed in the scene, it’s possible to make them into instances with the “Link Object Data” tool.
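As a mental model of linked duplicates (a plain-Python sketch of my own, not actual bpy code): both objects point at the same mesh, so the heavy data exists only once.

```python
class Mesh:
    def __init__(self, vertices):
        self.vertices = vertices  # the heavy per-mesh data

class Object:
    def __init__(self, name, data):
        self.name = name
        self.data = data  # a reference, like Object.data in Blender

tree_mesh = Mesh(vertices=[(0.0, 0.0, 0.0)] * 1_000_000)
tree = Object("tree", tree_mesh)

# Shift+D equivalent: a full copy, the mesh data is duplicated.
copy = Object("tree.001", Mesh(list(tree_mesh.vertices)))

# Alt+D equivalent: a linked duplicate, both objects share one mesh.
instance = Object("tree.002", tree_mesh)

print(instance.data is tree.data)  # True: the mesh is counted once
print(copy.data is tree.data)      # False: this one doubled the memory
```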


Thanks a lot for all these explanations.
They will help me take a step further toward a “smart” use of Cycles. :wink:



Blender scene optimization is a fascinating topic (mostly because I still fail at it). I wonder how Blender instances and draw calls interact. Sure, you can link the object data and it may improve Solid viewport performance. But what about Eevee (GPU-based)? Will it be a conflicting situation, since from the video adapter’s perspective these objects are still treated as multiple draw calls?


Judging from my past use of instances, I would say Eevee doesn’t benefit as much from them as Cycles does. You will still get the memory-saving effect and be able to render larger scenes than usual, but each instance will add to the render time.

By comparison, Cycles doesn’t care nearly as much about the polygon count and can support impressive amounts of instances. What Cycles fears is not heavy geometry, but rather complicated material and lighting setups, such as:

-Mesh lights
-Large amounts of light sources
-Glass materials and caustics
-Scenes where most of the light comes from a small window
-Large amounts of volumetric materials
-Multiple layers of transparent materials (in Cycles, fully modelled tree leaves render faster than trees with transparent textures, even if the polygon count is 10x larger)

-A special case that Cycles fears is long, narrow polygons placed diagonally. This messes with Cycles’ optimization system, which depends on the polygons’ bounding boxes. If you create a bunch of long, narrow cylinders and rotate them by 45 degrees, you will get an extremely slow render. The solution in that case is to add loop cuts to the cylinders, making their polygons closer to square.
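A toy illustration of why this hurts (my own sketch, not Cycles code): the render engine wraps geometry in axis-aligned bounding boxes, and rotating a long, narrow shape by 45 degrees makes its box vastly larger than the shape itself, so many rays test the box for nothing.

```python
import math

def aabb_area(points):
    """Area of the 2D axis-aligned bounding box around a set of points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def rotate(points, degrees):
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

# A long, narrow quad: 100 units long, 1 unit wide.
quad = [(0, 0), (100, 0), (100, 1), (0, 1)]

axis_aligned = aabb_area(quad)          # 100 * 1 = 100
diagonal = aabb_area(rotate(quad, 45))  # roughly 71.4 * 71.4

print(axis_aligned, round(diagonal))  # the rotated box is ~50x bigger
```

Loop cuts shrink this waste because each smaller, squarer polygon gets a bounding box that fits it tightly even when rotated.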


Thank you. Sad that Eevee doesn’t benefit much from instancing, but this is to be expected. And a breakdown of what Cycles doesn’t like will be useful if I ever return to this renderer. Some factors you’ve mentioned were surprising.


Thankfully, Eevee’s sheer speed means it can remain the faster option in relatively heavy scenes, even after it gets slowed down quite a lot.

I should mention that many of the issues I listed can be worked around if you know the correct optimizations, or will be fixable in future versions of Cycles:

-Mesh lights and multiple light sources are planned to be improved at some point by a feature the developers are working on (many-lights sampling).
-Glass materials can be made less noisy with the “Light Path” node: you replace the material’s shadow with a less demanding transparent BSDF. They will always remain slower than opaque materials, though, because there are lots of light bounces inside the material.
-Small windows are helped by an existing feature known as “light portals”, which helps light rays find the windows (this only works for light emitted by the world background, though).
-Volumetrics can be helped by denoising them separately from the rest of the render (this requires some compositing knowledge).
-Finally, my suggestion of fully modeling tree leaves can sound scary, but if it’s done using particle systems they can be deactivated in the viewport.


Super interesting thread, I really enjoyed reading the questions and answers here!
I don’t have much to add.
Optimizing memory is “kinda” simple: it’s generally the high-res image textures that add the most to the memory cost.
Then it’s geometry that can bloat your VRAM, though you’ve got some margin there.
I’ve compared the peak memory of a cube, and a cube subdivided x10 :

1: 12 triangles, 110 MB of peak render memory.
2: 12,582,912 triangles, 1631 MB of peak render memory.

On one project I got a CUDA error / memory issue on renders. It turned out to be particle hair strands (on several characters): between the hair settings that let you subdivide each strand, and the children particles that can multiply the initial hairs a lot, I had filled the VRAM.

But in general you can already save some memory by working on the textures.
A 4K RGBA image = 64 MB in RAM, an 8K one = 256 MB (64 × 4).
If one material uses an 8K color, roughness, normal, and metallic image map,
it is already using ~1 GB of VRAM.

You can convert an RGBA image to BW; in that case the 256 MB image is reduced to 64 MB.
If the roughness and metallic maps are RGBA, you can convert them to BW, and that ~1000 MB of memory usage turns into 640 MB, without any visual change.
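That arithmetic can be sketched as a tiny estimator (a toy illustration of my own, assuming uncompressed 8-bit images and MB = 1024 × 1024 bytes):

```python
MB = 1024 * 1024

def map_mb(side, channels):
    """In-memory size in MB of a square 8-bit image with that many channels."""
    return side * side * channels // MB

# One material with four 8K RGBA maps: ~1 GB of VRAM.
all_rgba = 4 * map_mb(8192, 4)
print(all_rgba)  # -> 1024

# Color and normal stay RGBA; roughness and metallic become single-channel BW.
converted = 2 * map_mb(8192, 4) + 2 * map_mb(8192, 1)
print(converted)  # -> 640
```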

What is quite a bit less simple to optimize is render time, because there is a lot at play there.
But memory is simpler: check your geometry usage and your image textures, and you should be good!


Speaking of that, in game development, grayscale or float data (such as roughness) is sometimes packed into the alpha channel of diffuse or normal textures. Would that be good practice in Blender, or just extra work? The memory requirement should be the same as having a separate 8-bit texture. Is there a benefit to having fewer textures overall?

Yes indeed ! That’s a good cheat to save memory.

In another thread we figured out that Eevee doesn’t optimize memory when dealing with BW images:
they are always stored as RGBA. Cycles does take it into account, although I think RGB vs RGBA doesn’t make a difference.

You just need to find a software / file format combination that doesn’t do too much work for you if you want to store data in the alpha channel,
because you can end up with the color premultiplied by the alpha, and then that screws up everything.
You should also make sure you won’t want to change that channel later: it’s a bit more difficult to edit quickly (at least given my PS/Gimp/Krita skills).

At some point Blender allowed that; I used it at compositing time to export 4 BW passes into one PNG.
But then they improved alpha handling and it became impossible to do.
Maybe that’s possible again now, I should look into it!

Given that you seem versed in game engine technology, maybe you have a good technique for doing that RGBA channel packing correctly. Please share some info if you do!


To preserve transparent colors in a GIMP export, I make a copy of the color channels before exporting and put it on a layer with 1% opacity. GIMP only discards colors where the opacity is zero.
There are also alpha settings in the Node properties of the Shader Editor: to prevent Blender from premultiplying a texture, you need to set it to “Channel Packed”. Thankfully that option exists.
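The packing itself is the easy part once the alpha survives the export; here is a plain-Python sketch (toy pixel lists, no real image I/O, the map names are just examples):

```python
def pack_channels(rough, metal, ao, alpha):
    """Pack four grayscale maps (lists of 0-255 values) into one RGBA pixel list."""
    assert len(rough) == len(metal) == len(ao) == len(alpha)
    return [(r, g, b, a) for r, g, b, a in zip(rough, metal, ao, alpha)]

def unpack_channel(pixels, index):
    """Recover one grayscale map: 0 = R (roughness), 1 = G (metallic), ..."""
    return [px[index] for px in pixels]

rough = [10, 20, 30]
metal = [0, 255, 128]
packed = pack_channels(rough, metal, [255, 255, 255], [7, 8, 9])

print(unpack_channel(packed, 1))  # -> [0, 255, 128]
```

In Blender, the equivalent of `unpack_channel` is a Separate RGB node on the image, with the image’s alpha mode set to “Channel Packed” so nothing gets premultiplied on the way in.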


Hey, thanks a lot!
Quite an interesting little cheat with GIMP!
At some point in Blender, it was just a matter of plugging whatever you wanted into a Combine RGBA node and saving; it wrote straight to the final image without any alteration.
Maybe that’s possible in other apps as well, without little hacks.