Can someone explain memory usage?

I’ve been around a while, and I still don’t understand this. Take 2.80 (the same happens in 2.79, but the UI change means I have to pick one). I have a scene, and I hit render (F12, not viewport). At the bottom right of the render window is a memory usage gauge, and there is another at the top left. The bottom-right value is dynamic while rendering, but maxes out at around 13 GB. The top-left value maxes out at around 5 GB.

Using hybrid rendering (GPU/CPU), my GPU has 6 GB, so I don’t expect this to render (y’know, 13 GB), but it does.

From observation, the bottom-right value appears to be reporting VRAM. I say this because as I increase the render size, it inches up. The top-left value, however, is relatively stable.

So, what does each show? Have I got it completely wrong, and the bottom right is scene memory (which I can handle) while the top left is render memory? If so, why does the bottom-right value crank up while building the BVH?

Looking at the code, the bottom-right info bar appears to be reporting Blender’s main total memory allocation (malloc() etc.), while the upper-left value shown in the render window during rendering is Cycles’ idea of how much memory the render is taking.

But I don’t know all the details of how Blender uses memory, or things like how buffers are managed during rendering, where the BVH resides, etc. Probably most things that reside in GPU memory also have to reside separately in CPU memory, and if you’re doing CPU or GPU+CPU rendering there may be some amount of per-thread overhead as well.

You could also look at your OS’s tools like top or Task Manager etc. to try to compare what they report for CPU and GPU utilization with what Blender reports.
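As a concrete illustration of that second approach, here is a small Linux-only Python sketch that reads a process’s resident (VmRSS) and virtual (VmSize) sizes from `/proc`; in practice you would pass Blender’s pid (looked up however you like) rather than the script’s own:

```python
import os

def linux_mem_kb(pid):
    """Read VmRSS (resident) and VmSize (virtual) for a process, in kB.

    Linux-only sketch: parses /proc/<pid>/status. This is roughly what
    tools like top report, as opposed to Blender's self-reported tally.
    """
    fields = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmRSS:", "VmSize:")):
                key, value = line.split(":", 1)
                fields[key] = int(value.split()[0])  # value is "<n> kB"
    return fields

# Inspect this script's own process as a stand-in for Blender's pid.
print(linux_mem_kb(os.getpid()))
```

Comparing these numbers against the two figures Blender shows is a quick way to see which one tracks the OS’s view.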

Using GPU/CPU, it uses system RAM when it’s out of VRAM.
It is slower, but better than no render.

Also, system tools or external programs that show your (V)RAM usage are much more precise than Blender’s info display.

I am a bit confused here, to be honest. I used to think that GPU + CPU rendering would solve the memory issue, but reality showed differently. Yes, it does help, but in really heavy scenes I am still getting out-of-memory issues. Therefore I would be really curious how exactly GPU + CPU rendering handles memory, because with just the CPU I am always able to render no matter what :upside_down_face:

Its purpose was to speed up render times, I think, which it does for me. Memory is still limited to the device with the lowest amount.

Strictly speaking, “yes and no.” External measures of a process’s memory usage can be over-stated because operating systems are designed to be lazy. If memory is available to be used, it will be used, “because that’s what memory is for.”

But memory is also "virtual." A program can think that it has (say) “20GB” of memory allocated when there’s only (say) “16GB” of memory in the computer, which of course must be shared by everyone. The operating system does this by maintaining an illusion for each process, while managing the actual resource in ways that the processes themselves cannot see, using disk-storage in addition to physical RAM. But also, if memory usage “spikes” at (say) 12GB then goes down to 4GB, but there’s no pressure for the main-memory resource, the “lazy” operating system won’t go out of its way to “clean things up” until and unless it has to: memory consumption will be over-stated until pressure does materialize.

An application’s self-reported tally of its own memory usage is probably the most accurate – but remember that (except for VRAM, which is a GPU resource) it is subject to the illusion. An external memory/resource monitor might be looking at the operating system’s view of things instead.
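The virtual-memory “illusion” described above is easy to demonstrate: reserving a large block of anonymous virtual memory barely changes a process’s resident size until the pages are actually touched. A Linux-flavoured Python sketch (the exact units and numbers vary by OS):

```python
import mmap
import resource

def peak_rss_kb():
    # Peak resident set size; reported in kB on Linux (bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

SIZE = 1 << 28  # reserve 256 MiB of address space
buf = mmap.mmap(-1, SIZE)  # anonymous mapping: virtual only, not yet resident
reserved = peak_rss_kb()

# Touch one byte per page so the OS must actually commit physical memory.
for off in range(0, SIZE, mmap.PAGESIZE):
    buf[off] = 1
touched = peak_rss_kb()

# Resident size jumps only after the pages are used, even though the
# "allocated" (virtual) size was 256 MiB all along.
print(reserved, touched)
```

The gap between `reserved` and `touched` is exactly the gap between what a process thinks it has allocated and what the OS has actually committed.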

Yes indeed, but it still helps with memory as well. Thing is, I would like to know how exactly (or at least roughly).

Bear in mind that hybrid rendering is still a work in progress afaik, so maybe this will change in the future.

For heavy scenes I would prefer to render with CPU only instead of GPU/CPU.

Okay, you make a good point.
In this context the (lazy) system tools are the more accurate ones (even if they aren’t), because if these go out of memory, the scene won’t render.


The CPU+GPU rendering option in modern Cycles (selecting multiple CUDA or multiple OpenCL devices, including CPU devices) has nothing to do with memory and won’t behave any differently as long as you have a GPU device selected. It simply brings your CPU along as a separate rendering engine in parallel to your GPU(s). The GPU memory behavior should not be affected by whether or not you do this.

What does make a difference is that modern Cycles (in the 2.79 and 2.80 experimental builds) is now able to make some use of out-of-core memory when you run out of dedicated GPU memory. This is automatic and should always happen regardless of your preference settings. It’s not a magic solution and doesn’t always seem to work that well, but it does mean that your GPU memory is not a hard limit anymore.

The documentation suggests that you might see a 30% performance reduction once it starts using off-card memory to fetch textures or whatever, but it could also be worse. You can still get out-of-memory conditions (in my experience, even when you have one big HDRI that doesn’t quite fit), and there are some things that still have to fit in GPU memory for the render to work at all.


This would explain the behaviour I am having. Good to know, thanks. By the way, do you have any source for this information? I would really like to read a bit more on this stuff.

Cheers T.

I do that (CPU), but it is simply slower, and generally way slower than other CPU render engines like V-Ray or Corona at achieving similar results. I hope E-Cycles will get improved in terms of CPU as well (more improved), or I might as well buy that V-Ray node for Blender (it is not that expensive), but Chaos Group is putting its updates a bit behind… :)

There isn’t any single source that I’ve found. Here’s the original patch that gave Cycles support for using system memory that has some information in the description.

It would be nice if there was a comprehensive overview of Blender/Cycles memory use on the developer wiki or someplace like that, but I don’t know of anything currently.


Hmm, that is a bit generalized, but still better than nothing, and at least I have a rough insight into this. Thanks, and let’s hope it gets further improved (obviously right now the devs have a whole lot of other work to take care of).

And wow, here I was hoping for a simple “it’s because” answer, but it seems to be a bit hit and miss. Clearly, Cycles’ system RAM usage is a work in progress when using hybrid rendering, so now, for optimisation purposes, it would be nice to know what MUST reside in VRAM. If, for example, it’s bitmapped textures, then the solution is to lower the texture size where it won’t materially affect the final result, but I’m sure there’s more.

For example, do any procedural textures have to reside in VRAM, or worse, are they dynamically converted to bitmaps (giving very little control over memory usage)?

Pragmatically, part of the concern is – “if the task at hand won’t fit entirely in the GPU for this particular machine, what do we do?” Can we use the GPU at all? The answer strives to be, somehow, “yes.”

Now, “the CPU” owns RAM, while “the GPU” owns VRAM. When we want to move information into and out of VRAM, the CPU has to become involved.

In the modern era, the answer to all computer performance questions is usually “it’s complicated,” so simple answers can be elusive. Transfers between a PCI-E peripheral like the video card and system memory can take place with little or no involvement by the CPU, for example.

Almost all of the structures and information that the GPU holds will start out on the CPU side: all the scene data, of course, but also the BVH is calculated on the CPU, and the buffers for each pass exist in CPU memory to hold the results from the GPU. You can probably think of the CPU as holding the master copy of everything, with the GPU caching what it needs at the moment to evaluate the pixels it has been assigned to work on (generally this is pretty much everything needed for the whole frame, I think).

Procedural textures should never exist as an actual 2D image that gets stored somewhere and then applied. Each time a value from the procedural texture is needed, the nodes that define the texture are run to calculate a precise value for the exact location being sampled. Generally this can be done entirely on the GPU, as there are native GPU implementations of all the nodes involved. Thus such textures don’t have a specific resolution and can be computed for any level of detail that you like.
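A toy illustration of that last point: a checker pattern written as a pure function of the sample coordinates. This is not Cycles’ actual node code, just the general idea of evaluating a texture per sample instead of storing an image:

```python
import math

def checker(u, v, scale=8.0):
    """Evaluate a procedural checker at (u, v); no image is stored anywhere.

    Each call recomputes the value from the coordinates, which is why a
    procedural texture has no inherent resolution or memory footprint.
    """
    return float((math.floor(u * scale) + math.floor(v * scale)) % 2)

# Because it is just a function, the same texture serves any resolution:
preview = [[checker(x / 8, y / 8) for x in range(8)] for y in range(8)]
final = [[checker(x / 1024, y / 1024) for x in range(1024)] for y in range(1024)]
# Memory is only needed for the sampled results, never for the texture itself.
```

Bitmapped textures are the opposite case: their pixels are real data that has to live somewhere, which is why lowering texture sizes is the usual first step when VRAM runs short.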