RTX 3080: "System is out of GPU memory" but I have plenty left in shared GPU memory?

I constantly run into out-of-GPU-memory issues. As stated in the Blender docs, Cycles should be able to use shared memory.

However, it seems to hit an invisible barrier: it will only use about 8-8.5 GB of shared memory, even though I have 56 GB available for it to use. This stops the render with the error “System out of GPU memory.”

I’m rendering with CUDA, so it should be able to use shared memory without any problem, and it does use some of it.

Is there a limitation preventing me from using 2x my GPU VRAM? Or is there a specific setting I need to enable to utilize all of my RAM as shared GPU memory?

After upgrading to Blender 4.0, the error seemed to be gone. It came back after using K-Cycles, though, so I suspected it might be a K-Cycles problem. However, neither Blender nor K-Cycles renders anymore, and both use the same amount of shared memory.

System: i7 9770K, RTX 3080 10 GB, 128 GB DDR4 3666 MHz.

The scene uses around 24 GB of RAM when open and renders fine on the CPU, but at 4-10x the render time.

Thank you for any help anyone can provide!

I don’t know exactly, but the OS seems to allocate half of system memory as shared GPU memory.

This is not set separately; it seems to be automatic. (The shared memory amount has nothing to do with Blender.) :thinking:

In my case, it’s 3 GB of VRAM, 24 GB of system memory, and 12 GB of shared memory.

※ Shared memory usage results in performance degradation and instability.
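If the "half of system RAM" observation above holds, the numbers in this thread line up. Here is a minimal sanity check; the 50% split is an assumption read off the figures in this thread, and the helper name is mine:

```python
def total_gpu_memory_gb(vram_gb, system_ram_gb, shared_fraction=0.5):
    """Estimate what the OS reports as 'total GPU memory':
    dedicated VRAM plus the shared portion of system RAM
    (assumed here to be half of installed RAM)."""
    shared_gb = system_ram_gb * shared_fraction
    return vram_gb + shared_gb, shared_gb

# 3 GB VRAM card with 24 GB of system RAM:
total, shared = total_gpu_memory_gb(3, 24)
print(total, shared)   # 15.0 total, 12.0 shared -- matches the 12 GB above

# 10 GB RTX 3080 with 128 GB of system RAM:
total, shared = total_gpu_memory_gb(10, 128)
print(total, shared)   # 74.0 total, 64.0 shared
```

The second case also matches the 74 GB "total GPU memory" figure reported later in this thread, which suggests the half-of-RAM rule is what the OS is applying, regardless of what the renderer actually manages to use.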

Thank you for the reply.

It might just be an instability issue. However, it seems weird to me that it caps out at 18 GB when the system says it has a total of 74 GB of GPU memory, instead of actually using it.

I believe that at the beginning, the out-of-core feature worked only up to a certain limit; for example, it could offload only the geometry or only the textures (I don’t remember which), but not both. So this could be your invisible “limit”. I would have to dig more to see whether the feature was improved later on. I would run into the same issue from time to time: sometimes the out-of-core feature performs, sometimes it errors out. This has been the case since the feature was introduced, not just in the last few versions of Blender.

Cheers, that sounds like a plausible explanation.

I will have a look at my scene, run a few diagnostics on texture usage and geometry, and optimize those.

That way I’ll at least have some idea of what makes it give out.
Thank you for the clarification!

Good luck.

Yes, there is. Nvidia GPUs using NVLink can pool their memory into one big GPU memory. Most GPUs cannot use NVLink; 3090s and the expensive “pro” ones can. Not sure about the 4000 series.

Out-of-core memory is a way for GPUs to use system RAM; it is not the same as pooling their on-board GPU memories. As Kim noted, Blender does out-of-core for some things, and it will slow the render.

Rendering with multiple GPUs increases render speed but does not combine GPU memories without NVLink.

Blender 4.X is an alpha release. You should not expect it to work well, be complete, or be stable.

Thank you for the comment.

I was aware that NVLink is the only way to truly increase VRAM without just buying a new GPU.

It just seems too “perfect” that it gives the error when I hit around 8-10 GB of shared memory, the same as my VRAM.

So, to clarify: I was not asking about VRAM specifically, but rather whether out-of-core memory is capped in capacity relative to the VRAM or not.

I thought that since Blender 4.0 had its official release, it was out of alpha? Regardless, I had more trouble with this specific error in 3.5 and 3.6 than I’ve had in 4.0.

I still appreciate your comment and contribution to the conversation. Someone else might also find it useful.

4.0 is official? Good! I’ll try it at 4.1 or 4.2.