How is GPU memory usage calculated?

I attempted to load up some scenes and render them in Cycles as a performance test and got the dreaded “CUDA error: out of memory in cuArrayCreate” message.

I have a 650 Ti Boost with 2 GB of RAM, and I’ve had that error occur in scenes with a peak of 1400 MB, 700 MB, and even 500 MB reported in the frame status bar.

If a scene reporting only 500 MB can still trigger that error, how exactly is the memory handed over to CUDA?

I’m running a Windows 10 system with 16 GB of RAM, the 2 GB 650 Ti Boost, and an FX-8350 CPU.
Nvidia driver: 356.43

I have a feeling it’s not Blender’s fault, but that Nvidia has nerfed CUDA yet again. Is that the case?

Blender can only report the amount of memory it asked the driver for. For various reasons it can’t track how much the driver actually allocates on top of that, nor how much VRAM was already in use by other applications. …or that’s the gist of what I was told when I asked.

Ah, that makes sense :slight_smile:

AFAIK you can’t trust Blender’s internal memory counter. If memory consumption peaks beyond the card’s VRAM and causes a crash, the peak shown in Blender will most likely be the state right before that final “crash” peak, not the true maximum. Better to use an external tool like GPU-Z.
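
If you’d rather not run GPU-Z (or you’re on Linux), you can poll the driver yourself via NVML while the render runs. This is just a rough sketch, assuming the pynvml package is installed and that the render card is device index 0 — adjust to your setup:

```python
# Poll actual VRAM usage reported by the Nvidia driver (NVML),
# independent of Blender's internal counter.
# Assumes: pynvml installed, render card is device index 0.
import time
from pynvml import (nvmlInit, nvmlShutdown,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)
try:
    while True:
        mem = nvmlDeviceGetMemoryInfo(handle)   # bytes
        print("VRAM used: %d MiB / %d MiB" %
              (mem.used // 2**20, mem.total // 2**20))
        time.sleep(1)                           # poll once a second
except KeyboardInterrupt:
    pass
finally:
    nvmlShutdown()
```

Run that in a separate terminal during the render and you’ll see the real peak, including whatever the desktop and other applications already hold.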

Hate to say this, but 2 GB of VRAM is very much the bare minimum these days. Not only does the scene have to fit into that memory, but also the CUDA kernel, and that grows larger with every feature added to Cycles’ repertoire…

That’s why you might, for example, want to avoid the “experimental” feature set, as that kernel is even larger. “Progressive Refine” is another memory hog - tiled rendering will use much less VRAM.
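
If you drive your renders from a script, the relevant settings look roughly like this (property names are from the 2.7x-era Python API, so double-check them on your build):

```python
# Minimal sketch: keep the CUDA kernel and per-render VRAM use small.
# Property names from the Blender 2.7x Python API - may differ elsewhere.
import bpy

scene = bpy.context.scene
scene.cycles.feature_set = 'SUPPORTED'       # skip the bigger "experimental" kernel
scene.cycles.use_progressive_refine = False  # render tile by tile, not whole-frame refine
scene.render.tile_x = 256                    # modest tiles keep the render buffers small
scene.render.tile_y = 256
```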

Use the CPU (one-liner below). You might get a slower overall render time, but it’s much harder, scene-wise, to flood system RAM badly enough that the render shuts down before you get results.

… or at least, adopt the Grand Canyon Principle: “plan to stay well back from ‘The Edge!’” :smiley:
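
For the script-minded, switching the render device over is a one-liner (again 2.7x-era API, assuming Cycles is the active render engine):

```python
import bpy

# Fall back to CPU rendering so the scene only has to fit in system RAM.
bpy.context.scene.cycles.device = 'CPU'
```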