I’ve run into an issue with a scene (one I’ve seen before) that is really stumping me.
The scene is quite complex, but when I render on CPU only, system memory sits at about 30GB and the render finishes fine. When I switch to the GPUs (2x Radeon VII, tried with HBCC both enabled and disabled), system memory usage skyrockets into the high 50s, usually around 59GB, and the render simply stops progressing after a couple of tiles (256 tile preset in Auto Tile Size for GPU, 32 for CPU).
Could someone shed some light on why memory usage nearly doubles when rendering on the GPUs?
Here’s a screenshot of Task Manager while rendering on GPU, as well as a still of the scene itself (rendered on the CPU, of course):
And here’s the memory usage while rendering on CPU only: