I’ve been experimenting with different settings, trying to get the fastest rendering at 1920 x 1080 in Cycles. My GPU is a GTX 480 (1.5 GB VRAM) with CUDA enabled; the machine is 64-bit Windows with two dual-core Xeons and 20 GB of RAM. I’m monitoring GPU load and memory usage with GPU-Z.
I’ve noticed some curious behavior. At 50% of the 1920 x 1080 resolution, my renders run with GPU load fluctuating between 35% and 98%, and the sweet spot for tile size seems to be 1024 x 1024 (in other words, a single tile). But at 100% resolution, GPU load falls back to the 0–60% range, with MOST of the time spent below 10% and only occasional spikes above 60%, and the tile-size sweet spot seems to be 256 x 256. When live-rendering in the viewport’s “Rendered” shading mode (for example, while panning the scene), GPU load holds continuously at 93–98%.
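To show what those two “sweet spot” settings mean in terms of tile counts, here’s a quick sketch (the `tile_count` helper is mine, not a Blender function; it just assumes Cycles splits the frame into a grid and counts partial tiles as full ones):

```python
import math

def tile_count(width, height, tile_w, tile_h):
    """Number of tiles a frame is split into, counting partial edge tiles."""
    return math.ceil(width / tile_w) * math.ceil(height / tile_h)

# 50% resolution (960 x 540) with 1024 x 1024 tiles: the whole frame fits in one tile
print(tile_count(960, 540, 1024, 1024))   # 1
# Full 1920 x 1080 with 256 x 256 tiles: an 8 x 5 grid
print(tile_count(1920, 1080, 256, 256))   # 40
```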
VRAM usage for this scene stays fairly constant at about 700 MB.
The 50% resolution render takes about 20 seconds; the 100% resolution render takes about 160 seconds.
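Those two timings are what make me suspicious. Halving the resolution quarters the pixel count, so naively the full render should take about 4x as long, but it takes 8x. A quick back-of-the-envelope check using the numbers above:

```python
# Measurements: 50% render (960 x 540) ~ 20 s, 100% render (1920 x 1080) ~ 160 s
pixels_full = 1920 * 1080
pixels_half = 960 * 540              # 50% scales each dimension by half

pixel_ratio = pixels_full / pixels_half   # 4.0  -> 4x the pixels
time_ratio = 160 / 20                     # 8.0  -> 8x the render time
per_pixel_slowdown = time_ratio / pixel_ratio  # 2.0 -> each pixel costs twice as much

print(pixel_ratio, time_ratio, per_pixel_slowdown)
```

So the per-pixel cost roughly doubles at full resolution, which matches the low GPU load I’m seeing.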
My sense is that jumping up to a full-resolution, “real” render involves more than just rendering more pixels – the render process itself seems less efficient, as if the GPU is actually being used only a fraction of the time.
I’m pretty new to Cycles – can anyone shed some light on why this happens, and suggest things I can try to boost efficiency for the final high-quality renders at 1920 x 1080?
For image quality, I’m running with caustics turned ON, global illumination at 128 bounces, motion blur ON, and 120 samples. Threads are set to Auto-Detect, and the BVH is Dynamic.
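For completeness, here’s how I understand these settings map to Blender’s Python API – a rough sketch only, since property names (especially the caustics and BVH toggles) have moved around between Blender versions, and I haven’t verified every one against my build:

```python
import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'           # render on the CUDA device
scene.cycles.samples = 120            # path-tracing samples
scene.cycles.max_bounces = 128        # global illumination bounce limit
scene.render.use_motion_blur = True   # motion blur ON
scene.render.tile_x = 256             # tile size for the full-resolution render
scene.render.tile_y = 256
scene.render.threads_mode = 'AUTO'    # auto-detect threads
```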