Hi guys, as the title says. I couldn’t find a similar problem reported anywhere.
Basically, when switching from CPU to GPU rendering, the GPU results are far noisier, to the point that for me they’re almost unusable.
Blender 2.78b, but tested with 2.76, same thing happens.
GTX 1080 Ti Aorus, drivers 384.76
Links to images (sorry, I tried to upload them here but kept getting an error at the end):
CPU, denoise OFF
CPU denoise ON
GPU, denoise OFF
GPU, denoise ON
And link to the blend file:
Hope somebody can help.
EDIT: added the pictures, now they should be visible.
This is unfortunately expected behavior. The CPU code path has an optimization that allows for lower noise in volume scattering (see https://www.solidangle.com/research/egsr2012_volume.pdf for very technical details). That method relies on functionality (dynamic memory allocations) that doesn’t have a practical equivalent on the GPU.
I’m curious, how does it work when rendering GPU + CPU then? Is it turned off?
Just tested GPU+CPU and you are right, that optimization is turned off there and the result is the same as GPU only.
So now I am curious, how would you tackle something like that? Is there a way to render only the volumetrics with the CPU and the rest with the GPU?
@skw: thanks a lot for confirming that the results come from the GPU’s behaviour. Is this documented anywhere? It seems too important to be left as a notion floating around… I’ve never heard it mentioned in any tutorial about volumetrics I’ve seen. Maybe I got unlucky.
You can probably render the volumetrics with the CPU in a separate render layer and composite it with the GPU-rendered rest.