The denoising performance annoys me every time I do things with the SSGI branch, so it’s pretty high on my list of things to improve. Unfortunately, the first improvement I’ll try will do nothing for your use case.
The inner sample count is way higher than the displayed sample count suggests; combined with two unoptimized filters, the huge number of texture fetches tanks viewport performance. It doesn’t seem to affect render time that much.
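To make that concrete, here’s a back-of-the-envelope fetch-count model. Every number in it is an assumed placeholder for illustration, not something measured from the branch:

```python
# Rough per-pixel texture-fetch count per displayed sample.
# All numbers below are assumed placeholders, not measured values.
inner_samples = 4    # assumed ray samples behind each displayed sample
filter_taps   = 25   # assumed taps per filter pass (e.g. a 5x5 kernel)
filter_stages = 2    # the two filter passes mentioned above

fetches = inner_samples + filter_stages * filter_taps
print(f"{fetches} texture fetches per pixel, per displayed sample")  # 54
```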
What I plan to try out:
Denoise only at certain intervals or at max sample count - no performance hit on interactivity, better input to the filters, so better output (it might also make it possible to drop the second filter stage entirely, halving the filtering cost). A rough sketch of the idea follows.
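A minimal sketch of the gating, assuming a plain accumulation loop; the function names here are hypothetical stand-ins, not Eevee internals:

```python
# Interval-gated denoising around a simple accumulation loop.
# render_sample / denoise / display are hypothetical placeholders.

DENOISE_INTERVAL = 8

def render_sample(s): pass  # accumulate one raw SSGI sample (cheap)
def denoise():        pass  # run the expensive filter stage(s)
def display():        pass  # present the current buffer

def accumulate(max_samples):
    for s in range(1, max_samples + 1):
        render_sample(s)  # no per-sample filtering while accumulating
        # Pay the filter cost only every Nth sample or at the end:
        # interactivity stays at raw render speed, and the filters
        # get a better-converged, less noisy input.
        if s % DENOISE_INTERVAL == 0 or s == max_samples:
            denoise()
        display()

accumulate(64)
```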
What would improve things overall: doing the blur per direction separately (a separable filter) - way fewer samples for the same blur, at the cost of minor artifacts at edges - and doing the blur downscaled in the first filter stage with a bilateral upscale in the second. I’d like to try both of those out too, but they’re much bigger time investments for me, so they aren’t on the table at the moment, at least short term. Also, without better reprojection from the previous frame, I don’t think they could do much better overall, since the input is pretty bad at 1 render sample; they’d mainly allow a higher number of blur samples without tanking performance.
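For reference, the separable idea in toy numpy form: a true Gaussian splits exactly into a horizontal pass plus a vertical pass, so an NxN kernel drops from N² taps per pixel to 2N. Edge-aware (bilateral) weights don’t split exactly, which is where the minor edge artifacts come from. This is a sketch of the principle, not the branch’s actual filter code:

```python
# Separable Gaussian blur: two 1D passes instead of one 2D kernel.
import numpy as np

def gaussian_kernel_1d(radius, sigma):
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def blur_separable(img, radius=8, sigma=4.0):
    k = gaussian_kernel_1d(radius, sigma)
    # Horizontal pass: 2*radius+1 taps per pixel...
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    # ...then vertical: 2*(2r+1) taps total vs (2r+1)^2 for a 2D kernel.
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

noisy = np.random.rand(64, 64)
smooth = blur_separable(noisy)
```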
I think in this case the majority component is specular and, like @Naskomusic said, is already handled by the default SSR. The current publicly available versions also don’t do any diffuse occlusion, so the result can vary a lot depending on how you use the world lighting and probes. I would be interested in trying that file out, though, if you want to send it.
It would. But it kind of fights the current overall Eevee design - it usually assumes accumulating samples by re-rendering one frame multiple times, instead of leaning on much more temporal reuse with robust reprojection from previous frames.
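To illustrate the difference, here’s a toy contrast of the two accumulation models, with scalars standing in for frame buffers; everything here is a simplified assumption, not Eevee code:

```python
# Toy comparison: static re-render accumulation vs temporal
# accumulation with reprojection. Scalars stand in for images.
import random

def render():
    return 1.0 + random.gauss(0.0, 0.3)  # noisy sample of a static value

# Eevee-style: re-render the same frame, average samples directly.
def accumulate_static(n):
    acc = 0.0
    for i in range(1, n + 1):
        acc += (render() - acc) / i  # running mean over n samples
    return acc

# Reprojection-style: blend each new frame with reprojected history,
# so even 1 sample per frame converges over time. It needs a reliable
# reproject step (motion vectors, disocclusion handling) to avoid
# ghosting; a static scene makes that step the identity here.
def accumulate_temporal(n, alpha=0.1):
    history = render()
    for _ in range(n - 1):
        reprojected = history  # identity reprojection (static scene)
        history = (1.0 - alpha) * reprojected + alpha * render()
    return history

print(accumulate_static(64), accumulate_temporal(64))
```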