Moiré effects are certainly very difficult, and I haven’t tested them at all so far. If you or someone else sends me an example with high-frequency details that I can just render (and denoise), I will give it a try and post the results here. I would be surprised if the results were as good as the ones I have previously posted.
Thanks to @Photox for providing the scene and combining the renders together into those images!
WARNING
This is the first time I am publicly showing a direct comparison between the DeepDenoiser and Cycles’ denoiser. I want everyone to be aware that this is an unfair and unrepresentative comparison! It only shows the denoisers on one kind of image, with the default settings. I am confident that the results for Cycles’ denoiser could be improved! This comparison also ignores performance and all sorts of other factors which are essential for an accurate assessment!
What is the memory footprint for deepdenoiser and how is the performance?
Do you think it could run in viewport/realtime similar to optix?
Would it work with very high resolutions? Like 10K x 10K pixels?
Lastly, is there a way to test this on my own?
B.Y.O.B
(Node Preview and LuxCore Addon Developer)
As far as I know, Cycles doesn’t do any adaptive sampling, so all pixels should have exactly the same number of samples.
@DeepBlender I think it would be good to also show the ground truth (a noise-free result without denoising).
I can’t give you precise numbers on that one yet.
The memory footprint is definitely significant. All passes/AOVs are split into tiles and are denoised separately. Without that, it wouldn’t be possible for me to denoise on my GTX 850M due to memory restrictions. On the CPU, it would work, but it is considerably slower.
All renders I have posted in this whole thread took somewhere between 1 and 3 minutes to denoise. This includes some performance overheads I still have to get rid of. I can’t give exact numbers, because I am still running everything in a debug-like mode. I am working on an optimized version, though, which won’t contain those ugly overheads.
It could definitely be used in the viewport, but I have no idea yet how fast it would be, because besides the performance overhead, the denoiser itself isn’t optimized yet either.
Yes. As mentioned, it automatically tiles the passes/AOVs, and because of this, there are no restrictions on resolution.
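The tiling idea can be sketched roughly like this. To be clear, this is a minimal illustration of the concept, not DeepBlender’s actual code; `tile_size` and `denoise_fn` (a stand-in for the network inference call) are my own hypothetical names:

```python
import numpy as np

def denoise_tiled(image, tile_size=64, denoise_fn=None):
    """Denoise an image tile by tile so peak memory depends on the
    tile size, not the full resolution. A real implementation would
    also overlap and blend tiles to hide seams at tile borders."""
    if denoise_fn is None:
        denoise_fn = lambda tile: tile  # identity placeholder for the network
    h, w = image.shape[:2]
    out = np.empty_like(image)
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            # NumPy slicing clamps at the image border automatically,
            # so edge tiles are simply smaller.
            tile = image[y:y + tile_size, x:x + tile_size]
            out[y:y + tile_size, x:x + tile_size] = denoise_fn(tile)
    return out
```

Because each tile is processed independently, GPU memory usage is bounded by the tile size regardless of whether the render is 1K or 10K pixels across.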
I am working on that. It is really painful to use it right now. That’s why I am simplifying it, which most likely has the benefit that I am going to get rid of some of the overheads as well.
Are all those images rendered with the same settings/color grading?
There are some sprinkles that only seem to appear in the 10,000-sample images, and more noticeably, the noisy and denoised images all seem to have Default color grading while the 10,000-sample image has Filmic color grading.
Is this the case, or do the pixels actually get darker on the whipped cream as you raise the sample count?
Uh, oh. The 10,000-sample image was done on my computer. I had packed the file and sent it to deepblender, who rendered the other ones. Perhaps deepblender should render a 10,000-sample raw image on his system to match the others; then I can update the combined reference images.
Man, even with all the caveats and unpolished code, that is a stunning result.
I’ve gotta admit, I was doubtful of the efficacy of an AI-based approach, especially for a one-man programming team. But you have really pulled it off well.
With results like these, you could probably drum up enough excitement to get a little Patreon going. I’d give you $5 a month to keep working on that, and I doubt I’m alone…