DeepDenoiser for Cycles

(Piotr Adamowicz) #141

That’s really not the same as generating an image of arbitrary geometry with all light interactions and complicated node trees accounted for, reliably and consistently. :slight_smile:

I’m not saying it will never be possible, but I also won’t hold my breath.

But let’s not derail the thread.


Theoretically, a neural network could be trained to learn this. In practice, it would most likely be significantly slower, at least with the tools currently available. That’s why I don’t see a benefit in this approach at the moment.
It would be ridiculously complicated and require an unacceptable amount of computational power to train such a network with the technology we have available today. A company like Google might be able to get some results, but I doubt they could produce high quality renders with such an approach.

Instead of trying to replace path tracers like Cycles, there has been a lot of work on enhancing them with machine learning. I believe we are going to see a lot of renderers moving in this direction.
The denoiser paper from Disney and Pixar which I am replicating also contains a whole section dedicated to adaptive sampling. Adaptive sampling is really complicated because there is no precisely known way to handle it. Instead, it requires some sort of guessing and analyzing what might work best in general. In other words, some kind of “computer intuition” would help a lot, and that’s exactly what neural networks are really good at.
There is also interesting work on using neural networks to help with importance sampling. Though this is not yet as advanced, and improvements are likely still needed before it becomes practically relevant.
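To make the adaptive sampling idea concrete: if a network predicts a per-pixel error map, the renderer can distribute its remaining sample budget in proportion to that map. This is just a toy sketch of the general principle, not the method from the Disney/Pixar paper; the function name and numbers are made up for illustration.

```python
import numpy as np

def allocate_samples(error_map, total_budget, min_spp=1):
    """Distribute a fixed sample budget across pixels in proportion
    to a per-pixel error estimate (e.g. predicted by a network).

    error_map:    2-D array of non-negative error estimates
    total_budget: total number of samples to distribute
    min_spp:      floor so every pixel still gets some samples
    """
    h, w = error_map.shape
    base = min_spp * h * w
    extra = max(total_budget - base, 0)
    # Normalize the error map into per-pixel weights.
    weights = error_map / max(error_map.sum(), 1e-12)
    # Each pixel gets the floor plus its share of the remaining budget.
    spp = min_spp + np.floor(weights * extra).astype(int)
    return spp

# Toy example: the noisiest pixel receives most of the budget.
error = np.array([[0.1, 0.1],
                  [0.1, 0.7]])
print(allocate_samples(error, total_budget=100))
```

The flooring means the allocation can undershoot the budget slightly; a real implementation would also re-run this loop as new samples refine the error estimate.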

(lolwel21) #143

This may have been answered already, but…
Do you (plan to) support animation/multi-frame denoising?
The Disney denoiser is amazing, but even it introduces flickering at low-ish sample counts (16-32 spp) for animations.


Denoising animations is definitely on my TODO list! But this is going to take quite some time and resources.
Every denoiser introduces flickering when you have very low sample counts. The results from Disney are most likely not their best ones. They mention a slight variation of their approach in the paper which seems to work better for low sample counts, and that’s the one I am using. So slightly better results might be possible.

(lolwel21) #145

Agreed. I personally think that at least the animation denoiser should expect variance levels somewhere on the order of the Disney tests, or of your White Room and Fishy Cat tests.
No use trying for much lower sample count than that, though. At that point you’re pretty much just fabricating the image from scratch.

(kakapo) #146

my question was only half-serious. :slight_smile:

i find this thread very interesting and the recent example results are really impressive.


With a topic like machine learning, it is impossible for me to judge how seriously a comment or question is meant. Some people have expectations that are way beyond what is doable, so I take everything seriously.

Thanks for the compliment :slight_smile:


(noki paike) #149

tell us the rendering times, just to understand the advantages


It took about 3 minutes. 50% of the time was spent in Cycles, the remaining time for the DeepDenoiser.
The denoising does not take that long though. There is still quite some overhead which I need to get rid of.

(lacilaci86) #151

Do you use some extra passes for some additional detail preservation? Could this be used with luxcore render?


Yes, there is additional information that is required for the denoising. Right now, the input consists of:

  • Pass you want to denoise (and variance)
  • Normal (and variance)
  • Pass embedding (like a flag)

All the recent AI-based denoisers also use albedo/color as additional input. I implemented the option to use it as input as well, but so far I have not seen benefits in training, which is kind of surprising. It might be that I need to train longer to notice a difference.

This project could be modified relatively easily for other renderers as well. At the moment, the training data would have to be created separately for each renderer. Theoretically, this could be improved, but that’s not going to be my focus for now.
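For reference, assembling those passes into a single network input tensor could look roughly like this. The channel counts and the one-hot size of the pass embedding are assumptions for illustration, not taken from the actual project:

```python
import numpy as np

# Hypothetical H x W image buffers standing in for render passes.
H, W = 64, 64
noisy = np.random.rand(H, W, 3)        # pass to denoise (RGB)
noisy_var = np.random.rand(H, W, 1)    # its per-pixel variance
normal = np.random.rand(H, W, 3)       # normal pass
normal_var = np.random.rand(H, W, 1)   # normal variance

# One-hot "pass embedding" flag telling the network which pass is being
# denoised (e.g. diffuse direct vs. glossy indirect). Eight classes is
# a made-up number for this sketch.
embedding = np.zeros((H, W, 8))
embedding[..., 2] = 1.0

# Stack everything along the channel axis into one input tensor.
net_input = np.concatenate(
    [noisy, noisy_var, normal, normal_var, embedding], axis=-1)
print(net_input.shape)  # (64, 64, 16)
```

Broadcasting the embedding to every pixel is wasteful memory-wise; a real network might instead inject the flag deeper in the architecture, but per-pixel concatenation keeps the sketch simple.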

(Photox) #153

Looks really good!

I think that you should also put a third image which is the current denoiser.


If you want, feel free to do it on your own:

  1. Use Blender’s Master Build
  2. Download and open the scene:
  3. Switch from “Branched Path Tracing” to “Path Tracing”
  4. Set “Render” under “Samples” to 16
  5. Use whatever denoising settings that works well
  6. Render

To get the same result every time, I had to jump to frame 10 and back to 11 before rendering.

As mentioned previously, I am not posting comparisons with Cycles’ denoiser due to a lack of experience and an obvious bias.

(lacilaci86) #155

did you compare results against nvidia denoiser?

(Photox) #156

To my knowledge there are no configs for the current denoiser, it’s a checkbox.
[edit:] Wrong! there are configs!

Although, because yours requires so many extra render passes to be enabled, it’s not a fair comparison. But it seems like such an obvious benchmark.

I’ll take a look, just eyeballing (in my head) I’d say the current denoiser would produce a blurrier denoised image than yours.


I haven’t compared it with the OptiX denoiser yet.


In Cycles’ denoiser, you can configure “Radius”, “Feature Strength”, “Strength” and “Relative filter” as well as pick the direct and indirect passes (which you could optimize in the compositor).
That’s too much for me to feel confident about achieving a fair comparison.

(Photox) #159

Holy crap, I didn’t realize you could expand that panel! So there is. I was just checking or unchecking the checkbox.

(captainkirk) #160

In your denoiser are there multiple settings or is all of that adjusted automatically?