DeepDenoiser for Cycles


#93

Subsurface scattering is also missing in the link I posted; that’s what I wanted to point out. Since the passes are all added together, the order does not matter.
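As a quick illustration of why the order is irrelevant, here is a minimal numpy sketch of the additive recombination. The pass names and shapes are just placeholders; the actual set of passes the denoiser uses may differ:

```python
import numpy as np

# Hypothetical pass buffers of shape (height, width, 3).
h, w = 256, 256
passes = {
    "diffuse": np.zeros((h, w, 3)),
    "glossy": np.zeros((h, w, 3)),
    "transmission": np.zeros((h, w, 3)),
    "subsurface": np.zeros((h, w, 3)),
    "emission": np.zeros((h, w, 3)),
}

# The combined image is just the sum of all contributions,
# so the order in which they are added is irrelevant.
combined = sum(passes.values())
```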


#94

Finally, I have some visual results to share. That’s what the DeepDenoiser can do right now:


It is definitely not perfect, but a good starting point. For training the neural network, I haven’t used all the data yet, as I only wanted to find out whether it works at all.


(RaphaelBarros) #95

Wooow, that’s awesome! Thanks for developing this.


(0o00o0oo) #96

YES! I’ve seen some truly remarkable denoising from neural networks; I’m glad you’re trying to do it for Blender renders. Your result is definitely a great starting point!


(drgci) #97

Amazing work, keep it up!


#98

After some updates and more training, this is how it looks now:


The actual scene here was rendered at 64 spp, but the training so far only uses 16 and 32 spp examples, so maybe there is still more potential.


(ManuelGrad) #99

oh wow … amazing progress!


#100

Thanks a lot!


(ChameleonScales) #101

Impressive indeed.
Have you compared the render times with the current denoiser at a similar final result?


#102

I literally posted it as soon as I got the results, which is why I haven’t made controlled comparisons yet. It is also very unlikely that I am going to make direct comparisons, as that should be done by artists and not by an obviously biased programmer.
What I can say regarding times is that the DeepDenoiser took about 1-2 minutes of actual denoising. On top of that, TensorFlow needed a little more than one minute to prepare the computation graph, and there was a ridiculous overhead of about 30 minutes due to some really slow Python routines that prepare the data. In the final C-based implementation, the last two steps are no longer needed, and I expect something like 2-3 minutes for the denoising on my computer. That would certainly only be a starting point for further optimizations.
I am currently trying to bring the Python version up to speed, because it is a lot easier to experiment with. If that succeeds, I will make this version available for everyone to experiment with. But as a warning, it would only be an ugly command-line tool!


(-HENDRIX-) #103

Holy cow! Seriously great use of neural networks! For speeding up Python implementations, vectorization with numpy is a good first step if you haven’t already done so. JIT compiling with numba gives insane performance boosts in some cases, too. I wrote a sinc resampler a while ago; its JIT-compiled numba version is 30x faster than raw Python. Well worth looking into, and a very rewarding effort-to-effect ratio (almost a one-liner)!
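To show how little code it takes, here is a toy example of the pattern (not my resampler, just an illustrative hot loop):

```python
import numpy as np
from numba import njit

@njit(cache=True)
def box_blur(img, radius):
    # Naive 2D box blur over a single-channel float image.
    # In pure Python these nested loops are very slow; with numba's
    # JIT they compile to machine code on the first call.
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            acc = 0.0
            n = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy, xx]
                        n += 1
            out[y, x] = acc / n
    return out

img = np.random.rand(512, 512)
blurred = box_blur(img, 2)  # first call compiles, later calls are fast
```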


#104

Thanks for the suggestions! The implementation uses numpy extensively wherever TensorFlow routines can’t be used. That’s definitely a must for machine learning. And the TensorFlow code itself is already JIT compiled.
The slow part I mentioned is unfortunately within TensorFlow’s data processing, where it flattens the data to be usable by the GPU. At least, that’s my current understanding of it. I have a few ideas for how this might be avoided. :crossed_fingers:
This issue is very high on my TODO list for obvious reasons. But I believe it should be solvable. I wanted to be honest about the current timing and report the actual numbers.
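One common way to hide that kind of preprocessing cost is TensorFlow’s tf.data input pipeline, which prepares examples in parallel while the GPU is busy. A minimal sketch of the pattern, where the stand-in data and the prepare_tile step are hypothetical and not the DeepDenoiser’s actual pipeline:

```python
import numpy as np
import tensorflow as tf

# Stand-in data: 100 example "tiles" of stacked render passes.
noisy_tiles = np.random.rand(100, 64, 64, 9).astype(np.float32)

def prepare_tile(tile):
    # Placeholder for the slow per-example preprocessing
    # (normalization, flattening the passes for the GPU, ...).
    return tf.reshape(tile, [-1])

dataset = (
    tf.data.Dataset.from_tensor_slices(noisy_tiles)
    .map(prepare_tile, num_parallel_calls=4)  # preprocess in parallel threads
    .batch(16)
    .prefetch(2)  # keep batches ready ahead of the GPU
)
```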


(Cheinu) #105

This is actually nice. I hope you keep this version as well as the final denoiser, since this one makes the render look like an oil painting.


#106

That’s from a very outdated version. Scroll a few posts up to see how the newer version looks :slight_smile:


(noki paike) #107

I think we’re seeing a small revolution of the paradigm here … maybe it’s just the beginning of something much bigger that has yet to arrive, and not only in denoising :mage:


(Ace Dragon) #108

Is this going to scale up? Say I wanted each image pass to use more than 64 samples per pixel in order to obtain more clarity in the details; can I do that?

I ask this because a common pitfall in denoising algorithms is that they don’t scale up well to a large amount of samples.


#109

Deep learning in particular gives developers a tool to solve problems that were previously very hard or even impossible to tackle. You can call that a small revolution. I see it more as an extension of our toolbag.
The potential for Blender is huge in my opinion. There are many possible helpers for texturing workflows, animations and rendering which have been demonstrated to work in research papers. Besides those, there are also “obvious” solutions which are likely not interesting for researchers, but could be huge time savers in practice.


#110

According to the authors of the paper, it scales very well. I haven’t reached a point yet where it makes sense to analyze that in detail.
http://drz.disneyresearch.com/~jnovak/publications/KPAL/index.html
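For anyone curious what kernel prediction means there: instead of outputting colors directly, the network outputs a filter kernel per pixel, which is then applied to the noisy input. A rough numpy sketch of that reconstruction step, with illustrative sizes rather than the paper’s exact configuration:

```python
import numpy as np

def apply_predicted_kernels(noisy, kernels):
    """Apply per-pixel kernels predicted by a network.

    noisy:   (H, W, 3) noisy radiance
    kernels: (H, W, k*k) raw network outputs, one kernel per pixel
    """
    h, w, _ = noisy.shape
    k = int(np.sqrt(kernels.shape[-1]))
    r = k // 2
    # Softmax-normalize each kernel so its weights sum to one.
    e = np.exp(kernels - kernels.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)
    padded = np.pad(noisy, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.zeros_like(noisy)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + k, x:x + k].reshape(k * k, 3)
            out[y, x] = weights[y, x] @ patch  # weighted average of the neighborhood
    return out

# Toy usage with random stand-in data:
noisy = np.random.rand(64, 64, 3).astype(np.float32)
kernels = np.random.randn(64, 64, 21 * 21).astype(np.float32)
denoised = apply_predicted_kernels(noisy, kernels)
```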


(noki paike) #111

Totally agree.

I’ll hazard a comparison that is probably not very appropriate:
these new approaches are a “hack” that catapults us to a “speed of results” like something out of quantum computing


(Cheinu) #112

Yeah, I’m saying you should keep the outdated version as well, because it has a nice effect that makes the render look like an oil painting.