Experimental 2.77 Cycles Denoising build

BTW Google released the Nik photo filters for free: https://www.google.com/nikcollection/

It includes a denoise plug-in as well.

MarcoG - nice rendering - can you share the file? I would love to give it a test drive.

OK… found the bug!
Properties > Render > Dimensions > Border - MUST BE UNCHECKED!!!
Otherwise it crashes; see the problems above (post #36) :wink:

Test results @128 samples

LWR default

I wonder if it would go nicely together with Adaptive sampling :wink:

First image: 128 samples (so a very noisy render); second image: the same, but with denoise. It can’t do magic, but it certainly improves the render.


I love you

Hey. Dunno if this is a good place to put this, but what I did was download the Blender sources through git, then download your source from git, and then copy the files from your source over the ones in the Blender main tree. I built all the dependencies and everything.

About 8% of the way through compiling, I get a bunch of warnings being treated as errors, namely: “conversion to ‘float’ from ‘double’ may alter its value.”

What do I do?

Part of the algorithm is actually the generation of a noise map that can be reused for weighted sampling, if I recall correctly.

Well, I just tried it out. I can’t show test results yet, but it’s very… similar to Cinema 4D’s system.

What does LWR stand for?

Local Weighted Regression. (Just check the paper linked in the first post :wink: .)
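For the curious, here’s a toy 1-D sketch of the general idea (my own illustration, not Lukas’ actual implementation): for every sample, fit a weighted linear model over a small neighborhood, with weights that fall off with distance, and use the fitted value at the center as the denoised result:

```python
import math
import random

def lwr_smooth(signal, radius=5, bandwidth=2.0):
    """Toy local weighted regression: for each sample, fit a
    weighted linear model over a window around it and evaluate
    the model at the center. Weights use a Gaussian kernel."""
    n = len(signal)
    out = []
    for i in range(n):
        sw = sx = sy = sxx = sxy = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            x = j - i                       # offset from the center
            w = math.exp(-0.5 * (x / bandwidth) ** 2)
            sw += w; sx += w * x; sy += w * signal[j]
            sxx += w * x * x; sxy += w * x * signal[j]
        denom = sw * sxx - sx * sx
        if abs(denom) < 1e-12:              # degenerate window: weighted mean
            out.append(sy / sw)
        else:
            slope = (sw * sxy - sx * sy) / denom
            intercept = (sy - slope * sx) / sw
            out.append(intercept)           # model evaluated at x = 0
    return out

# Usage: smooth a noisy linear ramp
random.seed(1)
noisy = [0.1 * i + random.gauss(0.0, 0.2) for i in range(50)]
smooth = lwr_smooth(noisy)
```

The real filter of course works on 2-D pixel neighborhoods with additional per-pixel data from the renderer; this sketch only shows the weighted-regression core.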

Sorry for the silence, time to answer everything :wink:

First of all, thanks for all the testing so far! Great to see that people like it!

Regarding the individual pass denoising: It’s really broken in the current version; I’ve already found and fixed the bug.

Regarding OSX builds: The buildbot seems to have a configuration problem with CUDA, so no OSX builds yet, sorry…

Regarding compiling errors: Apparently I overlooked a few implicit float/double/int conversions that make some compilers throw an error - which is weird, since both GCC and MSVC were happy with the code here. But yep, it will of course be fixed.
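To illustrate what “may alter its value” means (the usual fix in the C++ sources is an f-suffixed literal or an explicit cast, e.g. `float x = 0.1f;` instead of `float x = 0.1;`), here’s a small Python sketch that round-trips a 64-bit double through a 32-bit float:

```python
import struct

def to_float32(x):
    """Round-trip a Python float (a 64-bit double) through an
    IEEE 754 32-bit float, mimicking the implicit double -> float
    narrowing the compiler is warning about."""
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_float32(0.5) == 0.5)     # True: 0.5 is exactly representable
print(to_float32(0.1) == 0.1)     # False: the value was altered
print(abs(to_float32(0.1) - 0.1)) # the precision lost in the narrowing
```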

Regarding black spots: I couldn’t reproduce black spots in this file on Linux, time to boot up Windows again :smiley: I have seen these appear in other scenes, though - I’ll try to track it down.

Regarding progressive refine: Yes, the filter will be applied after every sample - I didn’t really think of that. It could be fixed, but to be honest, why would you use progressive refine anyway? It’s slower, especially with the prepass option in this build…

Regarding access to the unfiltered buffer: Yes, that’s possible - I actually had that implemented for a while, but removed it (I don’t remember why, though). I’ll add it for the next version.

Regarding crashes with border rendering: I changed some stuff in that area, it’s possible that I missed something (although border rendering works here). I’ll check again.

Regarding adaptive sampling: Yes, can be added, the filtering code produces an estimate of the error anyways (that’s how it chooses the amount of smoothing internally, it estimates the error for a few settings and chooses the best one). That could indeed be used for adaptive stopping - but supporting that from the rendering code itself is a lot of work, so I’ll most likely not add it to this demo - for the GSoC proposal, though, it’s certainly interesting.
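As a toy illustration of adaptive stopping (a hypothetical sketch, not the Cycles code): track a running mean and variance per pixel, and stop sampling once the estimated error of the mean drops below a threshold:

```python
import random

def adaptive_sample(sample_fn, max_samples=1024, min_samples=16,
                    target_error=0.01):
    """Toy adaptive stopping: draw samples, update mean and
    variance online (Welford's algorithm), and stop once the
    standard error of the mean falls below target_error."""
    mean = m2 = 0.0
    n = 0
    while n < max_samples:
        x = sample_fn()
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        if n >= min_samples:
            variance = m2 / (n - 1)
            std_error = (variance / n) ** 0.5
            if std_error < target_error:
                break
    return mean, n

# Usage: a low-noise "pixel" converges with far fewer samples
# than a high-noise one
random.seed(2)
mean_a, n_a = adaptive_sample(lambda: random.gauss(0.5, 0.05))
mean_b, n_b = adaptive_sample(lambda: random.gauss(0.5, 0.3))
```

In a renderer the error estimate would come from the filter itself (as described above) rather than from raw sample variance, but the stopping logic is the same shape.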

Oh, and good news: I tested a bit with YAFU’s material file (thanks, by the way) and found the issue: the code was mixing up fine detail in the normal pass with noise in the normal pass (which doesn’t even really exist). That’s why it filters this material (and the upper middle one) so strongly, and it’s also one of the reasons why it might lose detail in hair/fur or on other normal maps. So I just removed the whole normal variance estimation (same for the texture pass, for similar reasons) - it turns out this reduces the memory overhead and produces way better results in some cases. So, again, thanks for the hint :wink:

The new version might take a bit, I’m a bit busy IRL the next ~3 days - but stay tuned :smiley:

This looks great. But is there a reason it’s not simply a compositor node? That could be more flexible. But perhaps it has to be integrated more deeply into the renderer than that would allow?

That would also make handling the passes simpler, because each pass can already be handled as required by the compositor, and then robust, tuneable LWR nodes could denoise any or all passes as needed.

From what I have gathered, this may be one of those denoising algorithms that can’t be done as a compositing node because it uses data that is only available during the rendering phase. Then there’s the fact that it would kill off any chance of adding that adaptive sampling component.

On the results, they look very encouraging so far in terms of just how much speedup there is in the convergence rate (and this is just a first patch).

Does this work with the Cycles preview rendering in the 3D viewport as well?

This algorithm (like nearly all modern denoising algorithms researched for Monte Carlo renderers) requires information that is only available during rendering. Check Lukas’ GSoC proposal page for the cons of doing this in compositing.

Nice to hear that you’ve been able to solve this problem. By the way, the materials that I used in the file are materials from the community, collected by Meta-Androcto.
Really excited about your project and waiting for news and a Linux build :slight_smile:
Thank you.

It could indeed be handled in the compositor, but that would increase memory requirements a lot, along with needing some larger changes to the render pass code, which is a bit much for a single GSoC project.
However, these changes are planned for the future - as is at least considering moving denoising to the compositor. As a first step, though, the Cycles implementation should be good enough - and large parts of it can be reused if it eventually ends up in the compositor.

I believe I didn’t link the proposal yet - you can find it here. The part I mentioned above isn’t in there yet, but will be added soon.

Regarding viewport: No, the filtering is just too slow for that. There are alternative, faster (but less accurate) denoisers that could be used for the Viewport in the future, but LWR isn’t viewport-friendly at all.

Top: 100 samples, no denoising.
Bottom: 100 samples, LWR with the first parameter set to 10.

Hope your great work can be merged into an official Blender release soon :evilgrin:!!!


Hi Lukas! Looks great from the tests! Good luck with the GSoC! Is it possible to get both the unfiltered ‘raw’ render and the filtered result in one go? Or do we need to render twice to compare?

That would be super cool!

@enjaren, cekuhnen

It is already on the agenda :slight_smile: