[Cycles] Amazing denoiser that dramatically reduces noise and retains edge sharpness

Hello there.
I’m a Python programmer and I made some tests as a proof of concept of a high-quality noise reducer for Cycles. I’m attaching a simple test, but the theory is what matters. Here is what the program does:

-It separates the individual faces into images that will be used as a kind of “mask” for the denoiser.
-It discards small faces; at the moment it just leaves them noisy, but in the future it may apply the denoise over whole objects (for high-poly objects). At the moment it also doesn’t take smoothed faces into account; they should generate a single “mask”.
-Then it applies a 10px blur to the “masked” image and inserts the original image over it (so you get a 10px blurred outline with roughly the same colours as the noisy image; this is to make sure the denoiser won’t generate artifacts at the mask borders).
-Then it applies the denoise (in this case a closed-source commercial one, but an open-source one could be used too) to the individual faces, so the noise gets blurred but the edges stay perfectly sharp (or almost) after applying the respective masks and merging them into a single image. (The masks don’t have AA, though.)
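The steps above can be sketched roughly like this (a minimal NumPy-only sketch, not the actual script: a box blur stands in for the external denoiser, and filling the outside of each mask with the face’s mean colour stands in for the 10px blurred outline of step 3):

```python
import numpy as np

def box_blur(img, radius=2):
    """Tiny separable box blur, standing in for the external denoiser.
    np.roll wraps around the borders, which is fine for a sketch."""
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for shift in range(-radius, radius + 1):
            acc += np.roll(out, shift, axis=axis)
        out = acc / (2 * radius + 1)
    return out

def denoise_per_face(img, face_ids, min_pixels=16, radius=2):
    """Denoise each face region separately so edges between faces stay sharp.
    face_ids is a per-pixel map saying which face each pixel belongs to.
    Faces smaller than min_pixels are left noisy, as in the original test."""
    out = img.astype(float).copy()
    for fid in np.unique(face_ids):
        mask = face_ids == fid
        if mask.sum() < min_pixels:
            continue  # too small: leave it noisy for now
        # Fill outside the mask with the face's mean colour before blurring,
        # a rough analogue of the blurred same-coloured outline in step 3,
        # so the filter doesn't drag in colours from neighbouring faces.
        filled = np.where(mask, img, img[mask].mean())
        out[mask] = box_blur(filled, radius)[mask]
    return out
```

Because each face is filtered against its own fill colour and only the masked pixels are written back, noise inside a face is smoothed while the boundary between faces stays a hard edge.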

Of course, when there’s a fine texture, this alone wouldn’t work. To solve that problem, the GI should be rendered separately, denoised, and multiplied by a “flat” image with the textures and stuff. The denoiser should only be applied to the GI image. It’s not going to be perfect, but if the theory is correct then it’s worth losing some accuracy in exchange for less noise.
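That recombination idea is simple enough to sketch (hypothetical function names; an extreme “flatten to the mean” filter stands in for the real denoiser just to make the point):

```python
import numpy as np

def mean_denoise(gi):
    """Extreme stand-in denoiser: flatten the GI pass to its mean value."""
    return np.full_like(gi, gi.mean())

def composite_gi(albedo, noisy_gi, denoise=mean_denoise):
    """Filter only the indirect-light (GI) pass, then multiply the flat
    texture pass back in, so fine texture detail is never blurred."""
    return albedo * denoise(noisy_gi)
```

However hard the GI pass is filtered, the texture contrast in the albedo survives pixel-sharp, because the filter never touches it.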

Alternatively, there could also be a way to adjust the denoising value manually for individual objects.

Here are my tests. Please note that I couldn’t apply a high denoising value because that would blur the direct illumination too much. Ideally this shouldn’t happen, because the filter should be applied not to the final image but only to the secondary GI bounces. AND this is an extreme case of a very noisy image.

Fine-grain noise could also be applied over the entire image (it tricks the eye).

Next I’m gonna render a Cornell box.

Left: Original output, Right: Denoised output (yes, the input is the image on the left :D)




PS: Sorry for the JPG format, but the PNG was too big and the JPG is very high quality.

There’s some weird stuff going on at the edges of the finished image.

I hope you’re not talking about the lack of AA, because that should be disregarded for now; it isn’t using AA yet.
Besides that, yes, there are some minor artifacts, but when you can render a completely noiseless (or almost) image in 1/500 of the time (3 min vs 24 hours) that Cycles alone takes to clean up the noise, I’m sure you won’t be noticing that :).

There is pixel noise (I mean visible cracks between nearby pixels that in reality are a smooth, almost constant-colour surface), but as soon as global illumination is used there are too many other kinds of noise that cannot be addressed by any fancy 2D filter: shadow penumbra quality, second-, third- and higher-order colour bleed, colour dynamic range (which can only degrade with any filter), moiré-like artifacts from crisp textures at bad angles. The image just doesn’t have the extra data; you can’t pull that data from nowhere, or even from nearby pixels. I suspect a “smart blur”, maybe with some form of median filter, could do the same, and that already exists.

Actually no, AFAIK it doesn’t already exist, because this method uses geometry data.
As I said, this denoiser was intended only for secondary bounces, so an image with textures and all the stuff is rendered without GI, and then the denoised GI could be applied as a multiplier. Of course the result is going to be biased, but pixel-perfect secondary-bounce lighting isn’t needed in 99% of cases, I think.
Maybe it wasn’t a good idea…

If the clamp level is set to 0 you’re going to get a lot of noise. Don’t go past what the eye can’t see, 1.3 or so.

Hey Repgahroll, I think what you did looks pretty sick! I mean, I know people have pointed out that yes, there are various other methods like median blur, but it’s always good to have a new way to selectively smooth out Cycles noise. A real test would be a few seconds of video, perhaps with a lighting change (sunlight cycled overhead?). What commercial denoising technique did you use?

Great results so far. I would love to see further developments!

Thank you guys. However, I was able to achieve a similar result using only a complex compositing node setup and the Normal, Z, Direct, Indirect and Col passes separately. The edges aren’t perfect either, but it’s somewhat good IMHO.
Here are the images; notice how the texture is perfectly sharp (because the lighting is composited separately from the colour map and then multiplied).




A good denoise node and a better map to use as the bilateral blur guide (at the moment I’m using Z mixed with Normal (Fac: 0.9)) would create a better result, I think.
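That guided-blur idea can be sketched as a cross-bilateral (joint) filter: spatial Gaussian weights, attenuated wherever a guide image (for example a Z/normal mix) differs, so the filter never averages across geometry edges. A minimal NumPy sketch, not Blender’s actual Bilateral Blur node:

```python
import numpy as np

def cross_bilateral(img, guide, radius=3, sigma_s=2.0, sigma_g=0.1):
    """Cross-bilateral blur on a 2D image: spatial Gaussian weights are
    multiplied by a guide-similarity term, so pixels on the other side of
    a guide discontinuity (a geometry edge) get near-zero weight."""
    h, w = img.shape
    out = np.zeros((h, w))
    wsum = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            gshift = np.roll(np.roll(guide, dy, axis=0), dx, axis=1)
            w_spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            w_guide = np.exp(-((guide - gshift) ** 2) / (2 * sigma_g ** 2))
            weight = w_spatial * w_guide
            out += weight * shifted
            wsum += weight
    return out / wsum
```

With a clean guide, noise inside flat regions is averaged away while the step across the edge stays intact, which is exactly why a Z+normal guide beats a plain blur here.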

PS: When compositing, how can I save the image without rendering another one? (The File Output node seems to save only after rendering.)

F3 from the Image Editor does that; it saves the currently displayed image. Yeah, it’s completely not obvious =\

About your results… What the hell, I just love it!
Are you planning an option for region denoising?

Thank you 1D_Inc.
Yeah, I forgot to link the composited output; that’s why it wasn’t working :spin:.
I made another extreme (please notice the EXTREME) test with a very noisy image. Here are the results using only Blender:




What’s really useful is that you can filter the passes separately, and also, by using the normals data to retain the edges, it’s possible to achieve really impressive results in a few seconds.
You know, the problem with all unbiased renderers is that they take time to clean up secondary+ bounces, so you can just blur those over the geometry, and as long as you don’t rely too much on indirect illumination the scene ends up looking very good.
In this case the scene was rendered for ONLY 2 SECONDS on the GPU. It ended up very noisy, but the compositing reduced the noise to a point that, IMHO, is acceptable on the Suzanne.
Obviously, you should ideally render a little longer and apply a more subtle filter, thus retaining more precision and detail.
Next time I’m gonna try real-life stuff.
Thanks.

What happens if the surface is textured? Could only the AO/light/shadow layer be smoothed then… how does it work exactly?

I also noticed from the pictures that the problem occurs at the edges. I’m not sure how hard it would be to extract the bordering edges of the masks (faces) you already use and apply some kind of vector blur along those edges. Just an idea?

But considering what this does, those are just small suggestions, not criticism. I’m very interested in this script since my computer is somewhat slow, lol. I think the idea is great, and I had no clue it could be done, especially in Python. Great job!

If you plan on releasing this, I would like to get my hands on this.

Again, great job man!

Hehe you answered my question while I was typing it :smiley:

Thank you Duy3. Please note that only the first image was filtered using my “script”; the others are done entirely in Blender, using compositing nodes and filtering the passes separately.
If the surface is textured, the texture doesn’t get filtered, only the illumination that is applied “over” it (multiplied by the Color pass), so it should just work. I’m not sure, though, how this would work with displacement/bump maps.
The edges are a separate problem. An edge detector on the normal pass could be a good way to retrieve a “mask” of the more pronounced edges, but I don’t think Blender has a node for that, nor a way to apply a “mask” over an image.
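For illustration, detecting edges from a normal pass is cheap: wherever adjacent pixels’ normals disagree by more than a threshold, mark an edge. A minimal NumPy sketch (hypothetical helper, not an existing Blender node):

```python
import numpy as np

def normal_edge_mask(normals, threshold=0.5):
    """Boolean edge mask from a normal pass of shape (h, w, 3): the
    absolute forward differences of the normal vectors are summed over
    the x, y, z channels, and strong disagreements are flagged as edges."""
    gy = np.abs(np.diff(normals, axis=0, append=normals[-1:]))
    gx = np.abs(np.diff(normals, axis=1, append=normals[:, -1:]))
    magnitude = (gx + gy).sum(axis=-1)  # total change across both axes
    return magnitude > threshold
```

The resulting mask is exactly the kind of “more pronounced edges” map mentioned above: it lights up only where the geometry actually turns, never on flat noisy areas.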
At the moment I’m playing only with Blender. Maybe there’s a way to add nodes with a plugin or something; that would be the best way to implement my method, but I’m not sure. The idea would be to add a “denoise” node with two inputs: one is the pass (e.g. the indirect lighting pass) and the other is the “faces map”, which is a slightly modified normal pass. It would work more or less like the bilateral blur.
Thanks.

I like the result… can you share your script + blend so we can test it?
And how fast is it with 1 million faces?

Thank you TS1234.
The script is a mess at the moment, and it makes use of two proprietary binaries plus ImageMagick. It’s nothing special really; it just does what is described in the first post.
With 1 million faces it’s as fast as it gets, because it just discards faces that are too small in the image, but of course if the resolution is too high then it’s surely going to be slow.
I’m playing with compositing nodes, and it seems that a simple “bilateral denoiser”, something like a bilateral blur that applies a denoising algorithm instead of a blur, would do the job, even using the ordinary normal map + Z map.

the others are completely done only in Blender using compositing nodes and filtering (in Blender) the passes separately

…wait… what?!?

Would you please shed some light on how you filtered the noise in Blender? (Obviously I’m missing something big.)

This reminds me a lot of Kai Kostack’s Noise Remover node group. I can’t find it on the internet anymore, but I have redone his script. He uses normals as well, together with bilateral blur… I added a firefly remover. You can download it here; image here.

@Duy3: At the moment I’m simplifying the thing a little bit because there are lots of nodes that aren’t necessary. It’s nothing really special; it just filters the render passes separately using bilateral blur and recomposes them.
@bashi: Cool man! Yeah, it’s somewhat similar to my nodes, but that composition doesn’t preserve the details as much.

Here is another test, a 15-sec render plus some few seconds to composite (2 or 3).



I think repgahroll’s filter is MUCH better…
I didn’t get good results with the bilateral filter; it’s only useful for previews, and when I scale the image down by 50%.