Big Denoise test - Intel Open Image Denoise vs others!

The Noisy Image pass is meant to provide a despeckled image, so if there’s a lighting effect with extremely low convergence, then the despeckler will interpret the brighter pixels as ‘fireflies’ and remove them.

Its purpose with Lukas's denoiser is to avoid the issue where you would get bright squares all over the image. Resolving this usually means rendering with more samples.


Thanks for the clarifications. :+1:

I also rendered that out. Here is the result:


I did not know that. Thank you.
Here is the test with a scene with many fireflies:

Image output:

Noisy Image output:


Stefan Werner fixed the problem of excessive RAM usage. At 1920x1080, Blender uses about 3 GB on my system with LordOdin's nodes, with a bit of a time penalty compared to the previous version. The fix will be available in new buildbot builds tonight (or maybe earlier for some platforms).


This is so awesome.
Works perfectly.

Sorry for the naive question, but I can't find any specific answer. Is this denoiser only compatible with Intel CPUs? Are there any tests with AMD CPUs?

It works on both Intel and AMD CPUs. You can try it by downloading builds from the buildbot.


Thanks for the info. By the way, I downloaded your BMW scene but after rendering is done I get a white image. Am I doing something wrong?

No, that should not happen.
Make sure you are using the latest builds downloaded from here:

If it still does not work with the latest build on your computer, it should be reported. What is your OS?

@birdnamnam, sorry, I tried the wrong Blender version. Could you try again with the latest builds from the buildbot?


Everything worked ok. I’m literally speechless…

OK, my results in @YAFU’s test are:
CPU: 5960X @ 4.2GHz,
RAM: 32gb (4x8gb) Corsair Vengeance 2400/C14
GPU: Asus GTX 1070 Strix OC 8gb
OS: W10 Pro, x64

CPU+GPU: 8.22 sec
CPU only: 9.70 sec

I also tried the same scene with 200 samples at 100% FHD. It finished in 1 min and the quality was tremendous. These are great days for Blender artists, for sure! I'd like to try one of my old heavy scenes with a fraction of the samples and compare the quality. If I do, I'll post here.


The latest buildbot build is showing some promise, but there is still some way to go. LordOdin's tree is much improved regarding memory, but it still has some issues:

  1. Out of the box, it destroys volumetrics
  2. Hair particles don’t benefit

I can overcome 1. with minor modifications to the nodes. I haven’t found a way past 2. yet.

This scene has both volumetrics and hair, and the volumetrics are much better. Hair is still dead. Having said that, I'm working from 16 samples, so it's still not too bad.

Raw render

Modified LordOdin

And my modified node tree:

Apologies if you saw this in the 5 minutes between post and edit. I’d made an error. Now fixed. The mushy mess in front of the fire is a rug made with hair. Dunno how to fix that except more samples.


Hi Metin, did you get this resolved? I have downloaded OIDN for Windows and Threading Building Blocks, but I haven't managed to figure out where to place what for a successful build!

Final result: 64 spp rendered at 5568x3132 resolution. 9 min 40 sec, about 3 min of that for OIDN.
Then I ran it through AI Gigapixel, upscaled 2x, then downscaled to 3840x2160.

Here is the original noisy image, or actually not so noisy, even at just 64 spp.

Here is the node setup.

Maximum RAM usage was about 18-20 GB while running OIDN, with 50% CPU usage.
Full global illumination was on, with no light clipping, filter glossy 0.5, caustics on, and an 8K HDRI.

By the way, I just finished making this record player from scratch; the amazing wood material is procedural, from Timber Lite. Timber - A Realistic Procedural PBR Wood Material (Download links included)


Did you install the 2.81 master build from this page? Then use the Denoise node in the compositor on your rendered image, as shown in several node setups in this discussion.

But maybe I don’t understand your question correctly.

Great to see Timber out in the wild! I hope you’re enjoying it!


The best OIDN results can be reached by giving it the correct albedo and normal inputs. Here is the relevant content from the OIDN documentation.

There are a lot of Blender node graphs around, but only one delivers part of the inputs the denoiser really needs to preserve maximum detail (see the graph below for albedo values, which must always be between 0 and 1).

Using auxiliary feature images like albedo and normal helps preserving fine details and textures in the image thus can significantly improve denoising quality. These images should typically contain feature values for the first hit (i.e. the surface which is directly visible) per pixel. This works well for most surfaces but does not provide any benefits for reflections and objects visible through transparent surfaces (compared to just using the color as input). However, in certain cases this issue can be fixed by storing feature values for a subsequent hit (i.e. the reflection and/or refraction) instead of the first hit. For example, it usually works well to follow perfect specular ( delta ) paths and store features for the first diffuse or glossy surface hit instead (e.g. for perfect specular dielectrics and mirrors). This can greatly improve the quality of reflections and transmission. We will describe this approach in more detail in the following subsections.
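The "store features for the first non-delta hit" idea above can be sketched in a few lines of Python. The path-vertex representation here is hypothetical, not any real renderer's API:

```python
def feature_vertex(path):
    """Walk a camera path and return the vertex whose albedo/normal
    should be written to the feature images: perfect specular (delta)
    hits are skipped, and the first diffuse or glossy hit is used
    instead. Falls back to the last vertex on an all-delta path."""
    for vertex in path:
        if not vertex["is_delta"]:
            return vertex
    return path[-1]

# A ray that passes through glass (delta) before hitting a wall:
path = [{"is_delta": True, "name": "glass"},
        {"is_delta": False, "name": "wall"}]
print(feature_vertex(path)["name"])  # prints "wall"
```

This is why, in the example below, the wall behind the glass contributes the albedo and normal rather than the glass itself.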

The auxiliary feature images should be as noise-free as possible. It is not a strict requirement but too much noise in the feature images may cause residual noise in the output. Also, all feature images should use the same pixel reconstruction filter as the color image. Using a properly anti-aliased color image but aliased albedo or normal images will likely introduce artifacts around edges.

Example noisy color image rendered using unidirectional path tracing (512 spp). Scene by Evermotion.

Example output image denoised using color and auxiliary feature images (albedo and normal).


The albedo image is the feature image that usually provides the biggest quality improvement. It should contain the approximate color of the surfaces, independent of illumination and viewing angle.

For simple matte surfaces this means using the diffuse color/texture as the albedo. For other, more complex surfaces it is not always obvious what is the best way to compute the albedo, but the denoising filter is flexible to a certain extent and works well with differently computed albedos. Thus it is not necessary to compute the strict, exact albedo values, but they must always be between 0 and 1.
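The 0-to-1 requirement can be enforced with a simple clamp before the albedo buffer is handed to the denoiser. A minimal NumPy sketch (the buffer values are made up):

```python
import numpy as np

def clamp_albedo(albedo):
    """Clamp an albedo feature image into the [0, 1] range the
    denoiser expects; out-of-range HDR values would hurt quality."""
    return np.clip(albedo, 0.0, 1.0)

# Hypothetical albedo pixels, including out-of-range values:
pixels = np.array([-0.2, 0.5, 1.7])
print(clamp_albedo(pixels))  # values now all lie in [0, 1]
```

In Blender this is what a Map Range or RGB Curves node on the albedo pass would accomplish in the compositor.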

Example albedo image obtained using the first hit. Note that the albedos of all transparent surfaces are 1.

Example albedo image obtained using the first diffuse or glossy (non-delta) hit. Note that the albedos of perfect specular (delta) transparent surfaces are computed as the Fresnel blend of the reflected and transmitted albedos.

For metallic surfaces the albedo should be either the reflectivity at normal incidence (e.g. from the artist friendly metallic Fresnel model) or the average reflectivity; or if these are constant (not textured) or unknown, the albedo can be simply 1 as well.

The albedo for dielectric surfaces (e.g. glass) should be either 1 or, if the surface is perfect specular (i.e. has a delta BSDF), the Fresnel blend of the reflected and transmitted albedos (as previously discussed). The latter usually works better but only if it does not introduce too much additional noise due to random sampling. Thus we recommend to split the path into a reflected and a transmitted path at the first hit, and perhaps fall back to an albedo of 1 for subsequent dielectric hits, to avoid noise. The reflected albedo in itself can be used for mirror-like surfaces as well.
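The Fresnel blend described above can be sketched like this, using the Schlick approximation; the IOR and albedo values are illustrative, not from the thread:

```python
def schlick_fresnel(cos_theta, ior):
    """Schlick approximation of dielectric Fresnel reflectance.
    f0 is the reflectivity at normal incidence."""
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def dielectric_albedo(cos_theta, ior, reflected, transmitted):
    """Blend the reflected and transmitted albedos by the Fresnel
    reflectance, as suggested for perfect specular dielectrics."""
    f = schlick_fresnel(cos_theta, ior)
    return tuple(f * r + (1.0 - f) * t
                 for r, t in zip(reflected, transmitted))

# Glass (IOR 1.5) seen head-on: mostly the transmitted albedo.
print(dielectric_albedo(1.0, 1.5, (1.0, 1.0, 1.0), (0.9, 0.9, 0.9)))
```

At grazing angles (`cos_theta` near 0) the reflected albedo dominates, matching the doc's note that the reflected albedo alone works for mirror-like surfaces.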

The albedo for layered surfaces can be computed as the weighted sum of the albedos of the individual layers. Non-absorbing clear coat layers can be simply ignored (or the albedo of the perfect specular reflection can be used as well) but absorption should be taken into account.
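A weighted sum over layers, with a non-absorbing clear coat either ignored or given the albedo of its specular reflection, might look like this; the layer weights and colors are made up:

```python
def layered_albedo(layers):
    """Albedo of a layered surface as the weighted sum of the
    individual layers' albedos. `layers` is a list of
    (weight, (r, g, b)) pairs whose weights sum to 1."""
    return tuple(sum(w * rgb[i] for w, rgb in layers) for i in range(3))

# 70% diffuse wood base under a 30% glossy coat (hypothetical values):
base = (0.8, 0.5, 0.3)
coat = (1.0, 1.0, 1.0)
print(layered_albedo([(0.7, base), (0.3, coat)]))
```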

This is a first approach to delivering a better albedo in Blender:

but it only solves the
"albedo values must always be between 0 and 1" requirement.
The bold-marked parts (metallic and dielectric surfaces (glass), reflected albedo, and albedo for layered surfaces) must be handled separately to preserve maximum detail.


The normal image should contain the shading normals of the surfaces either in world-space or view-space. It is recommended to include normal maps to preserve as much detail as possible.

Just like any other input image, the normal image should be anti-aliased (i.e. by accumulating the normalized normals per pixel). The final accumulated normals do not have to be normalized but must be in a range symmetric about 0 (i.e. normals mapped to [0, 1] are not acceptable and must be remapped to e.g. [−1, 1]).
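The remapping requirement in the last sentence is a one-liner for a normal pass stored in [0, 1]. A NumPy sketch (the buffer contents are made up):

```python
import numpy as np

def remap_normals(normals):
    """Remap a normal image from the unsigned [0, 1] encoding to the
    signed [-1, 1] range the denoiser requires."""
    return normals * 2.0 - 1.0

encoded = np.array([0.0, 0.5, 1.0])
print(remap_normals(encoded))  # maps 0 -> -1, 0.5 -> 0, 1 -> 1
```

In the compositor, a Multiply (2.0) followed by a Subtract (1.0) on the normal pass does the same thing.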

Similar to the albedo, the normal can be stored for either the first or a subsequent hit (if the first hit has a perfect specular/delta BSDF).

Example normal image obtained using the first hit (the values are actually in [−1, 1] but were mapped to [0, 1] for illustration purposes).

Example normal image obtained using the first diffuse or glossy (non-delta) hit. Note that the normals of perfect specular (delta) transparent surfaces are computed as the Fresnel blend of the reflected and transmitted normals.


I would also like to report that the new denoiser cannot denoise fur or hair. It just turns everything into a mushy paste, no matter how many samples you use. Scenes that rely on grass, or characters with fur or hair, cannot be denoised with this. Blender's own denoiser will have to be used for those scenes, sadly.

Lol, no. The internal denoiser adds unnecessary details. That's a dessert, not white mud.