How to blend a denoised image with the noisy image?

Hi

I have read on the internet that there is a way to get back lost details by blending the denoised image with the noisy image.
Can someone show me how to do this?

Instead of denoising with the checkbox in the render settings, do it in the compositor. Then you can mix the noisy and denoised images together in whatever proportion you want with a Mix node.
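Under the hood, that mix is just a per-pixel linear interpolation. A minimal sketch of the math in plain NumPy (not Blender's actual compositor code, just an illustration of what the Mix node computes):

```python
import numpy as np

def blend(denoised, noisy, factor=0.2):
    """Linear per-pixel mix, like the compositor's Mix node:
    factor = 0 keeps the fully denoised image,
    factor = 1 keeps the fully noisy image."""
    return (1.0 - factor) * denoised + factor * noisy

# Toy 2x2 single-channel "images"
denoised = np.array([[0.5, 0.5], [0.5, 0.5]])
noisy = np.array([[0.4, 0.6], [0.7, 0.3]])

# Mix 20% of the noisy image back in to recover some texture detail
result = blend(denoised, noisy, factor=0.2)
```

In the compositor you would feed the noisy render and the Denoise node's output into the two image inputs of a Mix node and control the proportion with its factor slider.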

This isn’t magic: it will only slightly alter the look of your render and make it slightly crisper. It’s still important to get a good render in the first place.

Also, make sure you are denoising with OIDN and not with the OptiX denoiser. OIDN has better image quality; OptiX is faster and better suited for previews.




If you are looking to keep details through denoising, the double resolution trick works like nothing else.

1- Set your render resolution to 200%.

2- Reduce the sampling quality (noise threshold, max samples) until it takes the same amount of time to render as before. You can get away with this because the extra pixels increase the image quality.

3- Denoise and save the image at 200%. The extra pixels will allow the denoiser to better perceive and keep fine details.

4- Reduce the image back to its intended resolution in an image editing program like Photoshop. You will get a clearer image than if you had denoised it at the intended size.
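Step 4 amounts to averaging groups of pixels. A rough sketch of the 200% → 100% reduction using a simple box filter in NumPy (image editors typically use a higher quality filter like Lanczos, but the idea is the same):

```python
import numpy as np

def downscale_2x(img):
    """Average each 2x2 block of pixels into one output pixel.
    img: (H, W) array with even H and W."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A tiny 4x4 stand-in for a render denoised at 200%...
big = np.arange(16, dtype=float).reshape(4, 4)

# ...reduced back to its intended 2x2 resolution
small = downscale_2x(big)
```

Each output pixel combines four denoised samples, which is why the result looks cleaner than a render denoised directly at the target size.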




I should also mention the pixel filter, as that setting has an effect on how sharp a render looks. You will find it in the render settings’ “film” section.

The default of 1.5 means that pixels are slightly blurred (each pixel samples halfway across its neighbors). A value of 1 means that each pixel keeps exactly to itself: no blur. A value below 1 reduces the anti-aliasing, making the render increasingly pixelated.
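As a toy illustration of what the width means (this uses a simple tent falloff, not Cycles’ actual filter kernel), consider three samples: one at a pixel’s center and two landing 0.6 pixels away, inside the neighboring pixels:

```python
import numpy as np

def pixel_filter_weights(width, offsets):
    """Relative weight a sample contributes to a pixel, given the
    sample's distance from the pixel center (in pixels). Samples
    farther than width/2 from the center contribute nothing.
    Toy tent falloff -- not Cycles' actual kernel."""
    offsets = np.asarray(offsets, dtype=float)
    radius = width / 2.0
    w = np.maximum(0.0, 1.0 - np.abs(offsets) / radius)
    return w / w.sum()

# One sample at the pixel's center, two landing 0.6 px away,
# i.e. inside the neighboring pixels.
offsets = [-0.6, 0.0, 0.6]

sharp = pixel_filter_weights(1.0, offsets)  # width 1: neighbors get nothing
soft = pixel_filter_weights(1.5, offsets)   # width 1.5: neighbors contribute
```

With width 1, the samples inside the neighbors get zero weight; with width 1.5, they get a small share, which is the slight blur described above.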


Thank you for your answer.

As you mentioned, setting the resolution to 200%
works really well, but it still creates some artifacts.
The places where I get the most artifacts:
Materials: fabrics. Since the details are really tiny, most of the time I get artifacts; they are not everywhere, but they are there in some parts.
I haven’t tried it yet, but might working with UDIMs only on the fabrics help?
Roughness: most of the time when I have a material with roughness, I can clearly see artifacts in the reflected object.
And when the sun hits any material, even though you can still see the details and they are not washed out by the light, the denoiser creates artifacts.

Example: this is a real photo I took. Where the light hits, you can still see the details, but the denoiser creates artifacts.

The problem in this particular scene is the fact that it’s lit mostly by narrow beams of sun on the wall. This is a weakness of Cycles, being a unidirectional path tracer.

Cycles works by shooting light rays through the scene in reverse. This saves greatly on the number of rays needed to complete a render: if rays started from the light sources, most of them would be wasted outside the camera’s view. However, it also makes it harder to find light sources in some situations.

Each ray starts from the camera, one per pixel each sample. It gets shot into the scene until it hits a diffuse surface. At that point, the ray chooses a light source that Cycles thinks is likely to affect that surface and sends a shadow ray towards it to verify whether that light source really does affect that point, or whether it’s shadowed from that light. Then the ray keeps going for the second bounce, re-launching itself in a random direction and continuing until it hits a second surface, where it again checks whether it’s in shadow, and so on.

In a scene where you have a patch of sun on the wall, that patch pretty much acts as an area light, except that it’s contained entirely in the second bounce and Cycles doesn’t know it’s there. This means that only the rays that randomly happen to hit that patch of light will successfully contribute to the render, and the other ones get lost and leave the render noisy.
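A toy Monte Carlo sketch of why this hurts (my own simplified model, not Cycles code): suppose only 5% of random bounce directions hit the sun patch, the patch is 20x brighter than everything else, and everything else is black. The true average is 1.0, but at low sample counts each pixel either finds the patch or gets nothing:

```python
import random

def estimate_indirect_light(samples, patch_fraction=0.05, patch_brightness=20.0):
    """Estimate the light bounced from a wall where only a small,
    bright sun patch contributes.  Rays that miss the patch add
    nothing, so low sample counts give very noisy estimates."""
    total = 0.0
    for _ in range(samples):
        if random.random() < patch_fraction:  # this ray happened to hit the patch
            total += patch_brightness
    return total / samples

random.seed(1)
low = estimate_indirect_light(16)       # few samples: hit-or-miss estimate
high = estimate_indirect_light(100000)  # many samples: converges near 1.0
```

The low-sample estimate jumps in big steps between zero and several times the true value, which is exactly the blotchy "dark gaps" noise described below.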

There are a few solutions:

1- Brute force. You can eventually manage to render this kind of scene with higher sampling quality. The most important thing is to increase the “min samples”. In a scene where the noise has lots of dark gaps, you need enough min samples for those gaps to fill in with something before the noise threshold kicks in. I think something like this should do it.
[image: suggested sampling settings]

2- Path guiding. This feature is available only on the CPU for now, but it’s built to help Cycles find bright patches like this and could help this render. If you switch Cycles to CPU, you will see the option appear in the render settings (in the “sampling” section). Just activate it and render with the default settings. It’s going to take much longer to render (mostly because of the CPU), but you will get a cleaner result for the same samples. For now, I wouldn’t use this for an animation, but it can be useful for rendering a still image.

3- Cheat the light manually. Actually put some weak area lights on the walls where the patches of light are. This is less realistic and will brighten the scene a little, but it will clear some of the noise.

4- Bake the light to a texture for the wall. This can only be done if your lighting is final and will not change. This is more complicated, but really does help reduce the overall noise in an interior room if your scene allows it, even for objects that aren’t baked.

Here is an example file, with the same room baked and non-baked. You will see how much of a difference it makes, even with only one surface baked.
bake_example.blend (4.6 MB)

Here is a thread where I explain the process.


The point of UDIMs is to have different resolutions on different parts of the model. This is most useful for characters: you might need a closeup of the face or a hand, so those parts need a lot more resolution than the legs.

Usually, procedural materials work really well for fabrics, as they don’t have a resolution limiting the fine detail of the fabric threads. If not, at least using tiling image textures can help get the needed resolution.

If a semi-rough reflection appears glitchy and incomplete, that means you need more min samples in your render settings. Min samples are there to make sure the noise patterns appear completely before the noise threshold starts being used.

If the noise threshold activates before the more complex reflections are complete, it can misjudge the gaps in the noise as clean pixels and stop sampling them too soon. This leaves gaps in the noise that the denoiser cannot compensate for.


I have made a render with the denoiser off. I used 4000 samples,
with portals and path guiding on. The results are incredible, and they look
very realistic without much effort. From my perspective, when the denoiser
hits, it really makes the scene unrealistic, no matter if you have perfect lighting,
good textures and good modeling. But 4000 samples took 4 hours and still left
some noise in my scene. My approach is to let the denoiser do as little of the job as possible.
Even if there is a little bit of noise, that is not a problem.
My question is: apart from adding more samples, how do I minimize the denoiser’s effect?
How do I know when to set min samples to 100 or higher, or the noise threshold to 0.02 or higher?
Can I control the noise manually in the compositor? I think in this case that might be a better option;
as far as I know you can add a node to leave some noise in your scene instead of clearing it everywhere
and making it look unrealistic.

Have you tried different pre-filter options for the denoiser? If your scene can take it, the “none” option preserves the image best. The only problem it has is that it can leave a small amount of noise in reflective and refractive surfaces, but this shouldn’t be a problem if you use high samples.

I wouldn’t use the “fast” option, I have never found it to give a good result. “Accurate” removes the most noise, but can blur some surfaces a bit.

Path guiding is really slow at this point, being CPU-only, and is there mostly so you can test it and see what it can do. There are plans to eventually port it to the GPU, but only once it’s feature complete and can guide every type of ray. At the current time, there is a good chance that rendering for equal time on the GPU would give you a cleaner image, for the simple reason that you are going to get something like 5 times the samples in the same amount of time. Though that’s really scene dependent.

You will know you lack min samples if you see gaps in the noise, like this:

[image: render with gaps in the noise]
This mostly happens in interior scenes, especially if there are reflections of reflections. It means that Cycles didn’t have enough time to observe the noise before the threshold activated, and thought the empty pixels were noise-free (when actually, they were only empty because the noise didn’t even have time to appear).

That one is based on the average noise of the image. The noise threshold works by disabling further rendering on individual pixels once they are judged to be below a certain amount of noise. This makes the render faster by not wasting time on areas that are already clean.

Are the average surfaces of the render clean and noise-free at the end? If yes, the noise threshold is low enough (a low threshold means a low amount of noise tolerated, which means high quality). If the average surface stays noisy no matter how many max samples you use, then the quality is being limited by the threshold and it needs to be lowered. If the average surface becomes perfectly noise-free even before the render is finished, you can probably get away with a higher threshold and a faster render.

I like to use 0.02 as a starting value, because I have found it to give an amount of noise that denoises well, but you might want a higher value for a faster render, or a lower value for more quality.

If the average surfaces of the render are clean but a few difficult areas remain noisy, the “max samples” setting is what will help. Are the samples getting faster and faster toward the end of the render? That means lots of pixels are finished and are being deactivated; at that point, only the handful of noisiest parts of the image are still getting samples. The “max samples” setting lets you decide how long to let that late phase of the render go. Do you want to let those difficult little corners complete to perfection, or are they just holding the render hostage for little benefit? If you set max samples high enough, eventually every part of the image will be stopped by the threshold and will, in theory, have the same amount of noise everywhere.
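Putting min samples, the noise threshold, and max samples together, the per-pixel logic works roughly like this (my own simplified sketch of adaptive sampling, not actual Cycles source; the real noise estimate is more sophisticated than a plain standard error):

```python
import random

def sample_pixel(render_one_sample, min_samples=32, max_samples=1024,
                 noise_threshold=0.02):
    """Keep sampling a pixel until its noise estimate drops below the
    threshold -- but never before min_samples (so the noise pattern can
    fully appear first) and never past max_samples."""
    values = []
    for n in range(1, max_samples + 1):
        values.append(render_one_sample())
        if n < min_samples:
            continue  # too early to trust the noise estimate
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / n
        noise = (var / n) ** 0.5  # standard error of the running mean
        if noise < noise_threshold:
            break  # pixel judged clean enough: deactivate it
    return mean, n

random.seed(0)
# An easy surface: stops soon after min_samples
easy_mean, easy_n = sample_pixel(lambda: random.uniform(0.4, 0.6))
# A difficult, high-variance pixel: runs all the way to max_samples
hard_mean, hard_n = sample_pixel(lambda: random.choice([0.0, 20.0]))
```

The easy pixel deactivates early and saves time, while the difficult one keeps sampling until max samples cuts it off, which is the behavior described above.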

But if you don’t care so much about render time and want a render that’s guaranteed artifact-free, you can also disable the noise threshold (uncheck the checkbox next to it). This reduces the settings to a single “samples” value. Pixels will never get deactivated: you lose the time-saving effect, but you will also never get the problems that can come with it.

As far as I know, it’s the way I have shown. If you saw someone use a single node to do it, maybe it was actually a custom node group?

Or maybe what you saw was someone adding camera grain to the denoised image? That makes a render more realistic, as real cameras do have grain. But camera grain isn’t the same as render noise, it’s not the same pattern.


[quote=“etn249, post:7, topic:1554281”]
You will know you lack min samples if you see gaps in the noise, like this:

This mostly happens in interior scenes, especially if there are reflections of reflections. It means that Cycles didn’t have enough time to observe the noise before the threshold activated, and thought the empty pixels were noise-free (when actually, they were only empty because the noise didn’t even have time to appear).
[/quote]

So does this mean that if you wait until all these gaps are filled with noise, the denoiser will not create artifacts in that part?

Well, the denoiser could still give slightly blurry results if the noise isn’t cleared enough, but yes, having the noise filled in everywhere will save you from blatant artifacts like this.


It’s me again :smile:

After getting such great information from you, I made a test render:
4K resolution, 1000 samples, 100 minimum samples for more calculation, and a 0.02 noise threshold.
I cannot see any artifacts in my scene, and from a technical view I think this render looks quite realistic. The most important thing is that I put in a lamp with a fabric texture and it is all good; there are no artifacts on it.
What is your opinion about this?


What you have linked does seem pretty technically flawless in its rendering. I see no noise or artifacts anywhere either. This does look realistic; the main thing keeping this render from full realism now is just the room being a bit unnaturally empty. If you had an empty room like this in real life, it would manage to look fake even in real life.

Apply the render knowledge to a fully decorated room and you will have a truly realistic render.

Thank you for your feedback. The reason I didn’t put in any decorations was that I wanted to play with the sampling settings to test how they really work.
Fewer objects, faster renders. Now I will be working on my new scene to get the render that I want. A big thanks to you!


Surprisingly, this is less the case than you might think in Cycles. Raytracing is really good at dealing with large numbers of polygons and doesn’t get as much of a performance hit from it as rasterization (the technique Eevee uses).

The types of surfaces visible to the camera (glossy, glass, volume) and the layout of the lighting in the scene (many lights, complex mesh lights, sun coming through a narrow window) have a bigger impact on render time than the polygon count.




Also, now that I think of it, I should explain something. The reason the denoiser causes damage to a real photo is that it lacks the extra data it would have with a render.

[image: denoising data passes]

The denoiser uses the albedo and normal passes of the render to better identify and preserve details. Those are absent from a real photo, so it will always slightly damage a photo in a way it doesn’t damage a render. If there is a fine fabric pattern in a photo, it has no way to tell that the pattern isn’t noise in the absence of those passes.


I’d like to use this topic to say that your knowledge is absolutely stunning, and reading all your amazing posts across the whole of BA is always VERY instructive.
Thank you @etn249


Hi @etn249

I didn’t want to open a new topic, I just want to ask you a question.

I remembered that you told me (and I found your answer) that lots of geometry does not have much of an impact on a ray tracing engine. I am considering using a displacement map rather than a normal map, because it creates much more realism: true shadows, etc. I set the subdivisions to 5 for the render and leave them at 2 in the viewport. I made a test and it does not really slow down the render speed.
As I have told you, my main goal is realism over speed. I don’t say I can render anything all day, but rendering for 2 to 3 hours is not a problem.
Do you think it is worth doing this?


It’s true that using lots of subdivisions won’t slow down the render as much as you would expect, but you will eventually run out of memory and crash.

If you are going to use fine subdivision like this, you will want to learn about adaptive subdivision. It’s a somewhat hidden feature of Blender that automatically adjusts the subdivisions based on distance from the camera. It can even subdivide different parts of a single object by different amounts.

This is very useful for rendering high detail displacements. Close faces get subdivided more, far away faces where you wouldn’t see the detail anyway receive less effort. This allows for extreme subdivision at relatively low cost.

–

To activate it, you first need to set Cycles to experimental mode.
[image: enabling the experimental feature set]

Once that is done, go to your subdivision modifier. It will need to be last in the modifier list for the adaptive mode to be available.
[image: subdivision modifier with the adaptive option]

Before you switch to adaptive, first set the “render” levels to 0. Last I checked, there was still a problem / oversight in the UI where both the adaptive and regular subdivision get used together. You want the adaptive mode to do everything, for best control and performance.

After switching, you will see the settings change completely.
[image: dicing scale setting]

Adaptive subdivision doesn’t work with levels; it works with a “dicing scale” (there are still viewport levels, but those apply only in solid mode). The dicing scale represents how big each polygon will be in the render, in pixels. This means that the default of 1 will subdivide each face until every polygon is about 1 pixel wide in the render. If you used 2, you would get lower quality, as each polygon would be 2 pixels wide in the render. 1 is usually a good value for fine subdivision.
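In other words, the dicing scale sets a target polygon size in pixels. A back-of-the-envelope sketch of what that implies (my own approximation, not the actual dicing algorithm):

```python
import math

def subdivision_levels(face_size_px, dicing_scale=1.0):
    """Roughly how many subdivision levels a face needs (each level
    halves the edge length) for its polygons to end up about
    dicing_scale pixels wide on screen."""
    if face_size_px <= dicing_scale:
        return 0
    return math.ceil(math.log2(face_size_px / dicing_scale))

close_face = subdivision_levels(512, dicing_scale=1.0)  # near the camera
far_face = subdivision_levels(8, dicing_scale=1.0)      # far from the camera
coarse = subdivision_levels(512, dicing_scale=4.0)      # coarser dicing scale
```

A face filling 512 pixels near the camera gets many more levels than a distant face covering 8 pixels, which is the whole point of the adaptive mode.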

You will notice there is a different scale for the viewport. It has lower quality, to keep the viewport performance good.
[image: viewport dicing scale]

This viewport rate applies when you go into render preview. You can control the viewport scale in the main render settings, where a new “subdivision” section will have appeared.

[image: new subdivision section in the render settings]

Those main dicing rate settings act as multipliers for every adaptive modifier in the scene and can be used to globally change the quality of the effect. You will notice there is also an offscreen scale to control the quality outside the camera’s view (useful if you have objects visible in reflections or shadows) and a max subdivisions setting that caps the maximum amount of detail. Regular subdivision could never do 12 levels, but adaptive can: you can keep zooming closer and closer to a surface and it will keep getting subdivided finer.

Finally, the “dicing camera” allows you to pick a specific camera from which to apply the effect. If you don’t pick one, the active camera will be used instead.

–

When you use adaptive subdivision, you can use a displacement node in your material and it will automatically work with it (as long as you have set the displacement mode).
[image: material displacement mode setting]

If you use a dicing scale of 1, you can use “displacement only” and get fine grainy details fully from the displacement. A cheaper alternative that looks almost as good is to use “displacement and bump” and pair it with a coarser dicing scale of 3 or 4. The displacement will then take care of the bigger features and the bump will do the finest details.

–

Some extra thoughts on adaptive subdivision.

  • It can also be used simply for subdividing objects, even without displacement. If you have lots of subdivided objects in a scene, you can use it to improve performance. For this purpose, a dicing scale of 3 or 4 should be enough.

  • The quality of the effect is based on pixel size, so it depends on the render resolution. Just a warning that if you increase the render resolution, the amount of subdivision also increases to match. You might need to adjust the global multiplier if that’s a problem.


Thank you so much for your answer.

What do you think, is it worth using a displacement map? I did a test and it looks much better,
and the denoiser also recognises the details much better. As I found out, with a displacement map you don’t need to add more samples and play with the sampling settings to preserve the details from the denoiser. With a normal map, the denoiser somehow flattens the texture and makes it unrealistic. With a displacement map, even lower sampling works great.

And there are also real shadows. With a displacement map you can take the realism to the next level.


If you have a texture with deep details, like a stone wall, displacement is absolutely worth it, especially with adaptive subdivision. I would usually keep normal maps for realtime rendering or for very fine details (too fine to see the difference).

There is one flaw of displacement that I have to mention though: the model has to be built for it. It has to be fully subdividable, shaded smooth, and cannot have any sharp edges. Also, texture seams can be more problematic, as true displacement can turn them into ridges and cracks.

When the detail is real geometry, the denoiser’s albedo and normal passes are going to capture more detail, which helps denoising.

It’s still possible to denoise normal maps well, but you need either more samples or the double resolution trick (explained in my first post in this thread).
