# A thought about noise... I have an idea!

Look, Cycles has a fixed path-tracing noise pattern that can be set with a seed, and it will never change from frame to frame. Hmm… that makes me think: if that's the case, and we know the noise pattern precisely, can't we just mathematically cancel the noise down to zero? Because we know the pattern is deterministic.
Yes, I know that the rays that hit and come back introduce noise themselves, but couldn't that at least reduce it to a minimum?

Any ideas ?

Well, indeed, when you add noise to an image, if you know the noise pattern exactly you can cancel it out by applying the inverse operation…

So basically you start with a completely clean image, then introduce noise for whatever reason, and eventually you can cancel that out to retrieve the clean image.
But that's not exactly what happens in Cycles: at each sample we get some color information about the scene that is inherently noisy, then we average the samples to even out the noise.

In the first scenario we can cancel the noise because the final image is already there. In the second, the final image isn't formed yet: we don't know exactly what the real color of a pixel is, so we can't cancel the noise like in the first scenario. The noise isn't added to a perfectly clean image; it is in fact gradually removed at each sample.
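Here's a tiny NumPy sketch of that difference (purely illustrative numbers, not Cycles internals): in the first scenario the known noise is additive on top of a finished image, so subtracting it recovers the image perfectly; in the second the RNG only decides *where* each sample looks, and the result depends on scene content we don't know yet.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed: the "pattern" is reproducible

# Scenario 1: known noise ADDED to an already-clean image -> fully invertible.
clean = np.full((4, 4), 0.5)
noise = rng.normal(0.0, 0.1, size=clean.shape)  # we know this pattern exactly
noisy = clean + noise
recovered = noisy - noise                        # the inverse operation
assert np.allclose(recovered, clean)             # perfect recovery

# Scenario 2: each sample uses the RNG to pick *where* to look, and the
# result depends on the (unknown-to-us) scene. Knowing the pick sequence
# gives us nothing to subtract; only more samples reduce the error.
texture = np.array([0.1, 0.9, 0.9, 0.9])        # stand-in for scene content
true_pixel = texture.mean()                      # what infinite samples converge to
picks = rng.integers(0, texture.size, size=16)   # the known "noise pattern"
estimate = texture[picks].mean()                 # noisy estimate, improves with N
```

The key asymmetry: in scenario 1 the clean image exists before the noise; in scenario 2 the "clean image" is the limit of the averaging itself.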

If that's not clear enough, let's consider this image:

Say it's a texture mapped onto a plane that takes up one pixel on screen…
So what is the color of that pixel going to be?

Basically the texture will be sampled, and maybe the first sample picks blue, then the next sample picks a different color, say red, and these two colors are averaged. Until we throw enough samples at it to average the color correctly, that pixel is very likely to change color by a lot at each sample. And it's going to be exactly the same for the neighboring pixels… The catch here is that we don't have a clue what the final color is until enough samples are averaged, so it's impossible to cancel the noise…
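You can watch that happen with a quick sketch (hypothetical two-color texel, not real Cycles sampling): the running average swings wildly at first, then settles toward the true average, with nothing to "subtract" along the way.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# A one-pixel plane whose texture is half blue, half red:
# each sample randomly lands on one of the two colors (RGB values).
blue = np.array([0.0, 0.0, 1.0])
red = np.array([1.0, 0.0, 0.0])
colors = np.stack([blue, red])

n = 1024
samples = colors[rng.integers(0, 2, size=n)]
# Running average after 1, 2, 3, ... samples:
running_mean = np.cumsum(samples, axis=0) / np.arange(1, n + 1)[:, None]

# Early on, the estimate jumps between reddish and bluish;
# after 1024 samples it barely moves and sits near (0.5, 0, 0.5).
late_jump = np.abs(running_mean[-1] - running_mean[-2]).max()
final_error = np.abs(running_mean[-1] - np.array([0.5, 0.0, 0.5])).max()
```

The per-sample jump shrinks roughly like 1/N and the remaining error like 1/sqrt(N), which is exactly why doubling the sample count doesn't halve the noise.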

Sort of piggybacking on what @sozap said.

There are two questions I'd like to answer. One is "Why does a path tracer work the way it does?" and the other is "What's a random seed?"

Firstly, very simply, we see things because a ton of light rays emitted from something, either directly or indirectly (by bouncing off surfaces, passing through media, etc.), entered our eyes. This interaction can't really be simulated on a computer, because you'd never get a solution in a reasonable time: light can bounce around an effectively infinite number of times, and you have a practically infinite number of light rays to deal with.

Something has to give, so we introduce a maximum number of bounces and a maximum number of light rays. If you have a sufficiently complex scene, then it can be fairly easy for pixels that are right next to each other to have very different results (causing noise), because each pixel didn't agree on what it saw (one pixel might have rays that bounced off a blue wall, and the pixel beside it might not have). When you increase the number of rays via samples, each pixel becomes more and more similar to its neighbors, because they end up with similar patterns of ray paths thanks to the large number of rays per pixel. This is why a low sample count is very noisy (neighboring pixels don't agree on what they see) and a high sample count is less noisy (more neighboring pixels agree on seeing similar things).

So a path tracer is all about randomness and probability. We generally use random number generators, called pseudo-random or quasi-random number generators, that give number sequences that look random. If you take a die, start with a number face up, roll it, record the result, reset to the original number, and repeat while tracking the results, you'll see that no matter the initial state, the results will always differ. With a pRNG algorithm, if you start with a specific seed, then every time you "roll the dice" the sequence will be the same every single time. This makes the noise stable across frames, but it doesn't mean that the results of frame A tell you anything about the results of frame B.
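The seed behaviour is easy to demonstrate with Python's standard `random` module (a pRNG, much like the sequences a renderer draws from):

```python
import random

def roll_sequence(seed, n=5):
    """Roll a six-sided die n times, starting from a given seed."""
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(n)]

# Same seed -> the exact same "random" sequence, every single run.
# This is what makes the noise pattern stable from frame to frame.
stable_a = roll_sequence(123)
stable_b = roll_sequence(123)

# A different seed gives a different-looking sequence, and knowing
# seed 123's rolls tells you nothing useful about seed 124's.
other = roll_sequence(124)
```

A stable sequence of ray directions is not the same thing as a known additive pattern: the directions are repeatable, but what the rays *hit* still depends on the scene.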

However, if you’re okay with sacrificing some quality and correctness, you could use some information from previous frames to project them onto the next frame. Games do this all the time for some things, but you can get ghosting artifacts when it doesn’t project well enough.

Thanks guys, I understand, but there's still one thing. If you render, let's say, 128 samples, there is some noise, and if you keep the seed the same from frame to frame you can clearly see the seed noise standing still, like an overlay over the frames.
And I didn't see any tool to remove (at least) that noise pattern. I hope you know what I mean. Until the noise is resolved (high enough samples) it will stay as a static pattern (and you can never remove it completely, because you would need infinite samples; I also noticed that OIDN works better without the seed noise).

So what we would need is for Cycles to use the known seed noise to remove it (an inverse calculation) at the end of rendering the frame, THEN apply OIDN.

I successfully removed it in After Effects, but we could do it at render time. Yes, the sampling noise remains, but it's much cleaner without the seed noise.

Hmm, it's basically the same problem in animation as in a static image: we can't remove the noise because it's a byproduct of missing information. We need more samples to get a clean image.

What we can do, however, is temporal denoising, that is, using the surrounding frames as "extra samples" to reduce the noise.
But that's part of the denoising process, not the rendering process.
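To make the "extra samples" idea concrete, here's a deliberately naive sketch (hypothetical code, not Blender's actual temporal denoiser): for a static shot, averaging each frame with its neighbors reduces noise just like rendering more samples would. Real temporal denoisers first reproject pixels along motion vectors so that moving content lines up; without that step, anything that moves would ghost.

```python
import numpy as np

def temporal_average(frames, radius=1):
    """Naive temporal denoise: average each frame with its neighbors.

    Only valid where image content doesn't move between frames; real
    temporal denoisers reproject pixels using motion vectors first.
    """
    frames = np.asarray(frames, dtype=np.float64)
    out = np.empty_like(frames)
    n = len(frames)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out[i] = frames[lo:hi].mean(axis=0)  # neighbors act as extra samples
    return out

# Static scene: the same clean image with independent noise each frame.
rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)
noisy = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(5)]
denoised = temporal_average(noisy)
# Averaging 3 frames cuts the noise std by roughly sqrt(3).
```

This is also why it belongs to the denoising stage: it operates on finished frames, not on the sampling inside the renderer.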

Blender can do temporal denoising, but it's a bit complicated to set up: you need to export each frame as an EXR and process them from the command line.
You can also use the Neat Video plugin in After Effects to remove the remaining noise.

I think you're confusing render noise with something like a black-and-white noise texture. The actual "noise" is just the directions to shoot rays at every sample, and the noise emerges from every interaction of those rays. There's no direct way to separate the noise from the signal in the results.

Now, we are pattern-detecting creatures, and we're pretty good at inferring the signal and the noise ourselves when we look at an image, and that's basically what OIDN is doing. Ideally, allowing OIDN to consider adjacent frames when it comes up with its result would make it even better at guessing what the true result should be. But ultimately that's what we, and denoisers, are doing: guessing.
