Cycles noise reduction idea

Instead of animating the seed value, type “#frame” into the seed value box.
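For anyone who prefers setting this up from a script instead of typing into the field, the same trick can be added as a driver via the Python API. A minimal sketch, assuming it runs inside Blender's own Python console (the data path `cycles.seed` is the Cycles render seed):

```python
import bpy

# Add a driver on the Cycles seed and make it follow the frame number --
# the same thing typing "#frame" into the seed box does.
fcurve = bpy.context.scene.driver_add("cycles.seed")
fcurve.driver.expression = "frame"
```

This only works inside Blender, of course; outside Blender there is no `bpy` module.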

Here is my test of the animated noise. I apologize for the low resolution; I probably should have scaled up to 720p, but you can still see the result. I think it looks slightly better with the animated seed (probably more realistic, too). Keep in mind, I haven’t done any noise removal, and I didn’t enable No Caustics or set any clamp value when I rendered.

I think it’s definitely better with the animated seed, as we associate it more with grain than noise. However, the grain is not very uniform throughout the frame, since some materials and light bounces are inherently noisier, which breaks the film grain illusion a bit in my opinion. But for a quick render, this is a good trick.

This is the problem with noise removal tools like Neat Video: with non-uniform noise, the noise reduction will not perform as well. However, I still decided to purchase Neat Video, as it looks amazing and I work a lot with video anyway, so it seems like a good purchase. I’m going to get it tomorrow, but while I’m waiting, I decided to download the demo. Neat Video is amazing when it comes to removing noise from video. There are lots of examples on their website, but just to give you a better idea, here is one of my own clips:


You can see how well it removes the noise while still preserving detail. I then decided to try it on some of my Cycles renders to see how it compared, and whether or not it exhibited the problem I mentioned above.



And here is the previous render comparison, but this time using Neat Video.

It’s definitely not perfect, as there is still some blurring and some traces of noise, but to be fair, I went way overboard both in the low amount of samples I used and the high amount of noise reduction. Again, I still think the animated seed looks better, and it probably always will due to the nature of how natural noise looks. I also think Neat Video might actually work better with the animated seed. Overall, I’m quite impressed with Neat Video!

Interesting test. I think it’s a bit overboard on the blurring (reminds me of Dust & Scratches in After Effects), but as you said, this is a worst-case scenario. Maybe the optimum workflow is a combination of all these techniques.

I wonder if rendering multiple times really does reduce render time to achieve things like this.
On the other hand, noise removal software can work quite well on video (but that’s not directly the render engine).
If we can render with more noise and have a reasonable noise reduction method in a post-render process…
Well, then I wonder why it isn’t in Blender :slight_smile: ?

But there already is outstanding free open-source software to reduce video noise, with many free filters.
It’s called VirtualDub, and there are many filters for it.
Some reduce noise based on the current frame only, while others compare the frames before and after.

Well, we already do have the bilateral blur, which seems to work pretty well. What you’re talking about is spatial noise reduction vs. temporal noise reduction. Neat Video is the best noise reduction I have ever seen, and it seems to employ both. I recommend it to anyone doing animations with Cycles. They clearly need help, as their website looks like it was made in the late 90s. :smiley:

Just remember that with more complex scenes, rebuilding of the BVH will need to happen, as will loading of images (and calculation of MIS), and compositing work… it may be quicker just to render with double the number of samples rather than two separate passes.

That being said, we do it now and then when we have a deadline; getting something of everything out is better than having half of it out.

Very good point. If only there were some way to automate the process, so that extraneous tasks like BVH builds were cached on each frame and the image could be rendered multiple times with new seeds without all those extra steps. The only real benefit is if you have multiple computers: then you could render all the passes at the same time and really reduce your render time, almost like a render farm.
Also, I imagine if you performed noise reduction on each individual pass, you could get away with using less blur for each and thus make a clearer, albeit noisier, image. However, when combined with the other pass, the noise reduction might allow for higher detail retention, since it’s being combined with another image of a different seed.

Rendering it multiple times and overlaying them is nothing new. It’s a method called Image Stacking and it’s been used in photography and animation for a long time now.

It makes sense in photography and helps a lot in many tricky situations like astrophotography, but in CG it’s not as big a deal.

More often than not, it won’t give you faster render times because it has to load the scene and build BVH each time you render, whereas just rendering the total samples in one go only has to do those calculations once.

However, this technique can be very useful when you have several computers to render one frame, or would like to render your animation progressively. I wrote an addon to handle this progressive animation rendering - it’s a bit old and not entirely finished to the point where I’m happy with it, but it’s perfectly functional and I still use it quite often.
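To make the stacking argument concrete, here is a minimal, hypothetical sketch with NumPy (synthetic arrays stand in for the rendered passes): averaging N independent seed-varied renders divides the noise variance by N, so the noise standard deviation drops by sqrt(N).

```python
import numpy as np

# Hypothetical stand-in data: a "converged" image plus independent
# per-render noise, one array per seed-varied pass.
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 3))
renders = [truth + rng.normal(0.0, 0.1, truth.shape) for _ in range(4)]

# Image stacking: a plain per-pixel mean of the passes.
stacked = np.mean(renders, axis=0)

# With 4 passes, the residual noise should be roughly half that of a
# single pass (sqrt(4) = 2).
err_single = (renders[0] - truth).std()
err_stacked = (stacked - truth).std()
```

That sqrt(N) behaviour is also why stacking alone rarely beats simply rendering N times the samples in one go, as discussed above.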

I repeat: it only makes things worse, not better. You must continue the Sobol QMC sequence exactly from the last sample + 1, not mix it with another random initial sample. This “method” will only ruin the discrepancy property of QMC.

Doesn’t this only ‘work’ because he’s mixing the images in 8bits?

“He”? I’m still here, you know. :smiley:
And no, I tried in both a standard 8bit environment and a 32bit environment in both After Effects and Blender, and it has the same effect. In fact, for a final render, it seems there is a significant change in the lighting as a result of mixing two images in a 32bit environment. Not necessarily bad, but an important thing to note. I must stress, this was just an experiment and I hadn’t realized it was already a known and discussed technique when I thought of it. All I know is that it reduces the amount of perceivable noise in the image and has the benefit of being able to render separately. Though at this point I’ll most likely use an animated seed and Neat Video for my projects, as rendering the image twice does seem rather cumbersome.

One might render an animation at 50 fps, then use all kinds of filters on it in VirtualDub, and then reduce the frame rate to 25.

(I know Neat Video, and I think Noise Ninja is even better), but neither of them is free.

OR… HOWEVER… but…
Neat Video and Noise Ninja were not created to reduce the kind of noise we see in Blender.
For noise reduction, it is important to understand the type of noise you want to reduce.
They were made for noise reduction in photos, which most often have RGB noise; that is not like the seed noise in Blender.
In Blender, the noise seems to happen only in the saturation channel (and not in hue).
And there is some noise related to inexact positioning (blur).
Because of that, I think it’s even possible to write a custom noise reduction filter, which would be easier.
Something like: take the average colour of an area between frames (x) ± n, but only if the colour is not too far off (because of movement).

BTW, I can program something like that on Windows using C# (it can be done in Python too, but I’m not that good at it anymore).
If people want to go the Python way, there is the PIL image library; this shouldn’t be hard to program.
Be sure not to use RGB values, but base the averages on HSL colours; it will work better, because of the notes above.
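The filter described above can be sketched in a few lines of NumPy (the function name and the threshold are illustrative, not a finished tool): average frame x with its neighbours x ± n per pixel, but drop a neighbour’s contribution wherever its colour differs too much from the current frame, i.e. where something probably moved.

```python
import numpy as np

def temporal_denoise(frames, x, n=1, threshold=0.15):
    """Average frame x with frames x-n..x+n per pixel, rejecting
    neighbour pixels whose colour is too far off (motion).
    frames: list of float arrays in [0, 1] with shape (H, W, 3)."""
    current = frames[x]
    total = current.copy()
    count = np.ones(current.shape[:2])
    for offset in range(-n, n + 1):
        if offset == 0 or not (0 <= x + offset < len(frames)):
            continue
        neighbour = frames[x + offset]
        # Per-pixel colour distance to the current frame; keep only
        # neighbour pixels that stayed close to it.
        close = np.linalg.norm(neighbour - current, axis=-1) < threshold
        total += np.where(close[..., None], neighbour, 0.0)
        count += close
    return total / count[..., None]
```

This sketch works on RGB for brevity; per the note above, converting to HSL first and averaging there may suit Blender’s noise better.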

Python would probably be slower anyway (right?). I think everyone would appreciate it if you wrote this script :slight_smile: If some Linux/Mac folks are truly desperate to use it as well, it probably wouldn’t be too difficult for them to port it themselves (right?).

I know little of these things :slight_smile:

If you do indeed write this script, and it’s successful, it might inspire someone to implement it as a node in the compositor.

As storm says (and as another dev told me, and as I’ve personally experienced), computing all samples at once gives the best result and the best render times (your 6-second test is not useful as a benchmark).

Nevertheless, I found a good use for averaging noise manually, since Cycles so far can’t save “states” of the render (in other words, if a crash or a blackout occurs, you lose your render even if it was at 99%!).

Here’s the tip: set up a 10-frame animation with an animated seed. Use a tenth of the samples of the final quality you want, and render. At the end you get 10 stills to be averaged manually. It will be slightly noisier (aka less smooth), but on the other hand, if a crash/blackout occurs, you might have some stills already done. So you can still show something to your boss/customer, or you will only need to render the remaining frames to complete the 10-frame set.
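The nice property of this workflow is that any subset of the stills is already usable. A hypothetical NumPy sketch (synthetic arrays stand in for loading the rendered image files):

```python
import numpy as np

# Stand-in data: 10 seed-varied stills of the same frame, each rendered
# at a tenth of the target samples (modelled here as extra noise).
rng = np.random.default_rng(1)
truth = rng.random((32, 32, 3))
stills = [truth + rng.normal(0.0, 0.2, truth.shape) for _ in range(10)]

finished = stills[:7]                # say only 7 of 10 survived a crash
partial = np.mean(finished, axis=0)  # still presentable, just noisier
final = np.mean(stills, axis=0)      # the full 10-frame average
```

Averaging whatever frames did finish gives an image between a single noisy still and the full-quality result, which is exactly the crash-insurance described above.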