Cycles noise reduction idea

Allow me to preface this post by clarifying that I have zero experience in coding, let alone complex path tracing render engines, and I certainly don’t know how Cycles works, so I do apologize if the idea you’re about to read is extremely stupid. Now, to my main point:

As a user whose graphics card is not yet supported in Cycles (I use a 2011 iMac with an AMD card), I often try to think of ways to reduce the noise in my image without having to render for hours. I’ve tried noise reduction in After Effects, but it never produces a good result, as that method involves selectively blurring non-detailed areas. Ever since I learned about the Seed setting in the sampling menu, I’ve wondered: if a noise pattern can be generated, why can’t it also be used to smooth out a grainy image?

I had an idea that goes something like this. By rendering two images at the same number of samples but different seeds, overlaying the images and averaging their color values, you could potentially achieve a much less noisy image with faster render times. I decided to test it out and see if it could potentially work. I rendered a simple scene at 25 samples. I then changed the seed and rendered again. I brought both images into the compositor and combined them with a mix node at 50%. I also rendered a control image to compare with the composite. Here are my results:


As you can see, the combined render time of the first two images (13.62 seconds) is still slightly faster than the image with twice as many samples, and I’m not sure about this, but I think it looks slightly clearer.
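The 50% mix-node trick above is really just a per-pixel average of two independent renders. Here’s a minimal NumPy sketch of why it works, assuming (hypothetically) that each render is the true image plus independent per-pixel noise, with the different seeds making the noise independent between the two renders:

```python
import numpy as np

# Hypothetical stand-in for two Cycles renders: the "true" image plus
# independent per-pixel Monte Carlo noise (different seeds -> independent noise).
true_image = np.full((64, 64), 0.5)

rng_a = np.random.default_rng(seed=1)   # "seed A" render
rng_b = np.random.default_rng(seed=2)   # "seed B" render
render_a = true_image + rng_a.normal(0.0, 0.1, true_image.shape)
render_b = true_image + rng_b.normal(0.0, 0.1, true_image.shape)

# The 50% mix node in the compositor is just a per-pixel average.
mixed = 0.5 * (render_a + render_b)

noise_single = np.std(render_a - true_image)
noise_mixed = np.std(mixed - true_image)
print(noise_single, noise_mixed)  # mixed noise is roughly single / sqrt(2)
```

Averaging two independent noisy estimates cuts the noise standard deviation by about a factor of √2, which matches the slight visual improvement over a single 25-sample render.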

Let’s say you were rendering an animation with 250 frames. Using this method (assuming it were automated), you would only save one minute, but if there were a better way mathematically to average the pixel values, you might be able to get an even better image. I tried extracting a noise map using a color difference transfer mode, but wasn’t able to implement it in any useful way. Again, I know nothing about coding or rendering, but I just thought it was an interesting idea and I wanted to share it! Let me know what you think! I apologize for the rather incoherent and nonsensical post: it’s 2:49 in the morning and I have nothing else to do! :smiley:

interesting. I’m not a coder either but I like your creativity (and also have an amd) :slight_smile:

Please add the full images for A, B, and 50 samples for better comparison. I think there are better noise reduction tools than Adobe’s. I’ll check when I’m on a desktop machine.

By rendering two images at the same number of samples but different seeds, overlaying the images and averaging their color values, you could potentially achieve a much less noisy image with faster render times

If that’s true, it means Cycles is doing something wrong in its random-sequence scrambling. In fact, you will always lose important information with that trick, because you’re replacing one of the best-known high-quality quasi-random generators, the “Sobol sequence,” with a dumb average of “short” subsets that reuse exactly the same Sobol numbers (each render starts at 0 every time). From a theoretical point of view, you only lose “discrepancy” quality that way. If you’re really interested in this, try googling “quality of quasi-random number sequences”; it’s a very hot topic in science, since it’s needed to solve many problems faster all around the world, not only in rendering. But it is at the same time very complex and hard to improve.

If you look at it mathematically, doing two 50-sample pictures and mixing them together is, for the most part, the same as doing one 100-sample picture. The noise patterns will be a little bit different, but visually there’s not much noticeable difference. So people really only use this when a render is still too noisy and they can’t continue rendering, so they do a second render and mix the two together.
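The equivalence above can be shown directly: treating a pixel’s value as the mean of its light-path samples (a simplifying assumption), the 50/50 mix of two equal-size sample sets is exactly the mean of all samples combined:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.random(100)  # 100 hypothetical independent light-path samples for one pixel

pass_a = samples[:50].mean()   # 50-sample render, seed A
pass_b = samples[50:].mean()   # 50-sample render, seed B
mixed = 0.5 * (pass_a + pass_b)
full = samples.mean()          # one 100-sample render

print(mixed, full)  # identical (up to floating-point rounding)
```

The caveat from the earlier reply is that this only holds if the two sample sets are genuinely different; if both seeds replay the same Sobol points, you gain much less.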

Noise reduction is a tricky thing, as you can’t distinguish between noise and high-frequency detail from textures, for example. There is research being done, and there are some papers out there with algorithms that work amazingly well for single pictures, but as soon as you’re doing animations they don’t work anymore (they add time-based artifacts, flickering, etc.).

You can do some basic bilateral blur on surfaces in the compositor. I’ve seen this done in a tutorial, but I don’t have a link ready right now.

Thanks for the replies everyone! Very interesting (and confusing) stuff! :smiley:

acrocosm: I will post them shortly!

gexwing: Bilateral blur is good, but again it has the problem of blurring smooth areas while leaving the detailed areas still noisy, and losing high-frequency detail in blurred areas. What if Blender generated a noise map against a grey background, inverted it, and placed it on top of the render with the overlay mode? Do you suppose the darker/brighter pixels would cancel each other out?

There’s a noise removal plugin for after effects called Neat Video which works in a similar way. It creates a noise profile and uses it to cancel out the existing noise. Check out some of their examples! But because it’s designed for video, I don’t know if it would work with blender renders, as I’m sure the noise pattern is very different from noise produced by an image sensor.

This tutorial shows how to use Bilateral Blur with nodes:

I think he mentions that After Effects filter.

I don’t know much about Blender yet, but I have noticed that it is very difficult to set up the Bilateral Blur filter correctly in different scenarios, for example when it comes to determining the boundaries of each object or material.

acrocosm: Here you are! (i’m sorry, I don’t know how to make them any smaller)


25 samples A

25 samples B

50 samples

Thanks so much for that link to the tutorial! I’ll have to try that out, but I might as well get neat video. It looks awesome!

Why is it that Cycles doesn’t simply check whether a pixel is too different from its surrounding 8 pixels and, if it is, shoot a few more rays for it?

I would imagine because it would increase render times drastically. Assuming your image isn’t uniform in color and brightness values, Cycles would likely end up just shooting more rays at the entire thing, no differently than just using a very high number of samples. Furthermore, what about textures and surfaces with lots of contrasted details? This would surely render that function useless. I suppose you could use some sort of selective sampling based on various properties, but this is basically what the non-progressive integrator is (or is it progressive? I’m not too familiar with it). The goal is to have the best-looking renders with the fastest render times, and it seems the best way to do it at this point is with post processing. One good trick, of course, is that if your shot is moving, using the vector blur node to introduce motion blur will hide a lot of the noise, but again, it might not be what you want, and it only works for video.

But would it increase the render time more than rendering extra samples for the whole picture?

I guess it depends on what you’re rendering. But I imagine the main problem is that it would be difficult to implement. Again though, I don’t know about coding or any of that, so I may be wrong. I tried generating something of a noise profile in After Effects using the difference transfer mode between an (essentially) noise-free version of the render and the 25-sample image. This is my result.


I suppose this is representative of the varying levels of noise in the image. The brighter a pixel is, the more noise is present in that pixel, as it is being compared to a noise free image. Now obviously this isn’t a useful technique, as it requires a noise free version of the image, but it does show the different amounts of noise. The most noise occurs in the shadows and translucent Suzanne. If there were a way to tell blender to adjust only the pixels with the corresponding noise, you might be able to get a relatively clean image in much less time! Again though, at a certain point I suppose you just have to accept that render times will be long no matter what…
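The difference-mode trick described above amounts to a per-pixel absolute difference against a clean reference. A minimal sketch, assuming a hypothetical noise-free reference and a noisy low-sample render:

```python
import numpy as np

# Hypothetical: a near-noise-free reference and a noisy 25-sample render.
rng = np.random.default_rng(42)
clean = np.full((32, 32), 0.5)
noisy = clean + rng.normal(0.0, 0.05, clean.shape)

# "Difference" transfer mode is essentially a per-pixel absolute difference;
# brighter values in the map mean more noise at that pixel.
noise_map = np.abs(noisy - clean)
print(noise_map.mean())
```

As noted, this is only diagnostic: building the map requires the clean image you were trying to avoid rendering, so it shows *where* the noise is but can’t remove it on its own.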

Someone recently did something similar for an animation. He animated the seed value so that it was different for every frame. Although each frame was noisy, when played back at 30 fps it looked much better. If anyone remembers who this was or what thread it was, please mention it.

Steve S

One thing you can try if you have Photoshop is to render your animations to .png (as you should anyway) and then set up an “action”, which is a macro, in photoshop to reduce noise on all images in a given folder. It sounds complex, but it really is pretty easy. I’m not sure if Ps can give a better result than After Effects, but I suspect it can once you found the right settings. I use actions when I need to make changes on lots of images all at once quite often. This could probably also be done in Lightroom if you have that. Lightroom is wonderful for making changes to large groups of images all at once.

Someone recently did something similar for an animation. He animated the seed value so that it was different for every frame. Although each frame was noisy, when played back at 30 fps it looked much better. If anyone remembers who this was or what thread it was, please mention it.

That makes sense, as the existing details would be at least partially preserved underneath the noise, which itself would be very difficult for the human eye to perceive since it’s changing 30 (or 24) times per second. I should try that.

One thing you can try if you have Photoshop is to render your animations to .png (as you should anyway) and then set up an “action”, which is a macro, in photoshop to reduce noise on all images in a given folder. It sounds complex, but it really is pretty easy. I’m not sure if Ps can give a better result than After Effects, but I suspect it can once you found the right settings. I use actions when I need to make changes on lots of images all at once quite often. This could probably also be done in Lightroom if you have that. Lightroom is wonderful for making changes to large groups of images all at once.

I think After Effects would be better simply because it’s easier to animate values over time, which could prove useful in that scenario, not to mention that you have tools like neat video at your disposal.

I’ve done something similar in the past with photo stills. Because the noise pattern generated by the camera sensor is random, you can average different frames of the same scene and end up with a clean result. I was doing this while shooting a cityscape where I couldn’t use a single long exposure because of cars getting in the way, so I shot 10 frames at a higher ISO and a shorter shutter speed so I could shoot in between the cars. The averaged result was just as clean as ISO 100. I used Enfuse as a Lightroom plugin for this operation.

This page discusses the technique: http://wiki.panotools.org/Noise_reduction_with_enfuse

For only 2 frames, the result might be the same as blending them at 50%, but maybe 5 frames at 10 cycles would work better. This is all hypothetical, you should test it. And unfortunately, as far as I can tell, for animation you would need to process each frame individually, so with the added overhead you’re probably better off just leaving your computer render overnight.
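The frame-stacking idea generalizes the 50% mix: averaging N frames with independent noise reduces the noise standard deviation by roughly √N. A minimal NumPy sketch under that (hypothetical) independence assumption:

```python
import numpy as np

rng = np.random.default_rng(7)
true_image = np.full((64, 64), 0.5)

# Hypothetical stack of 10 short exposures with independent sensor noise.
frames = [true_image + rng.normal(0.0, 0.1, true_image.shape) for _ in range(10)]
stacked = np.mean(frames, axis=0)

noise_one = np.std(frames[0] - true_image)
noise_stack = np.std(stacked - true_image)
print(noise_one, noise_stack)  # stacked noise is roughly one / sqrt(10)
```

This is why 10 frames looked as clean as ISO 100: ten-fold averaging buys about a 3.2× noise reduction, the same diminishing-returns curve you’d get by just rendering more samples.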

Interesting idea, though.

I am trying a different approach for animation. As we speak I am rendering two short videos, one with an animated seed value, the other without. The idea is that the apparent noise will be more difficult to perceive when it is changing very quickly. I’ll post the results shortly.

It will probably look more like film grain which tends to feel more natural. I’m curious about the result, please post back!

Will do.
I loove the look of genuine celluloid film grain. So smooth and silky!
Makes me sad that more movies are being shot digitally these days…

The “right” way to do it is to get the highest quality possible on your source material, and only degrade stuff later in the process.

If you’re talking about movies, I’d say the right way to do it is to simply shoot on film. Probably unfeasible for a low budget production, as film can be harder to work with, but digital noise always looks fake to me.