Cycles: Simple way to eliminate fireflies and excessive noise

So I’ve been using Cycles for quite a few years now and have always wondered: why does a single frame always use a fixed noise pattern seed? In certain circumstances, many fireflies virtually never disappear when all samples in a frame use the same seed. If the seed is changed for every sample, or even every x samples, you can basically eliminate, or at least greatly reduce, fireflies.

A quick way to test this idea is to compare an image rendered with X samples against the same image rendered many times with different seeds, each render using proportionally fewer samples.

In this example, I rendered an image with 300 samples and compared it to 10 images of 30 samples each, each with a different noise seed, which gives you the same 300 samples in total. The images are combined using mean blending, so each noise pattern has equal influence. As you can see, the noise is more subdued and more even, and the fireflies are basically eliminated.
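For anyone who wants to try this, the per-seed renders can be scripted. A minimal sketch using Blender’s Python API (paths and values are placeholders; saving to OpenEXR keeps the intermediate images linear, which matters later):

```python
# Minimal sketch (paths and values are placeholders): render the same frame
# several times with a different Cycles seed each time, so the results can
# be mean-blended afterwards. OpenEXR keeps the intermediate images linear.
import bpy

scene = bpy.context.scene
scene.cycles.samples = 30                              # samples per pass
scene.render.image_settings.file_format = 'OPEN_EXR'   # linear, unclipped output

for i in range(10):                                    # 10 x 30 = 300 samples total
    scene.cycles.seed = i                              # different noise pattern per pass
    scene.render.filepath = f"//seed_{i:02d}.exr"
    bpy.ops.render.render(write_still=True)
```

The resulting EXRs can then be averaged in the compositor or with a few lines of numpy.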

I wonder, is there a particular reason why Cycles or other path tracers don’t already do this kind of thing internally? Seems like an obvious and very easy fix for an annoying rendering problem.

1 x 300 samples, fixed seed:

10 x 30 samples (300 samples total), 10 different seeds mean combined:

Belgian 3D artist and Blender user Olivier Vandecasteele did a similar test all the way back in 2013 and found that this approach made caustics actually usable in Blender; normally they produce far too many fireflies to be practical:

http://www.ngon-paradise.com/wiki/doku.php?id=3d:cycles:blender-cycles-noise-paper

Surely there must be a technical reason why renderers like Cycles don’t do this internally/automatically? Seems almost too easy and obvious, but maybe not?

W

I think you are clamping the intermediate images to 0…1 somewhere, by not saving to linear OpenEXR. So this would be similar to using the clamping settings in Cycles (but on 30 samples at a time instead of 1). Clamping causes bias and overly dark specular highlights, as you can see in the second image.

Right. I rendered out 10 PNGs with varying seed values and combined them with a mean filter, so each is clipped to 1.

Here’s the same image with clamp set to 1:


While that’s another way to remove the fireflies, it also sucks the life out of the image. The highlights and reflections are gone.

Compare this to the 10x30 sample + variable seed render, which still retains most of the energy and highlights:

I did notice the energy loss a bit. The elimination of the fireflies does result in energy loss and results in a slightly darker (but clean!) overall image. I also noticed that, if you try and render, say, 100 images with only 1 sample with varying seed, you get a very noticeable amount of energy loss. You need at least a handful of samples, enough to get close to the correct overall exposure.
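A toy numpy simulation (completely made-up numbers) illustrates why fewer samples per clipped image means more energy loss:

```python
# Toy simulation (made-up numbers): why clipping each intermediate image to
# 1.0 loses more energy the fewer samples each image has. The pixel is
# usually 0.3, but 1% of paths return 50.0 (a firefly path).
import numpy as np

rng = np.random.default_rng(0)

def expected_clipped_blend(samples_per_image, n_images=100_000):
    # Per-image mean, clipped to 1.0, then averaged across all images.
    s = np.where(rng.random((n_images, samples_per_image)) < 0.01, 50.0, 0.3)
    return np.minimum(s.mean(axis=1), 1.0).mean()

print("true pixel value: ", 0.99 * 0.3 + 0.01 * 50.0)    # 0.797
print("1-sample images:  ", expected_clipped_blend(1))   # ~0.31, heavy energy loss
print("30-sample images: ", expected_clipped_blend(30))  # ~0.48, closer but still biased
```

With more samples per image, the per-image average concentrates around the true value before the clip is applied, so less energy gets thrown away.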

Maybe it’s the kind of thing that varies too much depending on the scene and circumstances, but from a layman’s perspective it seems like the very slight hit in correctness is well worth it to get an actually usable image with an order of magnitude fewer samples. Is this sort of thing something other renderers do?

To be fair, this is the same image clamped to 20 direct, 5 indirect:

300 samples:


Gets you pretty much the same thing in this case. Hm. Maybe it really is just the same as clamping then? Maybe it’s not so clever after all.

If you factor in 10x the BVH building time and other initial prep, plus 10x the compositing/file output stage, plus the labor to average the images, you’re probably better off just doubling the samples and doing the whole render in one pass.

And/or use clamping, as you have shown.

What you are doing is almost the same as rendering a 3x larger image and then scaling it down. That’s something I know some studios have done, but mostly to reduce aliasing around e.g. very bright area lights. I’d like to improve the Cycles clamping to help with aliasing issues.

But I don’t know of any renderers that do this type of thing to reduce noise, I can’t think of a good reason for why it would be more effective than clamping individual samples.

On rendering ten different images and combining them: wouldn’t it make more sense to have Cycles render to 10 different buffers (each buffer getting a tenth of the total number of samples) and display a combined version every 10 passes? That way it could perhaps work for tiled rendering and progressive refine rendering. To reduce memory overhead, Cycles could perhaps render to just 4 buffers (that appears to work well enough for that Ngon Paradise guy).
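Roughly, the bookkeeping would look like this (plain numpy standing in for Cycles’ internal buffers; note the caveat in the comments, a plain mean of buffer means is identical to one big average, so separate buffers only help if each one is clamped or filtered before combining):

```python
# Rough sketch of the multi-buffer idea (numpy standing in for render buffers).
import numpy as np

k, h, w = 4, 64, 64
buffers = np.zeros((k, h, w, 3))
counts = np.zeros(k, dtype=int)

def add_sample_pass(i, sample_image):
    # sample_image: one pass of radiance estimates, shape (h, w, 3),
    # accumulated round-robin into the k buffers.
    buffers[i % k] += sample_image
    counts[i % k] += 1

def combined_view():
    # Mean of the per-buffer averages. With equal counts this is
    # mathematically identical to one big average over all samples, so the
    # separate buffers only change anything if each one is clamped or
    # filtered individually before combining.
    per_buffer = buffers / np.maximum(counts, 1)[:, None, None, None]
    return per_buffer.mean(axis=0)
```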

Another idea: have Cycles do odd/even sampling to two different buffers, then give a combined version (for comparison purposes) to the denoiser so it can detect edges and detail it would otherwise miss. Would that be practical?
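A sketch of what that odd/even split could look like; as far as I know, comparing two half-buffers is a known trick for estimating per-pixel noise, which is the kind of information a denoiser needs:

```python
# Sketch: odd/even passes go to two half-buffers; their difference gives a
# cheap per-pixel noise estimate that a denoiser could use to tell detail
# apart from noise.
import numpy as np

def halves_and_noise(sample_passes):
    # sample_passes: shape (n_passes, h, w, 3), individual pass results.
    a = sample_passes[0::2].mean(axis=0)      # even passes
    b = sample_passes[1::2].mean(axis=0)      # odd passes
    combined = 0.5 * (a + b)
    noise_estimate = 0.25 * (a - b) ** 2      # approximates variance of combined
    return combined, noise_estimate
```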

I can’t get a consistent result (with caustics). I tried 8x2000 samples, 1x16000 samples, and 1x16000 samples with denoising. The denoised version was definitely cleaner, but the lost detail was obvious. The 8x2000 was better than 1x16000, but honestly, not production quality.

It was a very simple scene, so here are the results. I have to say, I wouldn’t be happy with any.

8x2000 samples:


1x16000 samples:


1x16000 samples with denoising:


If you stick to the rule of energy conservation and don’t have any absurdly bright lights, you usually don’t have trouble with fireflies:

https://docs.blender.org/manual/en/dev/render/cycles/materials/introduction.html?highlight=energy%20conservation

You can still get fireflies this way as well. They are usually caused by caustics, by specular surfaces reflecting onto nearby diffuse surfaces, or by bright light sources reflecting off specular surfaces onto diffuse surfaces. It happens in nearly all forward path tracers; in some renderers you can change how lights affect specific objects to work around this. LuxRender has a biased path tracer that can resample and eliminate these fireflies.

The examples above remind me of image stacking in low-light photography to reduce noise: since each image has a different noise pattern, stacking eliminates the noise in favor of a cleaner image.

Well, that’s just a proof of concept. Obviously it doesn’t make a lot of sense to do it the way I did, because, like you say, there’s a lot of overhead involved. But the renderer could easily change the seed every x samples without needing a new BVH building step and file output. My post was meant as a question of why, if it really does make the image cleaner ‘for free’, renderers like Cycles don’t already do this internally.

Exactly. That’s the real-world equivalent. In low-light photography you actually also get fireflies sometimes, as well as lots of shadow noise. By combining many static photos, you bring down the noise floor and eliminate fireflies. Can’t really clamp indirect rays in the real world (or can you, by using polarization filters? :slight_smile: )

Hmm… I believe it’s about preserving light contribution in physically correct representations/simulations, especially if one does photometric analysis and reports.

Off the top of my head, I know Indigo Renderer & Thea Render support the feature.
From the Indigo manual:

Super sample factor

Super sampling helps to eliminate hard edges and fireflies in the render, at the cost of additional memory (RAM).

The amount of additional memory required to store the rendered image is proportional to the square of the super sample factor, i.e. for a factor of 2, 4x more memory is required, and for a factor of 3, 9x more memory is required. Note that this does not affect the size of the final image, and does not affect the rendering speed much (as long as the additional memory required is available).

From Indigo FAQ:

What is supersampling and why does it use so much ram?
When rendering an image, the image is rendered at the specified resolution multiplied by the supersample factor, then reduced for anti-aliasing purposes. The supersampling buffer is also in high dynamic range, so if you are rendering an image of 4000x2000 with a supersampling factor of 4, it will use 384 megabytes of RAM for this buffer.

It’s kind of the opposite: modifying the physical quantities to get a more visually pleasing image. Reducing aliasing (also known as hard edges) and clamping fireflies (throwing away light that the renderer failed to sample efficiently).

Renderers go through a lot of effort to not use random samples, but rather nicely distributed samples (QMC). Changing the seed every X samples will just make the render more noisy. Unless something is broken, any improvement from averaging images with multiple seeds comes from clamping, not the different seeds.
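A tiny numpy experiment illustrates the value of distributed samples (illustrative numbers only): estimating a simple integral with stratified samples versus plain random samples. Re-seeding mid-render effectively throws away this structure and falls back toward the random case:

```python
# Illustration: well-distributed samples beat freshly re-seeded random ones.
# We estimate the integral of x^3 over [0, 1] (true value 0.25) with plain
# random sampling vs. stratified sampling.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x ** 3
n, trials = 256, 2000

def random_estimate():
    return f(rng.random(n)).mean()

def stratified_estimate():
    # One jittered sample per 1/n-wide stratum; changing the seed mid-render
    # would discard exactly this kind of structure.
    x = (np.arange(n) + rng.random(n)) / n
    return f(x).mean()

err_random = np.std([random_estimate() for _ in range(trials)])
err_strat = np.std([stratified_estimate() for _ in range(trials)])
print(f"random error (std):     {err_random:.5f}")   # ~0.018
print(f"stratified error (std): {err_strat:.5f}")    # roughly 100x smaller
```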

I normally use G’MIC and the “Remove hot pixels” filter in GIMP.
The version with hot pixels removed is on the left.
This small JPG took about 1 second to run.
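I don’t know G’MIC’s exact algorithm, but a similar hot-pixel filter is easy to approximate: replace any pixel that stands out too far from its local median. A rough scipy sketch (the threshold is arbitrary):

```python
# Approximation of a "remove hot pixels" filter (not G'MIC's exact algorithm):
# replace any pixel much brighter than its 3x3 neighborhood median.
import numpy as np
from scipy.ndimage import median_filter

def remove_hot_pixels(img, threshold=2.0):
    # img: float array, shape (h, w) or (h, w, 3); channels filtered independently.
    size = (3, 3, 1) if img.ndim == 3 else 3
    med = median_filter(img, size=size)
    hot = img > med * threshold + 0.1    # arbitrary "stands out" test
    return np.where(hot, med, img)
```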



You confuse me :confused:… as you mentioned before:

… rendering a 3x larger image and then scaling it down. That’s something I know some studios have done, but mostly to reduce aliasing around e.g. very bright area lights.
Supersampling gathers results by rendering a larger image in the background and mixing it down into the smaller one. It introduces no technical bias, unlike clamping or limiting bounces. Are you claiming that the scaled-down image is worse (has more bias) than if it were rendered at the smaller size?
I performed various tests (no clamping used) and no bright pixels were clamped; they were merged/averaged to better represent the light distribution, since usually only single bright pixels appear. On the other hand, if clamping is activated, that same light contribution is simply cut off (glass, indirect bounces and all that jazz get darkened). It’s similar with AA: just downsampling the larger image preserves colors/values instead of ruining a thin red line, again for a more accurate representation.

Maybe it’s a bug; a pixel’s brightness should maybe be weighted by 1/samplecount * colorvalue:
so the first sample is red,
the second sample updates the value by 50% (slightly darker red),
the next update by 25% (a sudden bright white spike),
the next by 12.5% (back to normal red)… etc.
Perhaps not exactly 1/samplecount (or we’d never get bright white), but with a small correction factor.

Essentially it’s a statistical certainty that shouldn’t be overwhelmed by one sudden bright spot?
A post filter I’m currently working on might solve this too, but the project isn’t ready yet.

Though one might render a lot more samples to get those perfect caustic light reflections.
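For what it’s worth, weighting each new sample by 1/samplecount is exactly the standard running average that renderers already use when accumulating samples; the catch is that a firefly’s influence only decays as 1/n rather than being removed. A toy sketch (made-up sample values):

```python
# The standard running average a renderer uses while accumulating samples:
#   mean_n = mean_{n-1} + (sample - mean_{n-1}) / n
# which is the same as weighting every sample by 1/samplecount in the end.
samples = [0.3, 0.3, 1000.0, 0.3, 0.3, 0.3, 0.3, 0.3]   # one firefly spike

mean = 0.0
for n, s in enumerate(samples, start=1):
    mean += (s - mean) / n
    print(f"after {n} samples: {mean:.3f}")

# After all 8 samples the mean is ~125.3 even though 7 of them were 0.3;
# the spike's influence shrinks as 1/n but never vanishes. That is a firefly.
```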

If you use a smooth downsizing algorithm like cubic, then the supersampling method can yield great results by adding a sort of “anti-aliasing” to each and every sample (meaning a major reduction in fireflies and a noticeably smoother image). It would take longer for each pass to render onto the screen, but it would theoretically save time overall. (Note, this observation is based on actually rendering a super-sized image and then downsizing in a paint program, something I tried to do a few times a long time ago).
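Incidentally, the downsize step itself is easy to reproduce outside the renderer. A sketch using Pillow (filenames are placeholders; ideally this is done on linear HDR data before tonemapping):

```python
# Sketch: downscale a supersampled render with a smooth bicubic filter.
# Ideally done on linear HDR data rather than an 8-bit tonemapped image.
from PIL import Image

big = Image.open("render_3x.png")              # placeholder filename
small = big.resize((big.width // 3, big.height // 3),
                   resample=Image.Resampling.BICUBIC)
small.save("render_final.png")
```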

Here’s the downside: supersampling can be very memory intensive. You have the overhead of storing much larger buffers to start with, and then in Cycles’ case you would have skyrocketing RAM use whenever you decide to use adaptive dicing. With that drawback in mind, I would argue that supersampling would probably not be worth having. Also, William’s idea of switching the seed every N samples appears to be different from what that paper showed (which would suggest Cycles rendering to maybe 4 different buffers with 4 seeds and combining them into a result displayed in the render window).