Noise removal for Cycles (this paper has code to build a good implementation)

Hi Guys,

I hope someone can point this paper in the Blender Cycles devs' direction. It looks very, very good for noise removal from QMC path tracing.

Link:
http://agl.unm.edu/rpf/index.php

The PDF is the tech report that includes code.

Moved from “Latest News” to “Blender and CG Discussions”

How does it work with papers like these, with copyright and Blender being open source? Is everyone free to use innovations like these in their render engines?

EDIT: Never mind, I found it myself.

The ideas presented in this paper are available for commercial licensing through the UNM technology transfer office STC.UNM.

Looks like it won't make its way to Cycles then :frowning: Darn, if this could have been integrated into Cycles, my life would have been changed.

Brecht is certainly aware of these (and other) methods of noise removal, so at some point something like this might get implemented.

My suspicion with Random Parameter Filtering (and certain other sample-space techniques), however, is that it scales badly with the number of samples. It does an arguably impressive job of recovering the image from only 8 samples, but the quality still leaves something to be desired, while reconstruction takes longer than the rendering itself. If their algorithm scaled well, they'd have shown pictures that rival the quality of ground truth at a significantly lower render time.

Quote:

We begin by presenting the pseudocode for our entire algorithm, which is divided into four parts. Algorithm 1 shows the overall algorithm and explains how each of the individual pieces fit together. Algorithm 2 explains how we preprocess the samples by clustering them and removing their mean and standard deviation (Sec. 4). Algorithm 3 shows how we statistically compute the feature weights for our cross-bilateral filter using the statistical dependency of features on the random variables (Sec. 5). Finally, Algorithm 4 shows how to filter the samples (Sec. 6). The sections that follow go into sufficient detail to make it possible for interested readers to implement the RPF algorithm. The notation we shall use in this technical report is similar to that of the main paper and is listed in Table I.

In this technical report, we have described the details necessary to implement our random parameter filtering algorithm, which removes the noise in Monte Carlo rendering by estimating the functional dependency between sample values and the random parameters used to compute them. This implementation is only one exploration of the proposed technique, and we are hopeful that others will build on this work and improve the quality of the results.
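
In code terms, the heart of Algorithm 4 is a cross-bilateral filter. To make the quote concrete, here's a minimal, runnable per-pixel sketch of that idea. To be clear, this is my own simplification, not the paper's code: RPF filters individual samples and learns the filter widths from the dependency on the random parameters, whereas this works on whole pixels with fixed sigmas:

```python
import numpy as np

def cross_bilateral_filter(color, feature, radius=3,
                           sigma_color=0.2, sigma_feature=0.1):
    # color:   (H, W, 3) noisy rendered image
    # feature: (H, W, C) auxiliary buffer (normals, world positions, ...)
    #          which is far less noisy than the color
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            nc = color[y0:y1, x0:x1]    # neighborhood colors
            nf = feature[y0:y1, x0:x1]  # neighborhood features
            # Squared distances to the center pixel in color and feature
            # space; the feature term is what keeps edges (e.g. normal
            # discontinuities) sharp while noise gets averaged away.
            dc = np.sum((nc - color[y, x]) ** 2, axis=-1)
            df = np.sum((nf - feature[y, x]) ** 2, axis=-1)
            wgt = np.exp(-dc / (2 * sigma_color ** 2)
                         - df / (2 * sigma_feature ** 2))
            out[y, x] = np.sum(wgt[..., None] * nc, axis=(0, 1)) / np.sum(wgt)
    return out
```

RPF's Algorithms 2 and 3 are essentially about choosing those weights per pixel instead of using fixed sigmas, which is where most of the cost goes.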

If it can't be used directly (someone needs to contact these people and find out one way or another), the system is fully broken down, and as the guy said, there are similar ways to do this that would get around the copyright issue, I believe.

I assumed that the quality would be better the more of the image is rendered. Since Cycles can get 99% of the image rendered, just affected by noise, I thought the outcome would be more accurate. Could you explain to me why that wouldn't be the case? Cheers

This paper has been out for years and has been discussed several times on this forum. Anyone who is able to write a modern renderer would be aware of these papers, issues, and techniques. Kinda like sending a Facebook meme of a “medical breakthrough” to your doctor…

lol, oi oi, watch the attitude lol. If it had been talked about before, I didn't catch it :slight_smile:

I will interpret this sentence as follows:
“Can I not just leave Cycles rendering as long as I want and then have Random Parameter Filtering clear up the remaining noise?”

No, you cannot. By rendering longer (i.e. more and more samples), you provide larger and larger input to the Random Parameter Filtering algorithm. This will affect its runtime and memory requirements. I haven't examined the complexity of the presented algorithm, but considering that they only demonstrated it with 8 samples (which is very low), it probably doesn't scale very well. It might be that at (for example) 16 samples it takes 1000x longer to process, at which point there wouldn't be any more benefit to it.
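
To put rough numbers on that intuition, here's a back-of-the-envelope cost model (my own assumption, not from the paper): if each of a pixel's s samples is weighted against every sample in a b×b pixel neighborhood, work per pixel grows quadratically with samples per pixel:

```python
# Assumed cost model: s own samples per pixel, each weighted against
# the s * b^2 samples in a b-by-b pixel neighborhood.
box = 35  # an assumed mid-sized filter neighborhood
for spp in (4, 8, 16, 32):
    work = spp * (spp * box * box)
    print(f"{spp:>3} spp -> ~{work:,} weight evaluations per pixel")
```

Memory tells the same story, since all those samples (color plus features plus random parameters) have to be kept around for the filter.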

Which is not to say the algorithm is useless, but it isn’t the holy grail either - otherwise everybody would already be using it.

@Zalamander, thanks for the explanation. I've got my fingers crossed that it could be used for higher samples; the guy said it only takes about 2 mins on the high-res images to post-process, with the work being done on the CPU. Could you have a look at the paper and try to evaluate whether the system could be scaled up? Cheers

I would like to add that in my eyes an optimisation like this (it seems like there are many papers around that achieve basically the same thing, like this one which seems to be free to use: http://www1.cs.columbia.edu/~rso2102/AWR/) should be prioritised higher than new features. Sure, I would LOVE SSS, volumetrics, you name it, but the truth is that all of these features just add an extra touch to your image. Real noise removal, on the other hand, is the difference between actually getting a usable image or not.

I know what I find more important :slight_smile:

See here for some explanation:
http://www.mail-archive.com/[email protected]/msg12987.html

I think it could be great for previews, but personally I don’t think this paper will scale up. Not because of performance issues but because it does not have a reliable error metric to decide how much you can interpolate and where you need more samples.

To put it a bit simplistically: this method will introduce artifacts in the render. For few samples having those is a good tradeoff, but if you take more samples those artifacts will not go away.

@Brecht, well, you're the lad to tell us this stuff (the master). While you're here, what are your own thoughts and ideas for a solution? I'd love to hear your opinions and perceptions on where Cycles noise reduction is going. Cheers for all the great work you do.

(An answer from the master himself! :O)

Okay, thanks for the simple explanation! Is this true for all noise-removal methods or just this one?

Because I have seen many around (like the one I linked in my post above), and I would love it if at least one of these worked well! :slight_smile: (I am dreaming of doing animations in Cycles)

There are many things we can still investigate for reducing noise: adaptive sampling, better importance sampling, better sampling patterns, bidirectional lighting, and so on. Then there are algorithms to speed up diffuse indirect light like irradiance caching or multiresolution radiosity caching, though they don't help for glossy reflections of course.
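
To illustrate what adaptive sampling means here, a toy sketch (not how Cycles implements or would implement it): keep per-pixel variance accumulators and spend each new batch of samples on the noisiest pixels:

```python
import numpy as np

def adaptive_pass(mean, m2, count, budget, render_sample):
    # mean, m2, count: per-pixel Welford accumulators, float arrays of
    # shape (H, W), tracking a scalar luminance estimate per pixel.
    # render_sample(y, x) is a stand-in for the renderer returning one
    # new sample for that pixel.
    denom = np.maximum(count, 2) * np.maximum(count - 1, 1)
    var_of_mean = np.where(count > 1, m2 / denom, np.inf)  # unsampled -> top priority
    # Spend the budget on the pixels whose mean estimate is still noisiest.
    for idx in np.argsort(var_of_mean, axis=None)[-budget:]:
        y, x = np.unravel_index(idx, var_of_mean.shape)
        s = render_sample(y, x)
        count[y, x] += 1                 # Welford's online update
        delta = s - mean[y, x]
        mean[y, x] += delta / count[y, x]
        m2[y, x] += delta * (s - mean[y, x])
    return mean, m2, count
```

The tricky part is the error estimate itself: simple variance heuristics can starve regions whose noise is low-frequency, which connects back to the "reliable error metric" point above.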

I don’t believe there is a magic algorithm that will e.g. reduce render time 10x in a full render with all kinds of different effects going on. It’s just hard work implementing various different optimizations.

They all suffer from being unreliable in some way. It’s a different sort of approach, how can we make this noisy image look as good as possible vs. how can we render this clean image faster? You end up with different algorithms.

These algorithms are less usable for animations than for still images, because their results have lots of low-frequency noise (‘blotches’), and those flicker in animation. If you look at the high-resolution video from the RPF paper, there's a lot of flickering going on, which looks like compression problems in the YouTube video but actually isn't. If you've got textures applied, it can look ok in still images because it's not always clear what is noise and what is a texture.

So as of now, is there really no noise-reduction paper out there that would allow you to reduce the strength of the filtering? If there isn't, someone should create an algorithm like that, which you could use to give a final ‘push’ to remove grain once the image is well-sampled but not fully converged.
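
(To be clear about the "strength" part: blending the filtered result back toward the raw render is trivial and works with any filter, something like this generic sketch; the hard part is knowing where the filtered image can be trusted.)

```python
import numpy as np

def blend_with_strength(noisy, filtered, strength=0.3):
    # strength = 0 keeps the raw render, 1 trusts the filter fully;
    # a low value gives the gentle final 'push' described above.
    strength = float(np.clip(strength, 0.0, 1.0))
    return (1.0 - strength) * noisy + strength * filtered
```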

But would developing a modified algorithm to do that even be feasible? In any case, I'm in full agreement with trying to reduce noise through optimization and more advanced sampling algorithms first, so keep up the good work.

Oh, okay! I only looked at the images. Well, if it doesn't work well for animations, it isn't really needed for final renders. At least I think it is okay to wait an hour for a really good-looking still image. But with animation, hours become months, hehe :smiley:

Well, too bad, but it is hard to stop dreaming of fantastic shortcuts. Guess I will just have to save some money and build a render farm like I have planned! :slight_smile: Thanks for the information, and also (of course) thanks for all the fantastic work you do with Cycles and Blender!

Ok, I'm starting to get a broader understanding. I've used Cycles to render interiors, for example, and even at 10,000 samples (12-13 hours a frame) noise can still remain in the image. We now also know that an 8 spp render from a path tracer can be made kinda acceptable (but not really) with a fast post-process. You also have work with spherical harmonics and bidirectional path tracing within rasterisers (like the work by Yusuke Tokuyoshi and Shinji Ogaki) for real-time applications. What are your thoughts about merging these solutions? My understanding of the Luminous engine is that they're using baked maps as a pre-process to produce much higher ray-equivalent bidirectional path tracing for real-time evaluation, but couldn't such a system still be used at high res to smooth the final output of a non-real-time render in Cycles? (That would also help me, as real-time is where my passion is.) Screen-space voxelisation for AO, many many things that could converge.

Creating a filter that can remove noise fairly accurately from a still is not an issue. It’s creating the same filter that works temporally (between multiple frames over time) that is an issue. There’s so much information to crunch through that, at least with modern techniques, the time savings from dropping samples are more than eliminated by either computation time for the filtering algorithm, or by the incoherent nature of the filter between frames. But really, the proof is in the pudding. There are a number of path tracers in use RIGHT NOW in production environments where a whole R&D team is paid hundreds of thousands, or millions of dollars to come up with ways to speed up rendering. None of them use these kinds of filters in production because of the quality costs. The day that one of these is viable for production animation, you can be sure that you’ll see it pop up in a commercial path tracer, and some time later in Cycles. As it stands, brute force path tracing, per light samples, lighting tricks, improved importance sampling, and just recently bidirectional sampling are the only battle-proven methods for shortening render times for animation.

@M9105826, yep, that's why I'm thinking that starting to integrate some of the new real-time methods could be the way forward (for example, the Luminous engine and the way it uses pre-processing for bidirectional path tracing; the videos of that real-time game engine show no such animation issues, even in real time). As GPU-only path tracing is becoming more and more of an issue (we can't split the process between CPU and GPU because the entire scene would have to be in both main memory and graphics memory), taking the best parts of a real-time bidirectional path tracer using baked maps would at least give us clean results, and clean animation results. Baked ray bundles could really be a help here: rather than trying to process a half-finished image, bake much higher-quality lighting info in the pre-process, which can then be used in the full render.