New real-time denoising technique that could revolutionize Cycles rendering/viewport

This would only be useful if a higher sample count can be used to get a much sharper and more detailed result than what is possible with just 1 sample.

See the .pdf examples: various details and sharpness in shadows and textures get lost (especially if it takes more than one sample for said detail to even become noticeable).

This has been posted before and I still say it’s very impressive for one sample, but how does it scale up to say, 128 or even 1024 samples?

The denoiser works with data from previous frames; on a single still image it probably doesn't add much beyond speed.
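For what it's worth, the reason previous frames matter can be sketched with plain temporal accumulation. This is only the basic idea, not the paper's actual filter, and the function name and blend factor are made up for illustration:

```python
import numpy as np

def temporal_accumulate(history, current, alpha=0.2):
    """Blend the current noisy frame into the accumulated history.

    A small alpha keeps most of the history, so noise averages out
    over successive frames -- which is why an image sequence gains a
    lot while a single still image gains little.
    """
    return (1.0 - alpha) * history + alpha * current

# Toy example: a constant signal of 1.0 hidden under heavy per-frame noise.
rng = np.random.default_rng(0)
history = np.zeros(1000)
for _ in range(100):
    noisy_frame = 1.0 + rng.normal(0.0, 0.5, size=1000)
    history = temporal_accumulate(history, noisy_frame)

print(history.mean())  # converges close to the true signal of 1.0
```

On a still image there is no stream of new independent samples per frame, so this accumulation buys nothing.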

Could it be used as an optional, better preview in the viewport? The current noisy one is awful and far less usable than this.
With this you could even get playback.

If it gets into Blender, we could see how it scales up for final-render quality.
For previewing it is obviously way better than what we have now :slight_smile:

For previewing, I would be inclined to agree with a possible integration (as it would be far faster and previews aren’t generally seen as needing full quality to begin with).

The result is almost free of flickering (of course 1 sample gives some flickering, but who knows how it behaves at higher sample rates). If this could reduce the flickering left after Lukas' denoiser :), animations would be much better.

Lukas’ denoising implementation actually had cross-frame functionality (but had to be cut in order to get an initial implementation in 2.79).

It’s probable that it will return for 2.8 (alongside the other features that initially didn’t make it).

To me this is a very obvious killer feature. I am almost certain that Autodesk is already rubbing its hands and drooling all over it - we will probably see it in Maya soon enough too.

With this you immediately get a lot of information about the render shot - information that a director can review - about the colors and the lighting - without having to wait hours for a distracting mess of ugly noise to disappear.

If Cycles gets it, the first impression it makes will be very different. The viewport alone would become so much more than it is at the moment, giving far better feedback to the artist.

Let me also repeat the point about playback in Cycles becoming possible :slight_smile:
Do I even need to sell it to you? The advantages are so obvious.

Anyway, stuff like this makes me excited about the future. I really love where Blender is going with Eevee as well as Cycles.

I would like to see this applied to a very low bandwidth video stream.

Animations might be tricky because of flickering - this could in theory be resolved with inter-frame filtering, but there’s no way to implement this in Blender, since every frame is rendered individually :(.

Ace, evidently Lukas, at least initially, thought that would be impossible in Blender. But I sure hope you're right.

This is the third such algorithm this year, right? At this rate, a couple more will appear by the end of the year. IMHO it is better for the Cycles developers not to waste their time on this.

Does that thing only work on static scenery with a moving camera?

No, it works with an image sequence, in real time or not, because it uses previous frames.

The wonders never end - there is no time to enjoy one thing before something you want even more appears. I am speechless; I'm sharing it on Twitter now.

You could potentially make a Cycles-rendered branch of the Blender Game Engine.

Real-time facial-rig mocap of actors in Cycles or in Eevee.

I'm not tech savvy on this, but from what I see and understand, is it a pretty major game changer and breakthrough, applying not only to previews of offline rendering but to almost any graphics domain, even games? Is it basically black magic that runs on 1 sample and cleans up the whole noisy mess at a very decent frame rate, or is it not that exciting due to some of its limits? Educate me like I'm 30, please.

You know, I could see this being applied selectively to different passes of a real-time engine. For example, to smooth out a GI pass or to denoise screen-space reflection/refraction in Eevee. I mean, think about it: right now a form of ray tracing is being used in Eevee for SSR. What if you used fewer samples and added a denoise stage to that? What if you did a really low-res pass of ray-traced area shadows and denoised that? The rest of the image would just be rendered with Eevee's OpenGL-based engine.

Just a thought.
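The pass-selective idea can be sketched in a few lines. This is purely illustrative - a crude box filter stands in for a real denoiser, and 1-D arrays stand in for render passes - but it shows the point: denoise only the noisy GI pass, composite it with the sharp direct pass, and the hard shadow edge survives untouched.

```python
import numpy as np

def box_blur(img, radius=2):
    """Crude spatial denoiser: a 1-D box filter (stand-in for a real filter)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(img, kernel, mode="same")

# Toy 1-D "render": sharp direct lighting plus a noisy GI pass.
rng = np.random.default_rng(1)
direct = np.where(np.arange(64) < 32, 1.0, 0.2)   # hard shadow edge at pixel 32
gi = 0.3 + rng.normal(0.0, 0.2, size=64)          # noisy bounce light

# Denoise ONLY the GI pass, then composite: the shadow edge stays sharp
# because the direct pass never goes through the filter.
final = direct + box_blur(gi)
```

The same separation is what makes per-pass denoising attractive: the filter only has to deal with the smooth, low-frequency signal, not the whole image.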


It is a step in the right direction, but I wouldn't be too confident about it until we see some more implementations and test cases. They use temporal filtering, which works fine for static scenes but often has problems with animation. For facial movements you either need to reject a lot of temporal information (and are left with only the spatial filtering of the 1-sample data) or you quickly get artifacts like ghosting.
And I doubt that it will immediately find its use in gaming. You still need a BVH for fast ray tracing, but rebuilding one each frame is too slow. There is probably still some work to be done before this works for dynamic scenes.
The results are stunning, but you have to keep in mind that this is just an academic paper. It often takes some work to get things working in a production environment, and papers often show the best-case images. But it might be worth some experiments to see whether this works well for the Cycles viewport.

edit:
I've just read the paper. At Full HD it needs 10 ms (denoising?) per frame on an Nvidia Titan X, so it doesn't hit game requirements (yet). It produces some artifacts, like blurred sharp reflections and overblurring under movement. It currently doesn't work if you render with more than 1 spp; you would need to adapt the algorithm to that. It doesn't work with stochastic primary-ray effects like depth of field or motion blur. It needs to render a G-buffer (like games do) and therefore struggles with highly detailed model edges (like the leaves of the bushes).
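The history-rejection trade-off described above can be sketched roughly. The depth test, tolerance, and blend factor here are illustrative assumptions, not the paper's actual heuristics: reprojected history is reused only where the G-buffer still matches, otherwise the filter falls back to the noisy current sample.

```python
import numpy as np

def blend_with_rejection(history, current, depth_prev, depth_cur,
                         depth_tol=0.01, alpha=0.2):
    """Reuse reprojected history only where the depth buffer still matches.

    Where geometry changed (disocclusion, fast movement) the history is
    rejected and we fall back to the noisy current sample -- which is why
    dynamic content tends to ghost or flicker.
    """
    valid = np.abs(depth_prev - depth_cur) < depth_tol * np.maximum(depth_cur, 1e-6)
    blended = (1.0 - alpha) * history + alpha * current
    return np.where(valid, blended, current)

# Toy example: the right half of the frame was disoccluded this frame,
# so its previous depth no longer matches the current depth.
depth_prev = np.full(8, 1.0)
depth_cur  = np.concatenate([np.full(4, 1.0), np.full(4, 2.0)])
history    = np.full(8, 0.5)   # accumulated, mostly converged color
current    = np.full(8, 0.9)   # this frame's noisy 1-spp color

out = blend_with_rejection(history, current, depth_prev, depth_cur)
# Left half blends toward history; right half is the raw current sample.
```

With only 1 spp, every rejected pixel drops back to pure noise, which is exactly the flicker people are worried about for animation.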

It is a major breakthrough, that's for sure. The practical benefits are limited at this point, but there is a lot of future potential.
For photorealistic renderings, it doesn't reach the quality you can get with lots of samples and denoising applied. This makes it unsuitable for final renders, but it is still a huge time saver for previews.
My impression for games is that it is almost there regarding denoising quality. The videos were quite short, but very impressive when compared to existing techniques. However, I had the impression that sometimes the immersion was broken.

Besides improving this technique, it might be interesting to see how well it performs when adjusted to work with higher sample rates.
So far the performance bottleneck of path tracers like Cycles has clearly been the actual path tracing. The preparation steps take quite some time too, but have been almost insignificant by comparison. With this kind of denoising, those preparation steps are very likely going to get more attention.

I posted about this and another technique that didn't look as good at 1 spp but was designed to scale correctly with sample count; that one was done by Morgan McGuire and team (they also released code).

This technique is good; it was done by Chris Wyman and team. I spoke to Chris about a week or so back, and although he would like to release the code at some point, it most definitely will not be soon. He explained that, as always, student code can be messy and would need to be cleaned up heavily - and that's IF they ever release it. There is no roadmap for this, just Chris's good intentions. At the end of the day he's an Nvidia researcher, and we all know what Nvidia is like when it comes to sharing things :slight_smile:

Now to the main point: the paper shown in the two-minute video is not even the really impressive one. This one, shown on the Nvidia research page, does a much better job at keeping reflection and refraction detail, also at 1 spp; it's basically the other technique with machine learning added into the mix.

Here’s the page:
http://research.nvidia.com/publication/interactive-reconstruction-monte-carlo-image-sequences-using-recurrent-denoising

Video:

But don't get your hopes up, people. Not only is there no real detail in paper form about how this is done, but it's also Nvidia, and Nvidia doesn't like to share its toys with the other kids. If we ever do get it, it will more than likely be on a license basis from Nvidia, like VXGI or GameWorks. It's a shame Nvidia hasn't learnt, like AMD has, that open source is best for devs and end users, and that expensive license deals and tech that only works on THEIR hardware don't do anyone any good.

Hopefully one day they'll learn.