New real time denoising technique that could revolutionize Cycles Rendering/Viewport

I just saw a video demo of a new paper and was blown away by the implications it could have for Cycles and the viewport in Blender:

The paper “Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination” is available here:
http://cg.ivd.kit.edu/svgf.php

It’s hard to believe.

This would only be useful if a higher sample count can be used to get a much sharper and more detailed result than what is possible with just 1 sample.

See the .pdf examples: various details and sharpness in shadows and textures get lost (especially if it takes more than one sample for said detail to even become noticeable).

This has been posted before and I still say it’s very impressive for one sample, but how does it scale up to, say, 128 or even 1024 samples?

The denoiser works with data and info from old frames. For a single still image it probably doesn’t add a lot, except speed.
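Roughly, the temporal part is just blending the reprojected previous frame with the fresh 1-sample frame, something like an exponential moving average. A toy sketch (my own simplification; the names and the `alpha` value are made up, not from the paper):

```python
import numpy as np

def temporal_accumulate(history, new_frame, alpha=0.2):
    """Blend the (already reprojected) history buffer with the new
    1-sample frame. Smaller alpha = more reuse of old frames."""
    return (1.0 - alpha) * history + alpha * new_frame

# e.g. a 1080p RGB history buffer and a fresh noisy 1spp frame
history = np.zeros((1080, 1920, 3), dtype=np.float32)
noisy = np.random.rand(1080, 1920, 3).astype(np.float32)
history = temporal_accumulate(history, noisy)
```

That is why a single still image gains little beyond speed: with no previous frames there is nothing to accumulate.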

Could it be used as an optional, better preview in the viewport? The current noisy one is awful and less usable than this.
With this you could even get playback.

If it gets into Blender we could see how it scales up for final render quality stuff.
For previewing it is obviously waaay better than what we have now :slight_smile:

For previewing, I would be inclined to agree with a possible integration (as it would be far faster and previews aren’t generally seen as needing full quality to begin with).

The result is almost free of flickering (of course 1 sample gives some flickering, but who knows what it could look like at higher sample rates). If this could reduce flickering after Lukas’ denoiser :), then animations could be much better.

Lukas’ denoising implementation actually had cross-frame functionality (but had to be cut in order to get an initial implementation in 2.79).

It’s probable that it will return for 2.8 (alongside the other features that initially didn’t make it).

To me this is a very obvious killer feature. I am almost certain that Autodesk is already rubbing their hands and drooling all over it - we will probably see it in Maya soon enough too.

With this you get a lot of information about the render shot immediately - information that a director can review - about the colors and the lighting - without having to wait for hours for the distracting noise of ugly dots to disappear.

If Cycles gets it, the first impression it makes will be very different. Just its viewport would become so much more than it is atm - giving way better feedback to the artist.

Let me also repeat the point about playback in Cycles becoming possible :slight_smile:
Do I even need to sell it to you? The advantages are so obvious.

Anyway, stuff like this makes me excited about the future. I really love where Blender is going with Eevee as well as Cycles.

I would like to see this applied to a very low bandwidth video stream.

Animations might be tricky because of flickering - this could in theory be resolved with inter-frame filtering, but there’s no way to implement this in Blender, since every frame is rendered individually :(.

Ace, evidently Lukas, at least initially, thought that would be impossible in Blender. But I sure hope you’re right.

This is the third such algorithm this year, right? At this rate, a couple more will appear by the end of the year. IMHO it is better for the developers of Cycles not to waste their time on this.

Does that thing only work on lifeless scenery with a moving camera?

No, it works with an image sequence, in real time or not, because it works with previous frames.

The wonders never end; there is no time to enjoy one thing before you want the next. I am speechless. Now I’m sharing it on Twitter.

You could potentially make a Cycles-rendered branch of the Blender game engine.

Realtime facial rig mocap actors in Cycles or in Eevee.

I’m not tech-savvy on this, but from what I see and understand, is this a pretty major game changer and breakthrough that applies not only to previews for offline rendering but to almost any graphics domain, even including games? Is it basically black magic that runs on 1 sample and cleans up the whole noise mess at a very decent frame rate, or is it not that exciting due to some of its limits? Educate me like I’m 30, please.

You know, I could see this being used selectively on different passes of a realtime engine. For example, to smooth out a GI pass or to denoise screen space reflection/refraction in Eevee. I mean, think about it: right now a form of raytracing is being used in Eevee for SSR. What if you used fewer samples and added a denoise stage to that? What if you did a really low-res pass of raytraced area shadows and denoised that? The rest of the image would just be rendered with Eevee’s OGL based engine.

Just a thought.
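Just to make that idea concrete, here’s a toy sketch (entirely my own, nothing from Eevee or the paper; `gaussian_filter` is only a stand-in for a real edge-aware denoiser):

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # placeholder denoiser

def denoise_pass(noisy_pass, sigma=2.0):
    """Placeholder: blur each channel. A real implementation would use
    an edge-aware, variance-guided filter instead of a plain blur."""
    return np.stack([gaussian_filter(noisy_pass[..., c], sigma)
                     for c in range(noisy_pass.shape[-1])], axis=-1)

def composite(raster_rgb, noisy_gi_pass):
    """Denoise only the noisy, low-sample ray-traced pass (GI, SSR,
    area shadows, ...) and add it on top of the rasterized base image."""
    return raster_rgb + denoise_pass(noisy_gi_pass)
```

The point is that only the expensive stochastic passes would go through the denoiser; the rasterized part of the frame stays untouched.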

It is a step in the right direction, but I wouldn’t be too confident about it until we see some more implementations and test cases. They use temporal filtering, which works fine for static scenes but often has problems with animation. For face movements you either need to reject a lot of temporal information (and are left with only the spatial filtering of the 1-sample data) or you quickly get artifacts like ghosting.
And I doubt that it will immediately find its use in gaming. You still need a BVH for fast raytracing, but rebuilding one each frame is too slow. There is probably still some work to be done until this works for dynamic scenes.
The results are stunning, but you have to keep in mind that this is just an academic paper. It often takes some work to get it working in a production environment, and papers often show the best-case images. But it might be worth some experiments to see if this works well for the Cycles viewport.
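To make the ghosting trade-off concrete, here’s a simplified version of the reuse-or-reject decision (my own toy test, not the paper’s exact criterion): history is only reused where the reprojected pixel still matches the current frame’s G-buffer.

```python
import numpy as np

def reuse_history(curr_depth, prev_depth, curr_normal, prev_normal,
                  depth_tol=0.01, normal_tol=0.9):
    """Per-pixel history validity test (simplified). Returns True where
    the reprojected previous pixel still agrees with the current
    depth and normal buffers."""
    depth_ok = np.abs(curr_depth - prev_depth) < depth_tol * np.abs(curr_depth)
    normal_ok = np.sum(curr_normal * prev_normal, axis=-1) > normal_tol
    return depth_ok & normal_ok

# Where this fails (disocclusions, fast deforming faces), the filter has
# to fall back to spatial-only filtering of the fresh 1spp data, which is
# exactly where you get either ghosting or extra blur.
```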

edit:
I’ve just read the paper. At FullHD it needs 10ms (denoising?) time per frame on an Nvidia Titan X, which doesn’t hit game requirements (yet). It produces some artifacts, like blurred sharp reflections and overblurring under movement. It currently doesn’t work if you render with more than 1spp; you would need to adapt the algorithm for that. It doesn’t work with stochastic primary-ray effects like Depth of Field or Motion Blur. It needs to render a G-buffer (like games do) and therefore struggles with highly detailed model edges (like the leaves of the bushes).
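Since it relies on the G-buffer anyway, the spatial filter weights are (as far as I understand the paper) a product of depth, normal and luminance terms, with the luminance term scaled by the local variance estimate, so noisier regions get filtered more aggressively. A simplified sketch (I dropped the depth-gradient scaling, and the sigma values are only roughly what the paper uses):

```python
import numpy as np

def edge_stopping_weight(depth_p, depth_q, normal_p, normal_q,
                         lum_p, lum_q, lum_var_p,
                         sigma_z=1.0, sigma_n=128.0, sigma_l=4.0, eps=1e-6):
    """Weight of neighbour pixel q when filtering pixel p.
    Large depth/normal/luminance differences shrink the weight; a high
    variance estimate at p makes the luminance test more tolerant."""
    w_z = np.exp(-abs(depth_p - depth_q) / (sigma_z + eps))
    w_n = max(0.0, float(np.dot(normal_p, normal_q))) ** sigma_n
    w_l = np.exp(-abs(lum_p - lum_q) / (sigma_l * np.sqrt(lum_var_p) + eps))
    return w_z * w_n * w_l
```

This is also why dense geometric detail like the bush leaves is a problem: the G-buffer depths and normals change at almost every pixel there, so the weights collapse and there is little to average over.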