the result is almost free of flickering (of course 1 sample still gives some flickering, but who knows how good it could look at a higher sample rate). If this could reduce flickering on top of Lukas's denoiser :), animations could look much better.
To me this is a very obvious killer feature. I am almost certain that Autodesk is already rubbing their hands and drooling over it - we will probably see it in Maya soon enough too.
With this you get a lot of information about the rendered shot immediately - information a director can review, about the colors and the lighting - without having to wait hours for the distracting noise of ugly dots to disappear.
If Cycles gets it, the first impression it makes will be very different. The viewport alone would become so much more than it is atm, giving far better feedback to the artist.
Let me also repeat the point that playback in Cycles would become possible.
Do I even need to sell it to you? The advantages are so obvious.
Anyway, stuff like this makes me excited about the future. I really love where Blender is going with Eevee as well as Cycles.
Animations might be tricky because of flickering. In theory this could be resolved with inter-frame filtering, but there's no way to implement that in Blender, since every frame is rendered individually :(.
Ace, evidently Lukas, at least initially, thought that would be impossible in Blender. But I sure hope you're right.
I'm not tech savvy on this, but from what I see and understand, is this a pretty major game changer and breakthrough, applying not only to previews of offline rendering but to almost any graphics domain, even games? Is it basically black magic that runs on 1 sample and cleans up the whole noise mess at a very decent frame rate, or is it not that exciting due to some of its limits? Educate me like I'm 30, please.
You know, I could see this being applied selectively to different passes of a realtime engine. For example, to smooth out a GI pass or to denoise screen space reflection/refraction in Eevee. I mean, think about it: right now a form of raytracing is already being used in Eevee for SSR. What if you used fewer samples and added a denoise stage to that? What if you did a really low-res pass of raytraced area shadows and denoised that? The rest of the image would just be rendered with Eevee's OpenGL-based engine.
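The core idea behind denoising a single noisy pass like that is a cross-bilateral filter: blur the noisy pass, but let G-buffer data (depth, normals) decide which neighbors count, so geometric edges stay sharp. A minimal 1D sketch, assuming depth-guided weights only (real filters also use normals and albedo; all names and parameters here are illustrative):

```python
import math

def cross_bilateral_1d(noisy, depth, radius=2, sigma_d=0.1):
    """Smooth a noisy pass using depth similarity as the edge-stopping weight.

    The noisy pass itself carries almost no edge information at 1 spp,
    so the clean depth buffer tells the filter where not to blur.
    """
    out = []
    n = len(noisy)
    for i in range(n):
        wsum, vsum = 0.0, 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            # neighbors at a very different depth get ~zero weight
            w = math.exp(-((depth[i] - depth[j]) ** 2) / (2.0 * sigma_d ** 2))
            wsum += w
            vsum += w * noisy[j]
        out.append(vsum / wsum)
    return out
```

For example, with a hard depth edge (`depth = [0,0,0,1,1,1]`) the filter averages within each side but barely leaks across the edge, which is exactly why a low-res shadow or GI pass can be denoised without smearing over silhouettes.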
It is a step in the right direction, but I wouldn't be too confident about it until we see some more implementations and test cases. They use temporal filtering, which works fine for static scenes but often has problems with animation. For fast movements you either need to reject a lot of temporal information (leaving only the spatial filtering of the 1-sample data) or you quickly get artifacts like ghosting.
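That trade-off is easy to see in a sketch of temporal accumulation with history rejection. This is not the paper's actual algorithm, just the common pattern: blend the current 1-spp sample with a reprojected history value, and throw the history away where a consistency test (here a hypothetical depth comparison) fails:

```python
def temporal_filter(current, history, depth, prev_depth,
                    alpha=0.2, depth_tol=0.05):
    """Blend current 1-spp values with reprojected history, per pixel.

    Where the reprojected depth disagrees (disocclusion, fast motion),
    the history is rejected and only the noisy current sample remains --
    exactly the case where spatial filtering has to do all the work,
    or where keeping the stale history would produce ghosting.
    """
    out = []
    for c, h, d, pd in zip(current, history, depth, prev_depth):
        if abs(d - pd) > depth_tol * max(d, 1e-6):
            out.append(c)  # reject history: avoids ghosting, keeps noise
        else:
            out.append(alpha * c + (1.0 - alpha) * h)  # exponential moving average
    return out
```

With `alpha=0.2`, a valid pixel keeps 80% of its accumulated history, which is what kills flickering on static shots; a rejected pixel falls back to the raw 1-spp value, which is why moving edges look noisier or ghost if the rejection test is too lenient.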
And I doubt that it will immediately find its way into games. You still need a BVH for fast raytracing, but rebuilding one every frame is too slow. There is probably still some work to be done until this works for dynamic scenes.
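For completeness, the usual compromise for deforming geometry is not a full rebuild but a refit: keep the tree topology and only recompute the bounding boxes bottom-up. A toy sketch with 1D intervals instead of 3D AABBs (the `Node`/`prim_bounds` names are made up for illustration):

```python
class Node:
    def __init__(self, left=None, right=None, prim=None):
        self.left, self.right, self.prim = left, right, prim
        self.box = None  # 1D interval (lo, hi) for brevity; real BVHs use 3D AABBs

def refit(node, prim_bounds):
    """Bottom-up bounds update; prim_bounds maps a primitive id to its
    current (lo, hi). Cheap per frame, but the tree quality degrades
    as primitives drift from the layout the tree was built for."""
    if node.prim is not None:  # leaf: take the fresh primitive bounds
        node.box = prim_bounds[node.prim]
    else:                      # inner node: union of the children's boxes
        l = refit(node.left, prim_bounds)
        r = refit(node.right, prim_bounds)
        node.box = (min(l[0], r[0]), max(l[1], r[1]))
    return node.box
```

Refitting avoids the per-frame rebuild cost the post mentions, at the price of gradually worse traversal performance, which is why engines periodically rebuild anyway.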
The results are stunning, but you have to keep in mind that this is just an academic paper. It often takes considerable work to get something like this running in a production environment, and papers tend to show best-case images. But it might be worth some experiments to see if this works well for the Cycles viewport.
I’ve just read the paper. At Full HD it needs 10ms of denoising time per frame on an Nvidia Titan X, which doesn’t hit game requirements (yet). It produces some artifacts, like blurred sharp reflections and overblurring under movement. It currently doesn’t work if you render with more than 1spp; you would need to adapt the algorithm for that. It doesn’t work with stochastic primary-ray effects like depth of field or motion blur. And it needs to render a G-buffer (like games do) and therefore struggles with highly detailed model edges (like the leaves of the bushes).
It is a major breakthrough, that’s for sure. The practical benefits are limited at this point, but there is a lot of future potential.
For photorealistic renderings, it doesn’t reach the quality you can get with lots of samples plus denoising. This makes it unsuitable for final renders, but it’s still a huge time saver for previews.
My impression for games is that it is almost there in terms of denoising quality. The videos were quite short, but very impressive compared to existing techniques. However, I had the impression that sometimes the immersion was broken.
Besides improving this technique, it might be interesting to see how well it performs when it is adjusted to work with higher sample rates.
So far the performance bottleneck of path tracers like Cycles has clearly been the actual path tracing. The preparation steps take quite some time too, but have been almost insignificant in comparison. With this kind of denoising, those preparation steps are very likely going to get more attention.
I posted about this and another technique by Morgan McGuire and team that didn’t look as good at 1 spp but was designed to scale up correctly with samples (they also released code).
This technique is good. It was done by Chris Wyman and team. I spoke to Chris about a week or so back, and although he would like to release the code at some point, it most definitely will not be soon. He explained that, as always, student code can be messy and would need to be cleaned up heavily - and that’s IF they ever release it. There is no roadmap for this, just Chris’s good intentions. But at the end of the day he’s an Nvidia researcher, and we all know what Nvidia is like when it comes to sharing things.
Now to the main point: the paper shown in the two-minute video is not even the really impressive one. The one shown on the Nvidia research page does a much better job at keeping reflection and refraction detail, also at 1 spp - it’s basically the other technique with machine learning slapped into the mix.
But don’t get your hopes up, people. Not only is there no real detail in paper form of how this is done, but it’s also Nvidia. And Nvidia doesn’t like to share its toys with the other kids. If we ever do get it, it will more than likely be on a license basis from Nvidia, like VXGI or GameWorks. It’s a shame Nvidia hasn’t learnt, like AMD has, that open source is best for devs and end users, and that expensive license deals and tech that only works on THEIR hardware don’t do anyone any good.