for those asking about OIDN 1.3
It’s in the latest win 2.93 alpha, check it out!
it’s now much better at preserving details, according to my tests
Congrats to the R&D team at Intel, the denoised result no longer appears to be objectively worse in moderately to heavily sampled areas (compared to the noisy image), even in areas of smaller details.
At least for the spots where albedo and normal data are available; I’m not sure yet how much it has advanced on specular highlights. Either way, it is becoming clear that the BF made the right call in not putting much more focus on the old NLM denoiser.
Does 2.93 break a lot of addons due to the new Python switch?
I’ve got a scene that mysteriously fails to denoise sensibly in one very odd area if I have adaptive samples enabled. Looking forward to seeing if an updated OIDN works better.
I have noticed this too. With adaptive sampling it sometimes has problems.
Like many denoisers, OIDN works best when the sampling is fairly even. Being aggressive with adaptive sampling won’t work well because the samples can cluster, especially in darker areas and in areas prone to fireflies (you can verify this by enabling the sample count debug pass).
Getting good quality as a result relies on sticking to a light application of adaptive sampling (i.e. high min samples, a low threshold, and a max sample value that will often be reached). Theoretically, since OIDN now allows custom training sets, the BF could make it work better by feeding it adaptively sampled scenes, but I am not sure how that would impact results for scenes without adaptive sampling.
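For reference, a “light application” like that can be set from the Python console too. This is just a sketch assuming the 2.9x Cycles property names; the exact values are placeholders you’d tune per scene:

```python
import bpy

scene = bpy.context.scene
cycles = scene.cycles

# Conservative adaptive sampling: high floor, low noise threshold,
# and a max sample count the renderer will usually reach anyway.
cycles.use_adaptive_sampling = True
cycles.adaptive_min_samples = 64    # high min keeps sampling fairly even
cycles.adaptive_threshold = 0.005   # low threshold = less aggressive skipping
cycles.samples = 256                # max samples, often reached

# Sample count debug pass, to see where samples actually clustered
bpy.context.view_layer.cycles.pass_debug_sample_count = True
```

With settings like these the sample distribution stays close to uniform, which is the regime OIDN was trained on.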
Afaik OIDN is trained on Sobol-pattern noise, while enabling adaptive sampling makes Blender silently switch to Progressive Multi-Jitter. I bet this is the key. If, as Ace reports, OIDN 1.3 works with other training sets, perhaps in the future Cycles could silently adapt (given the proper training is done).
A heavily sampled scene in Cycles will show little difference regardless of which sampler was used (all three are advanced enough in distribution). The only time you would see a difference is in things that are hard for Cycles to sample, like caustics, but OIDN seems to be at a point where it doesn’t break down as soon as you use a different sampler (especially given that it scales up well, unlike OptiX).
The main things that would help improve results further would be many-light sampling and some form of manifold exploration, which coincidentally are the two major optimization subjects that are not in Cycles yet. Brecht is supposedly going to resume work on the engine, so hopefully one or both of these get into master this year.
I have seen this and it didn’t really bother me; I knew it was dark, so I didn’t expect a miracle. My problem file, though, has issues on the best-lit area of a countertop. Either side of the problem area has much less light, and those areas have no issues with denoising. That particular area also had no fireflies in the noisy version of the image.
With Brecht back on Cycles, patches, speedups, and other changes are now incoming.
A spread option for area lights
AO bounce settings more accessible
(A big one) Persistent Data
Wonder what Fast GI means…
EDIT: oh, that’s just the simplify bs… oh well…
For people who want to do a quick test, the “Classroom” Cycles sample scene is good for it. It has a basic camera animation. “Persistent Data” is under the Render tab > Performance > Final Render. Time an animation render over a frame interval with and without Persistent Data. You don’t need to set high render samples for testing.
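If you’d rather time it from a script, here is a rough sketch (assuming the `render.use_persistent_data` property name from the 2.93 API; frame range and print format are just illustrative):

```python
import bpy
import time

scene = bpy.context.scene
scene.frame_start, scene.frame_end = 1, 10  # a short interval is enough

# Render the same animation with Persistent Data off, then on,
# and compare wall-clock times.
for persistent in (False, True):
    scene.render.use_persistent_data = persistent
    t0 = time.time()
    bpy.ops.render.render(animation=True)
    print(f"persistent_data={persistent}: {time.time() - t0:.1f}s")
```

Keep samples and resolution low so the timing difference is dominated by pre-processing rather than ray tracing.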
Is there a way to enable persistent data for the viewport too? It would be nice to have only the updates reloaded every time you start viewport rendering.
from the commit – “For Eevee and Workbench this option is not available, however these engines will now always reuse the depsgraph for animation and multiple view layers. This can significantly speed up rendering.”
If you want to see how much it actually saves, 1 sample at 1% resolution would be ideal. It should not impact the ray-tracing or post-processing time, just the pre-processing time. Between the first render and the second, you will see a significant improvement.
I meant cycles viewport rendering.
The Rendering Software Engineer position has disappeared, so it begins!
“Who’s that girl?” (or guy)
There was that one girl named Mai who we know can write some really good quality code (she made the Cycles microdisplacement usable). The new hire may or may not be a past contributor though.
How come nobody fancies this? This is kind of huge to me: finally being able to control lights without having to model shades or rely on spots, where you can’t control the shape. Two pieces are still missing: camera-visible lights, to reduce the unnecessary clutter on light fixtures, and some way to read an object’s/bounding box size (dimensions) in materials (I’d use it to normalize constant LED patterning for light strips of various lengths, but this is less important than camera-visible lights).