Yeah, the Dithered Sobol noise pattern holds detail better at low sample counts than plain Sobol. I haven’t tried the Intel denoiser yet, but the OptiX AI denoiser add-on by Remington Graphics works better with Dithered Sobol than with Sobol, and so does Blender’s built-in denoiser.
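For anyone curious about the idea: as I understand it, dithered sampling shifts each pixel’s low-discrepancy sequence by a per-pixel offset (a Cranley–Patterson rotation), so residual error spreads into a dither-like pattern instead of a structured one. Here’s a rough Python sketch of the concept — it uses a base-2 radical inverse as a stand-in for a real Sobol sequence and a cheap integer hash as a stand-in for the precomputed blue-noise mask an actual implementation would use, so don’t take it as Cycles’ code:

```python
def van_der_corput(i):
    # Base-2 radical inverse; the first dimension of the Sobol
    # sequence is exactly this sequence.
    x = 0.0
    f = 0.5
    while i:
        if i & 1:
            x += f
        f *= 0.5
        i >>= 1
    return x

def pixel_dither(px, py):
    # Hypothetical stand-in for a precomputed blue-noise mask:
    # a cheap integer hash of the pixel coords mapped to [0, 1).
    h = (px * 1973 + py * 9277) & 0xFFFFFFFF
    h = ((h ^ (h >> 16)) * 0x45D9F3B) & 0xFFFFFFFF
    h = ((h ^ (h >> 16)) * 0x45D9F3B) & 0xFFFFFFFF
    return ((h ^ (h >> 16)) & 0xFFFF) / 65536.0

def dithered_sample(px, py, sample_index):
    # Cranley-Patterson rotation: shift the low-discrepancy sample
    # by a per-pixel offset, wrapping back onto [0, 1).
    return (van_der_corput(sample_index) + pixel_dither(px, py)) % 1.0
```

Every pixel still walks the same well-distributed sequence, just rotated differently, which is why the remaining noise at low sample counts looks more like film grain and plays nicer with denoisers.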
@bliblubli I’ve got a handful of guys beta testing my branch, Filmer. They’ve all been getting awesome render times, and they’re getting even better times now that I’ve added Dithered Sobol. Thank you very much for everything you’re teaching us in your Blender building course.
You sir are making it so incredibly easy for me to build, add to, tweak and modify Blender source.
The course now has a video showing how to add new libraries on Windows (OpenImageDenoise in this case) and how to add the new denoising node. You can get it until Monday at a reduced price. It really is made easy, even if you’ve never written code before.
To be fair, this one looks very fake. You can see constant-size ambient-occlusion shadows across the whole scene. Yes, sure, it’s fast, but it’s not a quality level you can be competitive at unless you want to earn something like $5/hour, since everyone these days is capable of producing average-level archviz.
I know it’s just some random Evermotion scene, but my point is it could be lit in a way that turns a $100 image into a $1000 image. That would probably not end up taking just 2 minutes, though.
The general point I am making is that optimizations stop being relevant as soon as they introduce easily visible bias that significantly degrades the final quality and realism of the lighting.
I suppose the Evermotion guys use fakes such as AO, simplified bounces, and/or invisible fill lights. I’d first turn all the fakes off to see how your optimizations truly perform.
I didn’t notice anything. So I guess how fake it looks depends heavily on the expectations of the client. For me and my work, there is no point in overdoing it to make an image look super realistic if it doesn’t have to be - usually my own quality bar is set higher than what is expected of me, which leads to wasted time.
What does that have to do with developing rendering technologies? If renderers were developed with a mentality of catering to low-budget work - in other words, fastest performance regardless of quality - we would all still be using Blender Internal. Sure, it couldn’t render a photorealistic interior; heck, it didn’t even officially have GI. But boy, could you render things fast with it on today’s hardware…
Here is the outline of the debate.
I argued that if optimizations compromise quality too much, they are not of much use unless you do low-quality work.
The counterargument to that is that other people may not notice the quality is poor, and may find it sufficient for low-end jobs.
My response to that is: that’s fine, but it’s not a valid argument for developing a rendering technology. You can do both high-end and low-end work with a renderer capable of producing high-end results, but you can’t do high-end work with a renderer only capable of producing low-end results. And of course, if you are willing to sacrifice that much quality for speed, you are better off using Eevee anyway.
bliblubli is showing Cycles renders with very impressive render times, but it’s also worth noting that those renders are set up in a way that compromises quality to such a degree that the resulting photorealism is comparable to Eevee’s. So yes, you get render times close to Eevee’s, but you also get quality much closer to Eevee’s.