Yeah, but my persistence paid off. My point being, I am not against new Cycles features, quite the contrary actually. It’s just that I agree with Lukas that scrambling is just not a good direction for any renderer to go. There are much better and more accurate ways to speed up path tracing, such as caching secondary GI bounces. I believe that you’ve been able to pull off quite a few shots with it, but it’s still not a solution one can truly rely on in every scene, and it requires the user to tweak somewhat cryptic parameters that balance bias against rendering speed.
It’s fine if such a thing is in some 3rd party build, but official builds need to consider usability too.
It’s a 0-1 slider, I don’t know if you can count that as cryptic. Official builds need to consider being replaced by actual software that does what the user needs… like Redshift and Maya.
Even if we save 5000 dollars a seat by using Blender, the render costs are higher.
Scrambling reduces the load on the GPU significantly, so your desktop and Blender run smoothly while you are previewing renders.
That means no more Windows driver timeouts and a more stable Blender.
At this rate you can’t even compare the Theory build to the daily Blender builds anymore, since it’s almost impossible to get our results with stock Blender.
Yeah, I’d also be interested in what else the Theory build offers in terms of rendering.
Regarding the “cryptic” comment: one slider is easy, but the way it impacts the scene, and the way its values relate to light transport stability, may come across as a bit cryptic.
In my very humble and unprofessional few tests with those two features, I think they are quite safe in many scenes. Scrambling Distance at low values makes things really weird, but at high sample counts it magically converges to stable images. It would be nice to have builds that include those two features, but I also understand the developers’ arguments.
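To give an intuition for why low slider values look “weird” at first but still converge at high sample counts: one way to think about the slider is as scaling each pixel’s random scramble offset on top of a shared low-discrepancy sequence (a Cranley-Patterson-style rotation). This is only a toy Python sketch of that idea, not the actual Cycles code; the function name and exact formula are my own illustration:

```python
def scrambled_sample(sobol_value, pixel_offset, scrambling_distance):
    """Toy model of a scrambling-distance slider (illustrative only).

    sobol_value:         the shared low-discrepancy sample in [0, 1)
    pixel_offset:        this pixel's random scramble offset in [0, 1)
    scrambling_distance: the 0-1 slider; at 0 every pixel reuses the
                         same sequence (fast, but visibly correlated
                         at low sample counts), at 1 pixels are fully
                         decorrelated (stock behaviour)
    """
    return (sobol_value + pixel_offset * scrambling_distance) % 1.0
```

At 0 the per-pixel offset vanishes and neighbouring pixels trace near-identical paths (hence the coherent “weird” patterns, and the GPU speed-up from coherent work); as samples accumulate, the shared sequence still covers the sample space, so the image converges anyway.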
Hmm, if you share your .blend file online, people won’t know they need to activate the feature; they’ll think there is a bug.
Also, making an experimental mode (command line, Python-activated, etc.) that contains great additions may push people to always activate it by default, “just in case”…
It’s a great idea, but IMO it will only work in the short term, like experimental features that become non-experimental once they are polished.
CPU+GPU is so slow for optimized Cycles scenes anyway, ESPECIALLY if you are using denoising.
With a GTX 1080 + GTX 1070 Ti + Threadripper 1950X, I hardly ever save over 2 seconds on a render.
The Threadripper can keep up with the 1080 when it’s rendering by itself.
But adding it on top of the two GPUs doesn’t cut the render time to a third; it saves 0 - 6 seconds on ~2 - 5 minute renders.
Sometimes, if it’s something the GPU is really bad at, you will see a speed-up, but I try to keep our scenes super lightweight.
Brecht “fixed” small tiles on GPU a few months ago, but now those small tiles are slower again. 256x256 is still preferable on GPU, and 64x64 down to 16x16 on CPU.
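That rule of thumb can be written down as a tiny helper. This is just a sketch of the numbers from this thread, not anything official, and the function name is my own:

```python
def preferred_tile_size(device):
    # Rule of thumb from this thread: big tiles keep the GPU saturated,
    # small tiles keep all CPU threads busy for better load balancing.
    if device == "GPU":
        return (256, 256)
    # Anywhere in the 16x16 - 64x64 range works well on CPU.
    return (32, 32)
```

The underlying trade-off: a GPU runs thousands of threads and wants one big batch of pixels, while a CPU with a few dozen threads finishes faster when there are many small tiles to hand out.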
I’d just like to chime in as an inexperienced artist who is very interested in the aesthetic and workflow implications of the scrambling patch. I genuinely don’t understand why it can’t be included and classified as experimental - even if it lives there forever and there is no plan to develop it to a point of stability. I bow down before the devs, but I simultaneously question their ability to fully comprehend every use case… it’s just impossible. All these creative implications are being mapped out and explored by artists in real time. It would really be wonderful if the devs went against their gut instincts on this one - it could fundamentally broaden the impact that Blender can have.
What if I’m bad at optimizing scenes? I’m stunned that you’re able to get away with 256 samples per scene. Is that interiors or exteriors? When I rendered an interior archviz animation on multiple PCs a few weeks ago, I saw a difference of about 40 minutes per frame depending on whether I could use the GPU in addition to the CPU. But like I said, I’m bad at optimizing scenes. It’s one of the things I need to work on.
Is it something that needs to be maintained, though? Or is it more like revealing a variable in the Cycles parameters that will always exist henceforth? I thought it seemed like the latter. If it’s not, I understand better, but I do think the positive creative implications would be enough to at least allow it for now, especially if it is as simple as exposing the implementation already being used at a major production studio. I do confess to complete ignorance - I just hope the developers also understand where their awareness drops off, and don’t just hold in their minds the singular goal of photorealistic traditional renders. Even there, the fact that this technique is used in some of Blender’s most high-profile public work would seem to validate its traditional utility as well. It just seems like it is worth it.