Cycles Development Updates

Yeah, but my persistence paid off. My point being, I am not against new Cycles features, quite the contrary actually. It’s just that I agree with Lukas that scrambling is not a good direction for any renderer to go. There are far better and more accurate ways to speed up path tracing, such as caching secondary GI bounces. I believe you that you’ve been able to pull quite a few shots off with it, but it’s still not a solution one can truly rely on in every scene, and it’s a solution which requires the user to tweak somewhat cryptic parameters that balance bias against rendering speed.

It’s fine if such a thing is in some 3rd party build, but official builds need to consider usability too.

It’s a 0–1 slider; I don’t know if you can count that as cryptic. Official builds need to consider being replaced by actual software that does what the user needs… like Redshift and Maya.

Even if we save 5000 dollars a seat by using Blender, the render costs are higher.

Scrambling reduces the load on the GPU significantly, so your desktop and Blender run smoothly while you are previewing renders.

So that means no more Windows driver timeouts and a more stable Blender :stuck_out_tongue:

At this rate you can’t even compare what the Theory build does to daily Blender builds anymore, since it’s almost impossible to get our results with stock Blender.

@LordOdin: besides Scrambling Distance, what other features that are not yet included in master are you using in the studio for better render times?

Yeah, I’d also be interested in what else the Theory build offers in terms of rendering :slight_smile:

Regarding the “cryptic” remark: one slider is easy to use, but the way it impacts the scene, and the way its values relate to light transport stability, can come across as a bit cryptic.

Scrambling Distance and Dithered Sobol are enough to halve the samples on most scenes.
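For anyone trying this in a build that ships the patch, both options are plain scene properties. A minimal sketch, assuming the property names the later official Blender 3.0 implementation ended up using (older patched builds may name them differently):

```python
import bpy

cycles = bpy.context.scene.cycles

# Scrambling Distance: the 0-1 slider. Lower values correlate samples
# across nearby pixels, trading a little bias for faster convergence.
cycles.scrambling_distance = 0.5  # name from the 3.0-era implementation

# Patched builds pair scrambling with a dithered variant of the Sobol
# sampling pattern; the exact enum value is an assumption here.
# cycles.sampling_pattern = 'SOBOL'
```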

Persistent Data is a huge time saver when you don’t have animated meshes: compared to minutes of BVH building, the render just starts instantly.
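Persistent Data is already exposed in stock Blender’s Python API; enabling it for a render job is one line (a sketch — the setting keeps scene data such as the BVH alive between frames instead of rebuilding it):

```python
import bpy

# Keep images, meshes and the BVH in memory between rendered frames,
# so only the first frame pays the full scene-build cost.
bpy.context.scene.render.use_persistent_data = True
```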

We have a lot of in-house tools to manage assets and things like that… We had thousands of cars, all instanced, with motion on top of that; it was tricky.

The opening shot in this is entirely CGI… with post, of course.

Do you do motion blur in camera or in post?

In post. Baking it into the render is a bad idea.
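Doing blur in post means rendering per-pixel speed vectors and blurring in the compositor. A minimal sketch of that setup using Blender’s own Python API (node and pass names are the standard 2.8x ones; this is an illustration, not the studio’s actual pipeline):

```python
import bpy

scene = bpy.context.scene
bpy.context.view_layer.use_pass_vector = True  # render speed vectors

# Compositor chain: Render Layers -> Vector Blur -> Composite
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()
rl = tree.nodes.new('CompositorNodeRLayers')
blur = tree.nodes.new('CompositorNodeVecBlur')
comp = tree.nodes.new('CompositorNodeComposite')
tree.links.new(rl.outputs['Image'], blur.inputs['Image'])
tree.links.new(rl.outputs['Depth'], blur.inputs['Z'])
tree.links.new(rl.outputs['Vector'], blur.inputs['Speed'])
tree.links.new(blur.outputs['Image'], comp.inputs['Image'])
```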

In my very humble and unprofessional few tests with those two, I think they are quite safe in many scenes. Scrambling Distance at low values makes things really weird, but at high sample rates it magically converges to stable images. It would be nice to have builds that include those two features, but I also understand the arguments from developers.

By the way, isn’t the AO bounces trick reducing render time much more than Scrambling Distance?
https://blenderartists.org/t/brechts-easter-egg-surprise-modernizing-shading-and-rendering/505373/16479
https://blenderartists.org/t/brechts-easter-egg-surprise-modernizing-shading-and-rendering/505373/16488

Simplify AO does help significantly in some cases
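The AO bounces trick is part of stock Blender’s Simplify settings, so no custom build is needed. A sketch:

```python
import bpy

scene = bpy.context.scene
scene.render.use_simplify = True
# After this many bounces, replace full global illumination with a cheap
# AO approximation; higher values are more accurate but slower.
scene.cycles.ao_bounces_render = 2
```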

So are there any new builds available? I found a Theory build but it was from November and was missing features like CPU/GPU hybrid rendering.

Hmm, if you share your blend file online, people won’t know they need to activate the feature; they’ll think there is a bug.
Also, putting great additions behind an experimental mode (command line, Python-activated, etc.) may push people to always activate it by default “just in case”…
It’s a great idea, but IMO it will only work on a short-term basis, like experimental features that become non-experimental once they are polished.

Not currently

CPU+GPU is so slow for optimized Cycles scenes anyway, ESPECIALLY if you are using denoising.

GTX 1080 + GTX 1070 Ti + Threadripper 1950X: I hardly ever save over 2 seconds on a render.

The Threadripper can keep up with the 1080 when it’s rendering by itself.

But adding it on top of the 2 GPUs doesn’t cut the render time by a third; it saves 0–6 seconds on ~2–5 minute renders.

Sometimes, if it’s something the GPU is really bad at, you will see a speed-up, but I try to keep our scenes super lightweight.

Brecht “fixed” small tiles on GPU a few months ago, but now those small tiles are slower again. 256x256 is still preferable on GPU, and 64x64 down to 16x16 on CPU.
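A rough way to see why those sizes differ (a hypothetical helper, not Blender code): GPUs want a handful of big tiles to amortize per-tile launch overhead, while CPUs want enough small tiles to keep every thread busy and load-balanced.

```python
def tile_count(width, height, tile):
    """How many square tiles of the given size cover a frame."""
    tiles_x = -(-width // tile)  # ceiling division
    tiles_y = -(-height // tile)
    return tiles_x * tiles_y

# For a 1920x1080 frame:
print(tile_count(1920, 1080, 256))  # 40 tiles   -> few launches, good for GPU
print(tile_count(1920, 1080, 64))   # 510 tiles  -> plenty of work items for CPU threads
print(tile_count(1920, 1080, 16))   # 8160 tiles
```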

I’d just like to chime in as an inexperienced artist who is very interested in the aesthetic and workflow implications of the scrambling patch. I genuinely don’t understand why it can’t be included and classified as experimental, even if it lives there forever and there is no plan to develop it to a point of stability. I bow down before the devs, but I simultaneously question their ability to fully comprehend every use case; it’s just impossible. All these creative implications are being mapped out and explored by the artists in real time. It would really be wonderful if the devs betrayed their gut instincts on this one — it could fundamentally broaden the impact that Blender can have.

Well, mostly because if they include it they have to:

a) deal with dumb bug reports,
b) maintain the code,
and/or
c) rip it out later and annoy people

What if I’m bad at optimizing scenes? I’m stunned that you’re able to get away with 256 samples per scene. Is that interiors or exteriors? When I rendered an interior archviz animation on multiple PCs a few weeks ago, I saw a difference of about 40 minutes per frame depending on whether I could use the GPU in addition to the CPU. But like I said, I’m bad at optimizing scenes. It’s one of the things I need to work on.

Is it something that needs to be maintained, though? Or is it more like revealing a variable in Cycles’ parameters that will always exist henceforth? I thought it seemed like the latter. If it’s not, I understand better, but I do think the positive creative implications would be enough to at least allow it for now, especially if it were something as simple as allowing the implementation already used at a major production studio. I do confess to complete ignorance; I just pray the developers also understand where their awareness drops off, and don’t just hold in mind the singular goal of photorealistic traditional renders. Even there, though, the fact that this technique is being used in some of Blender’s most high-profile public uses would seem to validate its traditional utility as well? It just seems like it is worth it.

It might be because you have a lower-tier GPU and a decent CPU? What are your specs?

It was on two of the work computers I use. One has dual Xeons and a GTX 1070, the other I believe just has dual Xeons.

Dual Xeons can be slower than an 8-thread modern i7; you’ve gotta tell me which Xeons specifically.

I don’t know, but I believe they were older ones bought at lower cost.