@shteeve as a confirmation of what @DeepBlender said, here is the code if you want to read it yourself.
```cpp
/* Devices that can get their tiles stolen don't steal tiles themselves.
 * Additionally, if there are no stealable tiles in flight, give up here. */
if (tile_device->info.type == DEVICE_CPU || stealable_tiles == 0) {
  return false;
}
```
Double nope here: first, you would lose time rendering the low resolution; second, there might be tiny problem areas that make a tile "difficult" but disappear at a lower resolution.
I haven't tried it directly, but I have been following the thread with a microscope. There are definite improvements in almost every render, more noticeable in scenes with strongly tinted lights, and you have to define the color of your BSDFs in absorption terms; they have made a new spectrum curves node just for this, and it's wonderful. I seriously dig this approach, as it seems the natural evolution of rendering in general, I mean getting closer and closer to light physics. And the images look SO much more natural, it's stunning I think. Same diffraction effect that you get with the chromatic aberration node, but accurate… hence believable!
Going from a lower resolution to a higher one scales the difficult areas in proportion to the image, so they don't represent added difficulty in terms of the number of rays they require to converge. I mean, you can just multiply the render time by four if you multiply the pixel count by four (not including microdicing, which, being done in pixel space, probably adds some more time to the total).
What I mean is the difficult areas don't disappear, they just scale down, except when you get to a scale where Monte Carlo sampling is no longer representative: that means an object under one pixel in size receiving fewer hits than the total sample count. But I guess this is possible with a high-frequency material, or very small objects scattered with very different materials, that would cross the ~1 px threshold when scaling up the render resolution.
I meant that, depending on the scale (how much smaller would the preview be? not too big, I guess), details can totally disappear.
As in my old RCS, I think the most versatile solution would be to have a "first pass" of very low sampling (one sample? ten? a percentage of the total?). You would get a relatively quick preview (useful also to catch trivial mistakes like forgotten objects, turned-off lights, collections, etc.), Cycles could evaluate how difficult each tile is, and it could also accurately estimate the final render time. Not only that: Cycles is actually able to keep the data from those first samples and continue the calculation from there!
The idea could be expanded further, e.g. saving the animation every X samples, to use for preliminary video editing.
Yeah, you're right, I actually ended up there in my rambling above. High-frequency details have got to be the right size to get squashed in the smaller image, but that's quite likely: glints, etc.
Yeah, spectral wavelength rendering was introduced by Maxwell Render in the mid-2000s, and Maxwell is still one of the most realistic renderers I've ever used.
Having said that, I don't want to lift spectral rendering to a holy grail: I've recently tried Octane, which is a full spectral renderer, and I noticed no significant visual advantages compared to LuxCoreRender, which is an RGB renderer.
I guess spectral advantages become particularly visible when using specific effects, such as dispersion.
I don't think they've yet made the necessary changes to closures so that they do diffraction (or thin-film interference…). However, seeing materials react to tinted lights was enough to convince me. You do well to mention there's more to it! Anything immersed in a volume (a body of water, say) will also change naturally according to the specific absorption of that medium; there are some "underwater" images in the other thread you can check out!
For what it's worth, my RCS got only 6 votes, and I can't wrap my head around why! It seems so easy and effective to me; maybe I just didn't translate it well enough into plain English? What would be the cons?
Tile rendering speeds are not the same. E.g., a tile with volumetrics takes longer than a black background tile with nothing heavy to calculate.
The problem here is that you first need to sample all tiles with 10 samples for the calculation. That initialization takes time before further tile rendering can continue.
Upvoted for good measure, but there are a few downsides to this proposal that are worth considering.
As mentioned before, this would add a period where no tiles are being sampled and most of the compute devices are idle. Naturally, this would increase render time.
Secondly, storing a copy of the complete image (including all passes) would be necessary, adding to RAM usage. This could be somewhat mitigated with disk caching, but then you have to deal with additional latency when resuming a tile.
This is compounded by passes that want or need more detailed information on the tile being rendered, such as Cryptomatte passes.
About bucket stealing: is it available for testing in the latest 2.92 Alpha?
If so, do I have to set something to enable it?
I've downloaded the daily Alpha and, at the moment, I've set the Feature Set to Experimental and the Device to GPU Compute, but I can't see the CPU buckets… just the GPU ones.
What am I doing wrong?