Cycles Development Updates

@shteeve as a confirmation of what @DeepBlender said, here is the code if you want to read it yourself.

  /* Devices that can get their tiles stolen don't steal tiles themselves.
   * Additionally, if there are no stealable tiles in flight, give up here. */
  if (tile_device->info.type == DEVICE_CPU || stealable_tiles == 0) {
    return false;
  }

Check this thread: Idea for increasing hybrid render speed - #2 by lsscpp

@Kologe
Ah! This reminds me of this old RCS of mine…


Double nope here: first, you would lose time rendering the low resolution; second, there might be tiny difficult areas that make a tile "difficult" but that disappear at a lower resolution.

I haven't tried it directly, but I have been following the thread with a microscope. There are definite improvements in almost every render, more noticeable in scenes with strongly tinted lights, and you have to define the color of your BSDFs in absorption terms; they have made a new spectrum curves node just for this, it's wonderful. I seriously dig this approach, as it seems the natural evolution of rendering in general, I mean getting closer and closer to light physics. And the images look SO much more natural, it's stunning I think. Same diffraction effect that you get with the chromatic aberration node, but accurate… hence believable!


Since going from a lower resolution to a higher one scales the difficult areas in proportion to the image, they don't represent added difficulty in terms of the number of rays they require to converge. I mean, you can just multiply the render time by four if you multiply the pixel count by four (not counting microdicing, which, being done in pixel space, probably adds some more time to the total).

What I mean is, the difficult areas don't disappear, they just scale down, unless you get to a scale where Monte Carlo sampling is no longer representative, i.e. an object smaller than one pixel that receives fewer than the total samples. But I guess this is possible with a high-frequency material, or very small objects scattered with very different materials, which would cross the ~1 px threshold when scaling up the render resolution.
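The proportionality above can be sketched with a toy estimate (hypothetical numbers, assuming render time is roughly linear in pixel count at a fixed sample count, which ignores microdicing and other per-resolution costs):

```python
def estimated_render_time(base_time_s, base_pixels, target_pixels):
    """Naive estimate: at a fixed samples-per-pixel count, render time
    scales roughly linearly with the number of pixels."""
    return base_time_s * (target_pixels / base_pixels)

# A 960x540 preview that took 30 s suggests roughly 4x that at 1920x1080,
# since the full frame has four times the pixels.
print(estimated_render_time(30.0, 960 * 540, 1920 * 1080))  # -> 120.0
```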

Aaaaaanyway

I meant, depending on the scale (how much smaller would the preview be? Not by too much, I guess), details can totally disappear.
As in my old RCS, I think the most versatile solution would be to have a "first pass" of very low sampling (one sample? ten? a percentage of the total?). You would get a relatively quick preview (also useful for catching trivial mistakes like forgotten objects, turned-off lights, collections, etc.), Cycles could evaluate how difficult each tile is, and it could also estimate the final render time quite precisely. Not only that: Cycles is actually able to keep the data from those first samples and continue the calculation from there!
The idea could be expanded further, e.g. saving an animation every X samples, to use for preliminary video editing.
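A minimal sketch of that estimation idea (all names and numbers here are hypothetical, not Cycles code): time each tile over a low-sample first pass, then extrapolate linearly to the target sample count, assuming cost per sample stays roughly constant.

```python
def estimate_total_time(first_pass_times, first_pass_samples, target_samples):
    """Extrapolate per-tile render times from a low-sample first pass.

    first_pass_times: seconds each tile took for `first_pass_samples` samples.
    Returns (per-tile estimates, total estimate) for `target_samples`.
    """
    scale = target_samples / first_pass_samples
    per_tile = [t * scale for t in first_pass_times]
    return per_tile, sum(per_tile)

# Three tiles timed at 10 samples each: the slow (e.g. volumetric) tile
# clearly dominates the projected total at 1000 samples.
per_tile, total = estimate_total_time([1.5, 20.0, 4.0], 10, 1000)
print(per_tile)  # -> [150.0, 2000.0, 400.0]
print(total)     # -> 2550.0
```

This also gives the "difficulty map" for free: the tiles with the largest first-pass times are the ones worth scheduling on the fastest device.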


Yeah, you're right, I actually ended up there in my rambling above. High-frequency details have got to be just the right size to get squashed in the smaller image, but that's quite likely: glints, etc.


Just out of curiosity I checked my suggestions log on RCS and I suggested the different bucket sizes back in May '19. It only got 7 upvotes though.


Upvoted. :slightly_smiling_face::+1:


Sounds good, thanks!

Yeah, spectral wavelength rendering was introduced by Maxwell Render in the mid-2000s, and Maxwell is still one of the most realistic renderers I've ever used.

Having said that, I don't want to lift spectral rendering to a holy grail, as I've recently tried Octane, which is a full spectral renderer, and I noticed no significant visual advantages compared to LuxCoreRender, which is an RGB spectrum renderer.

I guess spectral advantages become particularly visible when using specific effects, such as dispersion.


I don't think they've yet made the necessary changes to closures so that they do diffraction (or thin-film interference…). However, seeing materials react to tinted lights was enough to convince me. You do well to mention there's more to it! Anything immersed in a volume (a body of water, say) will also change naturally according to the specific absorption of that medium; there are some "underwater" images in the other thread you can check out!
:smiley:


For what it's worth, my RCS got only 6 votes, and I can't wrap my head around why! It seems so easy and effective to me; maybe I just didn't translate it well enough into plain English? What would be the cons?

The tiles' rendering speeds are not the same, i.e. a tile with volumetrics takes much longer than a black background tile with nothing heavy to calculate.
The problem here is that you would need to sample all tiles with 10 samples for the estimation first. That initialization takes time before the actual tile rendering can continue.

Upvoted! :smiley::+1:


I didn't understand the point

If I understand your idea right, then you want to calculate 10 samples of each tile to estimate the rendering speed?

Upvoted for good measure, but there are a few downsides to this proposal that are worth considering.

As mentioned before, this would add a time period where no tiles are being sampled and most of the compute devices are idle. Naturally, this would increase render time.
Secondly, storing a copy of the complete image (including all passes) would be necessary, adding to RAM usage. This could be somewhat mitigated with disk caching, but then you have to deal with additional latency when resuming a tile.
This is compounded by passes that want or need more detailed information on the tile being rendered, such as Cryptomatte passes.
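For a rough sense of that memory cost (hypothetical numbers, assuming each pass is stored as a full-frame float32 buffer): one pass costs width x height x channels x 4 bytes, and every kept pass multiplies that.

```python
def pass_buffer_bytes(width, height, channels=4, bytes_per_channel=4):
    """Size in bytes of one full-frame render pass stored as float32 RGBA."""
    return width * height * channels * bytes_per_channel

# A 3840x2160 frame with 10 float32 RGBA passes held for resuming:
total = 10 * pass_buffer_bytes(3840, 2160)
print(total / 2**20)  # -> 1265.625 (MiB)
```

Passes like Cryptomatte store more than one layer per pass, so the real figure only grows from there.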

Yes, with the added benefit of a fast preview. Cycles would then be able to continue from there, so no samples would be wasted.

About bucket stealing: is it available for testing in the latest 2.92 alpha?

If yes, do I have to set something to enable it?
I've downloaded the daily alpha and, at the moment, I've set the feature set to Experimental and the device to GPU Compute, but I can't see the CPU buckets… just GPU.
Where am I going wrong?

Yes, it is in the alpha.
You must turn on both CPU and GPU rendering in the system preferences.
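If you prefer to do that from Python rather than the preferences UI, something like the following should work (a sketch against Blender's bpy API; the available backend names and preference layout can vary between versions and hardware):

```python
import bpy  # only available inside Blender

# Cycles stores its compute devices in the add-on preferences.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"  # or "OPTIX" / "OPENCL", per your GPU
prefs.get_devices()                 # refresh the detected device list

# Enable every detected device, CPU included, so hybrid rendering uses both.
for device in prefs.devices:
    device.use = True

# The scene itself must also be set to GPU Compute.
bpy.context.scene.cycles.device = "GPU"
```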