Cycles Development Updates

To me, that means the sunlit interior scene should see more than a 1% gain. There should be direct light from the sky as well. If you need 10,000 samples to clear the dark corner (indirect light only), sampling should stop at 1,000-2,000 for the rest of the scene, which doesn't require that many. Today I would have to use 10,000 across the whole scene, live with the noise, or remove the sun (preferred :p).

Isn’t that what adaptive means?

Totally, and I bet it would work much better with more samples; I was lazy and didn't commit to enough samples to make it work. I highly recommend throwing some of your scenes at it and seeing how it fares. In some of my scenes, I got an easy 30% speed-up for the same level of noise.

As some have already stated, adaptive sampling is not primarily intended to speed up render times.
Adaptive sampling stops sampling areas of the scene which have reached the set noise threshold.

If no part of the scene reaches this threshold, there will be no speed gain. On the contrary, it will be a bit slower than uniform sampling because of the additional noise-estimation calculations during sampling.
All it does is stop sampling converged parts, thus allowing the remaining samples to be distributed to the more difficult, still-unconverged parts of the scene.

One advantage is that the final noise is more evenly distributed throughout the scene, helping some denoising algorithms.
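The stopping rule described above can be sketched with a toy Monte Carlo model in Python (illustrative only, not Cycles code; here the per-pixel noise estimate is simply the standard error of the running mean, and all names are made up):

```python
import random

def render_pixel(true_value, variance, threshold, min_samples, max_samples, rng):
    """Toy adaptive sampler: draw samples until the standard error of the
    pixel's running mean drops below `threshold`, or the budget runs out."""
    total = total_sq = 0.0
    for n in range(1, max_samples + 1):
        s = rng.gauss(true_value, variance ** 0.5)
        total += s
        total_sq += s * s
        if n >= min_samples:
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            if (var / n) ** 0.5 < threshold:  # noise estimate below threshold
                return n                      # converged: stop sampling early
    return max_samples                        # never converged: full budget used

rng = random.Random(0)
easy = render_pixel(0.5, 0.01, 0.005, 4, 4096, rng)  # flat, well-lit area
hard = render_pixel(0.5, 1.00, 0.005, 4, 4096, rng)  # noisy indirect corner
# the easy pixel typically stops after a few hundred samples,
# while the hard one burns through the whole budget
```

This is exactly the "no speed gain if nothing converges" point: with a high enough variance everywhere, every pixel runs to `max_samples` and you pay for the noise estimation on top.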

Yes, it’s a way to prevent unnecessary sampling of pixels that have already converged, but there is a side effect when sample counts are high: much faster rendering.

5-10x slower with deformation motion blur is not unusual, and that is why I wrote the Embree integration. It’s in master, but not enabled in the official builds. You’ll need to either compile it yourself or find builds with Embree, which I’m sure are floating around somewhere (GraphicAll, maybe).
Another option is increasing “BVH Time Steps” in a regular Blender build, but that can come with a significant increase in memory usage.

No. Cycles isn’t psychic; it still has to find those easy-to-miss light paths by brute force. This adaptive sampling patch does not change where samples go, it only changes the number of samples per pixel.

This CPU+GPU tiling artifact is exactly what I expected and mentioned in the patch tracker. When you go to higher quality settings (lower threshold, more samples), it should eventually disappear.

Tackling this properly will take a little work: either we put a damper on GPU performance by forcing it to render in the same small increments as the CPU (as opposed to crunching a large number of samples at once), or multi-device setups will need to communicate their capabilities to each other and agree on a common number of samples to render at a time.
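One way to picture that negotiation (purely hypothetical, not code from the patch): each device reports its preferred sample-batch size, and everyone advances in a step that divides all of them, so the devices reach comparable sample counts for the convergence check.

```python
from functools import reduce
from math import gcd

def common_sample_step(preferred_steps):
    """Greatest common divisor of the batch sizes each device prefers."""
    return reduce(gcd, preferred_steps)

print(common_sample_step([64, 1]))   # GPU + CPU -> 1: the GPU is held back
print(common_sample_step([64, 32]))  # two GPUs -> 32: a reasonable compromise
```

The first case is the performance damper mentioned above: a CPU stepping one sample at a time forces the GPU into the same tiny increments.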

@J_the_Ninja this is probably the same thing you’re seeing, my guess is that those are different GPUs or only some of them are connected to a display?


Thank you for your reply Stefan!
That’s great news! I was afraid Blender was already using your code and there was no chance of better performance!
I’ll look into the custom build… any chance the devs will enable the Embree optimizations in official Blender builds soon?


Well, it’s working as expected. Adaptive sampling means it will stop sampling early when a pixel has converged beyond the threshold. In this case, there’s still plenty of noise everywhere, so everything receives plenty of samples.

You could just set the sample count to something like 16384, adaptive min samples to the smallest sample count you can tolerate (4? 64?) and then use the adaptive threshold to select the desired noise level. The closer to 0, the more noise is reduced.

For this example, Adaptive Min Samples was set to 4, (max) samples were 8192 and only Adaptive Threshold was changed:
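For reference, that recipe can also be applied from a script. This sketch assumes the property names adaptive sampling eventually shipped with in official Blender releases (2.83+), which may differ from the patch build discussed in this thread:

```python
import bpy

cycles = bpy.context.scene.cycles
cycles.samples = 8192                # generous max budget; adaptive stops early
cycles.use_adaptive_sampling = True
cycles.adaptive_min_samples = 4      # floor before convergence is checked
cycles.adaptive_threshold = 0.01     # lower = less noise, longer render
```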


Yes. In fact, adaptive sampling, as it is implemented right now, will at a given sample count at best give you exactly as much noise as uniform sampling, never less. In most cases, however, it will do so in significantly less time.

Which means that you can increase the sample count appropriately and in the end get less noise at equal render time.
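A back-of-the-envelope way to see that (toy model: Monte Carlo noise scales as 1/sqrt(N); the function name and the 30% figure from earlier in the thread are just for illustration):

```python
def noise_after_reinvesting(speedup_fraction):
    """Relative noise if the time saved by adaptive sampling is spent
    on extra samples instead, at the same total render time."""
    extra_budget = 1.0 / (1.0 - speedup_fraction)  # 30% saved -> ~1.43x samples
    return 1.0 / extra_budget ** 0.5               # noise ~ 1/sqrt(samples)

print(round(noise_after_reinvesting(0.30), 3))  # 0.837: roughly 16% less noise
```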


Thanks, that clarifies things a lot.

Yes on both counts. Thanks!

Can any of you reproduce this error with the BMW27.blend scene using an NVIDIA GPU?

CUDA error: Illegal address in cuCtxSynchronize(), line 1784

I am on Linux. Using these configurations:
Tile size: 32x32
Samples (not square samples): 16384
Adaptive Min samples: 4
Adaptive Threshold: 0.001

Hi, and yes, I can verify the render stops after a few tiles with the same error message.
I had to roll back to hash 22bc9fb4a9e to get the Adaptive Sampling patch to work; that was from around April 2nd.
I also have the OIDN patch enabled.
Other files are working fine.

Opensuse Leap 42.2 x86_64
Intel i5 3570K 16 GB RAM
GTX 760 4 GB /Display card
Driver 418.56
Vivaldi 2.5.1511.4 (Official Build) (64-bit)

Cheers, mib
EDIT: I had problems with 2.7 files, but those were exploding RAM usage or particle systems, not CUDA errors.


My try with adaptive sampling:
AS on, 18 min 15 sec, 20k samples, threshold 0.9:

AS off, 18 min 15 sec, 20k samples, threshold 0.9:

I see no difference in time or noise level.

Hi, your threshold setting is very high; try 0 for automatic, or 0.05.
Since your test file has more than 50% “blank” areas, it should be very well suited to AS.
If you post the .blend I can test.
Will add some example images later on.

Cheers, mib
EDIT: Struggling with the settings at the moment; the noise is the same with 0.5 and 0.01?
Doing more tests now.
This is my test .blend:
Credits to:

Giving up for now; E-Cycles needs 25 seconds anyway. :wink:

Cheers, mib

I’m not really into 2.8 yet, so I decided to backport the adaptive sampling patch from master to the latest 2.79 source, in case people want to use it outside of Blender 2.8:

Diff: adaptive_sample_279.diff (56.4 KB)


Here’s a build of the latest 2.79 source for Windows with the OIIO AI denoise & SKW adaptive sampling patches. Enjoy: !x1pwFCLA!_MYk2hAtSO5ByXoHvDphnH4fWFS3GGAAuQSpmvEo4Ns