Cycles Development Updates

A clarification for everyone: you probably won't get much better render times with OptiX on non-RTX cards. The interesting thing is that the OptiX denoiser (for final renders and the viewport) can also be used when rendering with CUDA or CPU. So in the System preferences you probably want to stay on CUDA for the GPU, and you can still use the denoiser.

Edit:
When I say that it is not necessary to choose OptiX in the preferences to use the OptiX denoiser, and that it may be convenient to stay with CUDA, this is mainly because with CUDA we can render with GPU+CPU.
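In case it helps, here is a minimal Python sketch of that setup (staying on CUDA while still rendering on the GPU); the property paths are the ones I see in current 2.8x/2.9x builds, so double-check them against your own version:

```python
import bpy

# Cycles add-on preferences: choose the compute backend.
# Staying on CUDA keeps GPU+CPU rendering available, and the OptiX
# denoiser can still be enabled separately in the render settings.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = 'CUDA'

# Refresh the detected devices and enable them.
prefs.get_devices()
for device in prefs.devices:
    device.use = True  # enable the CPU entries too for GPU+CPU rendering

# Use the GPU device for the current scene.
bpy.context.scene.cycles.device = 'GPU'
```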

1 Like

Whatever floats your boat, even if it is not RTX based.

Luxcore 2.4 with viewport denoiser is killer.

I already denoise renderings that take only 1 second.

The workflow now is mind-blowing. Lux was always too slow; LuxCore was a great improvement.

With the denoiser, this slower engine quickly became the default render engine for archviz work.

Is there anything coming soon that can identify which Blender version a file was created with, and then decide whether to open it or use an older Blender version instead?

I have many older Blender files made with older versions like 2.79, and I want to keep them like that!
I don't really want to update the files to a higher version, at the risk of losing some effects or changing the render look.

thanks
happy bl

yeah, naming conventions :wink:

That information is saved with the blend file. You can access it in the Outliner, in the Blend File view, under Version.
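If you want to check it without opening the file in Blender at all, the version is also stored in the .blend header itself: the file starts with "BLENDER", a pointer-size byte and an endianness byte, then three characters such as 279 or 290 (gzip-compressed files need decompressing first). A rough standalone sketch, not an existing add-on:

```python
import gzip

def blend_version(path):
    """Return the Blender version a .blend file was saved with, e.g. (2, 79)."""
    opener = open
    with open(path, "rb") as f:
        if f.read(2) == b"\x1f\x8b":  # gzip magic: file was saved with compression enabled
            opener = gzip.open
    with opener(path, "rb") as f:
        header = f.read(12)           # e.g. b"BLENDER-v279"
    if not header.startswith(b"BLENDER"):
        raise ValueError(f"{path} does not look like a .blend file")
    version = header[9:12].decode()   # bytes 9-11 hold the version digits, e.g. "279"
    return int(version[0]), int(version[1:])

print(blend_version("old_scene.blend"))  # hypothetical path -> (2, 79)
```

Whether that can be wired into the normal File > Open flow as an add-on is a separate question, but the check itself is this simple.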

I checked the blend file data and did not see the Blender version!

Is there some add-on that can decide whether or not to open a file depending on the Blender version it was created with?

thanks
happy bl

I have downloaded the latest 2.90 build (built on the 7th), but I'm still not getting the viewport denoising option in the Sampling tab. Is there a lower limit on which Maxwell cards this will work with?

I haven't tried it yet. I know it's possible because it works in the Bone Studio builds, but maybe this commit means something else.

I can confirm that with today's 2.90 build (aed11c673efe), the 3D Viewport realtime denoising is working with my Nvidia GTX 1070. It has to be enabled in the render settings, under Sampling.

Yep, I have the same build. The viewport denoise option isn't showing for my card (980 Ti), using either supported or experimental features.


It works on my GTX 960, so it should work on your card (experimental features are not required). Any chance that you accidentally downloaded 2.83 beta instead of 2.90? What is your operating system?
Edit:
Oh, you said you have the aed11c673efe build. Weird that the OptiX denoising options do not appear…

Edit 2:
The only thing I can think of is that you may have old Nvidia drivers installed.

I did try updating drivers - but it said there were no new ones available. I’ll try doing it manually.

I’m on Windows 10.

Yes, try downloading the latest drivers from the Nvidia site and doing a clean install (there is an option for it in the installer, but I don't remember the exact name).

Wow, I just tried OptiX on my GTX 1080 and it's amazing. I can't believe it, I have a real-time preview in the viewport. Really, Eevee is useless now :grinning:
Also, render times with OptiX have decreased by around 10-12%!

1 Like

I have successfully compiled the latest 2.90 alpha build from git with OptiX and CUDA (I enabled OptiX in the CMake GUI). The OptiX viewport denoiser works with my old GTX 970 (driver 446.14).
However, if I load older files/materials, it only shows “Cancel”. I have read that the AO and Bevel shaders are not supported with OptiX yet, but it also happens with simpler glossy materials.

Does anyone know what the reason for this “Cancel” could be, or what exactly is not supported, so I can sort things out?

Yep - that’s what it was - option is now showing. Thanks.

1 Like

Current disadvantages with OptiX are:

- GPU+CPU rendering is not possible.
- Not all features are supported (like the Bevel and AO nodes). But this should only be a problem when rendering with OptiX, not for the OptiX denoiser when rendering with CUDA.

We had asked about it but got no answer from the developers.

@pixelgrip, Any simple scene that you can share?

1 Like

Sure, I deleted unused nodes to keep the file small, and packed a test file. Then I reloaded the test file to see if it loads, and surprise, the OptiX denoiser worked. There was an unconnected AO shader in the material, maybe that was the reason? I have to look at the other materials.

If I find a file that is not working, I will upload it. Thanks.
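If it helps to track this down, here is a rough diagnostic sketch that lists every material containing an AO or Bevel node (the two nodes mentioned as unsupported) and whether the node is actually connected; it is only a scan of the node trees, not an official list of what OptiX supports:

```python
import bpy

# bl_idname values of the two shader nodes reported as problematic with OptiX.
SUSPECT_NODES = {"ShaderNodeAmbientOcclusion", "ShaderNodeBevel"}

for mat in bpy.data.materials:
    if not mat.use_nodes or mat.node_tree is None:
        continue
    for node in mat.node_tree.nodes:
        if node.bl_idname in SUSPECT_NODES:
            connected = any(output.links for output in node.outputs)
            state = "connected" if connected else "unconnected"
            print(f"{mat.name}: {node.bl_idname} ({state})")
```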

I have just done some testing - and that appears to be the case with the 980ti too.

BMW scene rendered at 800x600 - 80x60 tiles, adaptive sampling set to 0.01

CUDA (GPU only) = 29.24 seconds
OptiX (GPU only) = 21.13 seconds (roughly 27% less render time)

CUDA (CPU + GPU) = 19.60 seconds

So it appears that OptiX (with these settings) speeds up GPU-only rendering to be almost comparable to using the GPU and CPU combined.

Tested rendering on a single 800x600 tile:

CUDA = 13.19 seconds
OptiX = 13.41 seconds

In this case, CUDA was faster, so it appears that OptiX only gives a speedup when rendering with multiple tiles.

Tried an intermediate case: 400x300 tiles (4 in total).

CUDA = 21.44 seconds.
OptiX = 14.11 seconds.

Again, OptiX is faster, seemingly confirming the “multiple tiles” theory.

Final test: turning off adaptive sampling, same tile size as the previous test. The overall number of samples is 400.

CUDA = 37.06 seconds.
OptiX = 35.30 seconds.

This one is curious, as I'm not seeing anywhere near as significant a speedup as I get with adaptive sampling active. I still get a slight speed boost, though.
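For anyone who wants to reproduce these numbers, this is roughly how the comparison can be scripted. The blend file path is just a placeholder, and `--cycles-device` is the Cycles command-line device override (if your build does not accept it, set the device in the preferences instead); timings taken this way include startup and scene load, so they will differ slightly from the render times Blender reports:

```python
import subprocess
import time

BLEND_FILE = "bmw27_gpu.blend"  # placeholder: path to the benchmark scene
DEVICES = ["CUDA", "OPTIX"]

for device in DEVICES:
    start = time.perf_counter()
    # Background render of frame 1, with the compute device overridden.
    subprocess.run(
        ["blender", "-b", BLEND_FILE, "-f", "1", "--", "--cycles-device", device],
        check=True,
        stdout=subprocess.DEVNULL,
    )
    elapsed = time.perf_counter() - start
    print(f"{device}: {elapsed:.2f} s (includes startup and scene load)")
```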

1 Like

I have just graphed up the results above (along with a couple more to fill in the blanks).

It appears that OptiX rendering is far less affected by reducing tile size than CUDA is when adaptive sampling is used. Render times for OptiX climb slowly and linearly as the tile size decreases, whereas with CUDA there is a huge initial speed penalty for reducing the tile size.

[graph: render time vs. tile size, with adaptive sampling]

When not using adaptive sampling, the data looks like this:

[graph: render time vs. tile size, without adaptive sampling]

In conclusion, it appears that if you can render using a single tile, CUDA is probably the better option and gives the fastest overall render time. It gives slightly better render times than OptiX, whether adaptive sampling is used or not.

If rendering with more than one tile, OptiX gives a huge advantage over CUDA for both normal and adaptive sampling. This advantage diminishes for large numbers of tiles, so it's best to keep the tile count low. In my tests, between 8 and 16 tiles gave OptiX the biggest advantage over CUDA.
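Since the posts above switch between tile size and tile count, here is the trivial conversion used to read the numbers (ceiling division, because the edge tiles can be smaller than the rest):

```python
from math import ceil

def tile_count(res_x, res_y, tile_x, tile_y):
    """Number of tiles Cycles splits a res_x * res_y render into."""
    return ceil(res_x / tile_x) * ceil(res_y / tile_y)

# The 800x600 test cases from this thread:
print(tile_count(800, 600, 800, 600))  # 1 tile   (single-tile test)
print(tile_count(800, 600, 400, 300))  # 4 tiles  (intermediate test)
print(tile_count(800, 600, 80, 60))    # 100 tiles (first test)
```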

5 Likes