Much worse Cycles performance with RTX cards

The devs aren’t even interested in fixing it.

Stop using progressive refine; at the very least, there’s no reason to complain about or bring up its speed right now.

Tiled rendering is also affected. And if you read it carefully, you can see there was already another slowdown because of reflective caustics. But let the devs decide which cases to care about.

I’m not complaining because I’m unhappy with the speed of Cycles with progressive refine. I’m complaining because it has gotten much slower since 2.79b. There’s a difference.

Yeah, turning off caustics does speed things up, but it also takes away a good amount of brightness, especially in this scene, where the light is coming from outside the room.

Turning off caustics sped things up in Cycles in 2.79 too; it’s not a new thing.

More data because I just got a new machine up and running at work:

light_bounces2 (16x16 tiled, caustics off), Win10, GTX 1080 Ti SC: 2m 18s
light_bounces2 (16x16 tiled, caustics off), Win10, RTX 2070: 3m 8s

The 2070 has about 30% fewer CUDA cores than the 1080 Ti, and I see about a 30% difference in speed. The simple analysis seems fine to me, even without taking memory speed into account, if that’s a factor.
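For what it’s worth, the back-of-envelope works out almost exactly. A quick check in Python (the core counts are the published specs, not numbers from this thread):

```python
# Render times from the table above.
t_1080ti = 2 * 60 + 18   # 138 s on the GTX 1080 Ti
t_2070 = 3 * 60 + 8      # 188 s on the RTX 2070

# Published CUDA core counts (assumed spec sheet values,
# not taken from this thread).
cores_1080ti = 3584
cores_2070 = 2304

print(f"fewer cores: {1 - cores_2070 / cores_1080ti:.0%}")  # ~36%
print(f"slower:      {t_2070 / t_1080ti - 1:.0%}")          # ~36%
```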

I gave that scene a quick spin in the profiling tools, and it does indicate that my RTX card is spending most of its time idling: the CUDA cores are waiting for instructions to be loaded from memory. In other words, that scene is too simple for Cycles’ code, and the GPU is slow at skipping over unused code.

Nvidia writes that their GPUs have instruction caches but does not disclose any details. It may be that those changed between the 10x0 and the 20x0 cards, and that the 10x0 series was simply much better at dealing with Cycles’ megakernel.

Either way, a simple workaround is available if you are rendering on Linux. Make sure you have the CUDA SDK installed, then enable the Debug panel and check “Adaptive Compile” under the CUDA flags. Blender will then compile a Cycles kernel on demand with only the features the scene is actually using. On my RTX 4000, this brings the render time for that scene from over 6 minutes down to 1 minute 45 seconds.
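If you’d rather flip the same switches from the Python console, something like this should do it (a sketch; the property name is from the 2.8x Cycles add-on and may differ in other versions):

```python
import bpy

# Reveal the hidden Cycles debug panel; in 2.8x the usual trick is
# setting the app debug value to 256.
bpy.app.debug_value = 256

# Have Cycles compile a kernel on demand with only the features the
# current scene uses. Requires the CUDA toolkit to be installed.
bpy.context.scene.cycles.debug_use_cuda_adaptive_compile = True
```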


Thank you for your detailed answer, Stefan! If I understand you correctly, the problem is not specific to RTX but rather a general one.

  • Can I ask when a fix is expected?
  • Since you’re a developer, do I still need to report this bug now that you’ve seen it?

The workaround you gave is very promising, thank you for that. I’ll install a fresh Ubuntu with the CUDA SDK as soon as I can.

I can’t say anything about that; I don’t know if or when someone will work on it.

Yes, please. It’s good to have a ticket in the system.

Here is a screenshot of the render preview. Yes, it’s madness.

But the final rendered image looks very different compared to the render in the first post.

What caused this?

Caustics missing? Clamping enabled? Simplify enabled?

Can you try changing the tile size and testing again? It seems to me Blender 2.8 favors smaller tile sizes. For example, when I render BMW27 in Blender 2.8 on an RX 580 with the tile size set to 256x256, the render takes over 5 minutes; when I switch to a smaller tile size like 32x32, the time drops by more than half, to around 2 minutes.
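If you want to sweep a few tile sizes without clicking through the UI each time, a small script along these lines can do it (a sketch; tile_x/tile_y are the 2.7x/2.8x property names and were removed in later versions):

```python
import time
import bpy

scene = bpy.context.scene

# Render the current scene at a few tile sizes and print the timings.
for size in (256, 64, 32, 16):
    scene.render.tile_x = size
    scene.render.tile_y = size
    start = time.time()
    bpy.ops.render.render(write_still=False)
    print(f"{size}x{size}: {time.time() - start:.1f} s")
```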


Good point. Smaller tile sizes have been preferable since the addition of hybrid rendering and denoising.

Oh, and welcome to BA. :wink:

I actually found smaller tile sizes in general work better with GPU-only rendering too.

I think 2.80 is not very stable yet.

I bought a Zotac RTX 2070 AMP Extreme Core version yesterday and installed the GeForce 430.39 driver. I tested the BMW render, and the result was very disappointing: about 4 minutes of render time, way slower than my previous AMD RX 580 card (2 minutes 15 seconds).

Then I tried the daily build of Blender 2.80 from May 8th, 2019, but it crashes whenever I try to open the demo file. I’ll have to try again with today’s build when I get a chance.

The main reason I bought the RTX 2070 is that it’s ranked at the very top on the Blender Open Data site:

Hopefully, this can be fixed in the new builds.

You probably tested the wrong BMW benchmark.
The latest build crashes because of the changes/optimizations made by Clement; tomorrow’s builds will be OK.

Thanks for the reply.

I actually tested the exact same demo file that I tested with my RX 580. https://download.blender.org/demo/test/BMW27_2.blend.zip

I simply swapped the card last night, uninstalled the AMD drivers, installed the GeForce driver, and then ran the render with the exact same tile settings (32x32). The RTX 2070 took 4 minutes to render the BMW GPU demo file. I then did some game testing; the RTX 2070 was about 2x faster than the RX 580 in Fortnite, so the card itself seems to perform fine.

I’ll run the test again tonight with the latest Blender 2.80 build and see how it goes.

Finally figured it out. It was due to the switch from AMD to NVIDIA: there is a system setting in Blender’s Preferences that needs to be updated. I don’t recall ever manually setting it to “OpenCL”, but because I was using an AMD card, it was set to “OpenCL”. After switching to the RTX card, the setting was still “OpenCL”, which isn’t supported for NVIDIA cards. Once I changed it to CUDA and rendered again, the BMW GPU render took 1:16, which is almost 2x faster than my RX 580. Bravo!!!

For anyone switching between AMD and NVIDIA, this is a very important step. Of course, it would be nice if Blender could auto-detect the GPU vendor and choose the backend automatically.
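Until something like that exists, the switch can at least be scripted. A sketch against the 2.8x preferences API (property names may differ in other versions):

```python
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences

# Pick the backend that matches the installed GPU:
# 'CUDA' for NVIDIA, 'OPENCL' for AMD (2.8x-era names).
prefs.compute_device_type = 'CUDA'

# Refresh the device list for the new backend and enable everything.
prefs.get_devices()
for device in prefs.devices:
    device.use = True

bpy.ops.wm.save_userpref()  # keep the change across restarts
```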


The thing is, NVIDIA GPUs do support OpenCL; they just don’t support the newer/faster version that AMD GPUs do. There are probably some use cases where you’d want to use OpenCL with NVIDIA cards (rendering with mixed brands?), so I don’t think they can afford to force NVIDIA cards to automatically use CUDA.

In Blender with an NVIDIA card, the OpenCL tab under Preferences/System shows “No Compatible GPUs found”. I have another computer with a Quadro K620, and it also shows “No Compatible GPUs found”.
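For what it’s worth, you can dump what each backend actually reports from the Python console; a diagnostic sketch using the 2.8x preferences API:

```python
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences

# Print the devices each backend exposes, to confirm whether a card
# shows up under OpenCL at all.
for backend in ('CUDA', 'OPENCL'):
    prefs.compute_device_type = backend
    prefs.get_devices()
    names = [d.name for d in prefs.devices if d.type == backend]
    print(backend, names if names else "no compatible GPUs")
```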

Could it be an issue with the latest driver?

Yes, by default it shouldn’t show NVIDIA cards when set to OpenCL. Since NVIDIA cards only support OpenCL 1.2, which is a lot slower than the version AMD cards support (version 2.0), Blender doesn’t list them under OpenCL unless you launch it using some obscure developer flag (which I had a hard time looking up, so I gave up).

I was saying that since there are probably some rare cases where someone wants to force NVIDIA cards to use OpenCL, the developers can’t just automatically switch the setting from OpenCL to CUDA when the cards change.