Cycles much worse performance with RTX cards

After upgrading from a GTX 1060 to an RTX 2060, the performance of both the Rendered preview and Progressive refine got much worse.

Progressive refine:

Geforce GTX 1060 6GB - 5:33
Geforce RTX 2060 6GB - 7:11

Tiled rendering:

Geforce RTX 2060 6GB - 6:30

Render times are in minutes:seconds.

Scene download: https://ufile.io/gfqjy

I was expecting roughly twice the performance, as in OctaneBench, even without the benefit of the RT Cores.

Due to the nature of the GPU, you take a risk when deciding to become an early adopter of a new generation of cards.

The thing to do is update your drivers, then report to the devs if the regression is still there.


… but also, is it possible that there’s some issue with software drivers, OpenGL layers, and so on? Does Blender produce any significant console messages? How about your operating system’s global event/message recorder?

My ears pricked up when you said "much worse," as in, "human-noticeable worse." My intuitive suspicion is that this might well have a software cause . . .
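If you want to capture that output, a rough sketch like this can save the console messages to files you can attach to a report (it assumes the blender binary is on your PATH, and your_scene.blend is just a placeholder name):

```python
# Rough sketch: launch Blender with Cycles debug logging enabled and capture
# whatever it prints to the console. Assumes "blender" is on PATH and the
# build supports the --debug-cycles flag; your_scene.blend is a placeholder.
import subprocess

result = subprocess.run(
    ["blender", "--debug-cycles", "--factory-startup", "your_scene.blend"],
    capture_output=True,
    text=True,
)

# Dump stdout/stderr to files so they can be attached to a bug report.
with open("blender_stdout.log", "w") as f:
    f.write(result.stdout)
with open("blender_stderr.log", "w") as f:
    f.write(result.stderr)

print("Blender exited with code", result.returncode)
```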

Your peak memory usage in that render is under 40 MB. Try again with a heavier scene (a lot of polygons, large textures, etc.) and try to get that number up into the hundreds or even thousands. I think the results will be a different story.

According to Nvidia spec sheets for the two cards, the 1060 has a slightly faster clock speed. However, the 2060 has much faster memory. Such a lightweight scene isn’t enough to really put that card through its paces, and may only highlight the difference in clock speed.
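For example, something along these lines (a rough bpy sketch, with operator and modifier names as in 2.8x-era Blender) will fill the scene with enough subdivided geometry to push memory use well past 40 MB:

```python
# Rough sketch for building a heavier Cycles test scene: a grid of cubes,
# each with a Subdivision Surface modifier, so geometry and memory use go
# well beyond the ~40 MB of the original scene.
import bpy

GRID = 10        # 10 x 10 = 100 objects
SUBDIV = 4       # subdivision levels per object (raise to push memory higher)

for x in range(GRID):
    for y in range(GRID):
        bpy.ops.mesh.primitive_cube_add(location=(x * 3.0, y * 3.0, 0.0))
        obj = bpy.context.active_object
        mod = obj.modifiers.new(name="Subsurf", type='SUBSURF')
        mod.levels = SUBDIV
        mod.render_levels = SUBDIV

print("Added", GRID * GRID, "subdivided cubes")
```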

Hi, check with Blender Benchmark > https://opendata.blender.org/
or with the Demo files > https://www.blender.org/download/demo-files/

Cheers, mib

Same Blender versions? You need to make sure you are using the exact same git branch between tests.
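For reference, the exact build can be printed from Blender’s Python console with something like this (the bpy.app fields below exist in current builds and are stored as bytes, hence the decode()):

```python
# Print the exact Blender build used for a benchmark run, so timings from
# different machines/cards can be compared against the same binary.
import bpy

print("version:", bpy.app.version_string)
print("branch: ", bpy.app.build_branch.decode())
print("hash:   ", bpy.app.build_hash.decode())
print("date:   ", bpy.app.build_commit_date.decode())
```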

off-topic
Every time I look at the render time in Blender, I wonder whether it’s hours, minutes, or seconds. For me these values are very hard to decipher.
A while ago I suggested making it possible to switch to a more convenient display of the render time, but …
https://blender.community/c/rightclickselect/Ztbbbc/render-time-separate-units
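The conversion itself is trivial, of course; here’s a tiny sketch (the 397.25 is just a made-up example value in seconds):

```python
# Tiny sketch: turn a raw render time in seconds into an H:MM:SS.ss string,
# which is what the proposal above asks Blender to display directly.
def format_render_time(seconds: float) -> str:
    hours, rest = divmod(seconds, 3600)
    minutes, secs = divmod(rest, 60)
    return f"{int(hours)}:{int(minutes):02d}:{secs:05.2f}"

print(format_render_time(397.25))  # -> 0:06:37.25
```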


Yes, same Blender version in every case. I’ve tried different drivers, but I got the same numbers.

Next I will try the benchmark scenes with and without tiled rendering. I have a feeling that nothing will change.
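For the comparison I mean something along these lines (a rough sketch; use_progressive_refine is the 2.7x/2.8x Cycles property name, and the .blend has to be open already):

```python
# Rough sketch: render the currently open scene twice, once with progressive
# refine and once with plain tiled rendering, and print the wall-clock time
# for each. Property names as in Blender 2.7x/2.8x Cycles.
import time
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

for progressive in (True, False):
    scene.cycles.use_progressive_refine = progressive
    start = time.time()
    bpy.ops.render.render(write_still=False)
    elapsed = time.time() - start
    mode = "progressive refine" if progressive else "tiled"
    print(f"{mode}: {elapsed:.1f} s")
```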

Where can I report to devs? I made the same topic on Devtalk: https://devtalk.blender.org/t/cycles-much-worse-performance-with-rtx-cards/5856
So far nobody has been listening.

If you’re dependent on Cycles and fast rendering times are crucial, you might want to have a look at E-Cycles.

If you’re using Blender 2.79 and wouldn’t mind switching to a different renderer, check out the latest version of LuxCoreRender for fast rendering times.

What about regular renders? Are they affected as well, or is this just about the preview?

Tiled rendering:

Geforce RTX 2060 6GB - 6:30

With progressive refine my “old” GTX 1060 is way faster than my RTX 2060. This is clearly abnormal.

Not abnormal at all.

Nvidia has a habit of changing things when releasing new cards (in the CUDA field they tweak lots of things without warning). Blender devs have to adapt the Cycles code to keep up with the changes.

It’s not the first time it happens, and it will not be the last. Report a bug and play the waiting game for now. That’s the price of buying Nvidia’s new generation of GPUs at launch. :frowning:


I was wrong: I thought that supporting the new generation of cards meant we would get at least the same performance, even without the benefit of the RT cores. But clearly there is a lot more going on under the hood. I hope it will be fixed soon.

Well, it will be hard if the devs do not have those cards.

Also, don’t forget the devs are very busy with the 2.8 release, and if they have to choose between a crashing bug that affects all users and a performance regression that affects a small number of users, they tend to go with the first one.

Sure, I personally have a lot of trust in the core devs; they understand this game very well. I am sure they will get to the issues with the newer cards once they are done with 2.8.


Did you try what I suggested above? Again, the 1060 has a faster core speed. Normally that’s not comparable between different cards, but until you throw some weight in the scene to test the 2060’s higher memory bandwidth, you’re not getting an accurate picture. It may turn out that there IS an issue regarding the card, but let’s start with a proper test.

Use one of the demo scenes:

Edit: Put another way, the 1060 is a single-lane road with a speed limit of 80 mph. The 2060 is a 10-lane highway with a speed limit of 70 mph. If you’re only testing with one car, of course the single-lane road will be faster.
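One more sanity check before drawing conclusions: make sure Cycles is actually rendering on the 2060 and not silently falling back to the CPU. A rough sketch (preference paths as in 2.80; 2.79 uses bpy.context.user_preferences instead):

```python
# Sanity check: list the compute devices Cycles can see and which ones are
# enabled. Preference paths as in Blender 2.80.
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()  # refresh the device list

for device in prefs.devices:
    print(f"{device.name} ({device.type}) enabled={device.use}")

# The scene itself also has to be set to GPU compute:
bpy.context.scene.cycles.device = 'GPU'
```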

I’ve done some tests of course.

OctaneBench 4.00:

GTX 1060 - 85
RTX 2060 (without RTX mode) - 167

I’ll try the Demo Files later and share the results. I don’t think there is a single scenario where the 1060 could approach the speed of the 2060.

Just to give you my experience with my RTX 2070s:
I replaced a Vega 64 with two 2070s. On the first try in Linux with the newest proprietary drivers, both outperformed the Vega. On Windows, the performance with the auto-installed drivers was really underwhelming: almost double the render time of the Vega. With the newest driver from their site it got better, but it still needed around 150% of the Vega’s time. It was only after I cleaned the old drivers out with DDU and installed the latest ones that they could also beat the Vega on Windows. Still slower than under Linux, but in the expected range.

Now, I don’t know how the 1060 and the 2060 should be expected to compare, but it might help you to consider the DDU (Display Driver Uninstaller) approach.

Also, I don’t think you should try to draw conclusions about Blender/Cycles based on how the card performs in Octane.