Cycles much worse performance with RTX cards

Your peak memory usage in that render is under 40 MB. Try again with a heavier scene (lots of polygons, large textures, etc.) and try to get that number up into the hundreds or even thousands. I think the results will tell a different story.

According to Nvidia's spec sheets for the two cards, the 1060 has a slightly faster clock speed. However, the 2060 has much faster memory. Such a lightweight scene isn't enough to really put that card through its paces, and may only highlight the difference in clock speed.

Hi, check with Blender Benchmark > https://opendata.blender.org/
or with the Demo files > https://www.blender.org/download/demo-files/

Cheers, mib

Same Blender versions? You need to make sure you are using the exact same git branch between tests.

off-topic
Every time I look at a render time in Blender, I wonder whether it's hours, minutes, or seconds. These values are very difficult for me to decipher.
Once upon a time I suggested making it possible to switch to a more convenient display of the render time, but …
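As an illustration of the kind of display being asked for, a raw seconds value could be broken into hours, minutes, and seconds like this (a plain-Python sketch, not anything Blender actually exposes):

```python
def format_render_time(seconds):
    """Turn a raw seconds count into an unambiguous h:mm:ss.ss string."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours)}:{int(minutes):02d}:{secs:05.2f}"

print(format_render_time(390.0))  # a 6-minute-30 render -> "0:06:30.00"
```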


Yes, same Blender version in every case. I've tried different drivers, but I got the same numbers.

Next I will try the benchmark scenes with and without the tiled rendering. I have the feeling that nothing changes.
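For reference, the two modes can also be switched from a script instead of the UI. This is a sketch against the 2.79/2.80 Python API (a settings fragment that only runs inside Blender's bundled Python):

```python
import bpy  # available only inside Blender

scene = bpy.context.scene
scene.cycles.device = 'GPU'

# Tiled rendering: progressive refine off, explicit tile size.
scene.cycles.use_progressive_refine = False
scene.render.tile_x = 256  # larger tiles generally suit GPU rendering
scene.render.tile_y = 256

# Progressive refine instead: the whole frame refines at once.
# scene.cycles.use_progressive_refine = True
```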

Where can I report to devs? I made the same topic on Devtalk: https://devtalk.blender.org/t/cycles-much-worse-performance-with-rtx-cards/5856
So far nobody has been listening.

If you’re dependent on Cycles and fast rendering times are crucial, you might want to have a look at E-Cycles.

If you’re using Blender 2.79 and wouldn’t mind switching to a different renderer, check out the latest version of LuxCoreRender for fast rendering times.

What about regular renders? Are they affected as well, or is this just about the preview?

Tiled rendering:

Geforce RTX 2060 6GB - 6:30

With progressive refine my “old” GTX 1060 is way faster than my RTX 2060. This is clearly abnormal.

Not abnormal at all.

Nvidia has a habit of changing things when releasing new cards (in the CUDA field they tweak lots of things without warning), and the Blender devs have to adapt the Cycles code to keep up with the changes.

It's not the first time this has happened, and it won't be the last. Report a bug and play the waiting game for now. That's the price of buying Nvidia's new generation of GPUs at launch. :frowning:


I was wrong because I thought "supports the new generation of cards" meant we would get at least the same performance, even without the benefit of the RT cores. But clearly there is a lot more going on under the hood. I hope it will be fixed soon.

Well, it will be hard if the devs do not have those cards.

Also, don't forget the devs are very busy with the 2.8 release, and if they have to choose between a crashing bug that affects all users and a performance regression that affects a small number of users, they tend to go with the first one.

Sure, I personally have a lot of trust in the core devs, they understand this game very well. I am sure they will get to the issues with newer cards once they are done with the 2.8


Did you try what I suggested above? Again, the 1060 has a faster core speed. Normally that’s not comparable between different cards, but until you throw some weight in the scene to test the 2060’s higher memory bandwidth, you’re not getting an accurate picture. It may turn out that there IS an issue regarding the card, but let’s start with a proper test.

Use one of the demo scenes:

Edit: Put another way, the 1060 is a single lane road with a speed limit of 80mph. The 2060 is a 10 lane highway with a speed limit of 70mph. If you’re only testing with one car, of course the single lane road will be faster.
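To put rough numbers on that analogy: peak memory bandwidth is bus width times per-pin data rate. The figures below are approximate published specs (assumptions for illustration, not measurements from these tests):

```python
def peak_bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
    # bytes transferred per second = (bus width / 8 bits per byte) * transfer rate
    return bus_width_bits / 8 * data_rate_gt_s  # GB/s when the rate is in GT/s

# Assumed specs: both cards use a 192-bit bus;
# GTX 1060 runs 8 GT/s GDDR5, RTX 2060 runs 14 GT/s GDDR6.
gtx_1060 = peak_bandwidth_gb_s(192, 8)   # -> 192.0 GB/s
rtx_2060 = peak_bandwidth_gb_s(192, 14)  # -> 336.0 GB/s
print(gtx_1060, rtx_2060)
```

So the 2060's "highway" is roughly 1.75x wider in bandwidth terms, which only shows up once a scene actually pushes that much data.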

I’ve done some tests of course.

OctaneBench 4.00:

GTX 1060: score 85
RTX 2060 (without RTX mode enabled): score 167

I’ll try the Demo Files later, and share the results. I think there isn’t a single scenario where the 1060 could approach the speed of the 2060.

Just to give you my experience with my rtx 2070s.
I replaced a Vega 64 with two 2070s. On the first try, in Linux with the newest proprietary drivers, both outperformed the Vega. On Windows, performance with the auto-installed drivers was really underwhelming: almost double the render time of the Vega. With the newest driver from their site it got better, but it still needed around 150% of the Vega's time. It was only after I cleaned out the old drivers with DDU and installed the latest that they could also beat the Vega on Windows. Still slower than under Linux, but in an expected range.

Now, I don't know how the 1060 and the 2060 should be expected to compare. But it might help you to consider the DDU approach.

Also, I don't think you should try to draw conclusions about Blender/Cycles based on how the card performs in Octane.

Thanks for your experience and the tips Markus!

What I’ve tried so far:

  • Windows 10 1803 and 1809
  • Different drivers
  • With and without DDU
  • More Blender 2.80 and 2.79 versions

How did you get the Vega 64 to run properly on Linux (Mint Cinnamon in my case)? Everyone told me to go AMD because they were natively supported on Linux, but I missed that this didn't mean OpenCL support, which is crucial to me. I'm a complete Linux noob.
Currently rendering on 8 cores.
Currently wishing I went for Threadripper instead.

You can just install the official driver normally. You only have to edit a file to make the installer believe it's vanilla Ubuntu, that's all (version number from 18.04 to 19): https://www.reddit.com/r/linuxmint/comments/986ncb/failed_to_install_latest_amd_drivers_on_mint_19/