… but also, is it possible that there’s some issue with software drivers, OpenGL layers, and so on? Does Blender produce any significant console messages? How about your operating system’s global event/message log?
My ears pricked up when you said "much worse," as in, "human-noticeable worse." My intuitive suspicion is that this might well have a software cause . . .
Your peak memory usage in that render is under 40 MB. Try again with a heavier scene (lots of polygons, large textures, etc.) and try to get that number up into the hundreds or even thousands. I think the results will tell a different story.
According to Nvidia’s spec sheets for the two cards, the 1060 has a slightly faster clock speed, but the 2060 has much faster memory. Such a lightweight scene isn’t enough to really put that card through its paces, and may only highlight the difference in clock speed.
off-topic
Every time I look at the render time in Blender, I wonder: is it hours, minutes, or seconds? For me these values are very difficult to decipher.
Once upon a time I suggested making it possible to switch to a more convenient display of the render time, but … https://blender.community/c/rightclickselect/Ztbbbc/render-time-separate-units
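In the meantime, you can do the conversion yourself. Here’s a minimal sketch of a helper that splits a raw seconds value into hours, minutes, and seconds (the function name is just illustrative, not part of Blender’s API):

```python
def format_render_time(total_seconds: float) -> str:
    """Convert a raw seconds value into an H:MM:SS.ss string."""
    minutes, seconds = divmod(total_seconds, 60)
    hours, minutes = divmod(int(minutes), 60)
    return f"{hours}:{int(minutes):02d}:{seconds:05.2f}"

print(format_render_time(4520.5))  # 1:15:20.50
```

Something like this could be dropped into a small post-render script so you never have to squint at a bare seconds count again.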
If you’re dependent on Cycles and fast rendering times are crucial, you might want to have a look at E-Cycles.
If you’re using Blender 2.79 and wouldn’t mind switching to a different renderer, check out the latest version of LuxCoreRender for fast rendering times.
Nvidia has a habit of changing things when releasing new cards (on the CUDA side, they tweak lots of things without warning). The Blender devs have to adapt the Cycles code to keep up with the changes.
It’s not the first time this happens, and it won’t be the last. Report a bug and play the waiting game for now. That’s the price of buying Nvidia’s new generation of GPUs at launch.
I was wrong: I thought that supporting the new generation of cards meant we would get at least the same performance, even without the benefit of the RT cores. But clearly there’s a lot more going on under the hood. I hope it gets fixed soon.
Also, don’t forget the devs are very busy with the 2.8 release, and if they have to choose between a crashing bug that affects all users and a performance regression that affects a small number of users, they tend to go with the former.
Sure, I personally have a lot of trust in the core devs; they understand this game very well. I’m sure they will get to the issues with the newer cards once they are done with 2.8.
Did you try what I suggested above? Again, the 1060 has a faster core clock. Normally that’s not directly comparable between different cards, but until you throw some weight into the scene to test the 2060’s higher memory bandwidth, you’re not getting an accurate picture. It may turn out that there IS an issue with the card, but let’s start with a proper test.
Use one of the demo scenes:
Edit: Put another way, the 1060 is a single lane road with a speed limit of 80mph. The 2060 is a 10 lane highway with a speed limit of 70mph. If you’re only testing with one car, of course the single lane road will be faster.
Just to give you my experience with my RTX 2070s.
I replaced a Vega 64 with two 2070s. On the first try in Linux with the newest proprietary drivers, both outperformed the Vega. On Windows, the performance with the auto-installed drivers was really underwhelming: almost double the render time of the Vega. With the newest driver from their site it got better, but it still needed around 150% of the Vega’s time. It was only after I cleaned out the old drivers with DDU and installed the latest that they could also beat the Vega in Windows. Still slower than under Linux, but in an expected range.
Now, I don’t know how the 1060 and the 2060 should be expected to compare. But it might help you to consider the DDU approach.
Also, I don’t think you should try to draw conclusions about Blender/Cycles based on how the card performs in Octane.