My 2x 980ti sli is SLOWER than a single 780ti... WTF Blender?

Same BMW benchmark, my 2 980ti cards put together are currently slower than the one 980ti I had several months back and slower than an r9 290x.

Around 1 min 10 seconds for the two (both overclocked); last time I ran the benchmark with a single 980ti I got 58 seconds… They’re rendering slower than a single 780ti!?

So I took one 980ti out of the rendering and it’s not even close to the 54 seconds it was before.


I said it 6 months ago and I’ll say it again: CLEARLY the high end Maxwell cards have never been properly optimized.


Also, do 2 cards help with the render-view mode?

Seriously, it should at least be like, oh I don’t know… TWICE AS FAST AS IT WAS BEFORE YOU GIMPED IT!?
These cards don’t grow on trees. I’d have thought a combined butt-load worth of cash and hardware might make things faster… like I had hoped and planned for.

What’s your OS? Win 10?
There are quite a few reports around here indicating that there is some kind of issue with Win 10, the 980tis and some Titan variants.

Other than that: Make sure you don’t have SLI enabled.

On a sober note: it’s not a bug in Blender. The problem is in the nVidia camp.
See this:

Yes, always the same issue 980TI/Titan X + Win10. Go Linux, from my tests it runs without any issue :wink:
Or go to Octane ^^

You also mention SLI in your title… Never use SLI when doing rendering… it has been mentioned many times across these forums and on the blender wiki. SLI is for gaming, not for rendering.

Dual cards (non-SLI) give way better performance than SLI.

It’s a Cycles issue too: in Octane my M6000 is faster than my 780, while in Cycles the M6000 is slower than my 570.

Oh, sorry, everyone was just waiting for you to post this. Now it can be fixed: snaps fingers. There, done.

In all seriousness, you must have no idea what it is like to program for GPUs. To the layman, I can best describe it as performing a ritual chant in a language that sounds familiar, but where the words all have slightly different and mystical meanings. You perform this ritual chant in the hopes that the gods bestow compute performance upon you. However, you cannot see these gods; they only manifest themselves in the form of signs, which you must interpret accordingly. You must also be careful not to anger the gods with the wrong words or sentences, or they will take the performance away from you. Sometimes, they will take the performance away from you for no reason, because they are jealous and angry gods.

As far as I know, nobody (not even NVIDIA) knows the cause of this performance regression. Surely it’s something in the Cycles code that triggers it, but the fault may very well lie in the CUDA runtime in combination with the Windows 10 driver model, which Blender developers have no control over nor insight into.

Maybe new drivers will bring the solution?

The Quadro drivers are still at 364, so I have to wait to test that.


Pie Nvidia Domine, dona eis requiem!

Moved post from “General Forums > Blender and CG Discussions” to “Support > Lighting and Rendering”

It’s also worth mentioning that on scenes that take around a minute or less to render, performance is actually worse if you render on all of the cards with a single instance of Blender. This is because (AFAIK) there’s a bit of additional time necessary to collect the render results from each card and stitch them together. That time is a fixed, short duration, so it’s negligible on larger/longer render jobs, but on shorter render jobs it has a much more significant impact.

I ran into this with a machine I render on that has 4 Quadro K6000s in it. For animations, I ended up writing a little render script that launches 4 separate instances of Blender, each one tied to one GPU. Overall render time was much shorter with that setup than with one instance of Blender using all 4 GPUs.
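For anyone who wants to try this, here’s a minimal sketch of that kind of launcher. It is not the script from the post above, just an illustration of the idea: split the frame range round-robin across GPUs and pin each Blender instance to one card via the `CUDA_VISIBLE_DEVICES` environment variable. The file name, GPU count, and frame range are placeholder assumptions; `blender -b file -f <comma-separated frames>` is from Blender’s command-line documentation, but check your version supports frame lists.

```python
import os
import subprocess

BLEND_FILE = "scene.blend"   # assumed project file
NUM_GPUS = 4                 # e.g. four K6000s
START, END = 1, 100          # assumed animation frame range

def build_commands(blend_file, num_gpus, start, end):
    """Split the frame range across GPUs round-robin and build one
    background-render command line per GPU."""
    frames = list(range(start, end + 1))
    cmds = []
    for gpu in range(num_gpus):
        # Every num_gpus-th frame goes to this GPU.
        chunk = frames[gpu::num_gpus]
        frame_list = ",".join(str(f) for f in chunk)
        cmds.append((gpu, ["blender", "-b", blend_file, "-f", frame_list]))
    return cmds

if __name__ == "__main__":
    procs = []
    for gpu, cmd in build_commands(BLEND_FILE, NUM_GPUS, START, END):
        # CUDA_VISIBLE_DEVICES hides all other cards from this instance,
        # so Cycles only sees (and uses) the one GPU.
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        print("GPU", gpu, ":", " ".join(cmd))
        # Uncomment to actually launch the renders in parallel:
        # procs.append(subprocess.Popen(cmd, env=env))
    for p in procs:
        p.wait()
```

The round-robin split keeps each instance busy for roughly the same wall-clock time when frames cost about the same; for scenes where render time varies a lot per frame, a work-queue approach would balance better.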