Cycles: GeForce GTX 980 Ti vs Server NF8420M3

The point of this comparison is to find out which of the two is faster.

GeForce GTX 980 Ti
Server NF8420M3: 4x Xeon E5-4650 processors, 8 cores each (32 cores total)

The goal is to find the best choice in terms of performance and economy; the GTX 980 Ti obviously also needs a host machine.


OK! One Xeon E5-4650 is almost 20% slower than a single i7 4770K. However, one GTX 970 is 4x faster than the Xeon E5-4650, and one GTX 980 Ti is like two GTX 970s, so one GTX 980 Ti is 8x faster than one Xeon E5-4650. That said:


GTX 980 Ti ======>> 2X (4x Xeon E5-4650, 32 cores)
R9 390X ======>> 2.5X (4x Xeon E5-4650, 32 cores)
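The chain of ratios above can be sketched as quick arithmetic (the 20%, 4x and 2x figures are this thread's rough estimates, not measurements):

```python
# Rough relative-throughput estimate based on the ratios claimed above.
# Baseline: one Xeon E5-4650 = 1.0 unit of Cycles rendering throughput.
xeon_e5_4650 = 1.0
gtx_970 = 4.0 * xeon_e5_4650           # claimed: one GTX 970 ~ 4x one Xeon
gtx_980_ti = 2.0 * gtx_970             # claimed: one 980 Ti ~ two GTX 970s
quad_xeon_server = 4.0 * xeon_e5_4650  # NF8420M3: four E5-4650 CPUs

print(gtx_980_ti / xeon_e5_4650)      # 8.0 (980 Ti vs a single Xeon)
print(gtx_980_ti / quad_xeon_server)  # 2.0 (980 Ti vs the whole server)
```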

@sharlybg, do these results include volumetrics, hair, SSS and motion blur?
I ask because those things slow down the GPU a lot, and benchmark files usually do not include them.
It is also necessary to consider the features that are not supported on GPU (even more so on AMD).
But I really do not know anything about Xeon processors.

Volumetrics, hair and SSS are needed, but they are not the main features used in Cycles, and they are also slower on CPU too! You have to make a choice:

2x faster rendering for most of your work, falling back to CPU rendering in extreme situations when necessary,

or 2x slower rendering compared to a GPU-based rig on 80% of your projects, but a little bit faster for volumetrics, hair and SSS!

Yes, you pay for one CPU in the GPU rig, but you also need a good GPU to handle the kind of heavy scenes that would be rendered on a quad-Xeon workstation. Don't forget that you have to pay for 4x the DDR memory on the Xeon box; the Xeon-based rig is for sure the most expensive!

Those things slow down the GPU much more than the CPU (at least in my tests).
But Cycles is designed to produce good realistic scenes, right? So how is it possible that SSS on organic materials, smoke, hair, nice translucent materials and volumetrics are not among the main features used in Cycles?
I really do not know if it would have been possible to obtain such good results in Cosmos Laundromat on GPU:

I agree with what you say about the choices that must be made depending on the needs. “AlexB3D” should consider what kind of scenes he plans to work on, and whether he really needs those expensive Xeon CPUs.

Little test done with hair + a translucent + glossy shader on Suzy:

i7 4790k ===>> 202 secs

R9 390 ===>> 126 secs

So the biggest missing part is on the AMD side if you want to use those cards as the main device: transparent shadows!

The BMW benchmark on my GTX 960 is 2x faster than on my i7 3770. But in the next scene:

GPU: 4:02.90 (480x270)
CPU: 1:07.56 (32x32)

I think if I had used motion blur in an animation, or Volume Scatter, the difference would have been even greater in favor of the CPU.

You have better hardware than mine. Could you try this scene? Remember to use small tile sizes for the CPU, and check that the “experimental” kernel is enabled for the GPU.
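For reference, converting the posted m:ss.xx times to seconds shows how much slower the GPU was on that scene (a small helper, nothing Blender-specific):

```python
def to_seconds(t: str) -> float:
    """Convert a 'm:ss.xx' render time string to seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + float(seconds)

gpu = to_seconds("4:02.90")  # 242.90 s on the GTX 960
cpu = to_seconds("1:07.56")  #  67.56 s on the i7 3770
print(round(gpu / cpu, 1))   # ~3.6x slower on the GPU for this scene
```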

BMW benchmark in my GTX 960 is 2x faster than my i7 3770. In the next scene:
GPU: 4:02.90 (480x270)
CPU: 1:07.56 (32x32)

I don't understand here! :confused: Your GPU is 4x slower than in BMW. Why?

So where is the scene you're talking about?

Sorry, edited. The scene is in the message above.
What I am saying is that my GPU is twice as fast as my CPU in the BMW scene, but it is much slower in this test scene I created.

R9 390 ===> 21 secs

i7 4790k ===> 71 secs

But there is no real comparison here, as you're using experimental features (not yet optimized for GPU), and on AMD the results are totally different!

So, I'm doing archviz! What I do is compare usability on heavy architectural scenes!

My times with BMW V2 (with the two orange cars), one tile, no spatial split:

R9 390 nitro =====>>> 55 secs

old BMW

R9 390 nitro =====>>> 25 secs

Please, Nvidia only here. I think ATI does not support a lot of the features that scene uses, so it does not give a true result on ATI.

I activated the experimental kernel only for SSS on GPU. If you want, you can use the normal kernel without SSS. Anyway, all the transparencies and volumetrics used in these materials, plus the particle system, will slow down the GPU.

It is strange that your i7 is slower than mine. Are you using Windows?

Sorry, I already sold my last Nvidia 970s for R9 390s!

But I'm searching for clever tricks to avoid the transparent-shadow problem on AMD; if you have a nice node tree, you're welcome! :cool:

So I've already tried disabling the shadow of the affected object in the Cycles settings. It works, but a better way is a material node trick!
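A sketch of the two workarounds mentioned above, as a Blender scene-configuration fragment (2.7x-era bpy API; this only runs inside Blender's own Python, and the `'Diffuse BSDF'` node name assumes the default material):

```python
import bpy

# Option 1: per-object -- stop the object casting ray-traced shadows.
obj = bpy.context.active_object
obj.cycles_visibility.shadow = False

# Option 2: per-material node trick -- when a shadow ray hits this
# material, swap in a Transparent BSDF instead of the real shader.
mat = obj.active_material
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

light_path = nodes.new('ShaderNodeLightPath')
transparent = nodes.new('ShaderNodeBsdfTransparent')
mix = nodes.new('ShaderNodeMixShader')
output = nodes['Material Output']

# Fac = Is Shadow Ray: 0 -> normal shader, 1 -> transparent shadow.
links.new(light_path.outputs['Is Shadow Ray'], mix.inputs['Fac'])
links.new(nodes['Diffuse BSDF'].outputs['BSDF'], mix.inputs[1])
links.new(transparent.outputs['BSDF'], mix.inputs[2])
links.new(mix.outputs['Shader'], output.inputs['Surface'])
```

The node trick keeps normal camera and bounce rays on the real shader while shadow rays see a fully transparent surface, which sidesteps the transparent-shadow limitation per material instead of killing the object's shadow entirely.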

Hi Yafu, I ran your scene on my 980 and i7 5820K with the following results:

GTX 980 - 01:46.74 (480x270)
I7 5820k - 00:37.74 (32x32)

I suspect that the CUDA kernel just needs some updating and optimization?

For comparison I re-ran the BMW27 test scene:

GTX 980 - 01:10.88 (480x270)
I7 5820k - 02:09.51 (32x32)


Hi Grimm, thanks for doing the tests.
I hope it is only a matter of lack of optimization, and not a GPU limitation. In another thread, Ace Dragon commented that some work on the CUDA split kernel has already started. But I do not know if it will bring speed improvements for these complicated materials.

Hello @Grimm (I had not seen the replies).
It's for a render farm. I have noticed that CPU performance has advanced; the performance of 4 Xeons is greater than that of a Core i7.

The question: Which is faster?

PC: GTX 980 Ti, 16GB RAM, etc. Approximate cost: $3,000.
The Xeon server has 32 cores (four eight-core processors), 128GB RAM, etc. Approximately $3,000.


The GTX 980 Ti will be faster; here's why.
The 2.7GHz version of the Xeon E5-4650 is based on the Sandy Bridge architecture, which is almost 5 years old now. Even with 8 cores per chip and 4 CPUs, it still lags in rendering performance compared to a modern high-end GPU. LuxMark figures show a single quad-core Sandy Bridge processor scoring about 1,400.
In your case you would score about 11,200, since you would have 8 cores per CPU instead of 4, and 4 CPUs instead of 1 (1,400 × 2 × 4).
However, a GTX 980 Ti scores about 19,000 in LuxMark.
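That score estimate works out as follows (using the thread's ~1,400 quad-core baseline; real LuxMark scores will vary by scene and driver):

```python
quad_core_sandy_bridge = 1400  # LuxMark score cited for a 4-core chip
cores_factor = 8 / 4           # the E5-4650 has 8 cores, not 4
cpu_count = 4                  # the NF8420M3 holds four CPUs

server_estimate = quad_core_sandy_bridge * cores_factor * cpu_count
print(server_estimate)               # 11200.0
gtx_980_ti = 19000                   # cited LuxMark score
print(gtx_980_ti / server_estimate)  # ~1.7x in favour of the GPU
```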

I have no idea about these Xeon processors. It is difficult to find Blender benchmark results for these CPUs/servers.
The only thing I can say is that speed difference between one GPU and one CPU is not always the same. It depends on the elements involved in the scene.

Since you are about to spend a lot of money on those Xeons, could you ask the seller to run some Blender benchmarks on that server?

You can use the scene that I shared above; it's a difficult scene for the GPU.

Thanks YAFU, but I am not the one making that investment. I am doing the production design for a studio, and it has those two options. In my opinion it is better to have the GTX cards, to take advantage of their potential for modeling; the thing about the servers is that they are produced nationally and are easier to obtain than the graphics cards, which have to be bought in dollars… Running the tests is not possible right now, because it depends on financing and the money is not there yet. :smiley: