New PC - CUDA card slower than CPU

Got my new PC about 6 months back. I've been doing some Blender rendering on this machine, and the CPU is often faster than the GTX 1070, sometimes even by 30%. What a waste of money that was, since I hardly ever game (I've grown out of gaming now that I'm in my mid 30s).
Currently I'm on the latest release of Blender, 2.79. My OS is Windows 10 (not sure how this stacks up against Linux and such)… The GPU is as I said (core overclocks a little, to around 2 GHz), 32 GB DDR4, and the CPU is a Ryzen 1700X, 8 cores running at around 3.7 GHz. Auto Tile Size is on.
It's sad how the GPU industry hasn't advanced since my 2012 HD 7970, a card that would easily hold its ground against this GTX 1070. Five years and no real advancement; who'd have ever thought.
Anyway, it turns out I lost a lot of money in cryptocurrency, and if I'd played my cards right and known not to bother with a GPU at all, I could have afforded a Threadripper system.

So if you're buying a new PC, this is the reality: don't expect a CUDA card to blow away a good CPU by 10x like in the old days. That's just not the case anymore. Sorry if this sounded like a rant; it's really just some advice, and if anyone has any input or discussion on it, great.

Well, at least there is combined CPU/GPU rendering now, so you can get a big speedup by having both pile on.

An octa-core CPU with 16 threads is a whole lot of computational power for a mid-tier GPU to be up against.

Not all is lost: a GTX 1070 should be of great benefit in 2.8 thanks to the new drawing code and Eevee (even if you use Cycles CPU rendering, since it will help you preview the scene and run complex scenes smoothly).

Right, that is true, and I find I can just fire up another instance of Blender and carry on, even doing a render on the GPU while the CPU is rendering another project. I can even tell Windows to give the first instance 15 threads and leave me one thread to work with for the other instance, where I'm using the GPU.

My understanding is that you see the most difference with a fast CPU and a bottom-feeding to mid-range GPU. A German friend has a top-tier CPU and GPU and is seeing very little difference rendering with both, whereas I, with a respectable CPU and a GT 730 card, am seeing a nice decrease in render times.

But who could have known? Hell, I was saving for a 1070 myself, since I already have a 500 W power supply lying around. Now I'm cruising along with the i7-6700 as a happy camper. But, as you mentioned, it still gives you flexibility you wouldn't have had.


It really depends on what you do, but GPUs often do outperform CPUs by a wide margin. (Have you adjusted your tile sizes?)

It's sad how the GPU industry hasn't advanced since my 2012 HD 7970, a card that would easily hold its ground against this GTX 1070. Five years and no real advancement; who'd have ever thought.

That’s not true at all. Look at some benchmarks.

Anyways, turns out i lost a lot of money in cryptocurrency

Look at it differently: you paid for a lesson on how to invest your money wisely. A somewhat expensive lesson. You can still use your 1070 to mine some Dogecoin!

It sounds strange that the 1070 is outperformed in rendering by the Ryzen… very strange. It is true that Ryzen is great for rendering, but are you sure you installed the drivers correctly?

That is some cold shit, excuse my French. Who the hell could have visualized this turn of events? And who the hell among us hasn't made a bad choice in hardware, even after doing the homework? Damn computers are in a constant state of change, and now it seems the software can also bite you in the ass.

@Thesonofhendrix, you did, of course, the same thing any of us would have done just a short time ago. In another month I would have been plugging a 1070 into this machine with anticipation, before this change in Blender, of course, which no one saw coming. And I would rather animate on a GPU, since I have it in my head that it can't take out the motherboard in a catastrophic event. A parting thought: who knows what Blender will bring in the future. Your 1070 might be a treat yet, Blender buddy. Take care in the UK and keep on Blendering while your 1070 once again comes around.


You're not rendering with the GPU using 16x16 tiles like with the CPU, are you? That would totally nerf the 1070's performance and make it seem worthless. I'm pretty sure Auto Tile Size doesn't really work.

Up the tile size to 256x256 to start with and test the 1070 again; I'd be surprised if the Ryzen was able to match it.
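To see why tile size matters so much on the GPU, here's a quick sketch of how many tiles a 1080p frame gets chopped into at each size. This is just illustrative arithmetic, not Blender's actual scheduler: each tile is a separate chunk of work, and thousands of tiny tiles leave a GPU mostly idle between launches.

```python
from math import ceil

def tile_count(width, height, tile):
    """How many square tiles of the given size cover a frame."""
    return ceil(width / tile) * ceil(height / tile)

# A 1920x1080 frame at the tile sizes mentioned in this thread
for tile in (16, 32, 64, 128, 256):
    n = tile_count(1920, 1080, tile)
    print(f"{tile:>3}x{tile:<3} -> {n:>5} tiles")
# 16x16 yields 8160 tiles; 256x256 yields just 40.
```

So going from 16x16 to 256x256 cuts the number of work chunks by roughly 200x, which is why the same scene can look so much faster on the GPU with big tiles.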

I've got an i7-6700K at 4.6 GHz, which is no slowpoke, but my 1070 leaves it in the dust.

The Ryzen 1700X is twice as fast as your Core i7.

Not sure I'd go that far; more like 60-70% faster in multi-threaded applications, given that my i7 is OC'd to 4.6 GHz.

Even so, that's not the issue here. The issue is the claim that the 1700X is up to 30% faster than a 1070, and I very much doubt that in general; I'd expect the 1070 to more likely be 30% faster.

The most helpful post is from BeerBaron: look at the link to the benchmarks. A 1060 is faster than a Ryzen 1800X on two of the benchmarks but slower on the rest. The 1800X tested was faster than a 1080 in only one of the benchmarks. It's reasonable to expect the 1700X and the 1070 to be about as fast, with up to a 30% speed difference either way, but which one is faster may depend on the scene.

I will have to benchmark the times, since I was only going on what I was seeing. The types of scenes and shaders have a big effect, and I'm thinking SSS, volumetrics, and transmission might somehow be killing the 1070's performance… I always use Auto Tile Size for the GPU, and 32x32 for the CPU.

You can also try the daily build and use CPU+GPU rendering (if you're not using denoising); it should be a huge speed boost if you've got both a decent CPU and a decent GPU.

You can still use denoising, but you do need to dial the tile size up a bit to balance out the performance hit the denoiser takes from small tiles. 64x64 usually works best for me with CPU+GPU+denoising. You do lose a bit of performance from your CPU cores, but it's only a few percentage points, certainly outweighed by the overall effect of denoising.
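Pulling together the tile-size advice scattered through this thread, here's a tiny helper encoding those rules of thumb. The numbers come from the posts above (32x32 for CPU, 256x256 for GPU in 2.79, 64x64 for CPU+GPU with denoising, and ~32x32 for CPU+GPU on the daily builds), not from any official Blender documentation, so treat it as a cheat sheet rather than gospel:

```python
def suggested_tile_size(device, denoising=False):
    """Rule-of-thumb square tile sizes collected from this thread.

    device: "CPU", "GPU", or "CPU+GPU". Not official Blender guidance.
    """
    if device == "CPU":
        return 32            # small tiles keep every core fed
    if device == "GPU":
        return 256           # large tiles keep the GPU saturated (2.79)
    if device == "CPU+GPU":
        # 64x64 balances the denoiser's per-tile overhead;
        # without denoising, the daily builds favor small tiles again
        return 64 if denoising else 32
    raise ValueError(f"unknown device: {device!r}")

print(suggested_tile_size("CPU+GPU", denoising=True))   # 64
```

You'd then punch the returned value into the X and Y fields under the Tiles heading in the Performance tab.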

In Blender 2.79, where are you seeing Auto Tile Size under the Performance tab? I see Auto-Detect for Threads, but under the Tiles heading there's only the option to hard-set the X and Y size, and of course the tile order.

Auto Tile Size is an add-on.

The question here is moot… don't worry about this now that you can use CPU+GPU all together :slight_smile:


5820K @ 4.4 GHz 5:34 - 32x32
5820K @ 4.4 GHz 5:28 - 16x16
1700 @ 3.9 GHz 4:46 - 32x32
1700 @ 3.9 GHz 4:36 - 16x16
1700 @ 3.9 GHz 4:34 - 8x8

All GPU results below at 256x256 tiles:

GTX 970 3:55
GTX 970 3:48 - CLI
GTX 970 3:41 - no display
GTX 970 3:34 - no display, 1.55 GHz
GTX 1070 3:09
GTX 1070 2:52 - CLI
GTX 1070 2:48 - no display
GTX 1070 2:37 - no display, 2.13 GHz
GTX 1070 x2 1:29
Radeon Pro Duo 1:19 - 1.15 GHz
GTX 970 + GTX 1070 x2 1:07
Radeon Pro Duo + GTX 970 1:06

The Ryzen doesn't get anywhere near a single 1070, or even a 970, in the BMW scene… Though if you have hair in your scene, CUDA cards are known for being horrible at rendering it.
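To put a number on that gap, you can convert the times above to seconds and take the ratio. The times are verbatim from the list; this compares the 1700's best CPU run (4:36 at 16x16) against the 1070's CLI run (2:52):

```python
def to_seconds(t):
    """Convert an 'm:ss' render time from the benchmark list to seconds."""
    m, s = t.split(":")
    return int(m) * 60 + int(s)

ryzen_1700 = to_seconds("4:36")   # 1700 @ 3.9 GHz, 16x16 tiles
gtx_1070   = to_seconds("2:52")   # GTX 1070, CLI render
print(f"1070 is {ryzen_1700 / gtx_1070:.2f}x faster here")  # 1.60x
```

So at sensible tile sizes the 1070 comes out roughly 60% faster on this scene, which lines up with the earlier estimate of the gap, not with the CPU winning by 30%.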

You should almost never render faster on your 1700X than on a 1070; if you do, you're doing something horribly wrong. Keep in mind that if you use the latest daily builds, GPU rendering has been optimized for small tile sizes (~32x32).