Rendering with CPU or GPU

I’m new to 3D modelling; I started using Blender a few months ago. Currently I’m rendering on the CPU (i5-3570, 4 cores, 3.4 GHz), which feels very slow, so I was thinking about upgrading my PC. Would rendering on a GPU be much faster than rendering on my current CPU? I have the budget for something like a GTX 750 Ti or maybe a GTX 960. I’m not doing any gaming on this PC, so if it only gives a small boost to rendering speed (like 2x) it would be a waste of money for me. Should I buy a faster CPU instead?

I’ve read somewhere that if I render on the GPU I can only render scenes that fit into the card’s memory. Is that correct? Will that change in the future?

Background info: I render with Cycles, I use Linux, and I mostly work on large scenes with few or no textures. Please treat me like a clueless newbie. :) Thanks!

Hello. Everything depends on the CPU and GPU you are comparing.

I have an EVGA GTX 960 4 GB SSC and an Intel i7-3770:
http://www.blenderartists.org/forum/showthread.php?375718-Cycles-GPU-CUDA-slow-with-some-materials

Overall, my GPU is about twice as fast as the CPU. An example, in case my English is unclear: if I get two minutes with my CPU, I get one minute with my GPU. :)
If the scene contains volumetrics, a hair system, or some of those difficult materials such as the ones mentioned in that thread, then my CPU and GPU times are about even.

To get an idea of the times with your current hardware, you can use the two .blend files that I shared in that thread, then compare with my times (in post #4 I mention the times for the second scene).

If you are thinking of buying an Nvidia GPU, think of something with 4 GB of vRAM or more.

Thanks! I’ve been looking at the benchmark results and thinking, hence the late response.
The BMW scene took 5:26 on my i5-3570 CPU. YAFU: the “material scene” took 51 seconds. What were your results?
From the benchmarks it seems that with the GTX 750 Ti the render time is somewhere between 1 and 2 minutes. That would be almost 5x faster than my CPU, which is nice. And if my logic is right, I can later buy another one as an upgrade and they can work in parallel.

Aliii, I know the i5-3570 isn’t the newest, but a 750 Ti being 5 times faster seems like a bit of a stretch?

Have you set the correct tile size for your CPU (32x32)?

In my rendering tests, my GTX 960 (2 GB) renders roughly 4-5 times faster than my i5-4590 @ 3.7 GHz. That said, I currently strongly recommend a 4 GB card (quite frankly, you may as well make the jump to a 970 at that point).

@Aliii, what is the “material scene”? If you mean one of the scenes I shared in the other thread, my times are shown there (post #1, and post #4 for the second scene).

The GTX 750 Ti has only 2 GB of vRAM. You can use it for simple scenes, but with more complex scenes you cannot use it because you get a CUDA “out of memory” error:
https://developer.blender.org/T43310

Perhaps in the future CUDA will use a little less memory, but maybe not. Who knows…
So try to buy something with more vRAM.

Edit:
With the EVGA GTX 960 SSC 4 GB, I get 2:11.63 with the BMW27 (two BMWs) scene, Blender 2.75, 480x270 tile size.

Be careful with the BMW benchmark. The same thread contains times for the old BMW scene with one car and for the new BMW27.blend scene with two cars, and the times in the spreadsheet may be mixed. With the new “BMW27.blend” scene a GTX 750 Ti cannot achieve a time of 2 minutes or less. Perhaps the results you’ve been seeing used two GTX 750 Ti cards at the same time.

Edit 2:
Oh, I’m looking at that BMW thread again and it is a disaster. Times for the new and the old scene are mixed together. You are better off looking at this spreadsheet of results for the benchmark with the Sponza scene, Blender 2.73:

And to clarify again: don’t get confused where more than one card is used at the same time. Look at the results where only one card is used, in the first column (GPU1).

I mean the “slow_cycles_gpu” one in the first post.

OK, now that I look at it more carefully, I see that with the 750 Ti the render time is above 3 minutes. That’s a bit different. :) (I’ve seen one result on page 97 from “dns_di” with 2:19, but that one says “GTX 750 Ti 2 GB factory OC”.)

Now that I’ve looked at it again, it’s more like 2x faster. The test said not to change any settings, so I left it at the original 128x128 tile size. With 32x32 it was 5:14.
So my results for the BMW27 model:

i5-3570 @ 3.4GHz ( / Ubuntu):
Blender 2.75:
5:26 - 128x128
5:14 - 32x32
Blender 2.73:
5:19 - 128x128
5:10 - 32x32

What are your results with the BMW27 model? For the GTX 960 I’ve seen results around 2:35, which would be about 2x faster than my i5. (And YAFU now says 2:11.) Based on this, 4-5x faster than your 3.7 GHz CPU sounds unrealistic.
Again, I have no idea how textures / poly count / materials affect the CPU-GPU time ratio, so feel free to educate me. :)
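
(In case anyone wants to script these timings instead of reading them off the render window, something like this should work from Blender’s Python console; just a sketch, assuming Blender 2.7x with the benchmark .blend already open:)

```python
import time
import bpy

scene = bpy.context.scene
# Tile size to test; the benchmark ships with 128x128, 32x32 suits the CPU better.
scene.render.tile_x = 128
scene.render.tile_y = 128

start = time.time()
bpy.ops.render.render()  # same as pressing F12; blocks until the frame is done
print("Render time: %.2f s" % (time.time() - start))
```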

I found the thread useful, but a new one probably should have been started for the new model, with some uniform way of listing the GPUs… and maybe an indication of the card manufacturer.

I’ll get the BMW results and exact config when I get home. Though it’s worth noting the 960 (EVGA SSC) has just shy of 2x the compute resources of the 750 Ti, and maintains ~1440 MHz during a render.

If it weren’t for some gaming on the side, I’d probably have skipped the GPU altogether and gone for a six-core i7, then picked up a capable GPU down the road.

Got the new BMW blend (2 cars), ran it, here are my results.

GPU: 2:32 (tile size: 128 x 128)
CPU: 6:28 (tile size: 128 x 128)

CPU: 6:06 (tile size: 32 x 32)
GPU: 2:08 (tile size: 256 x 256)

Specs:
Core i5-4590 @ 3.7 GHz (stock cooler)
8 GB DDR3 1600 MHz XMP
EVGA GTX 960 SSC 2 GB (GPU boost tops at 1440 MHz stock)
Radeon R7 128 GB SSD
2 TB WD Green HDD
Win 7 Pro
Blender 2.75

It is probably the Windows OS that is giving me worse results on the CPU render. Under Windows, with the best settings used for each, the GPU seems to be close to 3 times faster than the CPU based on these results. For a Windows user, a GPU makes a strong case for itself. Arguably, a GPU is also far faster in viewport rendered mode.

I have an i5 3 GHz processor with two GTX 750 Ti cards. The biggest benefit of rendering on the GPU is that it keeps CPU resources free and allows me to do other things, like browsing the web, while a few-minute material test render is running. I estimate one GTX 750 Ti is about twice as fast as the i5, and with two it’s about four times faster.

And when making complicated materials (more than just diffuse + glossy + fresnel), it takes time to tweak and test. For me, Blender and everything else feels more responsive with the GPU.

Yes, the GTX 750 Ti has only 2 GB of vRAM, but it hasn’t really stopped me from doing anything. SSS is a failure almost every time, but then again I’ve heard it is really slow on the GPU at the moment, at least on CUDA.

For production-level rendering, I would do it overnight, with the GPU if possible and the CPU otherwise, or in the Amazon cloud with cheap 8-core CPUs.
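
(For the overnight jobs I just start Blender in background mode from a script; roughly like this, with made-up paths:)

```python
import subprocess

# Run Blender without the UI and render the file's full frame range.
# "-b" = background mode, "-o" = output path pattern, "-a" = render animation.
# The paths below are hypothetical; point them at your own files.
subprocess.call([
    "blender",
    "-b", "/home/me/scenes/shot01.blend",
    "-o", "/home/me/renders/shot01_####",
    "-a",
])
```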

On the other hand, if I were to buy a GPU now, I would look for Radeon R9 390 8 GB cards, at about 420€.

Thanks for the help! To be honest, I thought rendering with a GPU would be faster than this. I’m on a budget, so I can’t buy a new CPU or a proper GPU now. Maybe what I’m going to do is buy a 750 Ti anyway (Blender being more responsive plus 2x speed is still not bad) and later buy another one, or something better, as a second GPU. Cards that render about 4x faster than my i5 cost at least twice as much as the 750 Ti, so I don’t lose much with that. That’s my logic, at least. (BTW, the 750 Ti comes in 4 GB versions.)
My questions here: what if I have two cards, both with 4 GB of memory? Is that the same as having one card with 8 GB, or will I still be limited to 4 GB? And if a GPU renders 2x faster than the CPU, is that the same for both the “F12 render” and the preview render? And is it OK to let a GPU like the 750 Ti render for 2-3 hours at a time?

For this reason I sometimes set the render to use only 3 cores. :)
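
(If anyone wants to do the same from a script instead of the Performance panel, this is all it takes in 2.7x:)

```python
import bpy

render = bpy.context.scene.render
render.threads_mode = 'FIXED'  # the default 'AUTO' uses one thread per core
render.threads = 3             # leave one core free for other work
```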

Hey guys, I’m a bit confused and I can’t figure this out.
Rendering ANYTHING is faster on my CPU (auto-detects 4 threads) than on my GPU (which only ever uses one thread).
Everything I find tells me the GPU should be faster.
I’m using a GTX 750; what could I be doing wrong?

GPU rendering shows one thread per device. One graphics card = one thread.
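
Also, first thing worth checking: is the GPU actually selected? CUDA has to be enabled under File > User Preferences > System, and the scene’s Cycles device set to GPU. From the Python console that looks roughly like this (a sketch for the 2.7x API):

```python
import bpy

# Enable CUDA compute (File > User Preferences > System > Compute Device).
bpy.context.user_preferences.system.compute_device_type = 'CUDA'

# Switch the current scene's Cycles render from CPU to GPU.
bpy.context.scene.cycles.device = 'GPU'
```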

Do you have a test scene for us to examine?

Anyway, to get the best out of CPU and GPU, you will have to adapt the tile sizes accordingly (Render > Performance tab):
CPU rendering = works best with small tile sizes (e.g. 32 x 32 or even 16 x 16)
GPU rendering = works best with large tile sizes (e.g. 256 x 256)
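
(Scripted, the same advice looks something like this; again just a sketch for 2.7x:)

```python
import bpy

scene = bpy.context.scene
if scene.cycles.device == 'GPU':
    # Large tiles keep the GPU saturated.
    scene.render.tile_x = 256
    scene.render.tile_y = 256
else:
    # Small tiles keep every CPU core busy right up to the end of the frame.
    scene.render.tile_x = 32
    scene.render.tile_y = 32
```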

If you happen to have a beastly CPU and a low end GPU, it’s perfectly possible that CPU rendering is faster.

I have always rendered with the CPU (i7-4790K; HD Graphics 4600), but yesterday I bought a GTX 750 Ti 2 GB for a good price. There is no difference between the two, and in most cases GPU rendering is slower than the CPU. Is that accurate? Might I have been doing something wrong?

i7-4790K
GTX 750 Ti SC
16 GB RAM
Win 7

I took a quick look at the 2.7x Cycles Benchmark Thread, and there was one user with both a 4790K and a 750 Ti who reported a difference of only one second between the two. If that’s accurate, it’s definitely possible that the GPU would be slower at certain tasks.

Sorry to be the bearer of bad news, but unless anyone else has any ideas, you might not be getting any better performance with that particular card.

Not every scene is going to render faster on the GPU. Some can render significantly faster on the CPU even if you have a good graphics card.
Some CPU vs GPU test scenes and render times: https://code.blender.org/2016/02/new-cycles-benchmark/