GPU vs CPU rendering

Hi guys

I’m working on a GPU vs CPU rendering comparison that will be the subject of a podcast I’m making with my colleagues. I work day to day in support at a cloud render farm, so I have my own experience with both CPU and GPU rendering (and I’ve seen our customers’ experiences), but I’m very curious about your opinions.

If you use GPU render engines, what are your reasons? The most common one I’ve heard is speed, but of course once you have access to more than one card or processor, I guess it’s really speed/money.

What also interests me: are there any other reasons to use GPU render engines? Do they give a different, maybe cooler look? Any features that CPU engines lack, or types of subject where they give better results?

I’m very curious about your opinions and impressions when it comes to GPU rendering. What are the pros and cons?

By the way, when it comes to unbiased GPU engines, GI and glossy reflections/material highlights are computed as the same thing, right? For example, in CPU V-Ray, GI and material reflections are visible on two separate layers/passes.
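For Cycles at least, they stay separable even on GPU: you can still write GI and glossy out as distinct passes, much like the V-Ray layers described above. A minimal sketch via Blender’s Python API (property names from the 2.8x docs; other GPU engines like Octane expose this differently):

```python
import bpy

# Enable separate light-path passes on the active view layer (Cycles).
view_layer = bpy.context.view_layer
view_layer.use_pass_diffuse_indirect = True   # indirect diffuse, i.e. the "GI" pass
view_layer.use_pass_glossy_direct = True      # specular highlights / direct reflections
view_layer.use_pass_glossy_indirect = True    # glossy bounces (reflections of reflections)
```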

The only reason is speed. And if you want or need to use OSL shaders or the like, you need a CPU renderer.
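For context, in Cycles OSL is a scene-level toggle that only takes effect on the CPU backend; a minimal sketch (Blender Python API, 2.8x names):

```python
import bpy

scene = bpy.context.scene
scene.cycles.shading_system = True  # enable Open Shading Language (OSL)
scene.cycles.device = "CPU"         # OSL script nodes only run on the CPU in Cycles
```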

CPU renderers usually support all features while GPU renderers don’t always, so it depends on the engine.
CPU rendering is limited by RAM, which can be upgraded to really large sizes, while GPU rendering is limited by VRAM; some renderers can now fall back to system RAM in addition, but that also makes them slower.
GPUs are usually faster and easier to stack, but even that depends: a Threadripper 3990X with its 64 cores does beat a single TITAN RTX.
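To make the device choice concrete, here is roughly how it looks in Cycles through Blender’s Python API; a sketch assuming a CUDA build and 2.8x property names:

```python
import bpy

# Pick the compute backend and list what Blender actually detected.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"  # what's valid here depends on the build
prefs.get_devices()                 # refresh the detected-device list

for device in prefs.devices:
    print(device.name, device.type)  # e.g. "GeForce GTX 980 Ti" CUDA

bpy.context.scene.cycles.device = "GPU"  # per-scene CPU/GPU switch
```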

The ability of a GPU to handle complex and advanced shading and lighting techniques is a lot better than it was when Octane first hit the market (thanks to new GPU architectures and more advanced compilers), but it is still a bit limited in the area of memory. The memory situation, though, has improved with newer multi-GPU tech allowing two cards to share VRAM.

CPU rendering, meanwhile, is advancing at its fastest pace in over a decade because of renewed competition (up to 16 cores and 32 threads now on non-HEDT platforms), and this has taken some of the wind out of the GPU hype.

I’ve been looking at render speeds, comparing my 980 Ti to a Threadripper 1950X.
I have been using Iray for about two years and was used to the CPU being about a third slower; this, however, is not the case with Cycles. They are comparable, with the 980 Ti just about edging it; with 2.83 (beta), the CPU is slightly quicker, certainly with adaptive sampling turned on.
2.83 (tile size, then four runs):

980 Ti
512 × 512: 2:31.21, 2:31.13, 2:27.25, 2:27.98

Threadripper 1950X
32 × 32: 2:07.48, 2:07.45, 2:08.10, 2:04.80
64 × 64: 2:06.42, 2:05.64, 2:07.81, 2:09.05

Those were with my own test scene, but the same held true for Mike Pan’s BMW test.
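For anyone wanting to reproduce this kind of run, a rough sketch of a timing script (assuming Blender 2.8x, where tile size is still set via render.tile_x/tile_y):

```python
import time
import bpy

scene = bpy.context.scene
scene.cycles.device = "GPU"  # switch to "CPU" for the Threadripper runs
scene.render.tile_x = 512    # large tiles suit GPUs; 32-64 suits CPUs (2.8x)
scene.render.tile_y = 512

start = time.time()
bpy.ops.render.render(write_still=False)  # render without saving the image
print(f"Render time: {time.time() - start:.2f} s")
```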

It’s made my decision about upgrading my graphics card for faster render speeds more of a challenge; with a GPU there is always the chance a given scene either won’t fit on the card, or will take additional work to get it to do so.

Hi, interesting. Did you try GPU+CPU?
Since Cycles on GPU works fine with smaller tiles, you should get nearly double the performance at 64 × 64.
I guess you need an RTX card to use OptiX in Cycles, but a 2060 should beat your 980 Ti with CUDA, too.
Cycles can use system RAM with out-of-core rendering; it is slower, but heavy scenes can then work on the GPU.
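Enabling GPU+CPU together is just ticking both device types in the Cycles preferences; a sketch via the Python API (assuming a CUDA build, 2.8x names):

```python
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"
prefs.get_devices()  # refresh the detected device list

# Tick both the CUDA card(s) and the CPU so Cycles renders tiles on both.
for device in prefs.devices:
    device.use = device.type in {"CUDA", "CPU"}

bpy.context.scene.cycles.device = "GPU"  # "GPU" here means "use enabled devices"
```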

Cheers, mib

64 × 64: 2:34.25, 2:30.27
Yes, I did try both, just a couple of runs. I’d sooner not use the CPU unless there’s no option, as I’d rather replace the graphics card than the CPU if it dies. There was quite an improvement when using both.
I’ve been intending to upgrade for years, but I’m pretty sure I will this year. I had been planning to get a Titan, but moved house instead; that delayed things, and now I may decide to go AMD. I don’t really like paying the Nvidia tax unless I get corresponding performance gains.

If you go with AMD you lose CUDA, OptiX rendering and denoising… and not just that…

Agreed, there could be, but I’ll wait to see what AMD do when they introduce their answer to RTX. They have a strong track record of being good in compute and of offering better value than Nvidia.
I can wait.

This shows how much easier it is to get a feature released on one platform than the other:
https://blenderartists.org/uploads/default/original/4X/d/b/1/db144a4fe28463803593f1d80cf0dcd0a2ed5894.jpg
What do you all think? Why is it easy for developers to build and build on CUDA, while getting anything at all finished for OpenCL is a struggle? Do you really think that just happens, that they simply love CUDA and dislike OpenCL? You could ask Brecht in private whether he’d say everything he thinks about OpenCL and AMD…
You can also ask the devs on the LuxRender forum why, after so many years, they added CUDA to LuxRender 🙂

Yes, a very strong track record…
Even in LuxRender, CUDA is now faster than OpenCL.
1 - Octane: CUDA only
2 - FStorm: CUDA only
3 - Redshift: CUDA only
Cycles…
Even Marvelous Designer simulations say no to AMD.
And so on…

How an “AMD strong track record” comes out of that, I do not understand… at all.

Monopoly is not a good situation, and international standards are a very important part of the programming world.

Nvidia tries to be a monopoly and doesn’t support international standards, and you are supporting that. Programmers dislike those who don’t support international standards, like Windows, DirectX, Nvidia, etc.

Nvidia cards are too expensive and not enough better than AMD GPUs.

Use Nvidia cards if you want, I have too, but never defend them blindly and don’t be a fanboy.


Don’t spread misinformation; your GPU-vs-CPU feature comparison is false.
Here is a screenshot from the official Blender Manual:

[screenshot: Cycles GPU rendering feature comparison table]

https://docs.blender.org/manual/en/dev/render/cycles/gpu_rendering.html
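If you’d rather check your own build than argue over screenshots, you can probe which Cycles backends it was compiled with from Blender’s Python console; a rough sketch (relies on bpy raising TypeError when an enum value isn’t available):

```python
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences

# Assigning an unsupported backend raises TypeError, so probe each one.
for backend in ("CUDA", "OPTIX", "OPENCL"):
    try:
        prefs.compute_device_type = backend
        print(backend, "is available in this build")
    except TypeError:
        print(backend, "is NOT available in this build")
```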

Instead of writing nonsense, read once again what I wrote. It is a demonstration of how easily and quickly technologies get implemented on each platform.

lol, how do they “hate” it? They chose CUDA only, and I didn’t come up with that: it is a fact.
Again, the same list as above: LuxRender now faster with CUDA than OpenCL; Octane, FStorm and Redshift CUDA only; Cycles; even Marvelous Designer simulations say no to AMD; and so on…

How?

For example: you work 8 hours a day. Do you want to work 16 hours a day? Because you would have to make the same things twice or three times. You would have to learn two, three, four languages, specialize in two, three, four systems, and spend far too much money to develop one piece of software. Why? Because they want to be a monopoly.

No.
As you can see, they chose CUDA (and in my posts, unlike yours, there are links confirming what I say): the same list again, LuxRender, Octane, FStorm, Redshift, Cycles, Marvelous Designer, and so on…

They made that choice (already!) for a reason. You can ask GPU programmers yourself why they don’t care about AMD and OpenCL, and you will learn many silly things about OpenCL.
lol, why am I writing all this into the void… In short: there are links confirming what I’m saying.
Period.

Are you a programmer? If not, then you will never understand.

“why they do not care about AMD and openCL”

Because supporting multiple systems and writing portable software is hard, time-consuming work: it requires more money, more programmers, and harder work…

Yes, pragmatically speaking, Blender “must” code to “open standards.”

Otherwise, it could never possibly get any work done…

As of now, everyone who uses Blender should expect to want an Nvidia card for the best experience. Radeon can be made to work in Blender, but it always takes a bit of time and effort on the devs’ part. Supposedly AMD is getting better on quality, though, so RDNA 2 may actually become a viable option for creators.

For viewport denoising, though, the devs are working to get the CPU-driven OIDN (Intel’s Open Image Denoise) working at least, so they are definitely not in Jensen Huang’s pocket.


Even if AMD cards “can be made to work in Blender,” we still use other applications…
The 3D industry is basically locked to Nvidia, and AMD isn’t really doing that much to change it; even their OpenCL implementation for Cycles is horrendous. RDNA 2 might change something, but I don’t have high hopes.
