I’m working on a GPU vs CPU rendering comparison that will be the subject of a podcast I’m preparing with my colleagues. I work in support at a cloud render farm, so I have my own experiences with both CPU and GPU rendering (and I’ve seen our customers’ experiences too), but I’m very curious about your opinion.
If you use GPU render engines, what are your reasons? The most common reason I’ve heard is speed, but of course when you have access to more than one card or processor, I guess it’s really speed/money.
What interests me is whether there are any other reasons for using GPU render engines. Do they give a different, maybe cooler look? Any features that CPU engines lack, or types of subjects where they give better results?
I’m very curious about your opinions and impressions when it comes to GPU rendering. What are the pros and cons?
By the way – when it comes to GPU unbiased engines, are GI and glossy reflection/material highlights computed as the same thing? For example, in CPU V-Ray, GI and material reflections are visible on two separate layers/passes.
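To make the question concrete, here is a simplified, hypothetical model of what separate passes buy you: in engines that write light contributions separately (as CPU V-Ray does), the beauty image is roughly the per-pixel sum of those passes, so each one can be graded independently in post. The pass names and pixel values below are illustrative, not real V-Ray output.

```python
# Simplified model of additive render-pass compositing.
# Pass names and values are illustrative, not real engine output.

def composite_beauty(passes):
    """Sum per-pixel light-contribution passes into a beauty image."""
    width = len(next(iter(passes.values())))
    beauty = [0.0] * width
    for pixels in passes.values():
        for i, value in enumerate(pixels):
            beauty[i] += value
    return beauty

# A 4-pixel "image" with GI and glossy reflection kept separate,
# which is what lets you adjust each contribution on its own.
passes = {
    "gi":         [0.20, 0.15, 0.10, 0.05],
    "reflection": [0.05, 0.30, 0.00, 0.10],
    "specular":   [0.00, 0.10, 0.40, 0.00],
}

print(composite_beauty(passes))  # per-pixel sum of all passes
```

If an engine computes GI and glossy contributions as one quantity internally, it simply cannot write them out as separate passes like this.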
CPU engines usually support all features, while GPU engines don’t always, so it varies from renderer to renderer.
CPU rendering is limited by system RAM, which can be upgraded to really large sizes, while GPU rendering is limited to VRAM – though some renderers can now access system RAM in addition (out-of-core rendering), at the cost of speed.
GPUs are usually faster and easier to stack, but even that depends: a Threadripper 3990X, with its 64 cores, does beat a single TITAN RTX.
The ability of a GPU to handle complex and advanced shading and lighting techniques is a lot better than it was when Octane first hit the market (because of new GPU architectures and more advanced compilers), but it is still a bit limited in the area of memory. The memory situation has improved, though, with newer multi-GPU tech allowing two cards to share VRAM.
CPU rendering, meanwhile, is advancing at its fastest pace in over a decade because of renewed competition (up to 16 cores and 32 threads now on non-HEDT platforms), and this has taken some of the wind out of the GPU hype.
I’ve been looking at render speeds, comparing my 980 Ti to a Threadripper 1950X.
I have been using Iray for about two years and was used to the CPU being about a third slower; this, however, is not the case with Cycles. They are comparable, with the 980 Ti just about edging it; with 2.83 (beta), the CPU is slightly quicker, certainly with Adaptive Sampling turned on.
Blender 2.83 (beta):
980 Ti, 512 x 512 tiles: 2:31.21, 2:31.13, 2:27.25, 2:27.98
Threadripper 1950X, 32 x 32 tiles: 2:07.48, 2:07.45, 2:08.10, 2:04.80
Threadripper 1950X, 64 x 64 tiles: 2:06.42, 2:05.64, 2:07.81, 2:09.05
Those were with my own test scene, but the same held true for Mike Pan’s BMW test.
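For anyone wanting to compare, here is a small script (not from the original posts) that averages those “m:ss.xx” timings per device; the numbers are the ones quoted above.

```python
# Average the "m:ss.xx" render timings quoted above and compare devices.

def to_seconds(timing):
    """Convert an 'm:ss.xx' timing string to seconds as a float."""
    minutes, seconds = timing.split(":")
    return int(minutes) * 60 + float(seconds)

def average(timings):
    secs = [to_seconds(t) for t in timings]
    return sum(secs) / len(secs)

runs = {
    "980 Ti (512 x 512)":           ["2:31.21", "2:31.13", "2:27.25", "2:27.98"],
    "Threadripper 1950X (32 x 32)": ["2:07.48", "2:07.45", "2:08.10", "2:04.80"],
    "Threadripper 1950X (64 x 64)": ["2:06.42", "2:05.64", "2:07.81", "2:09.05"],
}

for device, timings in runs.items():
    print(f"{device}: {average(timings):.2f} s average")
```

On these four runs each, the 1950X averages roughly 22 seconds faster than the 980 Ti, which matches the poster’s “CPU is slightly quicker” observation for 2.83.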
It’s made my decision about upgrading my graphics card for faster rendering speeds more of a challenge; with a GPU, there is always the chance that a given scene either won’t fit on the card, or takes additional work to make it fit.
Hi, interesting. Did you try GPU+CPU?
As Cycles on the GPU works fine with smaller tiles, you should get nearly double the performance at 64 x 64.
I believe you need an RTX card to use OptiX in Cycles, but a 2060 should beat your 980 Ti with CUDA, too.
Cycles can use system RAM with out-of-core rendering; it is slower, but heavy scenes can then still work on the GPU.
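For anyone wanting to try these settings, here is a minimal Blender Python fragment to be run inside Blender. It is a configuration sketch for the 2.8x API the thread is discussing (the tile properties were removed in 3.0, where tiling became automatic), not something that runs standalone.

```python
import bpy

# Select CUDA as the compute backend (use 'OPTIX' on RTX cards).
# This mirrors Preferences > System > Cycles Render Devices.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"

scene = bpy.context.scene
scene.cycles.device = "GPU"

# Smaller tiles than the classic 256 x 256 GPU default; as noted above,
# Cycles' GPU path handles small tiles well in recent 2.8x releases.
scene.render.tile_x = 64
scene.render.tile_y = 64
```

Enabling both GPU and CPU devices under Cycles Render Devices in the preferences gives the hybrid GPU+CPU rendering discussed above.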
64 x 64 tiles: 2:34.25, 2:30.27
Yes, I did try both, just a couple of renders. I’d sooner not use the CPU unless there’s no other option, as I’d rather replace the graphics card than the CPU if it fails. There was quite an improvement when using both.
I’ve been intending to upgrade for years, but I’m pretty sure I will this year. I had been intending to get a Titan, but moved house instead; that delayed things, and now I may decide to go AMD. I don’t really like paying the Nvidia tax unless I get corresponding performance gains.
Agreed, there could be, but I’ll wait to see what AMD does when they introduce their answer to RTX. They have a strong track record of being good at compute and of offering better value than Nvidia.
I can wait.
A monopoly is not a good situation, and international standards are a very important part of the programming world.
Nvidia tries to be a monopoly and doesn’t support international standards, and you are supporting that. But programmers hate those who don’t support international standards, like Windows, DirectX, Nvidia, etc.
Nvidia cards are too expensive and not that much better than AMD GPUs.
If you want to use Nvidia cards, go ahead; I used them too. But never defend them blindly, and don’t be a fanboy.
For example: you work 8 hours a day. Do you want to work 16 hours a day? Because you have to do the same things twice or three times over. You have to learn two, three, four languages. You have to specialize in two, three, four systems. You have to spend far too much money to develop a piece of software. Why? Because they want to be a monopoly.
They made a choice (already!) for a reason. You can ask GPU programmers yourself why they don’t care about AMD and OpenCL, and you’ll learn many silly things about OpenCL.
lol, why am I writing all this into the void. In short: there are links confirming what I’m saying.
Period.
As of now, everyone who uses Blender should be expected to have an Nvidia card if they want the best experience. Radeon can be made to work in Blender, but it always takes a bit of time and effort on the devs’ part. Supposedly AMD is getting better on the quality front, though, so RDNA 2 may actually become a viable option for creators.
For viewport denoising, though, the devs are working to get the CPU-driven OIDN working for that at least, so they are definitely not in Jensen Huang’s pocket.
Even if AMD cards “can be made to work in Blender”, we still use other applications…
The 3D industry is basically locked to Nvidia, and AMD isn’t really doing much to change that; even their OpenCL implementation for Cycles is horrendous. RDNA 2 might change something, but I don’t have high hopes.