Blender - CPU or GPU - which to invest in?

Now that Cycles is so well done when it comes to GPU rendering (both OpenCL and CUDA), does it even make sense to invest in a heavily threaded CPU setup (Intel's i9 or AMD's Threadripper)?

An RX 480/GTX 1060 has more rendering performance than current top-end desktop CPUs. Getting two or more of these GPUs, or even going for higher-tier GPUs, makes even top-end CPUs look cost-ineffective.

So, where do you stand, and why?

CPU? GPU?

For myself:

GPU (dual or more). More cost-effective, and probably more power-efficient, than a top-end CPU. I don't see any functionality the GPU can't handle (except for high-memory scenes).

With Eevee coming in 2.8, I would take the GPU for sure.
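For anyone wanting to try it, here's a minimal sketch of switching Cycles onto the GPU from Python with the 2.79-era API (the backend and device names will vary per machine):

```python
import bpy

# Cycles add-on preferences (Blender 2.79-era API path).
prefs = bpy.context.user_preferences.addons['cycles'].preferences

# Pick the backend that matches your card: 'CUDA' for Nvidia, 'OPENCL' for AMD.
prefs.compute_device_type = 'CUDA'

# Refresh the device list and enable every detected device.
prefs.get_devices()
for device in prefs.devices:
    device.use = True
    print(device.name, device.type)

# Tell the active scene to render on the GPU instead of the CPU.
bpy.context.scene.cycles.device = 'GPU'
```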

Well… for rendering alone, yes… but for other tasks like simulations, the CPU in general.

Cheers.

A CPU with high single-thread performance (i7-7700K) and one, two, or more GPUs for rendering (an RX 580, for example).


Correct me if I'm wrong, which I could be, but I do not think GPUs (OpenCL or CUDA) can handle volumetric lighting and the volume absorption/volume scatter shaders used for smoke or liquids. They are limited by their VRAM, which is almost always smaller than system RAM. Power-wise they draw more wattage than any CPU, at least if you are looking at midrange or high-end GPUs. And according to the benchmarks I've looked at on Blenchmark, the GPUs are not magically faster than heavily threaded CPUs like Threadripper or the i9s will be. The only thing that beats a 16-core Threadripper is a 1080 Ti, which is limited by its 11 GB of VRAM.
Correct me if I'm wrong and link to proof, thanks… because I too am wondering whether I should invest in a CPU or a GPU for my new system…
My current Ivy Bridge system with a 650 W PSU could not run the 1080 Ti that I tried in it. Not enough free resources on the PCI bus, or something.

Yeah, this is exactly what I did. I got a cheap-ish Intel quad-core 7700 at 3.4 GHz and two GTX 1070s. The higher clock speed is better for single-threaded simulation work, while the dual GPUs take care of rendering.

That may indeed be true, but isn't the cost of those CPUs much higher compared to a 1080 Ti?

Also, why exactly do you say GPUs aren't good for volumetric work? I'm currently working on a job that uses smoke in the scene, and it renders much faster on my two 1080 Tis at work than on our render farm.

When you start getting into "stupid" numbers of cores and RAM for a desktop (arbitrarily, somewhere around 12+ cores and 48 GB), you aren't getting a whole lot of benefit out of them: anything beyond the first half-dozen or so cores will sit idle for everything other than rendering. A decent graphics card, however, will be used by pretty much everything, not just GPU rendering. Maybe not a second (third, fourth…) GPU, though. You really need to work through how much you need, vs. what you want, vs. how much you want to spend.

Like I said, I could be wrong; I don't know all the data. All I know is that volumetric shaders did not work with OpenCL the last time I tried. Maybe they have now accomplished this in CUDA? How about volumetric lighting?

The GPU can't use multiple importance sampling for volumes or the different volume interpolation modes (like cubic).

I will agree, up to a point, that it's highly recommended to get a good GPU in the near future (as Blender 2.8 will make better use of it than 2.79 does for general production).

However, for rendering you may still hit VRAM limits, and certain features and optimizations can't be used due to limitations of the GPU architecture. Also, given the tendency of new features to cause sudden spikes in memory use and major slowdowns, a feature that was just committed may not simply work the way it does with CPU rendering (though this is getting better with new API versions and the split-kernel work).

Though even the memory issues may be resolved in time, with the emergence of technology that lets you pool memory across multi-GPU setups and lets the GPU use system RAM (but Nvidia currently offers that only on their high-priced professional Pascal cards, and I'm not sure where AMD is on this).

Unless you're a hobbyist or on a tight budget, I don't see the point in not taking advantage of both many-core CPUs and high-end GPUs. Both devices have trade-offs; neither is better than the other in general.

Memory "stacking" has always been possible with GPGPU APIs, and it has never really had a use for ray tracing. You need the scene data resident on both GPUs for best performance.

Using system RAM has also always been possible. It has merely become much more convenient, and potentially a bit faster, but the bandwidth you get over PCIe is still orders of magnitude lower than on-device memory. Even faster interconnects like NVLink (not happening on x86) will only ameliorate the problem.
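To put rough numbers on that (ballpark, assumed figures, not measurements: ~16 GB/s theoretical for PCIe 3.0 x16 against ~256 GB/s for an RX 480/GTX 1070-class card):

```python
# Ballpark comparison of on-device memory vs. going out over PCIe.
# All figures below are rough assumed values, not measurements.
scene_gb = 8.0         # scene data the renderer has to touch
pcie3_x16_gbs = 16.0   # ~16 GB/s theoretical for PCIe 3.0 x16
gddr5_gbs = 256.0      # ~256 GB/s for an RX 480 / GTX 1070-class card

print(f"one pass over {scene_gb} GB via PCIe:  {scene_gb / pcie3_x16_gbs:.3f} s")
print(f"one pass over {scene_gb} GB on-device: {scene_gb / gddr5_gbs:.3f} s")
print(f"PCIe is roughly {gddr5_gbs / pcie3_x16_gbs:.0f}x slower")
```

And a path tracer doesn't read the scene once per frame; it touches it constantly, so that gap compounds.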


I'm probably going with a Threadripper and no GPU / a potato GPU for my new system. The reason being, I hate having to sacrifice any rendering features, such as multiple importance sampling and volume sampling, etc. I hate being limited in memory when a scene goes above the GPU's VRAM. And finally, to get a fairly large VRAM size you are forced up to a 1080 Ti, which is £700 and still only has 11 GB. Also, I have to weigh up the other uses my CPU will have, such as video rendering and running other apps like MB3D and Ultra Fractal, which I personally like to use and which are CPU-only and scale with cores/threads… I have grown out of gaming, and even if I wanted to get back into it I would buy a console and a cheap 4K TV, and that would be ideal for my gaming needs…
As for Blender 2.79 and such, I would like to know how that will be advantageous for the GPU. I'm not interested in that Eevee thing; it looks a bit rubbish, and all it does is make the viewport look better. What's the point when you're after a final render that uses full ray tracing and sacrifices nothing in quality? If you're into game development, then Eevee might well be a good feature, though. Again, I could be wrong with all this; correct me if I'm wrong.

Thanks for the conversation here.

I know that it will also depend on the scenes you are creating. I, for example, model and render spaceships, so I'll probably get more value from multi-GPU with 8 GB+ of memory (like the RX 480/580) and the upcoming Vega (if priced correctly).

I already have a dual-Xeon setup, but seeing how Threadripper pulls off Cinebench results… (16c/32t: dual Xeon - 2100, Threadripper - 3000.) A nice boost at lower power.

Still, for now the primary investment is in GPUs only.

Threadripper 1950X = 3151 in Cinebench R15. http://www.cpu-monkey.com/en/cpu_benchmark-cinebench_r15_multi_core-8 But those results have been up for weeks now and could well be on early firmware; I think there's room for improvement.
The top-of-the-line Threadripper 1998X scores 3416.
Do you not see these Threadrippers being close to the GPUs in speed?
Also, it's tempting to invest in Threadripper for its 64 PCIe lanes, leaving room for up to six RX 480s further down the line when you can afford to upgrade, or of course even newer GPUs.
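A rough lane budget, assuming the commonly quoted figures (64 lanes total on Threadripper, 4 reserved for the chipset, plus one NVMe drive; check your board manual for the real allocation):

```python
# Rough PCIe lane budget for a multi-GPU Threadripper build.
# Assumed values; the real allocation depends on the motherboard.
total_lanes = 64
chipset_lanes = 4    # reserved for the chipset
nvme_lanes = 4       # one NVMe drive, for example
usable = total_lanes - chipset_lanes - nvme_lanes

for lanes_per_gpu in (16, 8, 4):
    print(f"x{lanes_per_gpu} per GPU -> up to {usable // lanes_per_gpu} GPUs")
```

Cycles mostly just uploads the scene once per render, so x8 or even x4 per card is usually fine for rendering.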

Interesting site.

Though there are eight-plus Threadripper CPUs on there that I've never heard of, like the 1998X with a score of 3400+…

When it comes to room for up to six RX 480s, that is a problem so far: all motherboard makers are releasing at most five slots (the Gigabyte Aorus). I'm eager for someone to release a board with seven PCIe x16 slots; that would sure be cool.

As for the RX 480: I'm now eyeing the RX Vega 56, with 210 W and nearly 2x the performance of a single RX 480… well, dreams be dreams.

Still, the core question is CPU vs. GPU… it seems there are still key areas, such as volumetrics/simulations, where a CPU is a must. For regular rendering, a GPU with 8 GB or more should be sufficient for most novice users like ourselves?

Can someone give me an example of a scene that would not fit into 8 GB? I just want to judge complexity, to determine where I sit with my work and to better judge the CPU/GPU investment ratio.
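One way to judge it yourself: textures usually dominate, and (as far as I know) Cycles keeps them uncompressed in VRAM. A back-of-envelope sketch with a hypothetical texture-heavy scene:

```python
# Back-of-envelope VRAM estimate for textures, assuming uncompressed storage.
def texture_mb(width, height, channels=4, bytes_per_channel=1):
    """Size of one uncompressed texture in megabytes."""
    return width * height * channels * bytes_per_channel / 1024**2

# Hypothetical spaceship scene: twenty 4K texture sets.
count = 20
per_tex = texture_mb(4096, 4096)  # ~64 MB each as 8-bit RGBA
print(f"{count} x 4K 8-bit RGBA: {count * per_tex / 1024:.2f} GB")

# The same textures as 32-bit float (e.g. EXR) are 4x larger per channel.
per_tex_f = texture_mb(4096, 4096, bytes_per_channel=4)
print(f"{count} x 4K float RGBA: {count * per_tex_f / 1024:.2f} GB")
```

Add geometry, the BVH, and render buffers on top of that, and a scene full of float textures or heavy displacement can climb past 8 GB surprisingly fast.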

This point is honestly hard to answer. But looking at CPU vs. GPU in Blender, right now a single RX 480 outperforms my dual Xeon E5-2687W (v1, mind you)… even against a Threadripper with a 50% better Cinebench score (dual Xeon - 2100 vs. Threadripper - 3000), my single RX 480 would still come out ahead. And getting a second RX 480 is much easier than getting a second Threadripper CPU (at this point).

I'll do some render tests using the six demo scenes on blender.org and post the results here for comparison this week.
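If anyone wants to reproduce the numbers, something like this times a single frame of each file in background mode (the file names below are placeholders for the blender.org demo scenes):

```python
# Time one frame of each demo scene with Blender in background mode.
# File names are placeholders; substitute the actual downloaded .blend files.
import subprocess
import time

scenes = ["bmw27.blend", "classroom.blend", "fishy_cat.blend"]  # etc.

for blend in scenes:
    start = time.time()
    subprocess.run(["blender", "-b", blend, "-f", "1"], check=True)
    print(f"{blend}: {time.time() - start:.1f} s")
```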

Hence the "conundrum" of where to invest funds. Agreed that this is specific to each person, but I'm also trying to get this conversation going to give others some insight/guidance on what to do.