I am wondering if anyone has had 3 Radeons rendering away on a scene and done a comparison between the three of them and a single 1080 Ti?
The reason I am asking is that I can purchase three new RX 590s for the price of a 1080 Ti… and if the three Radeons combined are faster than the 1080 Ti by itself, I will obviously use them for my new rendering back-up machine.
btw I am aware of the differences between CUDA and OpenCL and the ram capacity of these cards. (edit: …and the power consumption)
(edit: I don’t know how this happened, but my topic ended up in Paid work… weird.)
No worries, I’ve moved it to #support:technical-support for you
On Blenchmark, the RX 570 series apparently renders one second faster than the 1080 Ti. I would imagine three RX 590s would definitely be faster.
holy cow… I mean… Thanks
questions and prayers are answered
Can your motherboard and PSU handle 3 GPUs?
MSI X470 and Corsair HX1200i (even though the system I am building isn’t going to come anywhere close to pulling 1200 watts).
Which MSI X470? There are several X470 models. The PSU is fine, but with the motherboard you would be limited if several devices are installed already and PCIe lanes are shared. Just check whether the last PCIe slot (the bottom one) can run at x4.
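For a rough sanity check on the PSU: the per-card and system wattage figures below are assumptions (roughly the RX 590’s nominal board power plus a generous allowance for the rest of the system), not measurements, so verify against the actual cards.

```python
# Quick PSU headroom estimate with assumed (not measured) figures.
GPU_TDP_W = 225   # nominal board power of one RX 590 (assumption)
NUM_GPUS = 3
SYSTEM_W = 250    # CPU, board, drives, fans -- rough assumption
PSU_W = 1200      # Corsair HX1200i rated output

load = NUM_GPUS * GPU_TDP_W + SYSTEM_W
headroom = PSU_W - load
print(f"Estimated peak load: {load} W, headroom: {headroom} W")
```

Even with all three cards at full tilt, that leaves a comfortable margin on a 1200 W unit; the PCIe lane sharing is the bigger question.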
I am open for suggestions
You mean that this system isn’t already built?
You mean, why ask about hardware in the first place, when one could build it first and then find out it doesn’t perform?
I guess I could have done that… but I didn’t.
Sorry, I don’t mean to be rude or something. Just trying to help out. Still, I don’t get it. You already have the MSI X470? If you do, how are you open for suggestions? You mean, you could change it if it doesn’t fit?
I’m a bit puzzled…
You can also look at the official Blender benchmark tests. (Click on the legend to hide a device.)
It looks like the GTX 1080 Ti is only about 2x as fast as the RX 580. With the RX 590 having higher clocks, three of them should easily outperform a GTX 1080 Ti in rendering.
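To put rough numbers on the reasoning above (the relative-throughput values are illustrative assumptions based on the ~2x figure from the charts, not benchmarks, and this treats multi-GPU rendering as perfectly additive, which Cycles only approximately achieves):

```python
# Back-of-the-envelope comparison with assumed relative throughputs.
RX_580 = 1.0       # baseline relative render throughput
GTX_1080TI = 2.0   # roughly 2x an RX 580, per the benchmark charts
RX_590 = 1.1       # assume ~10% faster than an RX 580 (higher clocks)

three_rx_590 = 3 * RX_590  # assumes near-linear multi-GPU scaling
print(f"3x RX 590 ≈ {three_rx_590:.1f}x RX 580, "
      f"1080 Ti ≈ {GTX_1080TI:.1f}x RX 580")
```

On paper the triple-590 setup comes out well ahead even if scaling loses a bit to overhead.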
Is the extra GPU RAM even necessary anymore in Cycles? As I understand it, system RAM is now used alongside the GPU’s. If that is the case, then two RTX 2060s will outperform the 1080 Ti.
Edit: Development for AMD cards isn’t as fast as on the Nvidia side of things in Blender. CUDA is just better than OpenCL.
@birdnamnam No worries mate, appreciate it. Actually, if you have any suggestions for a board better than the one on my watch-list, I would like to hear them.
@BigBlend I was under the very same impression: that the GPU is no longer the limitation when you render with CPU and GPU combined. But I put it to the test and rendered a scene larger than 8 GB on two 16-core Opterons (64 GB installed, limited to 32 GB by the Windows Server license) together with a 1050 Ti 4 GB (the most GPU an HP DL385 G7 can power), and I suddenly got a “CUDA out of memory” error. With a scene under 8 GB it didn’t do that. So the idea that system RAM is the limit and keeps the CUDA API from throwing an out-of-memory error doesn’t hold. By the way, I was testing this on a Blender 2.79.6 daily build, or thereabouts.
And in regards to OpenCL and AMD GPUs… I have done some research and stumbled across the Vega 64… and the Frontier, if that rings a bell.
Let’s take a quick example: RTX 2070 vs GTX 1070… an improvement of only 5 fps? Over the course of what, 3+ years? Nvidia didn’t do all that great in their chipset development.
Don’t get me wrong, I appreciate Nvidia GPUs and CUDA (always been my favourite)… but the Vega 64 and OpenCL seem a little more affordable and hassle-free at this point (after spending a sleepless night over it, haha).
edit: either way, I have decided to put two Vega 64s in my new system and blow an overpriced 1080 Ti out of the water… way to go