I’m currently using 2070 in my PC but I’m considering adding my old 1060 6GB as a second GPU. Can blender somewhat benefit from two cards at the same time?
Sure, it should work great. If the 1060 were only a 3GB version that would be a problem, because you'd be limited to scenes that fit in 3GB when rendering on both cards at once. A 6GB 1060, though, is less of an issue. You can always tell Cycles to use just the 2070 when you need the most GPU memory.
Yes that’s no problem.
Edit: Hehe, simultaneously.
I run my old 1050 along with my 2070. Obviously I can only use both on smaller scenes. For anything more complex I just go into the settings and disable the 1050. Anything to get a bit of extra speed is welcome.
Am I missing something here? Putting in a second card is a waste of power and space inside the computer. You are better off with just the 2070. For this to work you would need two 2070s with SLI, and also a lot of system RAM. I have edited my first post: the computer and Blender can use both cards, but it is a waste.
How do you come to this conclusion?
Edit: I even used to use an AMD Vega 64 and an NVIDIA 1050 Ti at the same time.
Cycles is happy to use as many GPUs and CPUs as you give it and it scales very effectively. Blender interactive OpenGL performance will of course be limited to whatever GPU you’re using for the Blender UI.
But will Blender benefit from it?
Here is something a Blender user said: "I have 3 cards; 2 cards scaled with a factor of 2.0, and the 3rd card scaled somewhere around 1.7-1.9. So 2 cards make a huge difference in Cycles. You will for sure cut the render times in half.
All the articles I've read suggest not using SLI for apps that use CUDA/compute. On Windows you can enable or disable it via the drivers. It's currently not working under Linux AFAIK.
So turn it ON when gaming, turn it OFF when rendering with Blender."
So why did he say that?
SLI makes two identical cards work as one, which is great for gaming, but for rendering you want SLI disabled so the two cards stay separate and can be doing two different things (rendering different parts of the image) at the same time.
Cycles will distribute the buckets/tiles that make up the image across the devices you give it. Even if one card is 2x the speed of the other, there are plenty of buckets to go around: the fast card simply ends up finishing 2x as many as the slow one, and the render still finishes 50% faster than it would on the fast card alone.
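The scheduling idea can be sketched in a few lines of Python. This is a toy model, not Blender code; the speeds and tile count are made-up numbers, and "speed" just means tiles per second:

```python
import heapq

def render_time(tile_count, device_speeds):
    """Wall-clock time when each device grabs a new tile as soon as
    it becomes idle. Speeds are in tiles per second (hypothetical)."""
    devices = [(0.0, s) for s in device_speeds]  # (time when free, speed)
    heapq.heapify(devices)
    finish = 0.0
    for _ in range(tile_count):
        free_at, speed = heapq.heappop(devices)  # next device to go idle
        done_at = free_at + 1.0 / speed          # time to finish one tile
        finish = max(finish, done_at)
        heapq.heappush(devices, (done_at, speed))
    return finish

print(render_time(1000, [2.0]))       # fast card alone: 500.0 s
print(render_time(1000, [2.0, 1.0]))  # add a half-speed card: ~1.5x sooner
```

Nobody hands out tiles in advance; each device just pulls the next one when it's free, so the faster card naturally ends up doing more of the work.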
I don’t know how you interpret that as not working.
Of course you won't double the render performance with different cards. But each one will render as fast as if it were the only card in the system.
Regarding SLI, I don't know the exact technicalities, but it somehow negatively impacts rendering.
Regarding Linux: I am rendering twice as fast with my two RTX 2070s as I do with just one on my penguin.
Edit: See above for SLI. Easy and logical. Now I know that too.
I might be mixing things up a little here, SLI and non-SLI. With SLI you need two 2070s to make it work.
With a non-SLI setup it is probably possible, but Blender needs to render two separate parts of the scene, so it will be like network rendering.
Not really, you are just rendering two tiles at the same time.
I disagree here: when you say 50% faster, it is mathematically impossible.
And why is that?
Well, let's say the 2070 takes 5 minutes to render the scene alone.
The 1060 will render the scene in 15 minutes. So, Spock, where is the logic?
Cycles (unless you’re in progressive refinement mode) renders the scene in lots of little pieces, one tile/bucket at a time. As each compute device becomes idle it asks the renderer for a new chunk of the image to work on, so yes, it’s sort of like a “network render” in that you have different hardware all working together on different parts of the image, but this is nothing special and it’s how Cycles always works. When you hit F12 those little squares you see are one for each hardware compute thread available, which in recent Cycles can be one for each CPU thread and one for each separate GPU device in your system.
The relative speed of each device doesn’t matter much because the faster devices just end up consuming more work units (tiles) and thus do more of the total work, but the slow devices contribute proportionally to the total image.
So having 2 fast GPUs renders about twice as fast as one, and having one fast GPU and one GPU that’s half the speed will still get you a 50% speed boost.
Let's say the 2070 computes three tiles per second, the 1060 computes one tile per second, and 1,000 tiles are needed to complete the image. The 2070 alone would take 1000/3 ≈ 333 seconds, the 1060 alone would take 1000 seconds, and the two cards together would take 1000/4 = 250 seconds to do all the required work.
If someone could prove this by testing it, it would be awesome.
Literally everyone who has more than one GPU does this and it works great. In modern (2.80 / 2.79 experimental) Blender, you can use your CPU and GPU at the same time in Cycles and while the faster GPU probably does most of the work, the CPU threads can complete a fair number of buckets on their own thus making the render significantly faster just like having a bunch more slow GPUs would.
I don't understand how this will give you a 50% faster render. If you had two identical cards, or a second one just as strong, then you would achieve 50% faster runs.