Nvidia 2070 and 1060 at the same time?

Ah, I see. Well, 50% might not be the right number, because that would only be the case for two identical cards. But let's stay with your example. If a 2070 will render a scene in 5 min. on its own, then a second card will do some of the job at the same time, therefore a shorter render time. If we say that a 2070 is twice as fast as a 1050, we could say that by the time the 2070 has rendered half of the job, the 1050 has already done 25% of the whole picture. You rendered 75% of the picture in the same time you would have rendered 50% with only the 2070. You see where this is going?
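If it helps, here is the same reasoning as a few lines of throwaway Python (the 5-minute figure and the 2x ratio are just the made-up numbers from above):

```python
# Two cards racing on the same frame: their work rates add.
t_fast = 300.0   # seconds the faster card (the 2070 here) needs alone
k = 2.0          # the slower card is assumed k times slower

rate = 1.0 / t_fast + 1.0 / (k * t_fast)  # fraction of the frame finished per second
t_both = 1.0 / rate
print(t_both)    # 200.0 -> the 5-minute frame drops to about 3:20
```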

OK, I don't say that you are wrong, but I am unable to understand the technical math in it. I will have to test this. I am going to put in another GFX card I have, to see if this will be of any use :slight_smile:
I will then come back with some cold facts. :wink:

Shall we assist in enabling both cards in Blender, or do you already know where to look for that? Although I believe Blender will detect and set up the devices automatically for you.
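For reference, here is a minimal sketch of turning the devices on from Blender's Python console, assuming the 2.80 preferences API (in 2.79 the path is bpy.context.user_preferences instead):

```python
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'  # use the CUDA backend
prefs.get_devices()                 # refresh the detected device list

# Enable every CUDA card that was found
for dev in prefs.devices:
    if dev.type == 'CUDA':
        dev.use = True

bpy.context.scene.cycles.device = 'GPU'  # make the scene render on the GPUs
```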

Woah, thanks everyone for the great feedback. I didn't expect this thread to blow up like it did. I'd test it too if I had a motherboard with 2x PCIe x16 slots. Let me know the results, because the numbers and percentages you guys posted really make me giggle.

You don’t even need two x16 slots. An x16 and an x8 will do fine. Once the data is loaded into VRAM, the card will do its duty.

Damn. You all make me look like a noob. I’ll have to try that! Thanks for the tips ;)

On most boards with 3 slots the PCIe lanes are shared between the 2nd and 3rd slot anyway…

Edit: Don’t feel embarrassed. Nobody can know everything. It’s complex stuff we Blender users are dealing with.

True, that never caught my attention. What about the drivers?

Just use the newest driver for your 2070. It will handle the 1050 just fine.

Edit: Sry 1060 ofc

I will have to find some extra PCIe power cables for the other card. Yes, I think I know where to enable the card; they should show up under System > CUDA.

Exactly. Have fun and don’t forget to report your findings.

If you’re using 2.79b, be sure to also give it a try with 2.80 or a current 2.79 experimental build to get the latest Cycles version, and use small tile sizes there (even 16×16). And you can try enabling your CPU along with your GPUs as a CUDA compute device too!
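The same two tweaks in script form, as a rough sketch assuming the 2.80 API (tile_x/tile_y and the device list may differ in older builds):

```python
import bpy

scene = bpy.context.scene
scene.render.tile_x = 16  # small tiles keep multiple devices busy
scene.render.tile_y = 16

# Enable the CPU entry alongside the CUDA cards
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.get_devices()
for dev in prefs.devices:
    if dev.type in {'CUDA', 'CPU'}:
        dev.use = True
```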

For the sake of completeness:
You can get more elaborate with a ‘different-kind-of-cards’ setup.

When dealing with animations, you can start two instances of Blender (preferably headless, because that’s faster anyway) and assign just the right number of frames to the lower-end card, to avoid the stronger one going idle while waiting for the other to complete its tiles on the same frame.

Kind of a convoluted sentence, but I hope the message came across.
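In script form, a hypothetical sketch could look like the following (the file name, frame range, and 2x speed ratio are all placeholders, and each instance would still need to be pointed at its own card, e.g. with a small --python setup script):

```python
import subprocess

blend = "shot.blend"   # hypothetical .blend file
first, last = 1, 250   # hypothetical frame range
ratio = 2.0            # fast card assumed roughly 2x as fast as the slow one

# Give the fast card ratio/(ratio+1) of the frames, the slow card the rest
split = first + round((last - first + 1) * ratio / (ratio + 1)) - 1

jobs = [
    # note: Blender parses its CLI in order, so -s/-e must come before -a
    ["blender", "-b", blend, "-s", str(first),     "-e", str(split), "-a"],
    ["blender", "-b", blend, "-s", str(split + 1), "-e", str(last),  "-a"],
]
procs = [subprocess.Popen(cmd) for cmd in jobs]
for p in procs:
    p.wait()
```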

I am double posting this because I didn’t feel it would fit as an edit.
Now all of you have fun with faster rendering and doing some math :wink:

The OP has to use a non-stable version anyway, since RTX cards ‘shouldn’t’ be supported by 2.79b. But there’s an indication in another thread (which I don’t care to search for right now while on mobile) that the situation might be different.

Sure, but I was replying to GFXManx, who I don’t believe has told us what specific hardware is being used, so I wanted to cover the possibility that it’s plain old 2.79, which is a lot more limited in terms of Cycles performance (it needs big tiles for GPU, which causes “last tile” issues with asymmetrical GPU speeds) and doesn’t support CPU+GPU.

Yes, I got that on the second reading. It’s not always a good idea to answer on mobile while dealing with other RL things at the same time.

My bad.

Anyway, I think this thread is constructive in its entirety and a pleasure to participate in.

Hi guys,
Pixelfox, no, I did not say anything about my hardware setup. Here it is:

Blender 2.8 Beta as of March 4, 2019
Nvidia GTX 1070, 8 GB VRAM
Nvidia GTX 760, 4 GB VRAM
Intel UHD Graphics 630 (integrated graphics)

CPU: Intel Core i7 8700K (Coffee Lake)
16 GB dual-channel DDR4 RAM
MB: Gigabyte Z370P

All 3 cards are detected by Blender 2.8. Results using the bmw27 test file:

GTX 1070 (alone): 02:01.76
GTX 760 (alone): 05:03.72
GTX 1070 + 760: 01:38.07

So as a conclusion, you do save render time using two cards; in this case I saved about 24 seconds, which is roughly a 19% saving, not 50%. But anyway, it is a good way of saving render time; also keep the cost of the extra power in mind.
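For anyone who wants to check the arithmetic, here is the “rates add” reasoning from earlier in the thread applied to these measurements, as a rough sanity check:

```python
t_1070 = 2 * 60 + 1.76   # 121.76 s alone
t_760  = 5 * 60 + 3.72   # 303.72 s alone
t_both = 1 * 60 + 38.07  #  98.07 s measured with both cards

t_ideal = 1.0 / (1.0 / t_1070 + 1.0 / t_760)
print(round(t_ideal, 1))                      # ~86.9 s, the no-overhead lower bound
print(round(100 * (1 - t_both / t_1070), 1))  # ~19.5 % actually saved vs. the 1070 alone
```

The gap between the ~87 s ideal and the 98 s measured fits the tile-scheduling (“last tile”) overhead mentioned earlier.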

I also tried GTX 1070 + GTX 760 + CPU.
The result was not good: the CPU was actually slowing down the render. It took longer than the 1070 alone: 06:09.95.
The internal card can’t work with the external ones because it is not a CUDA device.

How did you get your Vega and 1050 Ti to work together? I was going through some issues after a hardware upgrade, so I picked up an RX 550 for testing purposes (a super slow card, but cheap). After I got everything hashed out, I tried pairing it with my GTX 780 and couldn’t get Blender to render with both the CUDA and OpenCL cards at the same time.
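That matches how the Cycles preferences are laid out, at least in the 2.7x/2.8x builds I’ve seen: the compute backend is a single enum, so one Blender instance renders with CUDA or OpenCL, not both at once. A quick way to see it (a sketch assuming the 2.80 API):

```python
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences

# compute_device_type holds one enum value ('NONE', 'CUDA' or 'OPENCL'),
# so a single Blender instance can only use one backend at a time
print(prefs.compute_device_type)

prefs.compute_device_type = 'OPENCL'  # switching to OpenCL drops the CUDA cards
prefs.get_devices()
print([(d.name, d.type) for d in prefs.devices])
```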

Are the 20-series cards different from previous cards in that they benefit from SLI in Blender? I was under the impression that SLI mainly benefits gaming, while Cycles takes a speed hit from it, and that Blender prefers two standalone cards at once as opposed to a bridged pair.