Will 2x GPU cards really get limited to the lowest VRAM value?

So I’m looking at a new PC build.

I currently have a 780 6GB and can add another card. Reading around the forum, using them in SLI would be slower, but that can be dealt with. However, if the other card has 2-4GB of VRAM, then whilst the performance would be greater, CUDA memory would be limited to the lower amount of VRAM.

So this might not be worthwhile with a faster, cheaper card, and only worth doing with a matching card? Is this still the case?

I’d love to ponder the possibilities of dual E3 v2 Xeons, but that does complicate threads.

This is still the case. GPUs don’t share memory, so the scene has to be loaded into each card’s individual VRAM. If one card runs out of memory, the render will crash.

To be honest I was looking at the 2GB cards. But would the limit be 2GB or 4GB? I’d rather just keep the 6GB and use one card, but it’s interesting to know. Especially as using two GPU cores with one pool of VRAM would be awesome where you have 6GB or above. I guess a few 12GB Titan owners are gonna feel this too, where they could otherwise add a pretty quick 4GB card.

Thanks

1x 6GB card + 1x 2GB card = 2x 2GB cards
Your scene would need to fit into 2GB of VRAM.
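If it helps, here is a minimal Python sketch (assuming an NVIDIA driver and the pynvml package are installed; none of this is from the thread itself) that lists each card’s total VRAM and reports the smallest value, which is effectively the scene budget when rendering across multiple GPUs:

```python
# Minimal sketch: report each GPU's total VRAM and the smallest one,
# which is effectively the per-scene limit for multi-GPU CUDA rendering.
# Assumes the NVIDIA driver and the pynvml package are available.
import pynvml

pynvml.nvmlInit()
totals = []
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    # Older pynvml versions return bytes for the device name.
    if isinstance(name, bytes):
        name = name.decode()
    total_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3
    totals.append(total_gb)
    print(f"GPU {i}: {name} - {total_gb:.1f} GB VRAM")

if totals:
    print(f"Effective per-scene limit: {min(totals):.1f} GB (smallest card)")
pynvml.nvmlShutdown()
```

With a 6GB card plus a 2GB card installed, this would report a 2GB limit, which matches the point above: the scene has to fit on the smallest card.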

Thanks, and 2GB is nowhere near enough, so I’ll save my money.

Thanks guys