Oops, well, we are definitely missing some key factor here that is responsible for the slowdowns, one that might show up in one scene and not in another.
Between 1.55x and 1.90x faster for CUDA from these tests.
Between 1.53x and 1.80x faster for OptiX from these tests.
On a side note, and I know this is not news, but OptiX in these tests is up to 2.53x faster than CUDA… and all we have to do is tick a box in the system preferences! Big thank you to the devs. Looking forward to having all Cycles features supported; I wonder if combined CPU and GPU rendering will deliver better performance.
Wow! That seems like a really bad move… it’s as if they don’t want to give productivity users too much value in the 30-series cards. Are they trying to steer us toward Quadro? A pair of Titan RTX with NVLink would seem the better option for those of us who need lots of VRAM.
Edit: Reading the article over again, it is unclear to me what they mean by “SLI will only be supported when implemented natively within the game.” Games, who cares… but does this mean that SLI (memory sharing included?) will be supported in apps like Blender, only the developer has to implement it themselves, without help from the drivers?
SLI is only used in games to combine the processing power of two GPUs to give you more fps: the “main” GPU is the one whose VRAM is used, while the “secondary” GPU helps out with more CUDA cores and such. It was never able to combine VRAM. (SLI never took off enough to be considered a compelling feature.)
Most modern games don’t go beyond 8GB of VRAM at 4K, which is where most gaming-targeted GPUs sit; some games might need more, and that’s where the xx80 Ti cards come into play.
You don’t need SLI to use two or more GPUs for rendering; you just stack them and hit render.
NVLink, on the other hand, is meant to replace SLI and is aimed more at professional workloads where memory sharing is more valuable (rendering, deep learning, etc.). There is no “main” and “secondary” here; both GPUs act as “one” thanks to the crazy speed of the NVLink bridge, bypassing the “slow” PCIe bus.
This is at least how I understand things, feel free to correct me.
Thank you for the explanation. I didn’t know they were different. So then, this announcement about not supporting SLI is pretty much a moot point! Good to know.
I don’t think anybody knows this for sure at this point. But it is what some are guessing.
"One major question has been how AMD’s RDNA2 will compete against these new GPUs. Rumors have suggested that the company might leverage higher RAM loadouts, with a 16GB RDNA2 GPU positioned near or directly against an equivalent 10GB RTX 3080."
They do now; it is a new feature. Cards connected with an NVLink bridge support memory pooling as of 2.90.
NVLink support for CUDA and OptiX. When enabled in the Cycles device preferences, GPUs connected with an NVLink bridge will share memory to support rendering bigger scenes.
So if you want to take advantage of memory pooling you need: an NVLink-compatible (not SLI) card, an NVLink bridge of the right size, and, as I have already posted in this thread, with the 20-series RTX generation a blower-style card design is highly recommended in multi-GPU setups for better heat dissipation. see here
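For anyone who wants to flip the same switch from a script instead of the preferences UI, something along these lines should work in Blender 2.90+. Treat it as a sketch: the `peer_memory` property name is my reading of the Cycles add-on preferences (the “Distributed Memory Across Devices” checkbox) and is an assumption, not something confirmed in this thread.

```python
import bpy

# Cycles add-on preferences (Blender 2.90+).
prefs = bpy.context.preferences.addons["cycles"].preferences

# Pick the compute backend; 'CUDA' also works, OptiX is the faster one here.
prefs.compute_device_type = 'OPTIX'

# "Distributed Memory Across Devices": with an NVLink bridge installed,
# Cycles distributes scene memory across the linked GPUs instead of
# duplicating it on each card. Property name assumed from the 2.90 add-on.
prefs.peer_memory = True

# Make sure the scene actually renders on the GPU.
bpy.context.scene.cycles.device = 'GPU'
```

This only takes effect when an NVLink bridge is physically present; without one, each GPU still needs its own full copy of the scene.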
It is still to be confirmed whether the same recommendation applies to the new 30-series cards. But as we now know from the leaks, Gigabyte has a “Turbo” model coming, which, if we go by their current Turbo models, should be a blower design. Since regular fan setups give better cooling and less noise in single-card configurations, I would assume this blower model is being released for multi-GPU setups; we will see.