Nvidia 3080 x2 or 3090

Oops, well we are definitely missing some key factor here that is responsible for the slowdowns, one that might show up in one scene and not in another.

  • same textures / different textures.
  • unique objects / duplicate objects / linked objects.
  • hardware tested (DDR3/DDR4 @ ???MHz vs GDDR5/5X/6/6X or HBM2)
  • other sneaky culprits è_é

The results would differ from one scenario to another.

I am facing the same question.

Maybe I will go with 2x 3080. I was hoping to get around the VRAM limit with Octane’s “out of core” option, but I don’t know if this is a good solution.

I wonder if RTX IO will eventually help with the out of core rendering penalty.


First Blender numbers are out for the 3080:

Source and card review: https://www.youtube.com/watch?v=AG_ZHi3tuyk

Classroom scene:

Source: https://techgage.com/article/nvidia-geforce-rtx-3080-rendering-performance/


In other words:

3080 CUDA performance vs 2080 Ti

  • 1.55x faster / BMW scene
  • 1.64x faster / Pavillon Barcelone scene
  • 1.90x faster / Classroom scene

3080 OptiX performance vs 2080 Ti

  • 1.53x faster / BMW scene
  • 1.80x faster / Pavillon Barcelone scene
  • 1.58x faster / Classroom scene

Between 1.55x and 1.90x faster with CUDA in these tests.
Between 1.53x and 1.80x faster with OptiX in these tests.
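As a quick sanity check on how these “x faster” numbers are computed: the speedup is just the ratio of render times. A minimal Python sketch with hypothetical times (the reviews publish the ratios, not the raw seconds, so the numbers below are for illustration only):

```python
def speedup(old_seconds: float, new_seconds: float) -> float:
    """How many times faster the new card is: old time / new time."""
    return old_seconds / new_seconds

# Hypothetical illustration: if a 2080 Ti renders Classroom in 190 s
# and a 3080 renders it in 100 s, the 3080 is 1.90x faster.
print(f"{speedup(190, 100):.2f}x")  # -> 1.90x
```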

On a side note, and I know this is not news, but in these tests OptiX is up to 2.53x faster than CUDA… and all we have to do is tick a box in the system preferences! Big thank you to the devs. Looking forward to having all Cycles features supported; I wonder if combined CPU and GPU rendering will deliver even better performance.

Viewport performance: on average 15 fps more than the 2080 Ti, and 10 fps more than the RTX Titan.

source: https://techgage.com/article/nvidia-geforce-rtx-3080-rendering-performance/


It looks like only non-SLI configurations will be supported with 2x 3090 (so no VRAM doubling). Native SLI driver integration is canceled.

Wow! That seems like a really bad move… it’s as if they don’t want to give productivity users too much value in the 30 series cards. Are they trying to steer us toward Quadro? A pair of Titan RTX with NVLINK would seem the better option for those of us who need lots of vram.

Edit: Reading the article over again, it is unclear to me what they mean by “SLI will only be supported when implemented natively within the game.” Games, who cares… but does this mean that SLI (memory sharing included?) will be supported in apps like Blender, only the developer has to implement it themselves, without the help of the drivers?

SLI ≠ NVLink

SLI is only used in games to combine two GPUs (processing power) to give you more fps: the “main” GPU is the one using its VRAM, while the “secondary” GPU helps out with more CUDA cores and such. It never had the ability to combine VRAM. (SLI never took off enough to be considered a compelling feature.)

Most modern games don’t go beyond 8 GB of VRAM at 4K, which is where most gaming-targeted GPUs sit; some games might need more, and that’s where the xx80 Ti cards come into play.

You don’t need SLI to use 2+ GPUs for rendering; you just stack them and hit render.

NVLink, on the other hand, is meant to replace SLI and is aimed more at professional workloads where memory sharing is more valuable (rendering, deep learning, etc.). There is no “main” and “secondary” here; both GPUs act as “one” thanks to the speed of the NVLink bridge bypassing the “slow” PCIe bus.
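To put rough numbers on that “slow” PCIe comparison, here is a ballpark sketch. The bandwidth figures are approximate values from public spec sheets (PCIe 3.0 x16 at ~15.75 GB/s per direction, and the RTX 3090’s NVLink bridge rated at 112.5 GB/s total), so treat this as an illustration, not a benchmark:

```python
# Approximate total bidirectional bandwidths, in GB/s (ballpark figures):
PCIE3_X16 = 31.5     # PCIe 3.0 x16, ~15.75 GB/s each direction
NVLINK_3090 = 112.5  # RTX 3090 NVLink bridge, per Nvidia's spec sheet

ratio = NVLINK_3090 / PCIE3_X16
print(f"NVLink bridge: ~{ratio:.1f}x the bandwidth of PCIe 3.0 x16")  # -> ~3.6x
```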

This is at least how I understand things, feel free to correct me.


Thank you for the explanation. I didn’t know they were different. So then, this announcement about not supporting SLI is pretty much a moot point! Good to know 🙂

This render took 50 minutes on two Quadro RTX 8000s (no NVLink) and failed to render on 2x Quadro RTX 6000s, RTX Titans, and 2080 Tis.

Notice how NVLink made the render slower on the 2x Quadro RTX 8000s in exchange for the shared-memory benefit (shared memory, yes, but at a speed cost).

Poor 2080 Ti: even with NVLink (22 GB), it’s still not enough for such monstrous scenes.

Source:

Viewport speed comes down mainly to CPU single-thread speed.


Looks like the 3080 20GB is real.

Gigabyte’s Watch Dogs Legion code-redemption website lists many graphics cards that have not been announced by the manufacturer.

Source > https://videocardz.com/newz/gigabyte-confirms-geforce-rtx-3060-8gb-rtx-3070-16gb-and-rtx-3080-20gb

And we still have to see what AMD comes up with.

More explanation about the SLI confusion


Will Nvidia really release the 3080 20 GB in late October to counter Big Navi? That seems awfully close to the 10 GB release last week.

I don’t think anybody knows this for sure at this point. But it is what some are guessing.

"One major question has been how AMD’s RDNA2 will compete against these new GPUs. Rumors have suggested that the company might leverage higher RAM loadouts, with a 16GB RDNA2 GPU positioned near or directly against an equivalent 10GB RTX 3080.

A new leak suggests Nvidia has a plan for that eventuality: doubled-up VRAM on the RTX 3080 and RTX 3070." https://www.extremetech.com/gaming/315214-high-vram-variants-of-rtx-3080-3070-upcoming-3060-confirmed

Now, “released” might not be the correct word; they might rather just get “announced”. We will see.


The question is, will they use the new GDDR6X or the already-existing GDDR6?

GDDR6X memory is currently only available in 1 GB modules, with 2 GB modules to come later in 2021.

If a new 20 GB 3080 is to be added, there are three options:

  • Copy the 3090 layout (memory on both sides of the PCB) with GDDR6X.
  • Keep memory chips on one side but use GDDR6.
  • Wait till 2021 to be able to use GDDR6X on one side of the PCB.
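The chip math behind those options, assuming the 20 GB card keeps the 3080’s 320-bit memory bus (ten 32-bit channels, i.e. ten memory chips per side of the PCB):

```python
# Why 20 GB forces a layout decision: the 3080's 320-bit memory bus
# is made up of ten 32-bit channels, i.e. ten memory chips per side.
BUS_WIDTH_BITS = 320
CHIP_WIDTH_BITS = 32

chips = BUS_WIDTH_BITS // CHIP_WIDTH_BITS  # 10 chips per side
print(chips * 1)  # 10 (GB): ten 1 GB GDDR6X chips -- the shipping 3080
print(chips * 2)  # 20 (GB): ten 2 GB chips on one side (wait for 2021),
                  # or twenty 1 GB chips on both sides, 3090-style
```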

A guy in a thread I started said it does not stack. I run dual GPUs and am looking to upgrade. Should I stick with dual or go for one card?
Now I don’t know what to believe, lol.

As of not too long ago, Blender supports NVLink with memory sharing. And as I just learned, NVLink isn’t the same as SLI.

They do now; it is a new feature. Cards connected with an NVLink bridge support memory pooling as of 2.90.

  • NVLink support for CUDA and OptiX. When enabled in the Cycles device preferences, GPUs connected with an NVLink bridge will share memory to support rendering bigger scenes.

https://wiki.blender.org/wiki/Reference/Release_Notes/2.90/Cycles
(If you want memory pooling with the new generation, only the 3090 currently has NVLink.)
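For reference, that checkbox lives in the Cycles device preferences, and it can also be toggled from Blender’s Python console. A minimal sketch, assuming Blender 2.90+; the property names (`compute_device_type`, `peer_memory`) are taken from the Cycles add-on preferences and are worth double-checking against your build, and this only runs inside Blender itself:

```python
import bpy

# Cycles add-on preferences (Edit > Preferences > System in the UI)
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"  # the "box to tick": OptiX instead of CUDA
prefs.get_devices()                  # refresh the detected-device list
for device in prefs.devices:
    device.use = True                # enable every detected device (GPUs, and CPU if listed)

# "Distribute memory across devices" -- the NVLink memory-pooling toggle
prefs.peer_memory = True

# Make the current scene render on the GPU(s)
bpy.context.scene.cycles.device = "GPU"
```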

So if you want to take advantage of memory pooling you need: an NVLink-compatible card (not just SLI) and an NVLink bridge of the right size. And as I have already posted in this thread, with the RTX 20 series it is highly recommended to get a blower-style card in multi-GPU setups for better heat-dissipation efficiency. See here.

It is still to be confirmed whether the same recommendation applies to the new 30-series cards. But as we now know from the leaks, Gigabyte has a “Turbo” model coming, which, if we go by their current Turbo models, should be a blower design. Since regular fan setups have better cooling and less noise in single-card configurations, I would assume this blower model is being released for multi-GPU setups. We will see.