Nvidia 3080 x2 or 3090

So Nvidia announced its new cards today and it looks to be a very nice increase in performance! I wanted to get y'all's opinions on these new cards and what you think about ordering two 3080s for the price of a single 3090. That seems smarter to me… or even waiting for the 3080 Ti for the second. I have enough power to run two of them on my system.

What are you all planning? Anyone else upgrading?

I was thinking the same thing, but the specs say there is no NVLink on the 3080, so you would still be bound by 10 GB of VRAM.

Oh, and as for card size… the Nvidia Founders Edition cards are HUGE, but EVGA has them slimmed down to sizes very similar to the 20 series, so it may be more practical to fit two of them on your motherboard.

As of version 2.90, Blender now supports NVLink. This means that if you have, say, two 2080 Tis with 11 GB of VRAM each, you can pool their memory and have an effective 22 GB of VRAM. The problem is that, of the announced cards, only the 3090 supports NVLink, so two 3080s would still max out at 10 GB of VRAM. I currently have a 2070 Super and I'm just going to buy another (they will be cheap now!) and hook them up via NVLink. This will give me approximately 50% more performance than a 2080 Ti and an effective 16 GB of VRAM, which should be fine for me. If the rumored 3080 with 20 GB of VRAM shows up soon, then I might change my mind…
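The pooling arithmetic above can be sketched in a few lines of Python (a minimal sketch; the function name is just for illustration, not a Blender API):

```python
# Effective VRAM when pooling two cards over NVLink (Blender 2.90+),
# vs. being limited to a single card's memory without it.
def pooled_vram(vram_per_card_gb, cards=2, nvlink=True):
    # With NVLink, memory pools across cards; without it,
    # the whole scene must fit on each card individually.
    return vram_per_card_gb * cards if nvlink else vram_per_card_gb

print(pooled_vram(11))                 # 2x 2080 Ti  -> 22 GB
print(pooled_vram(8))                  # 2x 2070 Super -> 16 GB
print(pooled_vram(10, nvlink=False))   # 2x 3080, no NVLink -> 10 GB
```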

Good plan! I wonder if I'd be better off going that same route with a second 2080 Ti, 'cause like you said… they're gonna be cheap! Already today on eBay, the used price is half of what it was yesterday.

Will dual cards give me any increase in viewport render speed, or will I only see that at full render?

Ah, that stinks about no NVLink on the 3080… two 3090s is gonna be too expensive for me, I think… I mean, I could, but that seems nuts. I don't think I need that much… dang, I wonder how much power you would need for two 3090s? I have a 1200 W power supply, but my system is an X99, which is getting older now. I wonder if anything would bottleneck that GPU setup.

They require 350 W each, so 700 W for the graphics cards alone…
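A back-of-envelope PSU headroom check for that dual-3090 scenario (the 350 W figure is Nvidia's stated TDP; the rest-of-system draw below is an assumed placeholder, not a measured value):

```python
# Rough power budget for 2x 3090 on the 1200 W PSU mentioned above.
gpu_tdp = 350          # watts per 3090 (stated TDP)
num_gpus = 2
system_draw = 300      # assumption: CPU, drives, fans, etc.
psu_capacity = 1200    # the PSU from the post above

total = gpu_tdp * num_gpus + system_draw
headroom = psu_capacity - total
print(f"Estimated load: {total} W, headroom: {headroom} W")
# Estimated load: 1000 W, headroom: 200 W
```

Transient spikes under full render load can exceed TDP, so 200 W of nominal headroom is tighter than it looks.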

I’m not certain but I imagine it would.

I am praying for the next Crypto Crash, maybe I can finally afford a used 1080 ti, haha.

For NVLink to work, your motherboard needs SLI certification. Most X99 boards had it, but check yours to be safe.

I assume there will be a blower-only design like for the previous RTX cards? Because with the current hybrid design in a multi-GPU setup, the first card will blow hot air onto the second card, which will then blow that hot air onto the CPU air cooler. If you go for a multi-GPU setup, go for blower-design cards.

It’s definitely something to think about. Correct me if I’m wrong:

  • rendering times: will be faster on 2x 3080
  • viewport performance: will be better on the 3090
  • memory limit for GPU rendering: 24 GB on the 3090 vs 10 GB on 2x 3080, with no NVLink to pool them
  • NVLink: two 3090s can pool their memory to an effective 48 GB
  • slots: 3 for the 3090 vs 2 for the 3080
  • watts: 640 W for 2x 3080 vs 350 W for 1x 3090, or 700 W for 2x 3090

Anything else to consider?

Apparently there is already a leaked 3070 Ti with 16 GB of memory:

If a Super or Ti version of the 3080 is released with more memory, then getting 2x 3080 over 1x 3090 will likely make more sense for the price.

If there are 16 GB 3070 Tis and 20 GB 3080s coming, I would tend to favour a dual-card solution of either of these GPUs plus a 3090.

For most work, a 3070 and 3080 would be more power-efficient than running a 3090. Base and boost clocks are higher on the 3070 and 3080, so viewport performance may actually be better. The 3090 in the above scenario would sit idle until it’s required for rendering.

If Nvidia don’t release 16 GB or 20 GB versions of the above cards, then I’d bite the bullet and get dual 3090s, though I don’t think they represent great value for money, and cooling 700 W of GPU for long periods under rendering may prove a challenge.

I have 2x 1080 Tis, and I couldn’t drop down to the 3080’s 10 GB; it’s too tight on VRAM for rendering and sims, etc.

If Eevee gets Marble Run-level RT raytracing, then I’ll ditch path tracing with Cycles and just buy a single 3090. Clement Foucault is obviously working hard to get Blender into a state to bring in Vulkan, which is the gateway to RT raytracing, so it’s coming, just not imminently.

So, in terms of pure rendering time, 2x 3080 (without NVLink) will be faster than 1x 3090, but with higher power consumption. Correct? Is there a way to estimate how much faster?

Since they’re the same architecture and basically the same clock, we can guesstimate it from the CUDA core counts. You have 17,408 total CUDA cores across 2x 3080 vs 10,496 on the 3090, so the dual 3080s would be roughly 66% faster, although you may lose a bit when one of them finishes its last tile before the other.
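That guesstimate works out like this (a rough sketch; it assumes render speed scales linearly with CUDA core count at similar clocks, which ignores tile-scheduling overhead):

```python
# Naive speed-up estimate from CUDA core counts alone.
cores_3080 = 8704        # per card
cores_3090 = 10496

total_dual_3080 = 2 * cores_3080     # 17408 cores across two cards
speedup = total_dual_3080 / cores_3090

print(f"2x 3080 vs 1x 3090: ~{(speedup - 1) * 100:.0f}% faster")
# 2x 3080 vs 1x 3090: ~66% faster
```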

And if your scene doesn’t fit in 10 GB but DOES fit in 24 GB, out-of-core penalties can erase all of that.

How much does the GPU memory limit matter?

edit: I started a thread about this here: How much does the GPU memory limit matter in 2.9?

Since it is relevant in deciding between the 3080 and the 3090, but I figured it was better to give it its own separate thread.

Cycles:

Source: https://wiki.blender.org/wiki/Reference/Release_Notes/2.80/Cycles#GPU_rendering

LuxCoreRender:

Source: https://forums.luxcorerender.org/viewtopic.php?f=5&t=2102

Nvidia: (from a 2012 OptiX presentation!)

Source: https://on-demand.gputechconf.com/gtc/2012/presentations/S0366-Optix-Out-of-Core-and-Cpu-Rendering.pdf

TL;DR: there is more to it than just textures (geometry, BVH, etc.). If I’m not mistaken, you have only one texture shared between all the spheres, which is loaded once and accessed again and again, instead of loading different data.

There are no textures on the spheres (subdivided cubes); I only tested the geometry VRAM usage in the spheres scene pictured in my thread. I used separate texture files in my other texture test scene. I would not have been able to multiply the memory usage so drastically if I had only used the same texture over and over again.

Your 2.8 manual reference is a good read, but it contradicts the current 2.9 manual:

  • 2.8: “rendering can be a lot slower, to the point that it is better to render on the CPU.”
  • 2.9: “but will usually still result in a faster render than using CPU rendering.”

Your 2012 OptiX graph is interesting, but I assume it’s outdated? 2.5 GB of memory is irrelevant for current GPUs, and if I’m reading the graph right, there is significantly less of a performance drop at 6 GB of VRAM?

Regarding LuxCoreRender, I wonder if it applies to Cycles, as there is a very good Nvidia RTX optimization in Cycles.

Interesting, I did not know that. Thank you!

Some interesting videos from Nvidia:
