So Nvidia announced its new cards today and it looks to be a very nice increase in performance! I wanted to get y'all's opinions on these new cards and what you think about ordering two 3080s for the price of a single 3090. That seems smarter to me… or even waiting for the 3080 Ti for the second. I have enough power to run two of them on my system.
Oh, and as for card size… the Nvidia Founders Edition cards are HUGE, but EVGA has them slimmed down to sizes very similar to the 20 series, so it may be more practical to fit two of them on your motherboard.
As of version 2.90, Blender supports NVLink. This means that if you have, say, two 2080 Tis with 11GB of VRAM each, you can now pool their memory and have an effective 22GB of VRAM. The problem is that, of the announced cards, only the 3090 supports NVLink, so two 3080s would still max out at 10GB of VRAM. I currently have a 2070 Super and I'm just going to buy another (they will be cheap now!) and hook them up via NVLink. This should give me approximately 50% more performance than a 2080 Ti and an effective 16GB of VRAM, which should be fine for me. If the rumored 3080 w/ 20GB of VRAM shows up soon then I might change my mind…
Good plan! I wonder if I'd be better off going that same route with a second 2080 Ti, 'cause like you said… they're gonna be cheap! Already today on eBay, the used price is half of what it was yesterday.
Will dual cards give me any increase in viewport render speed, or will I only see that at full render?
Ah, that stinks about no NVLink on the 3080… two 3090s is gonna be too expensive for me, I think… I mean, I could, but that seems nuts. I don't think I need that much… dang, I wonder how much power you would need for two 3090s? I have a 1200W power supply, but my system is an X99, which is getting older now. I wonder if anything would bottleneck that GPU setup.
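For what it's worth, here's a back-of-the-envelope power budget for dual 3090s. The 350W figure is Nvidia's announced total graphics power per 3090; the CPU and "rest of system" numbers below are my own rough assumptions, not measured values:

```python
# Rough PSU budget sketch for dual 3090s on a 1200W supply.
# 350W per 3090 is Nvidia's announced TGP; CPU and "rest of
# system" figures are ballpark assumptions, not measurements.

GPU_TGP = 350       # watts per 3090 (announced spec)
CPU_TDP = 140       # e.g. an X99-era i7-5960X
REST = 100          # board, RAM, drives, fans -- rough guess
HEADROOM = 0.80     # keep sustained load under ~80% of PSU rating

total_draw = 2 * GPU_TGP + CPU_TDP + REST   # 940W
psu_budget = 1200 * HEADROOM                # 960W

print(f"Estimated draw: {total_draw}W vs {psu_budget:.0f}W comfortable budget")
```

So a 1200W unit looks workable but tight for sustained rendering loads, with little margin for overclocking or transient spikes.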
I assume there will be a blower-only design like with the previous RTX cards? Because with the current hybrid design in a multi-GPU setup, the first card will blow hot air onto the second card, which will then blow that hot air onto the CPU air cooler. If you go for a multi-GPU setup, go for blower-design cards.
If there are 16GB 3070 Tis and 20GB 3080s coming, I would tend to favour a dual-card solution of either of these GPUs plus a 3090.
For most work a 3070 and 3080 would be more power-efficient than running a 3090. Base and boost clocks are higher on the 3070 and 3080, so viewport performance may actually be better. The 3090 in the above scenario would sit idle until it's required for rendering.
If Nvidia don't release 16GB or 20GB versions of the above cards, then I'd bite the bullet and get dual 3090s, though I don't think they represent great value for money, and cooling 700W of GPU for long periods under rendering may prove a challenge.
I have 2x 1080 Tis and I couldn't drop down to the 3080's 10GB; it's too tight on VRAM for rendering, sims, etc.
If Eevee gets Marble Run-level RT raytracing then I'll ditch path tracing with Cycles and just buy a single 3090. Clement Foucault is obviously working hard to get Blender into a state to bring in Vulkan, which is the gateway to RT raytracing, so it's coming, just not imminently.
So, in terms of pure rendering time, 2x 3080 (without NVLink) will be faster than 1x 3090, but they will consume more power. Correct? Is there a way to estimate how much faster?
Since they're the same architecture and basically the same clock, we can guesstimate it from the CUDA core counts. You have 17,408 total CUDA cores across 2x 3080, vs 10,496 on the 3090. So the dual 3080s would be about 66% faster, although you may lose a bit when one of them finishes its last tile before the other one.
And if your scene doesn’t fit in 10GB but DOES fit in 24GB, out of core penalties can erase all of that.
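The core-count guesstimate above can be sketched in a few lines of Python (the per-card core counts are from Nvidia's announced specs; the linear-scaling assumption is the simplification, since it ignores NVLink, tile-scheduling losses, and out-of-core penalties):

```python
# Rough speedup guesstimate from CUDA core counts alone.
# Assumes perfect linear scaling across two cards at similar
# clocks -- real-world results will lose a bit at the end of
# a render and can flip entirely if the scene goes out of core.

CORES_3080 = 8704    # CUDA cores per 3080 (announced spec)
CORES_3090 = 10496   # CUDA cores on the 3090 (announced spec)

dual_3080 = 2 * CORES_3080           # 17,408 cores total
speedup = dual_3080 / CORES_3090     # ~1.66x

print(f"2x 3080 vs 1x 3090: {speedup:.2f}x (~{(speedup - 1) * 100:.0f}% faster)")
# → 2x 3080 vs 1x 3090: 1.66x (~66% faster)
```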
TL;DR: there is more to it than just textures (geometry, BVH, etc.)… If I'm not mistaken, you have only one texture shared between all the spheres, which is loaded once and accessed again and again, instead of loading different stuff.
There are no textures on the spheres (subd cubes); I only tested the geometry VRAM usage in the spheres scene pictured in my thread. I used separate texture files in my other texture test scene. I would not have been able to multiply the memory usage so drastically if I had only used the same texture over and over again.
Your 2.8 manual reference is a good read, but contradictory with the current 2.9 manual:
2.8 “rendering can be a lot slower, to the point that it is better to render on the CPU.”
2.9 “but will usually still result in a faster render than using CPU rendering.”
Your 2012 OptiX graph is interesting, but I assume it's outdated? 2.5GB of memory is irrelevant for current GPUs, and if I'm reading the graph right, there is significantly less of a performance drop at 6GB of VRAM?
Regarding LuxCoreRender, I wonder if the same applies to Cycles, as there is very good Nvidia RTX optimization in Cycles.