RTX 3080 vs 3080 ti memory size for rendering

Not sure if this is the right section, so please move it if it isn’t.
In short, I’m looking to buy a card for concept design/concept art, but I’m having a very difficult time understanding memory sizes and what can be done with them. Since I do concepts, I’m not looking to optimize file sizes and such. Speed is what counts in my line of work, so everything would be dirty and slapped together (completely unoptimized, just looking nice).

I’m hanging around a couple of rendering pages where people use 3090s, and a lot of them point out how the 3080, and now the 3080 Ti, simply don’t have enough memory for work. So I’m wondering what kind of work uses 20 GB of VRAM vs 10 GB. From what I can understand, the biggest VRAM consumers are environments like forests, backdrops and such.

With the coming of UE5 and more hyper-realistic photogrammetry assets, I’m unsure whether 10 GB will be enough for rendering things out in the future.

It depends on the number of triangles and texture maps you are trying to throw at it; “more is more” in this case. Here are some examples for scale (all from the V-Ray render engine):

If you are in this category, even the 3090 won’t be enough to fit all those triangles and textures (you’d need a 48 GB GPU like the Quadro RTX 8000 or similar).

Moving down a little bit: things that can fit on a 24 GB GPU (or on 2 RTX 2080 Tis with NVLink, which pool a shared 22 GB of memory) but fail on 16 GB or lower (2 RTX 2080s over NVLink).

You can fill in the blanks from these examples as to how much VRAM you might need. Just note that you can still render larger scenes on low-VRAM GPUs in Blender Cycles (using out-of-core rendering), but it will render slowly because the engine has to shuffle data between VRAM and system RAM.
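A quick way to sanity-check whether a scene will stay on-card or spill out of core is simple arithmetic. This is just a back-of-the-envelope sketch, not a real render-engine query; the function name, the example scene sizes, and the driver-overhead guess are all my own assumptions:

```python
# Back-of-the-envelope VRAM budget check (hypothetical helper, not an
# engine API): sum the uncompressed texture and geometry bytes and
# compare against the card's memory minus an assumed overhead.

def fits_in_vram(texture_bytes, geometry_bytes, vram_gib, overhead_gib=1.5):
    """True if the scene should fit without out-of-core spilling.

    overhead_gib is a rough guess at what the OS, driver and
    framebuffer already occupy; adjust to taste.
    """
    budget = (vram_gib - overhead_gib) * 1024**3
    return texture_bytes + geometry_bytes <= budget

# e.g. ~6 GiB of textures plus ~2 GiB of geometry:
scene_textures = 6 * 1024**3
scene_geometry = 2 * 1024**3
print(fits_in_vram(scene_textures, scene_geometry, vram_gib=24))  # True
print(fits_in_vram(scene_textures, scene_geometry, vram_gib=8))   # False
```

On the card where this returns False, Cycles with out-of-core enabled would still render the scene, just slower, because of the VRAM/system-RAM shuffling described above.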

Once you start using things like microdisplacement and the like, you are better off just getting a many-core CPU (with at least 32 GB of RAM) and using CPU rendering, as you would otherwise need 1 or even 2 RTX 3090 cards with NVLink (which can double, triple, or even quadruple the price of a PC build at current prices).

Cycles X at least will bring major improvements for CPU users as well (through new rendering algorithms and other things), but the priority early on is GPU users.

Thanks for that. It’s really interesting, as most of these scenes, except White Room and Lake Lavina, look quite simple. I would never have guessed that they need a 24 GB card.
This one seems to take only 2.6 GB but looks very geometrically complex.

You might be reading that graph wrong: only “Lake Lavina” needs 22 GB+ to render. All the other scenes fit easily in 8 GB (or less) of memory, as shown by the light blue slider of the RTX 2080 (that’s why I only highlighted “Lake Lavina” in red).

Ah, thank you for clearing that up for me. Is it possible to find an example of what would fill 8–9 GB of VRAM? I’m kinda interested in whether that will be enough for the next 5 years…

It won’t be. 8 GB has been mainstream for five years already and is on its way out; the same can be said for anything up to 16 GB. You don’t even need to wait for UE5 to feel this: throw in some 4K textures and a displacement for a surface or two and you are knocking on the door of 8 GB.

Nvidia was caught with their pants down this time, thinking AMD would again have nothing to compete with, and thus skimped on VRAM. The core is more than fast enough to handle scenes with double the VRAM (the 3090 proves this) at each tier of their product stack. The fact that a 3080 (a card that’s several times faster) has less VRAM than a 1080 Ti is laughable. Even more laughable is that the low-end 3060 has more, 12 GB, probably making it a card that will stay relevant for longer.

But how are AMD cards for Cycles these days?

I think he means in terms of gaming cards.

I would also add that they skimped on VRAM because they didn’t want to make a card that was too good.

I have only experienced Vega 64s, so I can’t speak for the new 6000 series. Generally, the experience is OK. The kernel compile is fairly quick and not as annoying as I was expecting (I previously had GTX 1080s). But with the looming Cycles X and uncertain support for AMD hardware, it is hard to recommend going over to the red team. What I am getting at is that, at current pricing and the current state of software development and product offerings, neither camp has much appeal at this moment. There is certainly little to no “future-proof” potential.

Well, there is this 3080 Mobile :rofl:

A 3080 in name only. It is based on the 104 chip, as you can see in the screengrab; that is a 3070 in reality. It is great, though, probably the sweet spot in the range.

I know. But the 16GB was one of the reasons for me to buy this DTR. I’m using this machine in the field for software development and for blender / 3D stuff in my spare time.

It is the one I would get as well, for the lovely balance of speed/memory. Nice machine, enjoy.

Instead of an example, let me say this.

Once you start going down in VRAM size, optimizing your scene becomes a necessity, not a choice, and your statement:

Won’t be possible anymore.

If you are the type who over-subdivides and throws in the highest-possible-resolution textures because it looks good zoomed in at 1000%, then you’d better think again :slight_smile: .

From the Blender manual:

and when you look at some textures out there…

A 16K PNG at 3.6 GB; throw in two of these and you saturate your 8 GB :sweat_smile:
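For anyone wondering where those gigabytes come from: the PNG file size on disk is compressed and says little about the VRAM footprint. In memory a texture is roughly width × height × channels × bytes per channel. A tiny sketch (the helper name is mine; actual engine storage formats vary, e.g. whether images are kept as 8-bit bytes or 32-bit floats):

```python
# Uncompressed in-memory size of a texture, in GiB.
# A 16K RGBA image at 8 bits per channel is already 1 GiB;
# promoted to 32-bit float per channel it becomes 4 GiB.

def texture_gib(width, height, channels=4, bytes_per_channel=1):
    return width * height * channels * bytes_per_channel / 1024**3

print(texture_gib(16384, 16384))                        # 1.0 (8-bit RGBA)
print(texture_gib(16384, 16384, bytes_per_channel=4))   # 4.0 (float RGBA)
print(texture_gib(4096, 4096))                          # 0.0625 (a 4K map)
```

So a handful of 16K maps really does eat an 8 GB card on its own, while 4K maps cost a few dozen each before you get into trouble.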

Also, the out-of-core thing seems to only work with CUDA (correct me if I’m wrong), and your “need for speed” requires OptiX.

Long story short: once you go under a certain amount of VRAM, it’s all about optimization.


Hey, thanks for the reply :slight_smile: No, I’m not really in need of mega textures and subdividing ad infinitum. I’m more of a procedural-texture guy, with 4K textures and perhaps 8K here and there. What I meant by quick and dirty was mostly that I won’t be paying attention to topology, bad seams, and the most obvious modeling overuse of polygons. Everything that doesn’t look right I’ll overpaint. I was just having a hard time figuring out what saturates a GPU, geometry-wise or texture-wise :slight_smile: