Nvidia RTX 4000 series and Blender

First rumours about performance in Blender Cycles indicate a 2.2x increase in speed. It is not known which card models this refers to.
Cycles benefits more than Arnold GPU and V-Ray GPU, but those are also in 2x territory.


But you can’t afford it

I personally don’t care this time around. NVIDIA produces good video cards, especially for production. But they are full of s*** as a company. They’re going to manipulate the market again as much as they can and lie about a lot of things to hype up their new product.

And sheer compute power isn’t the only factor to consider. Let’s see the VRAM capacity and power consumption when they’re released. Then we can objectively assess their cards.


I don’t understand this overall reaction; they are one of the things that make Blender competitive, and they develop actively for Blender.
If they had ignored Blender, they would not get this negative opinion. Or maybe they would be accused of being in collusion with other renderers… :upside_down_face:
I wish AMD and others would get the same results, but getting mediocre results gets you less bad PR.

I share your concerns about VRAM.


The rumored lineup looks like this; let’s see how it turns out. The previous generation was supposed to have double the VRAM it got. But Nvidia has never pampered users with too much VRAM, so this table with only a minor VRAM uplift seems possible.

| Graphics Card | Memory / Bus | Memory Clock / Bandwidth | TBP |
|---|---|---|---|
| NVIDIA Titan A / GeForce RTX 40? | 48 GB / 384-bit | 24 Gbps / 1.15 TB/s | ~800W |
| NVIDIA GeForce RTX 4090 Ti | 24 GB / 384-bit | 24 Gbps / 1.15 TB/s | ~600W |
| NVIDIA GeForce RTX 4090 | 24 GB / 384-bit | 21 Gbps / 1.00 TB/s | ~450W |
| NVIDIA GeForce RTX 4080 ? | 16 GB / 256-bit | 21 Gbps / 1.00 TB/s | ~340W |
| NVIDIA GeForce RTX 4080 ? | 12 GB / 192-bit | 23 Gbps / 552 GB/s | ~340W |
| NVIDIA GeForce RTX 4070 | 12 GB / 192-bit | 21 Gbps / 504 GB/s | ~285W |
| NVIDIA GeForce RTX 4060 Ti | 10 GB / 160-bit | 17.5 Gbps / 350 GB/s | ~275W |
| NVIDIA GeForce RTX 4060 | 8 GB / 128-bit | 17 Gbps / 272 GB/s | ~235W |

Obviously good support from NVIDIA on software side is a big plus but I don’t believe it’s that big of a factor in Blender’s success.

NVIDIA is a huge company, and some bad and some good comes with that. In some areas it does a lot of good, especially for Blender recently. But to be honest, it’s in their best interest.

On the other side, they need to extract the most money from their products while providing as little value as possible and staying competitive with AMD, and this is where marketing, hype, and all the other shenanigans come into play. If they produce a good product at a competitive price, I don’t have a problem with that. But for the last two years it has been all sorts of games with consumers to hike prices while delivering less value. That I don’t care for. Thus I don’t care about their products, possible performance leaks, and other stuff until the cards are released and properly tested.


Whether they support Blender or not is not relevant to the question whether they are manipulating the market.


My main concern is the VRAM on laptops, which has been inferior to their desktop counterparts.

So, how is that bad? It is also in AMD’s best interest, and Apple’s, and Intel’s… where are their results? In the grand scheme of things it is a very small interest for Nvidia; this only happens because there are people there who care about Blender.

Supply won’t be an issue this time, I hope. I want a 4090 and have money set aside for it! :star_struck:

The VRAM cooling on the 3000 series was less than ideal, hopefully they addressed that in the 4000 series.

The wattage on this gen is absolutely insane. You’re looking at over a kilowatt for any PC with the higher end cards.


I ran two 3090s on my 900-watt PSU; I should be able to run one 4090 with that… so it’s not that bad.

Given the process change from 8 nm on the 3000 series to 5 nm on the 4000 series, Nvidia would have to have done a very bad job if some of the rumored power numbers are real.

A bit off-topic but nonetheless related to the power consumption of these high-end graphics cards (and the potential requirement to get a 1000W PSU): aren’t you guys worried about the cost of electricity (especially since the recent increase of energy costs)?
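For anyone wanting to put a number on that worry: running cost is just wattage × hours × price per kWh. A quick sketch, where the wattage, daily hours, and electricity price are made-up example values, not figures from this thread:

```python
# Rough monthly electricity cost for a rendering PC.
# All input values below are illustrative examples.

def monthly_cost(watts, hours_per_day, price_per_kwh, days=30):
    """Estimated cost of running a load of `watts` W for a month."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * price_per_kwh

# A ~1000 W system rendering 4 hours a day at 0.40 per kWh:
print(round(monthly_cost(1000, 4, 0.40), 2))  # 48.0
```

So even a kilowatt-class box only hurts in proportion to how many hours per day it actually runs at full load.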

The next generation Ryzen chips have seen a big uptick in TDP despite the smaller process, too.

I think all the big companies have gone back to a straight number crunching race and thrown efficiency out the window, and it couldn’t have happened at a worse time.


You have to do the numbers: what improvement in performance versus what increase in power? An RTX card is more efficient than a Mac rendering in Cycles X because it is much faster, so it spends much less time at max power. If the max power is 3 times higher but you are 5 times faster, it compensates.
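That trade-off follows directly from energy = power × time. A minimal sketch with illustrative numbers (not real benchmarks): a card drawing 3x the power but rendering 5x faster still uses less total energy per frame.

```python
# Energy per render = power draw (W) x render time (h).
# The wattages and times are illustrative, not measured values.

def energy_wh(power_w, time_h):
    """Watt-hours consumed by a render at a steady power draw."""
    return power_w * time_h

slow_card = energy_wh(100, 5.0)      # 100 W for 5 h   -> 500 Wh
fast_card = energy_wh(300, 5.0 / 5)  # 3x power, 5x faster -> 300 Wh

# Higher peak power, but lower total energy per frame:
print(fast_card < slow_card)  # True
```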

There is another point, which is consumption at idle and during other uses like web surfing, movies, etc. My 130 W RTX 3060 is at 13-20 W most of the time. The 5800H goes down to 8-9 W. If these numbers can go even lower thanks to dedicated efficiency cores, that can also compensate overall consumption.

We already have mobile GPUs with 5 teraflops of performance and 64 Gb/s of bandwidth, so desktop computers with discrete GPUs have power-efficiency issues.

I would not invest in those. I believe we will soon get more efficient, integrated solutions for desktop computers, because when I render images I can dedicate the RAM and CPU/GPU to that job.

In real-time applications, when the CPU is doing other things, discrete VRAM and a discrete GPU can make a difference in the future, but I see there is still an issue while data is fetched from SSD to RAM, maybe processed with the CPU, and then moved to the graphics card. And that is awkward. For real-time applications it should be more efficient to connect a fast SSD to the discrete graphics card so it can stream assets without using the CPU/RAM.

We kind of need a GPU-oriented computer with hardware-level support for streaming assets to VRAM and the GPU, where the CPU is just a helper. That kind of hardware improvement requires huge changes in software, and large parts of Blender would need to be rewritten. Maybe we will see that in next-gen consoles first.

For the near future, we will likely get much-improved integrated CPU/GPUs with shared RAM that work with the current software architecture and take a lot less electricity than those 4xxx series cards.

Not worried; computer hardware like GPUs and electricity are tax write-offs for me since I do freelance work from home (but I am in the USA, not Europe, ouch).

There is a rumour of a 48 GB version, maybe the 4090 Ti, but that’s probably not going to surface until a year or two later.

Well, this seems like bulls*** info about the performance. This is another reason why I don’t buy into the hype anymore. Only real tests of released products.

NBNB rule: no benchmarks - no buy


Of course, this is just the start of the discussion, since Nvidia will present them at GTC on 20th September, in less than 2 weeks. I don’t know when the embargo drops, but I expect some independent sites can release benchmarks that day.