RTX 5090 raytracing performance

Hey, what’s going on? Do these cards even exist? No leaks?

Pretty expensive, especially for something that isn’t that significant a jump from the previous generation. The card costs more than all the other parts of the custom PC I built back in 2021 put together.

Looking forward to the day when these prices are no longer the norm. The AI stuff just feels like a software cover-up to make the hardware look a lot better than it really is. More like a paid-for software update used as a crutch to make up for how unoptimised games are nowadays.

Will probably wait for a 6090 before replacing my 3060.

5 Likes

Reviews are under embargo until the end of the month.

2 Likes

They all render pretty fast anyway. For me the biggest point is memory: 32GB sounds good. The AI stuff for game framerates is something I certainly don’t care about, but all the gamers do, so they will want the new stuff. I think I am just happy they are coming out, so previous generations might get cheaper. The 3090s still have 24GB, which is pretty awesome as well; let’s see how low the prices drop. If I can get a used one for under 700 euros, that would be reasonable. It’s a good card, and I could get a few years of rendering out of it.

6 Likes

According to Justin Walker, the GeForce desktop product manager:
RTX 5090 vs RTX 4090 (Native RT / No DLSS) = +15%
RTX 5080 vs RTX 4080 (Native RT / No DLSS) = +15%
RTX 5070 Ti vs RTX 4070 Ti (Native RT / No DLSS) = +20%
RTX 5070 vs RTX 4070 (Native RT / No DLSS) = +20%

Source: https://www.neowin.net/news/nvidia-admits-rtx-5070-nowhere-near-4090-performance-without-dlss-fake-frames/
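To put those percentages into concrete numbers, here is a quick back-of-envelope sketch in Python. The baseline frame rates are made-up placeholders, not benchmark data, and the uplifts are just the figures quoted above:

```python
# Rough projection of native-RT frame rates from the quoted uplifts.
# The baseline FPS values are made-up placeholders, NOT benchmark data.
quoted_uplift = {
    "RTX 5090 vs RTX 4090": 0.15,
    "RTX 5080 vs RTX 4080": 0.15,
    "RTX 5070 Ti vs RTX 4070 Ti": 0.20,
    "RTX 5070 vs RTX 4070": 0.20,
}
baseline_fps = {  # hypothetical previous-gen native-RT frame rates
    "RTX 5090 vs RTX 4090": 60.0,
    "RTX 5080 vs RTX 4080": 45.0,
    "RTX 5070 Ti vs RTX 4070 Ti": 38.0,
    "RTX 5070 vs RTX 4070": 32.0,
}

for pair, uplift in quoted_uplift.items():
    old = baseline_fps[pair]
    print(f"{pair}: {old:.0f} fps -> {old * (1 + uplift):.1f} fps (+{uplift:.0%})")
```

So a 60 fps native-RT scene would land around 69 fps. Noticeable, but nothing like the charts with frame generation turned on.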

2 Likes

From what I am seeing, competitive gamers especially seem to turn those features off. As long as ray tracing doesn’t give them a competitive advantage, and on top of that requires some fancy upscaling technique (which may introduce visual distractions), I don’t think they are interested in it.
I remember several years ago they promoted some real-time hair tech (HairWorks) that was included in a few games, like The Witcher 3. It was a hyped marketing thing and I haven’t seen anyone actually using it. At the time, I remember checking many YouTubers to see whether they enabled it, and as far as I remember, most tried it out a little and then turned it off because it was a distraction for them.
From what I have seen over the years, marketing people, tech bros and tech influencers are interested in that stuff: the typical early adopters. As most of those showcases don’t have actual value in the long run, they end up not being used.

The 32GB of VRAM would be nice, but I am hoping more that the upcoming big APUs from AMD and NVIDIA start some competition with Apple’s M-series SoCs.

A desktop version of Strix Halo without the thermal constraints of notebooks would be nice, for example.

Yeah, but there were bigger improvements in the last few generations, especially last time with the 40xx series, so marketing had to show an even bigger improvement this time around. I’m guessing that’s why they chose those specific charts claiming the “5070 is better than the 4090”.

1 Like

And considering the big increase in power draw (28% for the RTX 5090) and the fact that they stayed on the same manufacturing process for this generation, it seems to be a marginal improvement over the previous one.
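As a rough sanity check, here is a tiny sketch that compares the claimed uplift against that power increase (board power going from roughly 450 W to 575 W). These are Nvidia’s claimed figures, not measurements:

```python
# Perf-per-watt sanity check based on claimed numbers, not measurements.
# Board power specs: RTX 4090 ~450 W, RTX 5090 ~575 W.
tdp_4090, tdp_5090 = 450.0, 575.0
power_increase = tdp_5090 / tdp_4090 - 1.0  # ~+28%

for label, perf_uplift in [("native RT, +15% claim", 0.15),
                           ("raster, ~30% claim", 0.30)]:
    perf_per_watt = (1.0 + perf_uplift) / (1.0 + power_increase) - 1.0
    print(f"{label}: perf/W change = {perf_per_watt:+.1%}")

# Prints roughly -10% for the RT case and about +2% for the raster case,
# i.e. efficiency is flat at best on the same process node.
```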

We will have a better idea once the hardware is actually out, but the fact that they focused so heavily on DLSS for benchmarks is quite indicative of this, I think.

Come on gamers, it’s AI! It must be awesome, you all NEED this… :smiley:

I don’t know… I am hopeful. :smiley: For competitive play it sounds like a good idea to use top hardware anyway, and as for regular gamers… well, there are so many of them. Surely a large number will want it just to have AI in any case. AI AI AI, that’s the future. Who cares if it actually helps?..

But it does help somewhat. The games look smoother from what I see, even if there are some minor problems. Come on, they’re games. Is it overpaying for something one doesn’t need? When did that ever stop consumers when it comes to entertainment?..

By now, I wonder if Nvidia has ever considered selling a ‘build your own GPU’ configuration, in the same vein as custom-building a PC. Part of the reason is that the GPU has become a much larger collection of distinct parts than it used to be.

Choose your CUDA chip, choose your RTX chip, choose your tensor chip, choose your shader chip, choose your VRAM amount. Mix and match according to your needs (as opposed to having to buy the power-hungry monster at the top of the line just to get a large amount of VRAM for rendering). I know you can always undervolt, but unless you build your own machine, you will only find configurations that assume it will be running full blast.

There are videos on YouTube covering actual mods that add more VRAM to older and/or lower-end models, so it is not like it is impossible. A lot of people may very well buy into such an option, even if Nvidia charges a premium.

1 Like

That would be nice indeed; however, AMD’s library support for rendering is quite bad. I hope Nvidia is going to start doing something in that regard, since OptiX is much better. I am curious, for example, to see what the Nvidia DIGITS mini computer, or the newly announced N1x laptop SoC, can do for rendering.

I am also a bit disappointed with the raster performance improvement of the 5090 compared to the 4090, especially when you consider how much the power consumption has increased. I just checked the 4090 presentation: Nvidia claimed a 2x raster performance increase over the 3090, which is pretty much what we got in Blender, so I am afraid the 30% is probably correct. Let’s wait and see, but this generation seems heavily focused on AI…
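For rendering, a ~30% throughput uplift translates into a smaller cut in wall-clock render time. A quick illustration with purely made-up numbers (the 10-minute baseline is arbitrary, not a benchmark):

```python
# What a claimed ~30% throughput uplift means for wall-clock render time.
# The 10-minute baseline is an arbitrary example, not a benchmark result.
baseline_minutes = 10.0   # hypothetical per-frame render time on a 4090
speedup = 1.30            # claimed 5090 throughput relative to the 4090

new_minutes = baseline_minutes / speedup
time_saved = 1.0 - 1.0 / speedup
print(f"{baseline_minutes:.0f} min -> {new_minutes:.1f} min "
      f"({time_saved:.0%} less render time)")

# Compare with the 4090-vs-3090 jump: a genuine 2x would halve the render time.
```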

1 Like

People I know who buy this sort of hardware for gaming are experimenting a lot to find settings they like. According to them, AI features are more often than not deactivated for various reasons. I certainly don’t know the statistics of a broad range of users.

If you use ray tracing it is for sure smoother with DLSS. On the other hand, it is even smoother and more consistent if you don’t use ray tracing and that’s what I am told happens a lot (not representative at all of course).

If you take away the fancy AI stuff, you still end up with a more beefy GPU.

It may be that people around me are not that much into hype stuff. After all, if they were, I would likely not be around them :smiley:

Personally, I’ve always considered the 4090 to be an outlier.

My impression might not be completely accurate, as I don’t overly obsess over hardware charts. But the 4090 felt like nVidia threw EVERYTHING at it - whether a great idea or not (heat, power, etc) - to prove a point.

So I don’t have a “you have to do it again” expectation with every future series.

4 Likes

Well, they haven’t, and they certainly never will. Why would they? What’s the point from their perspective? To do significantly more work and help customers spend less? That makes no sense at all.

1 Like

Who said Nvidia would have people building these things by hand? Get the engineers from their recently created robotics department over and have them build a machine that assembles each order from a pile of chips and other parts.

If they choose not to do so, however, then hopefully someone at AMD or Intel will pick up the idea (at which point Nvidia won’t be able to help itself but prove it can do it better).

I am not saying that wouldn’t be awesome. It would; I would absolutely love it. It will never happen, though. Nvidia will never care about things like that. There is no gain for them in it. This could be a thing for some small company trying to distinguish itself and attract niche customers, but nobody like that has the needed chip design and manufacturing capabilities.

It is also not a trivial thing to do from an engineering perspective, especially not for a massive company like Nvidia that needs to maintain its reputation. Experimenting may lead to mistakes, and there is simply no gain in it, especially when they are already the leader in their field.

AI is a different thing: it is full of opportunities because of all the hype in the short term, and in the long term the technology is genuinely promising, with breakthroughs that will probably change the world. So yes, they will experiment with things like that, not with nice ideas that would make the lives of a few thousand of us nerds happier.

And another point about “us nerds”: we are a minority. Sure, these forums are full of technical people, and I am sure most of us view building our own workstations as a fun and pleasurable activity, but regular people, regular consumers, don’t even do that. Who is going to want to build their own GPU? Many of us in these forums, sure. But we are a minority. We are not a huge part of Nvidia’s customer base. Just think about it. What GPUs are we all using? Gaming GPUs! They are not even targeted at us, they are made for games. :smiley: :smiley: :smiley: What sounds like a very good idea to us might not work for everyone Nvidia wants to sell their products to.

No. The 4090 is a cut-down version of the full core.
(image: construction of the powerful Nvidia Titan RTX-class Ada GPU that never debuted)

They were preparing a 4090 Ti / Titan RTX Ada, but they never released it.

This time it’s also the case: the 5090 is not the full core, but about 90% of it.

Basically, all Nvidia GPUs are a class lower than they used to be before Ada.


Source: Hardware Unboxed
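For what it’s worth, a quick back-of-envelope check of that “about 90%” figure, using CUDA core counts as reported around the announcements (treat them as assumptions worth double-checking, not official confirmation):

```python
# Quick check of the "about 90% of the full core" claim using reported
# CUDA core counts (as circulated at announcement; worth double-checking).
reported = {
    "RTX 4090 / full AD102": (16384, 18432),
    "RTX 5090 / full GB202": (21760, 24576),
}
for name, (enabled, full_die) in reported.items():
    print(f"{name}: {enabled}/{full_die} = {enabled / full_die:.1%} of the full die")
# Roughly 88.9% and 88.5% respectively, so "about 90%" holds for both generations.
```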

1 Like

Will never happen, since it’s just not practical. You could somewhat do it for a purely specialized card, and Nvidia already does this: you can buy a very specific AI card that is packed full of tensor cores but would be total crap at, say, playing a game.

But even then, those cores are all part of a single-die chip.
The moment you try to do what you are suggesting and make each part a chip of its own, beyond the various other issues that creates, you hit the biggest one, which kills the idea from the start.
You would have to interconnect each CUDA, RTX, tensor and shader chip with some sort of high-speed crossbar bus, which then also connects to the VRAM, and you would have to do that over a PCB. It would be a massive, super-complex PCB with all those traces, and it would also be slow as hell compared to everything sitting within a single chip.

4 Likes

first test

3 Likes