Next-Gen GPU

I’m not talking about what “raytracing” is doing with the highest-end cards. Concerning OpenCL vs. CUDA: if you look at these graphs, cards like the 5700 XT, RX Vega 64, and Radeon VII are having a hard time against the 1660 or even the 1080.

For a professional who makes money from the hardware he buys (in this case a faster GPU), and taking into account the two years between each new generation of hardware, one could easily afford the cost of the “next” faster piece of hardware. That translates into faster renders/iterations and more (bigger) projects to take on, which in turn translates into more money in the pocket over the same period of time.

For such an individual, I don’t think it would be a problem.

Your old hardware isn’t suddenly rendering slower than it used to; you can use it as extra muscle or sell it to help offset the cost of the new purchase.

On the other hand, if you are more of a consumer (a gamer?), you don’t care about what features one piece of hardware has over another; you pay $xyz for xyz FPS and forget about upgrading for the next 4-6 years.

Of course, and we all know the benefits of upgrading and staying on the cutting edge as a professional. These are obvious comments.

Of course things are rendering slower with OpenCL; there are numerous threads about the issue. The worst was the jump from 2.80 to 2.81; 2.82 seemed to get it back to 2.80 levels, but unfortunately it’s been regressing again.
All of this can be tested by rendering the same artist baseline scene with each version.
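As a rough sketch of what that test could look like, here's a small Python script that times one baseline scene across several Blender builds from the command line. The install paths, version list, and scene filename are placeholders, not real locations:

```python
# Rough sketch: time one baseline .blend across several Blender builds.
# The install paths and scene name below are placeholders.
import subprocess
import time

versions = {
    "2.80": "/opt/blender-2.80/blender",
    "2.81": "/opt/blender-2.81/blender",
    "2.82": "/opt/blender-2.82/blender",
}

for label, binary in versions.items():
    start = time.time()
    # -b = run headless (no UI), -f 1 = render frame 1 of the scene
    subprocess.run([binary, "-b", "baseline_scene.blend", "-f", "1"], check=True)
    print(f"Blender {label}: {time.time() - start:.1f} s")
```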

Like I said it is what it is…

True, but as a hobbyist I’m more interested in the price, compatibility with Blender, and render times. Looking at these numbers, the RTX 3060 will be really fast for a low price, plus OptiX, CUDA, DLSS, and other goodies.

I am a professional, I am in the market for new hardware, and money is not an issue.

The limited VRAM and missing NVLink of the 3080 make it a very poor choice for professional work; 10 GB really is not a lot. I have yet to be convinced that 2x 3090 can be adequately cooled for extended rendering, I mean days at a time, without water cooling. The nVidia-branded GPUs will be in extremely short supply, and AIB GPUs will focus on gaming-style three-fan cooling. 3090s with hybrid AIOs appear to be an absolute necessity for dual installations, and early indications are they’re going to be over $2,000 apiece.

While money isn’t an issue, there’s a limit to how much I’m prepared to be pickpocketed for a pair of room heaters that aren’t that much faster than GPUs at half their price.

If nVidia had a 3080 with 16 GB and no NVLink, or a 10 GB 3080 with NVLink, I’d find either far more attractive than the current offerings, neither of which screams “buy me.” They miss the mark.

Yes, I think the 3060 segment will be excellent value and not be an unnecessary space heater.

I’m on a five-year-old graphics card (AMD R9 390) and it’s struggling. I didn’t find AMD or nVidia making anything good enough for 350 €, but now it looks promising to find something worthwhile for that kind of money.

Not sure about the space-heating part; you can clearly see that the 3080 is peaking at 75 °C, much “cooler” than the 5700 XT (76 °C), Radeon VII (85 °C), Titan RTX (79 °C), and 2080 Ti (77 °C). If you already had a pair of Titans/2080 Tis, the room temps would be the same if not lower. That’s one of the benefits of moving to a smaller manufacturing process (same final render, less time, less heat, and less energy cost).

As for the 3070 Ti / 3080 Ti with 16/20 GB, there are numerous rumors about them being “planned.”

Well, you could stick with the version that has the highest peak performance and update Blender later, when they fix it or move to Vulkan. OpenCL in general is being avoided by devs targeting GPU rendering (Redshift and Octane don’t even run on OpenCL, let alone perform worse on it). I don’t know how accurate that statement is, but it seems like it’s one of the issues.

Source:

I did see a Facebook post from the Octane group wherein the CEO of Otoy stated “OpenCL is dead.”
So I was just kind of assuming the Blender devs share the same thinking as they transition to Vulkan.

I’m in the same position, and that’s it. I hoped this series would have more VRAM than the previous one, but nope, and on top of that no NVLink for the 3080.

Voidium, I think you’ve used the wrong formula to calculate the percentages. The formula should be (2080 Ti time / 3080 time) - 1. The 3080 is 37% faster than a 2080 Ti at rendering the Classroom with OptiX, according to the graph shown in the Techgage article.

Thanks for the correction, I modified them to reflect the new values.

These temperatures have nothing to do with heating of the room; they are the GPU temps, a measure of how good the GPU cooling is.

The power draw of the card is the important figure. AIBs have confirmed the 3090 can draw nearly 500 W under load, so two of them going full blast are a one-kilowatt space heater belching heat into your room. So much for less heat and less energy cost.

I think buying in haste will ultimately end in a serious case of buyer’s remorse. If AMD comes anywhere near the 3080 with more VRAM, it’ll force nVidia to respond, and AMD may make a very compelling argument.

The rumours from the better-connected leakers and data miners suggest a high-end GPU with 80 CUs and a prosumer version of the CDNA compute GPU with 128 CUs, aka Arcturus. If true, this could be game-changing. Lisa Su has said on record (and as CEO she has a responsibility not to mislead investors) that AMD will be releasing a halo product with leadership performance. Leadership performance might mean performance per watt, but equally it could mean absolute performance.

If OpenCL is dead someone should quickly tell the scientific community I work with, LOL.

From these charts, here’s what I’m getting. @Voidium, below is what I calculated (a small sketch of the arithmetic follows the numbers).

3080 % faster than 2080 Ti:

  • Mr. Elephant: 154/127 - 1 = 21.25% faster
  • BMW: 53.57% faster
  • Classroom CUDA: 90.7%
  • Classroom OptiX: 58%
  • Blender 4K viewport: 79/64 - 1 = 23.4%

3080 % faster than 2070 Super (which is about the same speed as a regular 2080):

  • Mr. Elephant: 230/127 - 1 = 81.1% faster
  • BMW: 103.6% faster
  • Classroom CUDA: 111.6%
  • Classroom OptiX: 83.9%
  • Blender 4K viewport: 79/47 - 1 = 68%
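For anyone who wants to re-run these, here's a small sketch of the arithmetic as I read the charts, assuming the render figures (154, 127, 230) are times where lower is better and the viewport figures (79, 64, 47) are FPS where higher is better:

```python
# Sketch of the percentage calculations above (my reading of the chart values).
def pct_faster_from_times(old_s, new_s):
    # Render times: lower is better, so speedup = old / new - 1.
    return (old_s / new_s - 1) * 100

def pct_faster_from_fps(new_fps, old_fps):
    # Viewport FPS: higher is better, so speedup = new / old - 1.
    return (new_fps / old_fps - 1) * 100

# 3080 vs 2080 Ti
print(pct_faster_from_times(154, 127))  # Mr. Elephant -> ~21.3%
print(pct_faster_from_fps(79, 64))      # 4K viewport  -> ~23.4%

# 3080 vs 2070 Super
print(pct_faster_from_times(230, 127))  # Mr. Elephant -> ~81.1%
print(pct_faster_from_fps(79, 47))      # 4K viewport  -> ~68.1%
```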

It all seems right. The 3080 is about 2x faster than the 2080. The marketing was mostly not a lie. OptiX performance is a little less than that in the test shown, but not by much. Looking at the Linus Tech Tips video, he shows:


So I’m guessing it averages about 2x over the 2080 with OptiX.

Here is the full video for anyone who wants to see it. Linus and his crew do good work. They also confirm in the video that the case heat is lower than with the 2080 Ti, as expected from half the heat going out the side.

@anon71893420
NVLinking 2x 2080 Ti would be a good way to get 22 GB of memory and be 31/(49/2) - 1 = 26.5% faster than the 3080 in Classroom using OptiX, for under $1,400. I expect the 3090 will be 20% faster than the 3080. 2x 2080 Ti would take 2 x 250 W = 500 W of power, whereas the 3090 is 24 GB at 350 W with about the same speed. The 3090 is more power efficient. So you get more RAM and more rendering per watt with the 3090, which means less heat in the room and a lower electricity bill for the same render. Nvidia is not stupid with their pricing. They did the math.
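As a rough sanity check on that efficiency claim, here's a back-of-envelope energy-per-frame comparison using the figures above. The 3090 time is just my "20% faster than the 3080's 31 s" guess, and the power draws are the quoted 500 W / 350 W:

```python
# Back-of-envelope: energy per Classroom (OptiX) frame, using the numbers above.
configs = {
    "2x 2080 Ti (NVLink)": {"time_s": 49 / 2, "power_w": 500},     # two cards splitting the frame
    "RTX 3090 (estimate)": {"time_s": 31 / 1.20, "power_w": 350},  # assumed ~20% faster than the 3080's 31 s
}

for name, cfg in configs.items():
    energy_wh = cfg["power_w"] * cfg["time_s"] / 3600  # watts * seconds -> watt-hours
    print(f"{name}: {cfg['time_s']:.1f} s per frame, ~{energy_wh:.2f} Wh per frame")
```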

As for AIBs saying the 3090 takes 500 W, they probably mean total system power. Linus confirmed with his special kit that the 3080’s average peak was about 320 W, though it did spike to 350 W at times. The 3090 probably peaks at 390 W with an average peak of 350 W.

Seeing how Nvidia over-promised: 2x over the 2080, but in every single benchmark (so far) it’s more like the 30 to 70% range… depending on the game.

I have yet to see any CUDA benchmark of the RTX 3080. Did I miss one?

My first calculations were exactly the same as yours; it seems everyone is using a slightly different formula to reach the same goal, from a given performance-metric perspective.

You could say (see the sketch after this list):

  • 3080 renders twice as fast as a 2080 Super (2x, or 200%)
  • 3080 cuts render time in half compared to a 2080 Super (50% less)
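The two phrasings are just reciprocal views of the same measurement; here's a tiny sketch with placeholder times, not benchmark numbers:

```python
# Same data, two phrasings (placeholder times, not benchmark numbers).
old_time_s, new_time_s = 100.0, 50.0

speedup = old_time_s / new_time_s          # 2.0 -> "renders twice as fast"
time_saved = 1 - new_time_s / old_time_s   # 0.5 -> "cuts render time in half"

print(f"{speedup:.1f}x faster, {time_saved:.0%} less render time")
```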

So the “3080 is twice as fast as the 2080 (not Super)” claim has been shown to be inaccurate in every single benchmark. Only in purely ray-traced ones does it get “close” to 2x.

As such, I’d wait for actual benchmarks. Nvidia over-promised and under-delivered.

It is more like 30-70% better depending on which game engine, but for direct compute I haven’t seen a benchmark yet.

Found one on this site (don’t know it at all, though); mid-page it shows Blender 2.90 results for CUDA & OptiX.

Still, it does look like it’s even better than 2x over the 2080 Super… interesting.

If other sites confirm these test results, then damn, I’d be happy to be wrong.