Next-Gen GPU

Look, I’m not a fan of Nvidia, as I have already told you. I see a targeted war taking place to dampen users’ enthusiasm, and in the world of competition that is a valid option when you lack the strength, and above all the time, to compete … it is perfectly logical.

P.S. If I were a YouTuber and got a few under-the-table pennies to do a bad review … I would make one of these. Maybe the defect in the cheaper components is real, but it only shows up as a crash when overclocking … strange … or rather, thinking about it, it is not strange at all … it is done on purpose to dampen the spirits.
Who do you think has engineering knowledge deep enough to understand these defects and push them to the extreme to make them evident?
And above all, who do you think spread the word about them?
:grin:

Have you never read Igor’s Lab or watched Actually Hardcore Overclocking? There’s plenty of knowledge in the tech press.

I think you’re being suspicious of people doing exactly what they should be doing: calling out cheapskate designs. It didn’t appear to require much technical knowledge to spot that GPUs using 5 or 6 large black capacitors seem to crash more often when overclocking than those which use fewer of them and more of the small MLCC type.

EVGA have confirmed that their own testing showed the large caps cause instability. Spotting the pattern of crashes was not rocket science, nor did it need inside info to work out. I think it was fairly easy for reviewers to pin down the differences between the GPUs.

I do not agree that there’s a targeted war against nVidia taking place. Cheapskate designs are being called out and nVidia’s hyperbollocks marketing is being called out. This is a good thing for the consumer IMHO. I think we’ll agree to disagree on this point.

On 2. Keep in mind Nvidia is using an 8nm process. Shrink it to 5nm and scale the efficiency accordingly to compare apples to apples. See what I did there? :wink: I’m looking to compare one architecture’s efficiency to the other’s when they are as close to the same as possible.

On 3. 70% would give 5.14 OB/TDP for Apple, making it 2.86x more efficient than Nvidia at 8nm. If efficiency scales directly with feature size, Nvidia at 5nm should be (8×8)/(5×5) = 2.56x more efficient. It seems they are really close. AMD pulls extra tricks on top, so theirs will go beyond simple die-size scaling. AMD is also just rolling out RT cores, so we have to see how those do.
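For what it’s worth, here is a rough back-of-the-envelope sketch in Python of that comparison, using only the figures quoted in this thread; the 70% GPU share of the A13’s 6W TDP and the straight die-area scaling from 8nm to 5nm are assumptions, not measurements.

```python
# Back-of-the-envelope efficiency comparison using the figures quoted above.
# Assumptions (from this thread, not measured data):
#   - ~70% of the A13's 6 W TDP is attributable to the GPU
#   - the A13 GPU then works out to ~5.14 (in the "OB/TDP" units used above)
#   - Ampere at 8 nm is quoted as ~2.86x less efficient than that
#   - efficiency scales with die area, i.e. with the square of the node size

apple_gpu_eff = 5.14                     # quoted Apple figure at a 70% TDP share
nvidia_8nm_eff = apple_gpu_eff / 2.86    # ~1.80, the quoted gap at 8 nm

area_scaling = (8 * 8) / (5 * 5)         # naive 8 nm -> 5 nm scaling = 2.56x
nvidia_5nm_eff = nvidia_8nm_eff * area_scaling

print(f"Hypothetical Nvidia at 5 nm: {nvidia_5nm_eff:.2f}")  # ~4.60, close to 5.14
```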

The crashing is because the AIBs used components that weren’t good enough for overclocking. If you run the card at its regular clocks, as it was designed for, the crashing apparently won’t happen. An OC should not always be taken as a given.

Also, no matter how fast the new AMD card is going to be, compiling the OpenCL shaders in Blender still takes 30–50s.


I meant that the A13’s TDP is given for the whole chip, which also includes the CPU cores, among other things. I assumed 70% of the 6W TDP to be related to the GPU. There are no CPU cores in the Nvidia chip.

The war is certainly there, and not only over the GPU itself, but also because Nvidia is preparing to acquire ARM, because it is performing great on the stock market (and dampening spirits probably serves that more than the GPU itself), and because it is probably preparing to dominate the market and change it significantly. So I assume that if there wasn’t a war now, that would be nonsense.

Ah right, so you want to pick up the goal posts and move them without providing a single example to support your argument.

Many of these failing cards were sold as cards for overclocking, that’s the issue. They were failing with factory overclocks applied by the AIB partners and only became stable when ‘underclocked’ back to reference levels.

The customer paid a premium for these AIB cards and they rightfully expected the cards to perform as advertised.

I said in an earlier post that it appeared nVidia hadn’t left the board partners much room to overclock, as they themselves have virtually maxed out the clock speeds; that’s why these GPUs are such power hogs. Ampere appears to be pushed well past the efficiency curve.

When people begin to think about these cards more rationally they will come to the conclusion that Ampere is nVidia’s Vega moment. Vega was a crap gaming platform but it was an excellent compute platform and still is.

After Vega’s lacklustre gaming performance, AMD decided to split their architectures: RDNA for gaming and CDNA for datacentre compute. I am expecting AMD to beat nVidia with RDNA2 not just on price/performance but also on power/performance for gaming.

My hunch is Ampere will beat RDNA2 for compute based workloads like rendering. If AMD do release a prosumer version of CDNA then it’s game on.

Edit.
I’m told CDNA for prosumers is unlikely and wouldn’t happen until 2021 anyway. I was also told the nVidia crashing could be fixed with drivers that tame the max core frequency and alter voltages, so any issue caused by the choice of capacitors, or not as the case may be, will be resolved in software.

What do you mean, “without providing a single example to support my argument”?
What do you think this set of conjectures, all in the same period, indicates?
What, do you think corporations explicitly declare war on each other?
Do you believe that??
hahaha

You began by saying there was an ‘aggressive campaign on defects’ and that these defects weren’t enough on their own to create the ‘media fuss’. Which implied that it’s a situation contrived by someone. But you won’t say who.

Why would a few GPUs crashing have any bearing on the ARM acquisition by nVidia? The news might be big in gaming circles but you can bet this is a non-news event everywhere else. Gaming is where nVidia sells its sloppy seconds to the gullible.

nVidia’s acquisition of ARM is to do with its business in datacentre and embedded systems. If it were Quadros or one of the other datacentre-specific lines of hardware that shipped with faults, it would raise a few eyebrows amongst investors, because that is where nVidia makes most of its money. Even then it would be a huge stretch to link the story to the ARM acquisition, because the ink on that is still 18 months from being dry.

Overclocked gaming-grade B stock that isn’t allowed anywhere near a datacentre by nVidia is only of concern to those who’ve been unlucky enough to buy it and the tiny bubble of tech tubers who make a living from this reportage. It’s big news in a tiny bubble but will be forgotten in a week.

Corporations are at war all the time but YOU tried to make it sound like this was different. It isn’t.


EEVEE renders with CUDA and OpenCL… what?

Wow, a typo; let’s discredit everything in that chart.

3090 with 360mm AIO from EVGA

Source: https://videocardz.com/newz/evga-flagship-geforce-rtx-3090-kingpin-hybrid-pictured-in-full


Steve explains the crash-gate while mountain biking.

It’s worth watching as it’s quite surprising how nVidia deals with their AIBs. I have much more sympathy for the AIBs after watching this video.

It seemed that the bug was a driver issue in Windows, as everything worked as it should in Linux.


Soooo… in the end this cheap-capacitor thing was a pumped-up media fuss based on a suspicion …

Why am I not surprised? :love_you_gesture:t2::grin:

Because you want to believe what you want to believe.

If you understood the constraints the AIBs have to work under, with partial drivers (that they can do nothing about), you’d see what happened: on seeing instability, some chose to alter the capacitor config from the reference design to improve stability, while others chose to use cheaper components and save money.

EVGA statement: “[…] During our mass production QC testing we discovered a full 6 POSCAPs solution cannot pass the real world applications testing. It took almost a week of R&D effort to find the cause and reduce the POSCAPs to 4 and add 20 MLCC caps prior to shipping production boards, this is why the EVGA GeForce RTX 3080 FTW3 series was delayed at launch. There were no 6 POSCAP production EVGA GeForce RTX 3080 FTW3 boards shipped.”

As long as GPUs pass the nVidia-mandated tests on the nVidia test box, they can ship with bits of wet string instead of capacitors, and that looks like what happened. The board partners are under huge price constraints, and it’s clear some chose to use cheaper components than others and paid the price when the full drivers + games + little Johnny showed up the instability.

The issue has now been rebranded as a driver issue because nVidia have tuned the drivers to prevent certain interference frequencies from being generated, thus guaranteeing stability even on boards with ‘cheap capacitors’.

To say it’s a Windows driver issue because the card didn’t crash on Linux is daft because a different OS means it’s a different driver. I was a consultant in Electromagnetic Compatibility and have seen many cases where OS or firmware versions have drastically changed the EMI profile of a system under test.

A colleague of mine was working for a company performing the EMC testing on one of the early, incredibly famous games consoles. It failed testing, and it was recommended that a ‘choke’ be fitted to reduce radio-frequency emissions. The console maker called a full board meeting to discuss the addition of a $0.30 component, but when you consider these boxes are built in their tens of millions, the cost and the cut in profits stack up quickly. So I can absolutely see why some GPU manufacturers wanted to choose cheaper components if at all possible, provided the hardware met the mandated tests before shipping.

I think it was clear that some board partners saw the type of capacitors used as a competitive advantage for ensuring the best performance of their product, while others chose to save on costs.


It is a new technology, still completely new and still needing to be properly tested and become solid, both in hardware and in drivers. They pushed it to the maximum to wreak havoc on the competition (and to build an audience for other operations on the market, though you believe those are not connected). The very first stock went bad and gave the competition the opportunity (and breathing room) to raise a media fuss.
Then the problem was solved quickly: the errors were refined and fixed in hardware by fitting more solid capacitors, and, for stock already produced, via drivers by slightly decreasing voltages and clock cycles for now. Probably when the drivers improve they will bring the performance back to maximum, especially because it is evident that on Linux these problems don’t occur at all.
The concrete result of this operation is that the competition gets a minimum of breathing space, because by dampening the spirits it will take a while before people rush to buy the new Nvidia GPUs, enough time for the competition to quickly put something minimally comparable on the market.

The 2990WX was a bad CPU and doesn’t perform well in most applications.