Yup. Big Navi seems only 2 or 3 fps faster than the 3080 in the games shown at 4K, and it's unknown which card produced those numbers (no ray tracing, no extras). We'll have to wait until the 28th to learn more about these video cards and their capabilities.
Isn't this bench done on a Ryzen 5900X CPU, which provides around 40 fps more than the 3900X in gaming?
So I assume it's a lower-SKU RX 6000.
Lisa Su showed the 3-fan GPU and called it Big Navi. I think these scores are from the top SKU, so these absolute frame numbers are disappointing, as they will have been set with the world's best gaming CPU. There's a chance the big reveal on the 28th is that these numbers were not set on the top SKU. Jebaited again!
However, we don't know the pricing and we don't know the power draw, both of which could make Navi a much better choice for gamers. What's 10% in absolute performance if the Navi GPUs use a third less power than some of the outrageous power hogs the AIBs have released (or plan to)? A win in performance per watt is a win, for me anyway. I'm absolutely put off Ampere by its inefficiency.
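As a back-of-the-envelope check, the trade-off above works out clearly in Navi's favour. The -10% performance and -1/3 power figures below are just the hypotheticals from the comment, not measured numbers:

```python
# Back-of-the-envelope perf-per-watt comparison.
# The -10% performance and -1/3 power figures are hypotheticals
# from the discussion above, not measurements.
def perf_per_watt(relative_perf, relative_power):
    """Ratio of relative performance to relative power draw."""
    return relative_perf / relative_power

ampere_ppw = perf_per_watt(1.00, 1.00)      # baseline card
navi_ppw = perf_per_watt(0.90, 2.0 / 3.0)   # assumed: 10% slower, 1/3 less power

print(f"Navi perf/watt vs Ampere: {navi_ppw / ampere_ppw:.2f}x")  # -> 1.35x
```

So even giving up 10% in raw frame rate, the hypothetical Navi card would come out about 35% ahead on efficiency.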
I hope Navi is good at compute, but these are GPUs aimed at gaming first and foremost, unlike Ampere, which is a compute architecture repurposed for gaming. I hope AMD don't allow creative workflows to fall into the gap between RDNA2 and CDNA. I hope there is enough compute performance in RDNA2 for rendering and video editing, or that there's an affordable cut-down CDNA GPU coming.
Let's also see what surprises AMD have in the way of architectural advances that might be applicable to creative workflows, like DirectStorage support.
It was tested at 4K so the CPU is irrelevant.
Aaah, I hope it is irrelevant as you say, and that RDNA2 can speed up rendering as well. Because we need more competition.
For those who may not know, Nirved is embarking on a couple of tasks to improve OpenCL performance and quality of life.
I’m very interested to see how this pans out.
You can follow the work here on the official site: nirved (blender.org)
It seems that AMD is preparing for the acquisition of Xilinx, which is strong in FPGAs; this suggests that at the moment AMD has no strength in the world of machine learning and AI, and consequently in ray tracing, etc...
From this I would venture that Big Navi will not have advanced machine-learning and ray-tracing functions, and will only be competitive in terms of raw performance...
But it is inevitable that this gap will soon be filled; AMD is forced to do so if it wants to stay in the game given where things are heading.
Intel had already taken this path a few years ago by acquiring Altera, another company strong in FPGAs, which suggests Intel is preparing to come up with something revolutionary in both the GPU and the CPU fields and beyond, all the more so because it recently released the oneAPI libraries officially...
This also makes us wonder why NVIDIA decided to acquire ARM while paying a very high price for it...
I would venture that NVIDIA risks being squeezed out by the x86 market, or otherwise becoming marginal, and is therefore preparing to compete aggressively in the ARM world: in mobile, and also in desktop and server, in opposition to the x86 world.
The future is still to be written... a word to the wise is sufficient...
DigiTimes has an excellent track record of being right.
Or is this desperate nVidia marketing trying to take the shine off AMD’s launch?
Please SHOUT I’m RENDERING!
Yes, OK, this type of configuration is particularly noisy and probably also energy-hungry (I presume).
It's a monster, a workstation hammer...
But how much less time will the renders take?
This type of comparison would be interesting: power draw multiplied by rendering time, i.e. the energy consumed per render.
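That metric is easy to sketch: watts times seconds gives joules per render, so a faster but hungrier rig can still come out behind. All the numbers below are made up purely for illustration, not benchmarks of any real system:

```python
# Energy-per-render sketch: power draw (W) multiplied by render time (s)
# gives joules consumed for one frame. All figures below are invented
# for illustration only; they are not measurements of real hardware.
def energy_kilojoules(power_watts, render_seconds):
    """Total energy for one render, in kilojoules."""
    return power_watts * render_seconds / 1000.0

big_rig = energy_kilojoules(power_watts=1000, render_seconds=60)    # fast, hungry
small_rig = energy_kilojoules(power_watts=350, render_seconds=150)  # slower, frugal

print(f"big rig:   {big_rig:.1f} kJ per frame")    # -> 60.0 kJ
print(f"small rig: {small_rig:.1f} kJ per frame")  # -> 52.5 kJ
```

With these invented figures the slower rig actually uses less total energy per frame, which is exactly why a wall-power measurement alongside render times would be worth seeing.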
I think having 2 systems with two NVLinked 3090s with 3-slot cooling is preferable in most scenarios, tbh.
The best thing is not to buy* these space heaters and wait for nVidia to refresh them on TSMC’s 7nm node to meet the demand. There will be an abundance of buyer’s remorse among early adopters when a much more efficient refresh arrives sooner than many will expect.
*You can’t buy them anyway.
At work we have Quadro RTX 4000 cards, and in CUDA at least they perform equal to my two GTX cards at home. I should do the OptiX comparison too.
But I quite agree, waiting will be a good idea for getting a less crazy space heater.
Stunning if true. If true...
I can’t wait to see how @Nirved’s OpenCL compiler optimisations go with AMD’s new GPUs. If these figures are true then shader performance looks like it could be awesome. If true!
Nice. And there is an even faster 6900 XT. No wonder Nvidia is so nervous.
The 3070 destroys the 2080 Ti in OptiX rendering:
I won't be surprised if the 3060 Ti proves to be faster in OptiX than the 2080 Ti as well.
Meanwhile, AMD is revealing their new Radeons now. If their benchmarks end up being accurate, then the company has actually done it: they have now beaten (or at least tied) both Intel and Nvidia in terms of performance.
All they need now are high-quality drivers that do the cards justice. Otherwise, it is quite possible we will see sizable price drops for Nvidia Ampere.