They will be presented on 3 November 2022.
What follows are rumors:
- Might have DisplayPort 2.1, which Nvidia did not put into the new GeForce 40 series
- Might have an HEVC 10-bit 4:2:2 decoder, which Nvidia did not put into the new GeForce 40 series
Here's hoping it's a serious contender to NVIDIA, but I'm not optimistic
From a technical standpoint, AMD appears to have made significant improvements on the graphics front since Raja's departure, and it has been confirmed that RDNA3 is the first GPU architecture with an MCM design (which will help with yields and possibly make it more viable to undercut Nvidia if they can't top them in performance). At least that was the case with Ryzen.
Earlier this year: Nvidia is in trouble
Last month: Not looking good for AMD
This week: Is Nvidia in trouble?
Dear rumor mill, make up your mind or keep quiet until the cards actually come out.
It's tech YouTube; the only way you get views at large scale is by saying something controversial (it's always "is X in trouble?") one week and then something entirely different the next. This isn't new, it's been happening since the first days of tech YouTube. They'll never "make up their minds", because that's boring and doesn't generate clicks.
At the moment my hope is that they can make cards significantly cheaper than Nvidia thanks to their non-monolithic (chiplet) design.
Well, I would think the rumor mill might take a major hit if Elon Musk ends up running Twitter into the ground (because much of the "word on the street" tech gossip is based entirely on tweets). It would also help regular media, since posted tweets are often being used as a replacement for actual journalism.
I assume you mean you hope that AMD can and will sell them cheaper. Even if they can make them cheaper, it doesn't mean they will sell them at a lower price.
It helps, of course: a lower cost to make means that even if the cards are sold cheaper, the profit stays much the same. But it's going to depend on a few more factors.
Which, from a Blender point of view, mostly won't matter if they don't have the ray-tracing hardware and software support sorted out. Otherwise, OptiX is just going to pound the AMD cards into the ground once again.
That was limited speculation, because I have difficulty understanding where AMD is going. We know that Nvidia goes for performance. AMD is more uneven: they have very strong cards today, but not in everything. One area where they are not allowed to fail is gaming, and since content creation and gaming have grown ever closer, I hope they at least make a move toward ray tracing and AI, because of Unreal and games.
Rumors; AMD information at the link too.
As per the leaker, the AMD RDNA 3 GPUs featured on the Radeon RX 7000 series graphics cards are delivering up to a 2x performance increase in pure rasterization workloads and over 2x gains within ray-tracing workloads. It is not mentioned whether the RDNA 2 baseline GPU is the RX 6900 XT or the RX 6950 XT, but even if we look at the 6900 XT, that RDNA 2 chip offered superior raster performance versus the RTX 3090 and came pretty close to the RTX 3090 Ti, while the RX 6950 XT surpassed it. A 2x improvement in this department would mean that AMD could easily compete with, and even surpass, the RTX 4090 in a large selection of games.
In ray tracing, a gain over 2x means that AMD might end up close to or slightly faster than the RTX 30 series "Ampere" graphics cards, depending on the title. The Ada Lovelace RTX 40 series cards do offer much faster ray-tracing capabilities, with close to 2x gains in ray-tracing performance over the RTX 30 lineup. So ray tracing will see a major improvement, but it may not be able to close the gap with the RTX 40 series.
Just curiosity.
How does AMD compute ray tracing? Does it use a specific, proprietary "protocol" like Nvidia with RTX, or does it use a general approach?
Thanks in advance
Same question. I don't understand their take on ray tracing. Not having hardware RT is basically giving up on the content creation market, and this Nvidia monopoly, like all others, isn't good for the consumer.
Yes, tech YouTube. Fortunately, other people on YouTube don't act like complete morons like that, e.g. with respect to Blender, Blender tutorials etc., just to get some extra views via the dumbest sensationalism.
What do you mean, that's not true at all? I swear Blender people aren't making funny faces and grimaces on their video thumbnails like the annoying fool behind the news reporter!
Hell, I miss the 240p pre-Google YouTube (I'm serious).
greetings, Kologe
@marcatore @Hadriscus AMD has dedicated hardware for RT called Ray Accelerators, but currently it's used only in gaming. At least that is what I got from the news circulating around.
I didn't dive into the details, but it looks like either the RAs don't have a lot of die area dedicated to them, or those cores aren't powerful enough to deliver good performance compared to the green GPUs.
I'm waiting to see how this area will look in the upcoming RDNA3 GPUs.
The main takeaway on AMD RT in Blender is that hardware RT right now is not implemented at all, so the RAs are completely unused. It's really hard to tell what the performance will look like, or how exactly hardware RT will be implemented.
AMD announced in April that they have a semi-open, OptiX-like library for hardware RT called HIP RT:
For those interestedā¦
AMD's Ray Accelerator, found in each Compute Unit (2 in every WGP), accelerates ray-box and ray-triangle intersections, which are the most performance-intensive parts of the ray-tracing pipeline. However, the step prior to this, BVH (tree) traversal, isn't accelerated by the RAs and is instead offloaded to the shaders (stream processors). While optimized shader code can perform these calculations in decent time, in other cases it can slow down the overall rendering pipeline by occupying precious render cycles that could otherwise have been used by the mesh or pixel shaders.
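To make the quoted passage concrete, here is a minimal sketch of the two intersection tests the Ray Accelerators handle in hardware: the slab test against a BVH node's bounding box, and the Möller-Trumbore ray-triangle test. The scalar Python form and all names are my own illustration, not AMD's actual implementation.

```python
# Hedged sketch of the two tests a Ray Accelerator performs in hardware.
# Plain-Python, scalar versions for illustration only.

def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray hit the axis-aligned box of a BVH node?"""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore: distance t along the ray to the triangle, or None."""
    cross = lambda a, b: [a[1]*b[2]-a[2]*b[1],
                          a[2]*b[0]-a[0]*b[2],
                          a[0]*b[1]-a[1]*b[0]]
    dot = lambda a, b: sum(a[i]*b[i] for i in range(3))
    e1 = [v1[i] - v0[i] for i in range(3)]
    e2 = [v2[i] - v0[i] for i in range(3)]
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv_det = 1.0 / det
    s = [origin[i] - v0[i] for i in range(3)]
    u = dot(s, p) * inv_det              # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv_det      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det             # distance along the ray
    return t if t > eps else None
```

A real GPU runs these tests in fixed-function units on many rays at once; the point of the sketch is only to show how small each individual test is compared with the traversal loop that surrounds it.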
As for why AMD went with this approach, (…) dedicating too much space to a dedicated hardware unit wasn't ideal, especially since the Infinity Cache was already inflating the die size.
NVIDIA's method essentially involves offloading the entire ray-tracing pipeline to the RT cores. This covers the BVH traversal and box/triangle testing, as well as sending the return pointer back to the SM. The SM casts the ray, and from there until the returned hit/miss, the RT core handles basically everything. Furthermore, NVIDIA's Ampere GPUs leverage Simultaneous Compute and Graphics (SCG), which is basically another word for Async Compute. However, unlike AMD's implementation, where the compute and graphics pipelines are run together, this allows the scheduler to run ray-tracing workloads on the RT core, graphics/compute workloads on the SM, and matrix-based calculations on the Tensor cores, all at once.
In comparison, AMD's RDNA 2 design offloads the tree traversal to the SIMDs (stream processors), which likely stalls the standard graphics/compute pipeline. Although the impact can be mitigated to an extent by optimized code, it'll still be slower than the full-fledged hardware implementation in most cases.
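The difference the article describes can be sketched as a stack-based BVH traversal loop. On RDNA 2, roughly speaking, this whole loop runs on the stream processors and only the two intersection tests inside it are hardware-accelerated; on NVIDIA's RT cores the entire loop lives in fixed-function hardware. The node layout and function names below are my own illustration, not either vendor's actual code.

```python
# Minimal stack-based BVH traversal. On RDNA 2 (per the quoted article)
# this loop occupies the SIMDs; only hit_box/hit_triangle are accelerated.

def traverse_bvh(nodes, ray, hit_box, hit_triangle):
    """Return the nearest triangle hit as (t, tri_id), or None on a miss.

    nodes: dict id -> node; a node is either
      {"box": (lo, hi), "children": [id, id]}        # inner node
      {"box": (lo, hi), "triangles": [tri_id, ...]}  # leaf node
    hit_box(ray, lo, hi) -> bool        # ray-box test (HW-accelerated)
    hit_triangle(ray, tri_id) -> t|None # ray-tri test (HW-accelerated)
    """
    best = None
    stack = [0]                          # start at the root node
    while stack:                         # the loop the shaders must run
        node = nodes[stack.pop()]
        lo, hi = node["box"]
        if not hit_box(ray, lo, hi):     # prune this subtree on a miss
            continue
        if "triangles" in node:          # leaf: test each triangle
            for tri in node["triangles"]:
                t = hit_triangle(ray, tri)
                if t is not None and (best is None or t < best[0]):
                    best = (t, tri)      # keep the nearest hit
        else:                            # inner node: descend to children
            stack.extend(node["children"])
    return best
```

With NVIDIA's approach the SM casts the ray and receives only the final `best`; with RDNA 2's, the SM also has to execute every iteration of the `while` loop itself, which is where the stalling described above comes from.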
Full article here:
All it would take to really hurt Nvidia would be a third company stepping in and offering an AI accelerator card, something on, say, 5 nm that could do all those low-precision matrix-math instructions like FP8 and whatnot. If a non-ray-tracing GPU such as a GTX 1080 Ti had access over the PCIe bus to a pure AI accelerator, could it too run great ray tracing in games? Just a random thought; correct me if I'm wrong.
If it were that simple, AMD would have already done it.
Even if such a thing existed, it would be orders of magnitude slower, as we are comparing an on-die solution to a PCIe one.
On one side you have the AI/RT accelerators built right inside the SM, with access to the fastest L1 cache.
On the other, you need to go from the GPU die down to its PCIe connector, through the motherboard, up the AI/RT accelerator's PCIe connector into its own die, and then do the whole trip back again to the GPU die, EVERY time data needs to circulate between them…
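A rough back-of-envelope of that round trip. The latency figures below are my own ballpark assumptions (an on-die L1 access in the tens of nanoseconds, a PCIe round trip on the order of a microsecond), not measurements, but they show why "orders of magnitude slower" is not an exaggeration:

```python
# Ballpark comparison; the latency figures are assumptions, not measurements.
L1_HIT_NS = 30              # on-die L1 cache access, order of tens of ns
PCIE_ROUND_TRIP_NS = 1500   # GPU -> motherboard -> accelerator -> back, ~1.5 us

slowdown = PCIE_ROUND_TRIP_NS / L1_HIT_NS
print(f"A PCIe round trip is roughly {slowdown:.0f}x the on-die latency")

# Ray tracing needs the accelerator per traversal step, for millions of rays:
rays = 2_000_000            # assumed rays per frame
steps = 20                  # assumed BVH traversal steps per ray
overhead_ms = rays * steps * PCIE_ROUND_TRIP_NS / 1e6
print(f"If every step paid a full serial round trip: ~{overhead_ms:.0f} ms/frame")
```

Batching and overlap would hide much of that in practice, but the two-orders-of-magnitude gap per access is why the traversal loop and the intersection hardware have to live on the same die.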
I am on a business trip, so I have no time to analyse it, but here is one presentation for these new cards.