Blender Performance on the Radeon 9070XT

Hello, friends!

The new AMD Radeon 9070XT GPUs are a bit cheaper than the Nvidia RTX 5070 Ti here in Portugal: roughly 900 euros versus 1400 euros, respectively.

I’ve been using Nvidia cards since the 3DFX Voodoo 3 days (yes, I’m not crazy: Nvidia bought 3DFX at some point and integrated their technology), so switching to Radeon isn’t a decision I can take lightly.

I know there are Blender users here who work with AMD GPUs, so I’d love to hear your thoughts and experiences, specifically regarding Blender and Eevee-Next. How well does it perform in your workflow?

3 Likes

Blender does not work well with AMD cards. Blender’s Cycles render pipeline is optimized for Nvidia CUDA.

I had an AMD RX 570 GPU once and encountered many problems. For that reason I sold the card and bought an Nvidia card instead.

Hey @Hikmet! Thanks for the feedback!

Yeah, in the old days, before 2015, that was the golden rule… but today we have HIP to replace CUDA, and I’ve had some feedback that since the RDNA 3 cards they can even be faster at rasterization than the Nvidia ones.

…Radeon not being good for Blender may now be just a myth :slight_smile:
We need some Blender Myth Busters on this! :laughing:

At one point AMD even removed OpenGL support from their gaming cards… which is just unacceptable for Blender. I don’t want to update a driver and then be unable to work for an unpredictable amount of time.

See the discussion in the Cycles thread

Their hardware is good, but the software isn’t there yet.

4 Likes

Thanks Galnart!
:pensive: Damn! It was 500 Euros cheaper for practically the same thing, but I’m getting the picture!

1 Like

All in all, it can be a fairly easy choice.
If your goal is general modelling/sculpting and rendering ONLY using Eevee, then an AMD card may be worth looking at.
Mind you, if that’s the goal, one could half argue that an Intel GPU would then also be worth looking at.

However, if you mostly plan to render using Cycles, then it’s game over: just get an Nvidia GPU.

At the same time, if you may end up doing any video editing (effects/filters, etc.) or any local AI (which can even pop up in said video editor or image editor, etc.), then things will be better supported and faster with an Nvidia GPU.

The only real reason to consider AMD is if you have a very limited budget and the AMD card is WAY cheaper. Or maybe Blender is just a light side interest and the GPU will mostly be used for gaming. Then, again depending on price, AMD could be an option.

4 Likes

I have been running an AMD setup since 2020. Since the Cycles rewrite and the move from OpenCL to HIP, Cycles works well. It may not be on par with OptiX yet (we have to wait for the final build of HIP RT and perhaps the next generation of AMD hardware), but the speed is enough and it has never crashed on me.
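For anyone trying the same thing, switching Cycles to HIP is just a matter of Preferences > System > Cycles Render Devices. Here is a minimal sketch of the same switch done from Blender’s Python console, assuming a recent 3.x/4.x build with an AMD GPU present:

```python
import bpy

# Point Cycles at the HIP backend instead of CUDA/OptiX
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "HIP"
prefs.get_devices()  # refresh the device list

# Enable every HIP-capable GPU that was found
for dev in prefs.devices:
    dev.use = (dev.type == "HIP")

# Make the scene render on the GPU
bpy.context.scene.cycles.device = 'GPU'
```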

Some people here say that unless you use an Nvidia GPU, it is game over. What is game over is paying 900+ for 12 GB of leftover silicon (4070-5070) when your scene requires more RAM. It is game over when you realize that Nvidia makes less than 10% of its revenue from gaming GPUs. It is reasonable to assume that sub-par hardware like the recent 5000 series is the way of the future, as the good silicon will be allocated to data-center chips.

Nvidia has built up a powerful mindshare among gamers and professionals alike: unless you are running one of their cards, it’s game over. That is no longer true. One should evaluate one’s own professional needs and purchase hardware that best meets those requirements. Does it really matter if the scene is completed in 50 or in 80 seconds? Does the difference warrant an extra 700 euros? I think not.

I chose a 6800XT over a 3060 and a 7900XTX over a 4070. My scenes rarely run longer than 5 minutes, so a minute here or there makes no difference. What makes a difference is that I can get a decent amount of memory for a reasonable price. Currently in my region a 5070 Ti goes for 1400 euros, while a 5080 is around 1800. A 5090 is between 3000 and 3500 euros. These prices are, quite frankly, ridiculous. I will be eyeing the 9070 series closely and will likely buy a set of them to replace my 6800XTs if the uplift proves to be good. I can get two of the Radeons for the price of a single 5070 Ti; it is an easy choice.

5 Likes

Hello @thetony20 , thanks for the feedback!
I mostly use Eevee. At my day job, I make video game trailers like this one:

All made with Blender and rendered in Eevee. This is the average workload my PC has to deal with, and I’m OK with renders taking up to a full minute per frame. :slight_smile:

Although I occasionally have to use Cycles when working with colleagues who use it as their main render engine.

I have an RTX 3060 with 12GB of VRAM, one of those more expensive, overclocked ones.
But both for work and for our personal short animation projects, like Clash On Little Pond (whose WIP we share here on the forum), I’ll have to upgrade soon. 800 euros for a good graphics card with at least 16GB of VRAM is OK, but 1400 for the same thing is not.


Hey @psvedas , thanks for your feedback too!
I agree with you on all your points, and the prices you mention are identical to here.
The prices don’t even make any sense, and they have no excuse for it other than “because they can”, and people somehow buy them… well, at least the 5 or 10 cards that are made available. :confused:

I had the Titan (€1000) and later bought the Titan X (€1100)… so let’s say the 2090 would be €1200, the 3090 (€1300), the 4090 (€1400), and finally the 5090 (€1500)… that I could understand.
… But now, €3400 for something that burns!? I leave renders running through the night! I don’t want to set my house on fire. :laughing:

Indeed, the Radeons seem more reasonably priced, but even so, the 9070XT, which should be $600, sells here for €900 at the cheapest. And if you go for the Asus TUF OC, some stores charge between €1000 and €1200…
… It’s still a mid-range card. :neutral_face:

4 Likes

Okay, this discussion is worrying me a bit, so I did some test renders of a frame from a big animation scene I’m working on (over 1 million polygons, lots of 1K textures, only one directional light source, but it does use ray tracing in Eevee and path tracing in Cycles). My current laptop has both an Nvidia RTX 3060 (used when plugged in at full power) and an AMD CPU with integrated GPU for when it’s unplugged from the wall and I need to save a bit of battery. I got the computer back in April 2021, so there is a possibility that newer laptops are generally faster for CPU-only rendering just because they use newer parts.

Naturally, CUDA + my RTX 3060 renders the scene the absolute fastest in Eevee with ray tracing on, at an average of about 10-15 seconds per frame, but HIP with both my AMD CPU and GPU at the same time renders at a respectable 30-35 seconds per frame.

Interestingly, I get a good middle ground if I switch the render engine to Cycles, use it on the CPU only (no GPU), lower the final samples to just 8 (as opposed to the default of, gulp, 4,000 samples), and turn on denoising without touching any of its default settings. The result is only 25-30 seconds per frame for an arguably more realistic path-traced render! Of course, Cycles is significantly faster when I use CUDA + an Nvidia GPU, but you get the idea.
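In case anyone wants to script that low-sample test instead of clicking through the UI, here is the same setup as a quick Python sketch (property names as in recent Blender builds):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'CPU'        # CPU only, as in the test above
scene.cycles.samples = 8           # final render samples, down from the default thousands
scene.cycles.use_denoising = True  # leave the denoiser on its default settings
```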

So yeah, at least in my case, rendering with CUDA is noticeably faster, but HIP and AMD CPUs+GPUs also render somewhat unoptimized scenes in respectable times. It gets to the point where you have to ask yourself whether overpaying for an Nvidia card in 2025 is really worth it just for somewhat faster render times in Eevee with ray tracing on. Especially since, if you can find a way to get away with extremely low samples in Cycles plus the denoiser, you can get some surprisingly fast renders on a CPU alone, and maybe even feasibly use Cycles to render long animations!

Finally, if you want TRUE real-time rendering, game engines are your only real options, not Cycles or Eevee. Godot is perhaps the most lightweight one out there, whether you set the graphics quality to Compatibility, Mobile or Desktop/Console (Forward+), and it doesn’t care what GPU you use as long as it’s reasonably recent. For large teams, Unreal Engine 5 seems to be a dream engine for bigger productions (e.g. The Amazing Digital Circus episodes are rendered in UE).

All in all, while having an Nvidia GPU is certainly nice, it really isn’t the end of the world if you have to settle for another GPU, especially once you learn to optimize your textures, file sizes and renders.

1 Like

Here the stock of the Radeons is quite reasonable. A few retailers are selling the beefed-up models for 850-900, but there are triple-fan cards from PowerColor that go for 750, which, once taxes are accounted for, is essentially the MSRP. The good thing about the Radeons is that they tend to go on sale some time after the release hype calms down. The same can’t be said for the Nvidia cards.

You bring up a good point regarding the decisions made by Nvidia. The new, rather unnecessary power-delivery system is a concern, even if the incidents are “rare”.

Then there is the product segmentation. In the current stack, the memory configuration does not follow the power of the chip: a 5060 Ti has more memory than a 5070, and the same as a 5070 Ti and a 5080. Whichever one you buy, you get a bad deal from either a memory or a performance standpoint. This was the case for the 3000 and 4000 series as well. Now there are rumors of Super variants that may use 3GB modules, giving hope for 18 and 24GB cards. However, that may not pan out if there is no pressure and demand from the market.

At this point, Nvidia just does not care. The gaming GPU market is only a fraction of their revenue and it is expected to shrink even more. The trend is easy to observe over the past four generations of releases: each time you get slightly less of an upgrade than the last time, with the 5000 series being particularly mediocre.

1 Like

I assume you are actually using the OptiX option in Preferences for Cycles rendering and not the CUDA one. On an RTX-based GPU, OptiX will typically be around twice as fast as CUDA.

As for the price of AMD vs Nvidia: yes, Nvidia is somewhat of a rip-off (even more so right now with such limited supply), but at the same time, if you buy the right card at a good time, you don’t need to buy three AMD GPUs (one from each generation) in order to get better performance.

I’ve had my RTX 3080 Ti for well over 2 years now and it’s still faster than anything AMD has released. And saving 30 seconds on the render time of a scene sure does matter, especially when rendering a 300-frame animation.

2 Likes

I run a multi-GPU setup for multitasking purposes: supplementary GPUs do the rendering, while the main card is used for working on a different project. I have found this setup to suit the way I work, and I do not feel short-changed on speed. At the same time, a 12 GB 3080 Ti would be a completely useless card for me, as most of my scenes exceed that buffer. This was my main gripe with Nvidia in the past as well: their cards would provide a decent turn of speed but would soon be made obsolete by a very limited memory buffer. I have had two generations of cards suffer the same fate and will not look at Nvidia again until they change their product segmentation.

As for the prices, I doubt they will improve much. The previous generation sold above MSRP throughout its shelf life and even went up a bit before the 5000 series was introduced, as the available supply was drying up. The fact is, they get away with charging more for less. They are in a business/market position where product scarcity does not hurt them.

On the whole, I feel like the PC is in a particularly difficult position right now. The available hardware is becoming stagnant and difficult to obtain while increasing in price with each passing generation. I find myself looking at moving over to Mac in the not-too-distant future if the current trends continue. Their latest chips offer decent pace with an almost unlimited amount of shared RAM. I feel that today it is the memory pool, rather than speed, that is the limiting factor.

1 Like

I think nowadays hardware offers competent pace, and raw performance is mostly marketing material. You can arrange your hardware and software in many creative ways to meet deadlines without an issue.

When you look at a project as a whole, raw rendering performance is but a fraction of the entire time budget. You could spend far more time trying to optimize a scene to fit within a 12 GB buffer than you would ever gain back from having a faster GPU. Some scenes simply require more memory, and there is no way around it.

Limited memory can also force compromises on product quality, ruling out high-resolution textures, heavy geometry assets and some effects like displacement. In this regard, I feel Apple’s approach could unlock more opportunities than a much faster Nvidia GPU that is artificially held back by its RAM.

1 Like

Thanks for all the comments, I’ve read them all!

Yes, nowadays, the speed at which we can transfer data through the entire processing pipeline is more crucial than raw processing power.
Even when compiling shaders, the key factor is how fast we can move all the textures from the hard drive to the GPU VRAM, and, of course, whether they all fit within the available VRAM or not.

For Blender work alone, I’d say 32GB of VRAM would be a good target for the next three years. I have 12GB of VRAM and 64GB of system RAM, and with the new Vulkan mode, working on the Clash On Little Pond scene drains both… and it’s not even that heavy, at least not after I converted all the textures to .dds BC3/DXT5 (compressed textures optimized for the GPU, saving about four times the memory compared to .png).
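For context, the saving comes from the fact that a .png has to be decoded to raw RGBA in VRAM (4 bytes per pixel), while BC3/DXT5 stays compressed on the GPU at 1 byte per pixel. A quick back-of-the-envelope sketch for a single 4K colour texture (mipmaps left out):

```python
# VRAM estimate for one 4096x4096 colour texture
width = height = 4096

rgba8_bytes = width * height * 4               # decoded .png: 4 bytes per pixel
bc3_bytes = (width // 4) * (height // 4) * 16  # BC3: 16 bytes per 4x4 block

print(f"RGBA8: {rgba8_bytes / 2**20:.0f} MiB")  # 64 MiB
print(f"BC3:   {bc3_bytes / 2**20:.0f} MiB")    # 16 MiB, i.e. a 4x saving
```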

2 Likes

Maybe where you are, but not where I am. There was a point when many of the 40-series cards (even the 4090) were on good sale and below MSRP. If I didn’t already have a 3080 Ti, and weren’t pushing past its 12GB, I could have got a 4070 Ti Super with 16GB for less than I paid for the 3080 Ti.

Maybe, but that tends to come with some pretty significant performance limits, and let’s face it, Apple charges even more for decent amounts of RAM on their systems (which in pretty much all cases are bought as-is and can’t be upgraded).

You may still want to look into an RTX 4070 Ti Super, even though it’s not the newest generation. Performance-wise it’s pretty much on par with a 5070 Ti, it’s got that sweet-spot 16GB of VRAM as well, and given the introduction of the 50-series it may be more affordable now.

1 Like

The 4070 Ti Super, while having an MSRP of 799, was never available here for less than 1000. Even now, whatever inventory is still available goes for 1000-1200. That seems like quite a lot for a mid-range card.

As for Apple, performance-wise they are not the best value proposition, but if you need more memory they are an option. Plus you get a very potent CPU included. Nvidia only has one card that offers more than 16GB, and it goes for 3000-3500 euros, which is not exactly a value proposition either.

2 Likes

Like I said, that sounds more like a local issue, so it may have just as much to do with government/resellers, etc.
Here, while there was stock (it’s all gone now, since they are no longer being made), and before the 50-series mess was known about, one could get a good 4070 Ti Super for under $1,500 AUD.

Yeah, a very very expensive option.

On that I do agree. Right now it is totally messed up and the worst time to buy a GPU.

Mind you, I’m not sure about your area, but down here, if you want more than 16GB of VRAM on an AMD card, one’s options also seem to be very limited or non-existent. The previous-series cards with 20 or 24GB look to be all sold out, leaving only the new 9070XT with 16GB.

So it’s buy second-hand or go without if what you need is a GPU with high VRAM.

1 Like

There might be some confusion. I plan on doing the final render in Eevee, and in that case CUDA actually seems to be a few milliseconds faster than OptiX. When I try to render the scene in Cycles, it’s the opposite: CUDA is a second slower while OptiX is a second faster. Not exactly twice as fast, but a smidge faster. I turned off my AMD GPU because apparently trying to use both slows down the render significantly, suggesting the AMD GPU is the bottleneck preventing my Nvidia GPU from running at its full potential when both are used.
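For anyone on a similar dual-GPU laptop, the per-device checkboxes under Preferences > System > Cycles Render Devices can also be toggled from the Python console. A minimal sketch; the "Radeon" substring is just an example and should match whatever name your integrated GPU actually reports:

```python
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.get_devices()  # refresh the device list

for dev in prefs.devices:
    # Disable any device whose name contains "Radeon" (example substring)
    dev.use = "Radeon" not in dev.name

bpy.context.scene.cycles.device = 'GPU'
```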

It’s interesting that the result flips depending on whether the final render is in Eevee (CUDA is slightly better) or Cycles (OptiX is somewhat faster). Eevee, of course, is faster overall than Cycles ever could be, so I’d rather stick with that.

1 Like

Actually, after comparing the Eevee + ray tracing render with the Cycles render, I think it’s worth the extra time to go with the more realistic shadows in the Cycles version. Thanks for the tip on how to speed up Cycles renders on Nvidia GPUs; it won’t be THAT much slower when it’s time to render the whole thing.