Since my Blender projects are always quite GPU-heavy and I often reach the memory limit of my GTX 1070, I’m thinking of buying an RTX 3090. I’ve been doing 3D for about 8 years now, so it’s definitely not a bad idea to invest in my passion.
My original plan was to buy a completely new PC, since I don’t really want a fast GPU while my CPU and motherboard are comparatively outdated. Unfortunately, I currently don’t have the money for a whole new PC (I’m planning with about 4500€), so it’s still going to take a while.
I was thinking of first buying the 3090 (which I saw for about 1650€) and upgrading the rest of my PC later on. Though, as I said, I’m not sure this is a good idea when the rest of my PC is quite outdated.
I bought my current PC in 2017 for 1500€, which included the monitor and keyboard. Over time I bought a second monitor and upgraded from 16GB to 48GB of RAM.
My current specs:
CPU: Intel Core i7-6700K @ 4.00GHz
RAM: 48GB
GPU: GTX 1070
Motherboard: Asus Z170 Pro Gaming
Especially with my motherboard, I fear that it will somehow hold back the performance of that 3090.
Though I’m not really good with PC hardware, so maybe that’s complete bullshit.
Until not so long ago my PC was about the same level as yours, only with a 2080 Ti as its GPU. I then swapped it out for a Ryzen 5950X-based system that is more of everything, and was rather unimpressed. Bigger RAM, faster SSDs, sure, but not a world of difference speed-wise that I can perceive.
It’s harder to cool and keep quiet too, so some days I wish I hadn’t upgraded. I anticipate there will be more of those days in the summer.
So in summary, I don’t think you’re missing out on much if you aren’t running into limitations from your CPU or RAM. The CPU you have is pretty fast, at least for interactivity, judging from my recent experience.
If you are certain that your limit is the GPU, then a direct swap to a 3090 sounds like a good way to get a meaningful upgrade. I’d make sure that the power supply is strong and new enough too, and also invest in a GPU holder to prop that monster up.
Can you give me an explanation of why that is? I was thinking that maybe the motherboard would be a problem, since the GPU is directly connected to it, right? But why the CPU?
So you don’t think I have to upgrade my motherboard for the 3090 to work properly? It just seems a bit weird to put a nearly 2000€ graphics card onto a 100€ motherboard.
Is this just a general system upgrade or is there a specific bottleneck you want to alleviate?
What is the time frame for completing your upgrade (a few weeks, a few months, a year)? In terms of getting the most bang for your buck, I don’t think buying a GPU right now is the best choice if you plan on completing this upgrade over the course of a few months.
I have no idea about your motherboard. But unless it has no space to mount the card or some PCIe limitation, I don’t see how it would make a difference…
Come to think of it: check the space in your case before buying the card. My 2080 Ti is already a really tight fit in a Fractal Design R4 (now R6) with drive sleds mounted. I imagine a 3090 is even larger.
This is a technical issue. Your CPU can’t keep up with the RTX 3090’s bus speed, so you would run into bottlenecks. I suppose you want an RTX 3090 because you want to use it at maximum performance. Am I right?
I think for rendering it doesn’t make a big difference if your CPU can’t feed data to the GPU as fast as the GPU can receive it. For gaming it makes a difference, since textures are constantly being swapped, but for rendering that is not the case, AFAIK. Even for animation it might not have a big influence until you can nearly render a scene in real time.
But I could be wrong if you have a scene with heavy texturing where the GPU memory reaches its limit.
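To put the “CPU can’t feed the GPU” worry into rough numbers: for a final-frame render, the scene data is uploaded to the GPU essentially once, and even with assumed bandwidths (PCIe 3.0 x16 at roughly 16 GB/s, which is what a Z170 board provides; 32 GB/s for PCIe 4.0) that upload takes seconds, while the render itself takes minutes or hours. A minimal illustration with made-up scene sizes:

```python
# Illustrative only: one-off scene upload time vs. assumed PCIe bandwidth.
# Assumed usable bandwidths: PCIe 3.0 x16 ~ 16 GB/s, PCIe 4.0 x16 ~ 32 GB/s.

def upload_seconds(scene_gb, bus_gb_per_s):
    """Time to push a scene of scene_gb gigabytes over the bus once."""
    return scene_gb / bus_gb_per_s

# Uploading a full 20 GB scene once, then rendering for minutes or hours:
print(upload_seconds(20, 16))  # 1.25 s on PCIe 3.0 (what a Z170 board offers)
print(upload_seconds(20, 32))  # 0.625 s on PCIe 4.0
```

So the bus speed difference amounts to well under a second per frame here, which is why it barely shows up in offline rendering, unlike games that stream textures continuously.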
Currently I’m working on big environments, where I want to render a camera animation later. On previous projects this already took multiple days, with very few samples. Also, I’m not happy with the viewport render speed. I’m subscribed to the Patreon of Ian Hubert, and the way he can move through his scene with Cycles viewport rendering and change stuff is just so much more artistic and faster than what I’m doing.
Also, with big environments I’m often running out of memory. Most of the time there is a workaround, but this oftentimes degrades the quality a bit.
There is no time frame for completing the upgrade. My day job is at a big VFX house, where I have a powerful workstation. So the upgrade is just for my personal computer and personal projects.
I’ve been thinking of investing in a better GPU for over a year now, hoping the price would drop at some point. But it hasn’t really dropped that much, right?
I use Blender with Cycles. I’m not really using Eevee much, except for preview purposes. But for that my current setup is good enough.
I was thinking of getting into UE5, but that is definitely not a priority atm.
I play games, but there too I’m fine with my current setup atm.
The main bottleneck is rendering, both in the viewport and with F12. Faster rendering and running out of memory less often are my goals.
In that case a 3090 would mostly achieve what you want. I will ask, though: are you sure your environment scene (or future scenes) will fit in the 24GB of VRAM on the 3090? You may find yourself back at square one if it doesn’t.
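If you want a rough feel for whether a scene fits in 24GB before buying, you can estimate the texture footprint by hand. This is only a back-of-the-envelope sketch with assumed numbers: it treats textures as fully resident and uncompressed in VRAM (4 bytes per pixel for 8-bit RGBA, 16 for 32-bit float RGBA) and ignores geometry, BVH, and denoiser overhead, which can be substantial.

```python
# Back-of-the-envelope texture VRAM estimate (illustrative assumptions only):
# textures held uncompressed in VRAM, 4 B/px for 8-bit RGBA, 16 B/px for
# 32-bit float RGBA; geometry/BVH/denoiser memory is NOT included.

def texture_vram_gb(textures):
    """textures: list of (width, height, bytes_per_pixel) tuples."""
    total_bytes = sum(w * h * bpp for w, h, bpp in textures)
    return total_bytes / 1024**3

# Hypothetical scene: twenty 4K 8-bit RGBA textures plus five 8K float EXRs.
scene = [(4096, 4096, 4)] * 20 + [(8192, 8192, 16)] * 5
print(f"{texture_vram_gb(scene):.2f} GB")  # 6.25 GB for textures alone
```

Even this hypothetical mix leaves headroom in 24GB, but a handful of 16K maps or heavy volumetrics can change that picture quickly, so it’s worth running your own numbers.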
As for GPU prices, they have finally come down and are close to RRP.
For the viewport (without modifiers etc.) and UE5 I’d say the same things, but for Cycles rendering (F12) I don’t think you’ll run into any problem, except maybe compile time.
Wait for the 4000 series coming later this year (maybe during the summer). The 4090 is said to be almost 100% faster than the 3090, and the performance of a 3090 might be matched by a 4070 for much less money. And don’t forget to buy a new PSU for these GPUs; they’re going to be very power hungry.
Don’t they always give exciting benchmark figures like that early on? I recall it was about the same last time at least.
For the bunch of years I’ve been following this, it was about a 20–30% real-world performance increase every generation if you compared the same hardware tier (older ‘Ti’ model vs. the newer ‘Ti’ and so on).
The downside being that they always increased power consumption along with it.
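To put numbers on that, here’s a quick illustration (assumed figures from this thread, not benchmarks) of how a typical 20–30% per-generation gain compounds, compared with the rumoured ~100% single-generation jump:

```python
# Illustrative compounding of generational GPU speedups (not real benchmarks).

def compound_speedup(per_gen_gain, generations):
    """Relative performance after n generations at a fixed hardware tier."""
    return (1 + per_gen_gain) ** generations

# Two typical generations at ~25% each vs. the rumoured single ~100% jump:
print(compound_speedup(0.25, 2))  # 1.5625x over two ordinary generations
print(compound_speedup(1.00, 1))  # 2.0x, the claimed 3090 -> 4090 leap
```

So if the ~100% claim held up, one generation would deliver more than two ordinary ones combined, which is exactly why those early figures deserve some skepticism.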