I have an i7-4770K with 16GB RAM, and I’ve been interested in getting a GPU for faster Blender renders, to use DaVinci Resolve on Linux, and to hopefully speed up GIMP.
In the past, I was advised (by helpful folks here) that a GTX 750 probably wasn’t going to show much improvement over my CPU.
Nowadays, it seems I can get a 4GB GTX 970 on eBay for <$175. Is that the sweet spot for spending <$175 on a used GPU these days? Or is there a slightly cheaper or more expensive card that would yield more bang for the buck?
I’d be interested in hearing about this too. I’m looking at the gtx 1070 right now, but you could get 2x gtx 970 for that price. You get twice the vram on the 1070, and I’m getting memory errors on my gtx 750ti so I’m a little paranoid about that. Is the 1070 worth twice the price of a gtx 970 though?
Based on your processor’s benchmarks, clepsydrae, the GTX 750 Ti will still be faster, but by how much is questionable, and if you’re going for large scenes you might run into the VRAM issue I’m having. Also, running render tests in Blender on a single GPU will pretty much lock your computer up while it’s going, which slows down productivity. So even with the GTX 970, you might still prefer your CPU the majority of the time while doing material tests and stuff.
What might really help the situation is if you already have a passable video card to run your display, and you throw in a GTX 970 as well to handle the rendering. Then you could have a faster GPU render running while you’re still working in the interface at the same time. You’re gonna have to check that your mobo has a second PCIe slot for another GPU, though, and that your PSU can handle the extra power. If you can post your current video card, mobo model, and PSU wattage, somebody can let you know what your options are.
With the GTX 970 you will notice an improvement over your CPU.
I do not know the prices in your local currency. As far as I’m concerned, today I would buy a used GTX 970 only if it cost at least 50% less than a new GTX 1060 6GB (yes, the 6GB model, not the 3GB one). The 1060 6GB seems a bit slower than the GTX 970, but it has a real 6GB of VRAM.
@G_I_B_B_O_N, I did not understand your VRAM calculations. Just in case:
GTX 970 x2 = 4GB total for Cycles.
GTX 1070 = 8GB total for Cycles.
That “GB total for Cycles” is a manner of speaking: VRAM does not add up across cards, because each GPU must hold a full copy of the scene. And of course you have to subtract the VRAM that the system and other programs use if the same card also handles the display.
And remember the slowdown problem on the GTX 970 when VRAM usage exceeds 3.5GB.
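To make the multi-card arithmetic concrete, here is a minimal sketch (my own illustration, nothing from Blender itself) of why the smallest card sets the limit; the 0.5GB display overhead below is just a guess:

```python
def usable_vram_gb(cards_gb, display_overhead_gb=0.0):
    """The scene must fit entirely on every card, so the smallest card is the limit."""
    return min(cards_gb) - display_overhead_gb

print(usable_vram_gb([4, 4]))        # two GTX 970s: 4, not 8
print(usable_vram_gb([8]))           # one GTX 1070: 8
print(usable_vram_gb([4, 4], 0.5))   # a 970 also driving displays: 3.5
```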
I don’t currently have a video card; I’m using the on-board graphics on my Gigabyte GA-Z87X-UD3H. I didn’t realize you could plug in a video card and use it only for GPU compute, not display – would that work in combination with on-board graphics? (I do wonder if DaVinci Resolve on Linux would work with that, or if it wants the GPU to be the display card as well…) PSU is a Corsair TX-650W, and current idle draw is about 100W – should be plenty adequate, eh?
The on-board graphics are sufficient for my display needs (three 1280x1024 monitors) – I’m just looking for horsepower on the cheap. That said, I don’t mind so much if the system is locked up during previews, really; most of the time I’m just sitting there watching it render anyway.
Sorry, the way I worded that wasn’t clear. I just meant that 1x GTX 1070 has 8GB of VRAM, while no matter how many GTX 970s you have, you’re stuck with 4GB. So it seems VRAM is still a worry at the 4GB mark.
Clepsydrae, you might wanna go the GTX 1060 route then so you get 6GB of VRAM. If my 750 Ti is beating your CPU for rendering, the 1060 should crush it.
I don’t know anything about DaVinci Resolve, but as far as I know you just select which video card you want to use for GPU rendering in Blender. It doesn’t need to be connected to a display, and because it doesn’t have to handle the display, you might even get a bit of a performance boost over having it do both. I don’t know if you can utilize both when you’re using onboard graphics with another GPU plugged in, but if you can, then you’re probably fine with the 650W power supply. Just make sure you add everything up first. I think the 1060 only requires a 500W unit.
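As a back-of-the-envelope version of “add everything up first”, here is a rough sketch. The CPU and GPU TDPs come from the Intel/NVIDIA spec sheets; the motherboard, RAM, drive, and fan figures are my own rough guesses:

```python
# Rough PSU budget check for the build being discussed.
parts_w = {
    "i7-4770K (84W TDP)": 84,
    "GTX 970 (145W TDP)": 145,
    "motherboard + RAM (guess)": 50,
    "drives + fans (guess)": 30,
}

total = sum(parts_w.values())
print(f"estimated peak load: {total}W")       # well under the TX-650W
print(f"headroom on a 650W PSU: {650 - total}W")
```

TDP figures understate short power spikes, so the comfortable headroom here is the point, not the exact number.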
@clepsydrae, if your budget is short, then go ahead; the GTX 970 is a good card. The only thing to fear when you buy a used card is that it usually comes with no warranty.
Scores in those benchmarks often take other things into account, such as game performance. My estimate for Cycles is that the GTX 970 may be just slightly faster than the GTX 1060 (but I’m not sure). I’ve seen in the BMW benchmark thread that the GTX 1060 comes in at around 1:25 min. @Daedalus_MDW, what result do you get for BMW27.blend from the blenderartists thread with the GTX 970 and recent builds of Blender (480x270 tile size)?
clepsydrae, you could also keep the RX480 with OpenCL in mind. New features have been added recently; there are some bugs/problems, but it looks good and they will probably be solved (let’s hope):
Thanks for that tip as well – I believe that DaVinci Resolve on Linux requires an NVIDIA card, though, so AFAIK I need to restrict myself to those… (they’ve started supporting Radeon on Win/Mac, but I think Linux is still NVIDIA-only? It’s really hard to get an answer on that.)
@YAFU, I was looking at the RX480 too. Does it require a special build of Blender? It doesn’t seem like SSS or Volumes are working correctly yet. I was thinking about waiting for the Vega cards coming soon, if they’re priced well, but OpenCL seems like a perpetual hassle in one way or another.
Hi, I’m a new member and was looking at the GTX 970 (1x) for my first PC build. I was wondering if you are still using the card and how it performs with Blender 2.87. If you could include the types of scenes or the complexity level that the GTX 970 handles well, that would help. Thank you for your input.
Hi, so how often do you run into the VRAM limitation? What types of scenes and animations are you working on that cause it? What’s the most complex scene you’ve been able to work on before noticing the lag? How bad is the lockup when using only one GTX 970? Thank you for your help!
Oops! Sorry, I did not see your message (maybe because you are a new user it took time to get published)
Supposedly, in Blender master/buildbot builds and the upcoming Blender 2.79, OpenCL supports the same features as CUDA. But for better information, please ask the question in this thread to see what AMD users say about supported features: