Specs to consider when choosing a new video card?

I’m going to be building a Windows-based computer for running Blender, Houdini, and Adobe CS6 software.

I’m looking into Nvidia video cards and trying to figure out which one will be good for me. What are the specs to look at? I’ve been looking at a GTX 580, which has 512 CUDA cores, and at a GTX 680 with 1536 CUDA cores. Is this the spec to look at, besides the memory? I’ve heard the 500 series is faster than the 600 series. Are more CUDA cores better? I’m looking to run dual cards (in SLI), but I also found a single card with two GPUs on board, for a total of 4 GB of memory and 3072 CUDA cores.

Am I looking at this right with regard to GPU rendering in Blender or any other 3D application? Thanks for the help,

jeff

Hi there Jeff. A couple of things to keep in mind when shopping for a CUDA card for GPU-based computing:

1. If you have more than one card, it is strongly recommended NOT to put them in SLI - that can cause errors or instability. Blender and other CUDA programs can still use both cards without it.
2. VRAM across multiple cards does NOT accumulate when doing GPU computing. Cards in SLI only have as much usable memory as a single card, and dual-GPU cards like the GTX 590 effectively have only half their stated VRAM.

For the best speed/VRAM combination, get a pair (or more) of GTX 580 3GB cards. The 6xx series runs on a new GPU architecture called Kepler, which is nowhere near as efficient at compute as the earlier Fermi: a GTX 680 is around 15-20% slower at GPU computing than a GTX 580. I hope that helps.
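For what it’s worth, here is a minimal sketch of how Cycles can be pointed at every CUDA card in the machine, no SLI involved. The property names follow recent Blender builds and have changed between versions, so treat it as illustrative rather than exact:

```python
import bpy

# Point Cycles at CUDA and refresh its device list.
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()

# Enable every CUDA device found. Each card is addressed individually,
# which is why SLI is unnecessary (and unhelpful) for rendering.
for dev in prefs.devices:
    dev.use = (dev.type == 'CUDA')

# Render the current scene on the GPU.
bpy.context.scene.cycles.device = 'GPU'
```

Each enabled device takes its own tiles during the render, which is the sense in which Blender “uses both” cards.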

Also make sure your power supply meets the requirements for the card you decide upon.

Thanks for the replies. My original intent was to get 2x GTX 580s, then I started looking at the GTX 690s and wondering if they would be better. I found this guy’s video where he’s running 2x GTX 590s, and he rendered the BMW test scene in 19 seconds. I assumed it was in SLI mode, but his description doesn’t say anything about SLI. If I’m looking to run multiple cards for GPU processing, am I better off getting cards dedicated to that (no monitors hooked up) and then having a separate card for my 30-inch monitor?

SLI will actually slow down Cycles renders. Also, keep in mind that no amount of money spent on GPUs is going to conquer their memory limitations; any serious scene is going to run up against them very quickly. In my opinion it makes more sense to spend the money on CPU render nodes and wait for GPU rendering tech to get out of its infancy before investing a ton of money in it.

I didn’t want to turn this into another “here’s the specs of a computer I’m building” thread, but since you mention CPU render nodes, what exactly do you mean by that? I was going to get a 6-core i7 processor. I currently have an 8-core Mac system and want a Windows system as well (some other things I’m doing need a Windows PC); I planned on keeping both. My Mac has an Nvidia Quadro 4000 card, and GPU rendering on it is half the speed of CPU rendering. The thought did occur to me to use both systems in a render farm for overnight rendering. I guess setting up something like that would be a topic for another thread. Can a Mac and a PC be used together in a render farm?

I believe they can with Network Render, but don’t quote me on that. For CPU render nodes, I would scour some place like Tiger Direct for their barebones deals; they normally run a couple each month. That way you can get super cheap computer parts and build your own little render farm for the same price as a couple of really nice GPUs. On top of that, you won’t have any memory/texture limitations to worry about, and you’re guaranteed backward and future compatibility with render software.
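If you do end up mixing the Mac and the PC, the simplest no-addon approach is to split the frame range across machines, since each node just runs its own Blender binary. A rough sketch, where the hostnames and the shared .blend path are placeholders for your own setup:

```python
import subprocess

# Placeholder render nodes reachable over SSH; Mac or PC doesn't matter,
# since each one runs its own local Blender on a slice of the frame range.
NODES = ["mac-pro.local", "render-node-1.local"]
BLEND_FILE = "/shared/project/scene.blend"  # on a shared network drive
START, END = 1, 100

chunk = (END - START + 1) // len(NODES)
procs = []
for i, host in enumerate(NODES):
    first = START + i * chunk
    last = END if i == len(NODES) - 1 else first + chunk - 1
    # -b: background, -s/-e: frame range, -a: render that range
    cmd = ["ssh", host, "blender", "-b", BLEND_FILE,
           "-s", str(first), "-e", str(last), "-a"]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()
```

The Network Render addon does this more gracefully (a master hands out work and collects results), but plain frame splitting is enough for overnight batch renders.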

I get what m9105826 is saying about the RAM limitations being an issue - I have run into it myself a few times - but the speed difference is so overwhelming that I would rather spend the time optimizing most scenes than go to the CPU. The Quadro 4000 you were using may not have been much of a card, but a GTX 580 should be around 5 to 10 times quicker than pretty much any CPU you can get right now. I’m not saying don’t do the network render (I do, with Luxrender), but a single GTX 580 is affordable enough to put into whatever system you get, in case you can use it.

One other thing, in case you are still considering the GPU rendering route: you can mix different GPUs, since they don’t have to be in SLI. That way you can dedicate one to the display while the other renders, turn them both on to render faster, or even use an AMD Radeon for the display (since they give better display performance) while keeping the GTX for rendering.
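Building on the earlier snippet, keeping the display card out of the render pool just means leaving its entry disabled in the device list. A sketch against recent Blender builds, assuming (purely as a placeholder) that the first CUDA device is the one driving your monitor:

```python
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()

# Placeholder assumption: device 0 drives the display, so leave it free
# for the desktop and render on everything else.
cuda_devices = [d for d in prefs.devices if d.type == 'CUDA']
for i, dev in enumerate(cuda_devices):
    dev.use = (i != 0)
```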

Plenty of options :)

This brings another question to mind. I have a 2009 Mac Pro 8-core with the above-mentioned Quadro 4000 for Mac card. Would I maybe see a performance difference if I put my stock card back in and ran my monitor off that, keeping the Quadro for rendering?

I’m not seeing horrible performance from my Mac, just wondering if I can do better.

I have 26 GB of RAM, an SSD main drive, and plenty of 7200 RPM backup drives.

If you were to put the stock card back in alongside the Quadro, you would get much better display performance while rendering with the Quadro, but you’d be unlikely to see much speed improvement in the GPU render itself. It may help by freeing up a bit of VRAM on the Quadro, letting you put more texture or detail in the scene.

For reference, if I run the BMW benchmark on one of my render PCs with a single EVGA Classified GTX 580 3GB, I get the 200 samples done in 48 seconds. On the same system using a quad-core i7, it completes in 4:29 - i.e., if I were to build a render farm, I would need at least five systems running high-end CPUs to match my one GTX 580. Of course, RAM is always a big deal, so keep in mind that, although they are a bit slower, the GTX 670/680 are available as single-GPU cards with up to 4GB.
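If anyone wants to reproduce that comparison on their own hardware, a small timing harness around Blender’s command line does the trick. The .blend filename is a placeholder, --python-expr only exists in newer builds (older versions need a small script file instead), and the GPU run assumes a CUDA device is already enabled in the user preferences:

```python
import subprocess
import time

BLEND = "bmw_benchmark.blend"  # placeholder path to the benchmark file

def time_render(device):
    # Switch the Cycles device, then render frame 1 in background mode.
    expr = f"import bpy; bpy.context.scene.cycles.device = '{device}'"
    start = time.perf_counter()
    subprocess.run(
        ["blender", "-b", BLEND, "--python-expr", expr, "-f", "1"],
        check=True, stdout=subprocess.DEVNULL,
    )
    return time.perf_counter() - start

for device in ("CPU", "GPU"):
    print(f"{device}: {time_render(device):.1f}s")
```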

Hi, I’m new to Blender and have found that render times (especially with Cycles) are really stretching out the time it takes to get anything done. I realise I need to upgrade, but I’d prefer to do this with as little fuss/cost as possible (wouldn’t everyone?). Knowing that Blender can use the GPU, is it feasible to just go out, get myself a better graphics card, and put it in my old PC?

Here are some of my specs and my result from the BMW benchmark:
Render time: 15m 56sec (CPU)
CPU: AMD Athlon 64 X2 Dual Core Processor 5200+
GPU: Nvidia GeForce 9400 GT
OS: Ubuntu 10.04 LTS

Yes, you could speed your system up roughly 10x with a new card; my GTX 550 Ti 2GB GDDR5 renders the BMW scene in 1:50.
A GTX 560 Ti is nearly twice as fast, but it also costs twice as much.

Cheers, mib.

Since we are at it, I am about to get a new computer with two GTX 580s. After reading this thread, I am not sure if I should use them in SLI or separately. I mean, can Blender use both cards separately (somewhat the same way CPU rendering uses threads)?