CUDA Compatible graphics card?

I’ve just spent the last 6 hours trying to get Blender to work with an old NVIDIA card on my work machine and have given up. The boss is willing to pay for a card in the $100 range. Will a GeForce GTX 650 work with the latest build of Blender (2.65a)?
The computer is a Dell dual-core 2.3 GHz with 3 GB of RAM running Windows 7 Home Premium 32-bit.

After the mess this afternoon, I’m afraid of wasting his money…

A GTX 650 should work for CUDA GPU rendering in Cycles.

Personally, I’ve used 2.65a with a GTX 260, a GTX 540M, and a GTX 660 Ti. Haven’t had a problem with any of those three cards.

What card did you spend 6 hours working on today?

Yes, it will, but it’s perhaps not the best value for money, depending on whether RAM or speed is more important to you. Check out the benchmark to see the relative performance, and note that the GTX-5xx series is mostly faster than the GTX-6xx series.

I had an old GeForce 8400 GS. It’s certainly not 1.3 compatible, which I only figured out after all of the testing, but I also seem to be a bit of a jinx when it comes to electronics. I wanted to be as certain as I could before pulling the trigger with someone else’s money.

The GTX 650 has a compute capability of 3.0, and the GTX 500 series is rated at 2.0; doesn’t that have a bearing on speed with Blender? What is it a measurement of? It may be moot, however; searching Newegg, the cheapest 500-series card was priced above my budget.

That compute capability number is the version of the CUDA architecture the video card supports. NVIDIA continues to improve the CUDA platform, and newer cards tend to support newer versions with new features and whatnot. Think of it like Windows versions: some applications require Windows XP or higher, some still run on Windows 3.11 for Workgroups. Blender requires a card that supports CUDA compute capability 1.3 or higher (someone may want to confirm that for me; I’m 98% sure I remember reading that somewhere).

The version of CUDA your card supports doesn’t seriously impact render speed; you just need a card that supports 1.3 or higher. Actual performance with Blender (render speed and maximum scene size) is affected more by the number of CUDA cores the card has, the speed it’s clocked at, the amount of memory, and a few other technical things.

A GTX 650 will work. It may take a little longer on render times than a more powerful card, but you’ll at least have some renders to show the boss man. I’ve seen people create stunning renders on laptops with mobile graphics cards in them. If you start doing some serious renders and making the boss money, I’m sure you can make the case to justify the cost of upgrading to a more powerful card.
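To make the version check concrete, here’s a small Python sketch. The card-to-capability table is illustrative only (values taken from NVIDIA’s published spec sheets for the cards mentioned in this thread, not an exhaustive list), and the 1.3 minimum is the requirement cited above:

```python
# Compute capability is a (major, minor) version pair, so an ordinary
# tuple comparison is enough to check a card against a minimum requirement.

# Illustrative lookup for the cards mentioned in this thread
# (values from NVIDIA's spec sheets; not an exhaustive table).
COMPUTE_CAPABILITY = {
    "GeForce 8400 GS": (1, 1),
    "GeForce GTX 260": (1, 3),
    "GeForce GTX 550 Ti": (2, 1),
    "GeForce GTX 580": (2, 0),
    "GeForce GTX 650": (3, 0),
}

MIN_REQUIRED = (1, 3)  # the minimum cited in this thread for Cycles

def supports_cycles_gpu(card):
    """True if the card's compute capability meets the minimum."""
    capability = COMPUTE_CAPABILITY.get(card)
    return capability is not None and capability >= MIN_REQUIRED
```

So the old 8400 GS (capability 1.1) falls short, which matches the testing described above, while anything from the GTX 260 up clears the bar.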

No, it doesn’t matter. You just need compatibility with CUDA 2.0 or higher. Actually, in many use cases, the GTX-5xx series cards are not only cheaper than the GTX-6xx series, but quite a bit faster too, due to the GTX-6xx being optimized for gaming. If you take a look at the FAQ linked in my signature, you’ll find quite a bit more information on all this.

I myself own both a GTX-680 and two GTX-580s, for example, and I can tell you from personal experience that the GTX-580s are quite a bit faster (individually) than the GTX-680. In addition to the FAQ, I’d also recommend taking a look at the Cycles benchmark to get a clearer picture.

If you wouldn’t mind me being a little more specific then, I found a GTX 650 and a GTX 550 for $120 each.

EVGA SuperClocked 01G-P4-2652-KR GeForce GTX 650 1GB 128-bit GDDR5
· Core Clock: 1202 MHz
· CUDA Cores: 384
· Effective Memory Clock: 5000 MHz

ECS NGTX550TI-1GPLI-F1 GeForce GTX 550 Ti (Fermi) 1GB 192-bit GDDR5
· Core Clock: 900 MHz
· Shader Clock: 1800 MHz
· CUDA Cores: 192

I see that the 650 has more CUDA cores and a faster clock, but the 550 Ti has a wider memory bus. Which would render faster?
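As a rough sanity check on the raw numbers, one can multiply CUDA cores by shader clock. This is just a back-of-envelope sketch in Python using the core counts and clocks from the listings above, and it deliberately ignores per-core efficiency, which differs a lot between architectures and is exactly where the Fermi/Kepler difference discussed in this thread bites:

```python
# Back-of-envelope GPU throughput proxy: CUDA cores x shader clock.
# This ignores per-core efficiency (Kepler cores do less work per clock
# than Fermi cores), so treat it as a crude first pass, not a benchmark.

def raw_throughput(cuda_cores, shader_clock_mhz):
    """Cores times clock, in 'core-MHz': a crude upper bound on compute."""
    return cuda_cores * shader_clock_mhz

# Kepler (GTX 650): shader clock equals the core clock.
gtx_650 = raw_throughput(384, 1202)

# Fermi (GTX 550 Ti): shaders run at twice the core clock ("hot clock").
gtx_550_ti = raw_throughput(192, 1800)
```

By this crude measure the 650 comes out ahead, but as the replies in this thread note, raw core counts mislead across the Fermi/Kepler boundary, which is why the benchmark spreadsheet is the better guide.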

You guys have been so helpful and explained so much; thank you!

Pick any one with compute capability 2.0 or higher.

Come on now, a tiny bit of independent research won’t kill you :wink:

olesk, there were no GTX 650s in the benchmarks or the spreadsheets, so I had to check out cards that I’m not buying; I hope the comparisons are accurate. Looking at 680s and 580s on Newegg, it looks like the only spec the 500 series has going for it is the speed of the RAM. How is it that, even with far fewer CUDA cores, the 500 series is still faster?

This is due to optimizations for gaming in the newer GTX-6xx series, at the expense of CUDA compute ability. I have both a GTX-580 and a GTX-680 here, and I can tell you from personal experience that the GTX-580 is significantly faster. The GTX-680 has more RAM, though, so for certain scenes I still use the GTX-680, as I can’t fit the scene into the 1.5 GB on the GTX-580 (my GTX-680 has 4 GB of VRAM).

I’m afraid I don’t know the GTX-650, so I can’t help you there, but I can guarantee you very good performance if you buy a GTX-580. The only card better is the GTX-590, which is really just two GTX-580s squeezed onto a single board. Both are pricey, though…

Most of the points we have discussed here are all available in an FAQ I’ve set up - take a look there to cover all the bases (the answer to your GTX-5xx/GTX-6xx questions is a direct copy & paste from there).

Thanks, guys, for your help, and thanks as well, olesk, for the links. I was able to bring the budget up a little and pull the trigger on an ASUS GTX 560. MSI had a cheaper one on Newegg, but the reviews were just too spotty. The GTX 580, even at $300, was out of my price range. I look forward to some shorter renders and/or higher-quality ones, depending on my time at the desk.

Thanks again for your help and patience!