Which card to choose?

Hello everyone … I’m considering getting a video card to improve Cycles performance … I have the chance to buy an Nvidia GTX 580, but for a few dollars more I can stretch to a GTX 590.

I know that the GTX 590 is basically two GTX 580s … but in terms of performance, which is better? :spin:

sorry for my bad English

Regards

Hi, the GTX 590 is a card with two 580 GPUs at reduced clock speeds, and its 3 GB of VRAM is split between the two cores.
The 590 is very fast, but each GPU is limited to 1536 MB for Cycles.
The GTX 580 3 GB is one of the fastest single cards for Cycles; only the 780 Ti, Titan, and 780 are faster.
If you can get a GTX 580 3 GB for a good price, take it.

Cheers, mib.

thank you very much for the tips :yes:

Regards

Oh-oh! I guess that’s because the memory is shared between the two GPUs?
Are there other cards in the GTX 5xx / 6xx / 7xx families with similar usable-memory limitations?

I have ordered a GTX 760 with 4 GB and certainly hope I can use all of that memory.

Hi csimeon, all GTX x90 cards split their memory between two GPUs.
There are rumors about a GTX 790 with 6 GB; that would be the fastest card on earth, with 3 GB usable for Cycles. :slight_smile:
I have the GTX 760 4 GB too, don’t worry.
You can use all of the memory minus what the system, display, and so forth take (~300–400 MB).
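As a quick back-of-the-envelope check, the usable VRAM for a render is roughly the card's total minus that system/display overhead. A minimal sketch (the 350 MB figure is just the rough estimate from this thread, not a measured value):

```python
def usable_vram_mb(total_mb, overhead_mb=350):
    """Estimate the VRAM Cycles can actually use, in MB.

    overhead_mb is an assumed allowance for the OS, display,
    and driver (~300-400 MB per the discussion above).
    """
    return total_mb - overhead_mb

# GTX 760 4 GB: roughly what is left for the scene data
print(usable_vram_mb(4096))  # 3746
```

If the scene's textures and geometry add up to more than that, the CUDA render will fail with an out-of-memory error.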

Cheers, mib.

I can only advise against getting any 4xx/5xx GPUs at this point. They all have too little memory to be reliable (except for the 3GB 580, which is rare and likely expensive).
The 6xx/7xx (Kepler) series will support Unified Memory, so the hard memory limit may be gone sooner or later.

The 780 has very good price/performance for Cycles, so I suggest getting that one.

I have been using the GTX 580 Classified (3GB), and it’s a fast render monster; however, there’s an issue with it (with mine, at least).

It seems prone to CUDA errors, but I am not sure if it’s because of the card itself or because of Blender or Windows. I did submit a bug report to Blender.org, and it looks like it was accepted as something to be worked on.

So far this only happened when trying to render indoor scenes with Full/limited Global Illumination. I monitored the card, it didn’t overheat and it didn’t even get near its 3GB limit – CUDA just crashes with this card. And I have no idea why it struggles with this.

On the other hand, a GTX 650 SC (significantly slower) didn’t have a problem with rendering the same scene, even though it took like 8 minutes. And the GTX 780 Ti (about as fast if not faster than the 580) rendered the scene without issues as well. This leads me to believe there’s something wrong with the 580 (at least mine). But with everything else I threw at it, the 580 had it for lunch.

The 780Ti should be significantly faster than the 580. Did you adjust tile sizes to something like 256x256?
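The common rule of thumb from this era was large tiles for GPU rendering and small tiles for CPU rendering. A hypothetical helper (the exact values are the usual community advice, not an official Blender setting; in Blender itself you would set `scene.render.tile_x`/`tile_y` via the Python API or the UI):

```python
def suggested_tile_size(device):
    """Rule-of-thumb Cycles tile size (width, height) in pixels.

    GPUs of this generation render fastest with big tiles such as
    256x256, while CPUs prefer small tiles around 32x32. These are
    starting points to benchmark from, not guaranteed optima.
    """
    return (256, 256) if device == 'GPU' else (32, 32)

print(suggested_tile_size('GPU'))  # (256, 256)
print(suggested_tile_size('CPU'))  # (32, 32)
```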

This leads me to believe there’s something wrong with the 580 (at least mine). But with everything else I threw at it, the 580 had it for lunch.

The 650, 780 Ti, and the 580 are different architectures using different program binaries. So it could also be either a bug in Blender that only manifests on the 580, or it could be a bug in the CUDA driver or the compiler.

I adjusted them quite a bit in different configs. With my 580 Classified as a render-only card, it runs very fast, sometimes matching the 780 Ti (or getting very close to it).

The 650, 780 Ti, and the 580 are different architectures using different program binaries. So it could also be either a bug in Blender that only manifests on the 580, or it could be a bug in the CUDA driver or the compiler.

I know. And I believe it is a Blender bug, so I hope the bug report I sent them gets worked on soon. They recommended using a different video card for displays and leaving the 580 for rendering, but that didn’t solve it. The only benefit this provided was giving the 580 a 50% performance boost in render speeds, so it’s not all bad. Of course that becomes irrelevant if it keeps crashing in the middle of a render; it’s a game of chance for me, and only about 1 out of 5 tries will actually render a problem scene.