Two GPUs: one for display, the other for rendering

I have a Quadro 2000 and it is very powerful for displaying objects. On the other hand, it's horrible for rendering.
I'm going to buy a new GPU, maybe a GTX 580 with 1.5 GB.
So, how can I set the Quadro 2000 to handle the display and the GTX 580 to render in Cycles?

Just put the new card in, but keep your monitors connected to the Quadro. Then pick the new GPU as the compute device in Preferences. It should "just work". (Make sure your power supply is adequate.)
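The same selection can also be made from Blender's Python console instead of the Preferences UI. A rough sketch for the 2.6x-era API discussed later in this thread (property names moved into the Cycles add-on preferences in later versions, and the exact device identifier depends on how your cards enumerate, so treat these names as assumptions):

```python
import bpy

# Use CUDA for Cycles compute (Blender 2.6x-era setting).
bpy.context.user_preferences.system.compute_device_type = 'CUDA'

# Pick the GTX rather than the Quadro; 'CUDA_1' is a guess at how
# the second card enumerates on a given system.
bpy.context.user_preferences.system.compute_device = 'CUDA_1'

# Make the current scene render on the GPU instead of the CPU.
bpy.context.scene.cycles.device = 'GPU'
```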

Note that the 1.5 GB of VRAM will limit you to simple scenes.

Thank you Zalamander, very helpful. I'll try to buy another card.

I would get a card with at least 3 GB of VRAM, but it's really up to you and what tasks you want to do. You won't be able to do serious architectural visualisation with just 1.5 GB of VRAM, for example.
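To see why VRAM fills up so fast, a back-of-the-envelope sketch helps: in Cycles, every texture plus the geometry, BVH, and render buffers must all fit on the card at once. The numbers below are illustrative assumptions, not Cycles' exact memory layout:

```python
def texture_mib(width, height, bytes_per_pixel=4):
    """Uncompressed size of one texture in MiB (RGBA, 8 bits per channel)."""
    return width * height * bytes_per_pixel / (1024 * 1024)

# One 4K RGBA texture:
one = texture_mib(4096, 4096)   # 64 MiB

# A modest archviz scene with, say, twenty such textures:
total = 20 * one                # 1280 MiB

print(f"one 4K texture: {one:.0f} MiB")
print(f"twenty of them: {total:.0f} MiB")
```

Twenty 4K textures already eat about 1.3 GB, so on a 1.5 GB card there is almost nothing left for meshes, the BVH, and the frame buffer, which is why dense vegetation and big archviz scenes blow past small cards.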

Ok Bloodwork. I've had serious memory problems and poor performance with my Quadro. I believe the best card for modeling and visualization remains the Quadro family. But for rendering… argh. And yes, some of my problems are exactly rendering memory. The Quadro 2000 has 1 GB and has given me a lot of headaches.

For viewport performance, yes, the Quadro and FirePro series are made for just that. But for rendering you'll need a lot of memory. For example, I can't render some decent shrubbery in my archviz because I lack memory, and my card has 2 GB of VRAM. Right now I'm looking to upgrade to either the GTX 780 Ti, which has 3 GB, or the GeForce Titan, which has 6 GB of VRAM.

6 gigabytes would be ideal, but it's $1000 for a single Titan.

There are also "workstation"-class rendering cards from Nvidia called Tesla. But a Titan provides almost the same speed in Blender for far less money spent; an entry-level Tesla costs more than $2000.

Thanks for the information. I've been looking at and comparing the GPUs, and you're right. Maybe I'll buy a Titan. The problem is the price: very expensive nowadays.
Will I have compatibility issues between the Titan and the Quadro?

There shouldn't be any problems since both are Nvidia cards, but I'm not 100% sure. You can always ask the Nvidia support guys.

Trying to find the thread. However, I did see a post on this site about the Titan card not yet using the full 6 GB in rendering; something about the way the cores are "dedicated" to specific functions?

Found the post. Looks like Bloodworks already knew about it in 2.68, but there was nothing else mentioned. Did you ever get an answer on a fix?

Hi Kirk_Wendel, this was a bug in 2.68; it is solved in 2.69.

Cheers, mib.

Fantastic! Thanks.