Selecting which GPUs Cycles uses

I have a four-GPU system:
2x EVGA GeForce GTX 580 3GB
1x EVGA Titan
1x GT 610

I put in the GT 610 because I wanted a small card to just handle the display and do Cycles rendering on the other three cards.

I go to Blender, choose User Preferences > System, and the drop-down menu only allows me to choose ONE card or ALL cards, including the GT 610.

So I go to the NVIDIA Control Panel, open the 3D Settings, and under CUDA - GPUs I check ONLY the two 580s and the Titan. I restart and go back to Blender, but the problem is still there: it's still ALL four cards, including the GT 610, or just ONE.

How do I get Blender to ignore the GT 610 when it comes to rendering?

Hi, I think you can't.
It is on the to-do list but not top priority, I fear.
Maybe you can add a new profile just for Blender in the NVIDIA Control Panel; I don't know Windows very well.

Cheers, mib.

Yes you can do it, ARRELL!
Go to the NVIDIA Control Panel (presuming you are working on Windows).
Look for the 3D Settings. There, make a custom setting for Blender and define the CUDA GPUs to use (excluding the one used for the display).

Then back in Blender you can set up your CUDA devices to use just the Titan, the 580s, or all of them. At that point Blender will not see the 610 as a CUDA device.
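One more angle worth trying (an assumption on my part, not something I've verified with this exact driver setup): CUDA applications honour the CUDA_VISIBLE_DEVICES environment variable, so launching Blender with the GT 610's device index left out should hide it from Cycles regardless of what the Control Panel profile says. A minimal launcher sketch in Python, assuming the two 580s and the Titan are CUDA indices 0-2 and the 610 is index 3:

```python
import os

def cuda_env(visible="0,1,2"):
    """Return a copy of the environment with CUDA_VISIBLE_DEVICES set.

    CUDA applications only enumerate the device indices listed in this
    variable, so any GPU left out (here the GT 610, assumed to be
    index 3) should be invisible to Cycles.  Verify the actual index
    order on your own machine first.
    """
    env = dict(os.environ)
    env["CUDA_VISIBLE_DEVICES"] = visible
    return env

# You would then start Blender with something like:
#   subprocess.Popen(["blender"], env=cuda_env("0,1,2"))
print(cuda_env()["CUDA_VISIBLE_DEVICES"])   # prints 0,1,2
```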

OK, so that's how I thought it should work.
When I do that, here is what happens:
I go to global settings, CUDA GPUs:

So the global is set the way I want…

I go to the Blender-specific settings:

I click OK, then Apply.
The changes don't take, and the panel looks like it thinks I only want to use the GT 610.

For some reason it won't accept the change. Even if I tell it to just use the global settings, it still says GT 610 in the CUDA GPUs entry.

The workaround is to open three instances of Blender, set each one to use a different GPU, and have each render a different frame range of the animation, but that seems ugly. I was really hoping to use all three cards within one instance of Blender and have the 610 just handle the display; that's why I got the 610 in the first place.
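For what it's worth, the three-instance workaround can at least be scripted so the frame ranges don't have to be carved up by hand. A rough sketch using Blender's standard command-line flags (`-b` background, `-s`/`-e` start/end frame, `-a` render animation); the `shot.blend` filename is just a placeholder, and each instance's GPU still has to be picked in its own user preferences:

```python
def split_frames(start, end, workers):
    """Split an inclusive frame range into one contiguous chunk per worker."""
    total = end - start + 1
    base, extra = divmod(total, workers)
    chunks, frame = [], start
    for i in range(workers):
        size = base + (1 if i < extra else 0)
        chunks.append((frame, frame + size - 1))
        frame += size
    return chunks

def blender_commands(blend_file, start, end, workers=3):
    """Build one background-render command line per instance.

    Uses Blender's stock flags: -b (background), -s/-e (start/end
    frame), -a (render the animation).  -s and -e must come before -a
    on Blender's command line to take effect.
    """
    return [
        ["blender", "-b", blend_file, "-s", str(s), "-e", str(e), "-a"]
        for s, e in split_frames(start, end, workers)
    ]

for cmd in blender_commands("shot.blend", 1, 250):
    print(" ".join(cmd))
```

You'd launch each of the three printed commands in its own Blender instance, one per render card.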

On the NVIDIA Control Panel, try making the change in the global settings and not in the program settings.

See above: in the global settings only the Titan and the two 580s are checked, so that seems like the way I want it, right?

It seems there's indeed something in the NVIDIA Control Panel that doesn't let you choose which GPUs to use when you make a setting specifically for Blender, but if you make the change as a global setting it does work:

I've used a similar setup in the past with no problems: three fast cards and a slow one just for the display, with no CUDA. Right now I have an old GTX 275 and two 780s, and only the 780s do the rendering. Each card takes care of a tile, so they work in tandem, and the render times are indeed much faster than with just one card.
When you get your system working, I'd like to know how much faster the Titan renders than the 580s.

Well, you can see my global settings above: the GT 610 is not checked, but it still shows up in Blender. It's very frustrating.

The Titan is indeed faster. I just got the system back today with the Titan added.

I've only tested it on scenes that have a long build time before anything is sent to the GPU, so that skews the stats when trying to compare, but from what I have seen in my research the Titan is 1.6 to 2 times faster than a 580.
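As a purely illustrative back-of-the-envelope check (the 1.8x figure is just the midpoint of that reported 1.6-2x range, not a benchmark), the combined tile throughput of all three cards relative to a single 580 would be roughly:

```python
# Hypothetical relative throughputs, normalized to one GTX 580.
gtx580 = 1.0
titan = 1.8   # assumed midpoint of the reported 1.6x-2.0x range

# With Cycles handing a separate tile to each card, throughput
# roughly adds across the devices.
combined = 2 * gtx580 + titan
print(combined)   # prints 3.8
```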

Also, when trying to do a live render in the viewport, the 580s with 3GB would crash on complex scenes. The Titan does not crash on those same scenes; that alone is wonderful. I just hate the idea of running three instances of Blender in order to render my animations with all three cards.