Is Cycles already compatible with the GeForce 6xx series?

Hi all,

As the thread title says: is Cycles already compatible with the GeForce 6xx series? I bought a Zotac GTX 670 4GB (I'm waiting for it to arrive) and I would like to know whether Cycles can already take advantage of it (and run on it right now in Blender 2.63a), and whether the card will be used at 100% for the moment.

Thanks in advance,


Hi. 2.63a doesn't support the GTX 600 series; 2.64 does.
You have to wait until the end of June or try the daily builds.
For Linux and OS X the CUDA Toolkit 4.2 is needed; on Windows it is all included in the driver.
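To illustrate the point above, here is a small sketch (plain Python, not actual Blender code) of why 2.63a refuses Kepler cards: each Cycles release only ships precompiled CUDA kernels for certain compute capabilities, and Kepler's sm_30 kernel only arrived with 2.64. The exact kernel list for each release is my assumption here, based on this thread.

```python
# Illustrative sketch, NOT Blender source code: each Cycles release ships
# precompiled CUDA kernels for a fixed set of compute capabilities.

# Compute capability of cards mentioned in this thread.
COMPUTE_CAPABILITY = {
    "GTX 580": (2, 0),  # Fermi, sm_20
    "GTX 670": (3, 0),  # Kepler, sm_30
    "GTX 680": (3, 0),  # Kepler, sm_30
}

# Assumed kernel sets per release (sm_30 added in 2.64, per this thread).
CYCLES_KERNELS = {
    "2.63a": {(1, 3), (2, 0), (2, 1)},
    "2.64": {(1, 3), (2, 0), (2, 1), (3, 0)},
}

def cycles_supports(blender_version: str, card: str) -> bool:
    """True if that Blender release ships a CUDA kernel for the card."""
    return COMPUTE_CAPABILITY[card] in CYCLES_KERNELS[blender_version]

print(cycles_supports("2.63a", "GTX 670"))  # False: no sm_30 kernel yet
print(cycles_supports("2.64", "GTX 670"))   # True
```

So it is not the driver but the missing sm_30 kernel build that blocks the GTX 670 in 2.63a.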

Cheers, mib.

I haven't seen any real confirmation, but it seems to be common knowledge around here that the 6xx series does not provide any significant advantage over the 5xx series in Blender (though obviously your 4GB card will be much appreciated when rendering with Cycles on the GPU).

6xx-series does not provide any significant advantage over the 5xx-series in Blender

I have only read about big disadvantages, up to "completely unusable", due to the limited compute capabilities of that series; the 580 is still the best single card for this.

Who claims they are completely unusable? The Kepler GPUs are a bit slower than Fermi (in Cycles), and there might still be some bugs in the kernels compiled for these cards, but they generally do work.

Yes, I know that in some cases Kepler is a bit slower on the GPU; the cause is that CUDA 4.2 is not optimised for Kepler (same problem in Octane Render). CUDA 5 has just been released, so we need to wait for a new build compiled with CUDA 5 to get 100% out of the card.

@Storm_st: Kepler is more powerful than the Fermi GPUs, with more CUDA cores and so on, but it needs CUDA 5 to be exploited at 100%.

Thanks for your answers, I'll wait for the new build :wink:


I agree that we still have to wait for full Kepler support to see its real performance.

However, the cores of the Kepler architecture are not the same as Fermi's, so the fact that they have more CUDA cores does not necessarily mean they are more powerful for GPU rendering.
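A quick back-of-the-envelope comparison shows why counting cores is misleading. The core counts and clocks below are the public spec-sheet numbers for these two cards; this is a theoretical peak, not a Cycles benchmark.

```python
# Rough peak single-precision throughput from spec-sheet numbers:
# GTX 580 (Fermi): 512 cores at a 1544 MHz shader clock.
# GTX 680 (Kepler): 1536 cores at a 1006 MHz base clock (no hot clock).

def peak_gflops(cores: int, clock_ghz: float) -> float:
    # 2 FLOPs per core per cycle (fused multiply-add).
    return cores * 2 * clock_ghz

fermi = peak_gflops(512, 1.544)    # ~1581 GFLOPs
kepler = peak_gflops(1536, 1.006)  # ~3090 GFLOPs

print(f"GTX 580: {fermi:.0f} GFLOPs, GTX 680: {kepler:.0f} GFLOPs")
```

On paper the GTX 680 is roughly 2x the GTX 580, yet Cycles runs slower on it at the moment: each Kepler core does less work per clock than a Fermi core, and the CUDA 4.2 compiler doesn't yet schedule well for the new architecture, so the raw core count tells you little about rendering speed.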

It can be useful to wait and see how the 680 performs with CUDA Toolkit 5.0... Hyper-Q, or whatever it's called, to feed work to the GPU from multiple CPU cores at once, and that dynamic mesh adaptation thing. I guess that's better for GPU smoke than for rendering, since rendering is uniform over the result: all pixels need to be calculated, without that kind of optimization.

Interesting video. I saw a demo from the GTC 2012 keynote that showed real-time raytraced fluids. It seems to be a good unit for rendering... I understand they may have used a DirectX language to do it, but it's still a good example of raytracing computation (in the video it starts at 7min20).

Moved from “General Forums > Blender and CG Discussions” to “Support > Technical Support”

Tesla K20, maybe, but not "game cards" like the 680, where compute capability was sacrificed for gaming purposes. You cannot fix hardware limitations with any driver; the compute units on the chip are too few, or crippled.