How to run GTX Titan in Cycles? Patch necessary?

When you try to render on the GPU there is a CUDA-related error. It reminds me of when the GTX 680 was new. What can be done about it?

The price is the same as the 690, but it has 384 fewer processors?

5.6 Tflops for the 690 versus 4.4 Tflops for the Titan. Doesn't seem like a good buy.

6 GB vs. 2 GB seems like a good buy. Anyway, let's reserve this thread for the patch so that we can finally get benchmarks.

Moved from “General Forums > Latest News” to “Support > Technical Support”

The 690 is dual-GPU, the Titan is single-GPU, which should make a huge difference with Cycles. The 600 series were all dumbed down for GPGPU, whereas the Titan was built for GPGPU.

If someone could provide a patch, we would know.

Guys, let's keep this thread on topic? :wink: It's not about whether the Titan is a good buy or not.

@anaho: I need more info.

  1. Which Blender version and which OS?
  2. Do you have the CUDA Toolkit 5.0 installed?

The Titan GPU should work with a recent SVN build (Blender 2.66 will not work!).
Also, you need the CUDA Toolkit 5.0 installed and you have to compile the sm_35 CUDA Kernel.

If you do all this and it still fails, post the exact error message.
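
If the build and Toolkit are in place, you can sanity-check the device from Blender's Python console. This is only a minimal sketch, assuming the 2.6x user_preferences API names:

    import bpy

    # Switch the compute device to CUDA and see what Blender detects
    # (user_preferences.system names are from the 2.6x Python API).
    system = bpy.context.user_preferences.system
    system.compute_device_type = 'CUDA'  # errors if the build has no CUDA kernels
    print(system.compute_device)         # should name the Titan once the sm_35 kernel loads

If the second line throws an error, the build was most likely compiled without CUDA kernels at all.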

I don't own one myself, but an internet friend does benchmarks - unfortunately he has no clue about Blender, AFAIK.
1.) Presumably Win 7 64-bit - I would be surprised if he used anything else. I gave him a link to the 2.66 64-bit build, however.
2.) Is the Toolkit automatically installed with the latest drivers? (Or does it have to be installed separately? A quick way to check is sketched below.) If it's separate, then probably no.
3.) If he has to build it himself (that is, with CMake, right?), there will be no benchmarks. He has neither the time nor the ability.
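
For what it's worth, the Toolkit ships separately from the display driver. A rough way to check for it from Python - just a sketch that looks for the Toolkit's compiler on the PATH:

    # nvcc comes with the CUDA Toolkit, not with the display driver,
    # so finding it is a decent hint that the Toolkit is installed.
    import subprocess

    try:
        print(subprocess.check_output(["nvcc", "--version"]).decode())
    except OSError:
        print("nvcc not found - the CUDA Toolkit is probably not installed")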

Turns out there is already a build on GraphicAll :wink:

Does he need the CUDA Toolkit 5.0 anyway?

Scores 1:11 min in the BMW scene at the standard tile size // GTX 680: 1:36.31
!!! Scores 0:32 min with a single tile (940x540) // GTX 680: 1:31.83 !!!
http://www.forum-3dcenter.org/vbulle…08#post9676208
Looks like we got a new champion :wink:

What about 512x512 and 512x256? Can your friend check those?
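
(In case it helps with repeating the test: the tile size can be set from the Python console. A minimal sketch, assuming the 2.6x render settings names:)

    import bpy

    # Set the Cycles tile size for the benchmark scene
    # (tile_x / tile_y as named in the 2.6x API).
    render = bpy.context.scene.render
    render.tile_x = 512
    render.tile_y = 256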

I cancelled my order, and since then the Gigabyte card has gone up slightly in price - darn it.

I would be curious to know whether his card's boost clock is 876 MHz or higher, such as http://www.scan.co.uk/products/6gb-evga-gtx-titan-superclocked-signature-28nm-pcie-30-(x16)-6008mhz-gddr5-gpu-876mhz-boost-928mhz-c

Nearly a £120.00 price difference at Scan.co.uk.

All Titan scores in the BMW scene:
120x67  = 1:11.xx
512x256 = 0:32.19
512x512 = 0:31.48
940x540 = 0:32.05

Those times rock for one card - well, they rock anyway. I wonder if improved drivers from Nvidia will help too?

Thanks, anaho! That card rulez!

I would like to know if he enabled double-precision floats to make it really work with CUDA. It is a setting in the NVIDIA driver.

It shouldn't matter, as no renderer on the planet uses double-precision floating point in its calculations, as far as I know.
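
(A rough illustration of why renderers stick to single precision - just a CPU-side NumPy sketch, not a GPU measurement; on GeForce cards the single/double gap is far larger:)

    # Rough single- vs. double-precision throughput comparison on the CPU.
    # Not a GPU benchmark; it only illustrates that 64-bit math costs more.
    import time
    import numpy as np

    def bench(dtype, n=10 ** 7, reps=10):
        a = np.random.rand(n).astype(dtype)
        b = np.random.rand(n).astype(dtype)
        start = time.time()
        for _ in range(reps):
            a * b + a  # simple multiply-add workload
        return time.time() - start

    print("float32: %.2f s" % bench(np.float32))
    print("float64: %.2f s" % bench(np.float64))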

I read a review, and the setting made a huge difference in how the CUDA cores were used. The card won't clock up as high, but it will make use of all the CUDA cores - according to the reviewers, anyway; they didn't do a Blender test. I want to know for sure whether it would make a difference for Blender; I believe it could be quite big.