GTX Titan Blender Cycles Benchmark

I just found this link on sweclockers.com which shows some new data from the Titan’s performance in Blender.
http://www.sweclockers.com/recension/16541-nvidia-geforce-gtx-titan/13#pagehead


To test pure CUDA performance, Blender is used, with support for rendering images directly on graphics processors from Nvidia. Here the GeForce GTX Titan again shows its strength, topping the table ahead of the GTX 680.

(Translated from the Swedish above. I’m Danish, so I’m not 100% sure the translation is completely correct; correct me if it’s wrong.)

This is my first post ever on these forums lol.

I’d really love to see a benchmark made with Mike Pan’s test file dl.dropbox.com/u/1742071/1m/BMW1M-MikePan.blend
The numbers above wouldn’t justify the extra 450€ you’d have to spend on a Titan over a GTX 680, except for the added 2GB of VRAM.
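
For anyone who wants to run that comparison themselves, here’s a minimal sketch of timing a headless render of that file from the command line. It assumes the blender binary is on your PATH and the .blend file is in the working directory; the file name and frame number are just placeholders to adjust for your setup:

```python
import subprocess
import time

# Mike Pan's benchmark scene -- placeholder path, point it at your local copy.
BLEND_FILE = "BMW1M-MikePan.blend"

start = time.time()
# -b = run Blender in background (no UI), -f 1 = render frame 1.
subprocess.run(["blender", "-b", BLEND_FILE, "-f", "1"], check=True)
elapsed = time.time() - start

print("Render took %.1f seconds" % elapsed)
```

Running the same command on a GTX 680 and a Titan would give a much more relevant number for Cycles users than the Island scene in the review.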

The issue with the Titan seems to be that it uses double precision, which in effect gives quite a performance hit when using software (like Blender Cycles) that doesn’t support double precision. In the Island render it is 15% faster than the GTX 680, which is nice, but hardly enough to warrant the currently insane pricing of the Titan. (I’m also curious about their benchmarking, as I would never have guessed that the GTX 680 beats the GTX 580 on that scene; it never does in any test I have done.)

All in all I think it’s an interesting first look, and as a Norwegian I was able to read all the details, but I don’t find their tests entirely convincing (and the fact that they have mislabeled most of their result boxes doesn’t inspire confidence either).

If nothing else, I’m sure the launch of this card will drive up the sales of new PSUs, as an SLI config (which you should never ever use for Blender though) pulls over 730W, meaning that any PSU under 1000W will be pretty futile :wink:

Edit: I realized you can actually manually remove support for double precision in the settings, which will bump up the clock speed a little bit. I don’t know which setting they used for the rendering test though…

Any tests I’ve seen done with the Titan have shown that the drivers aren’t working correctly for compute. Invariably they’ve failed completely.

I’ll reserve judgement until I see what those show, and hopefully a forum user will then buy a titan and post their results. :slight_smile:

The MASSIVE plus is the memory though; being faster at rendering than other cards is always nice, but all graphics cards have limited memory.

If I’m not mistaken, the Titan allows for sharing memory between Titans though, so a quad setup would rock /nod :stuck_out_tongue:

Regarding double precision: no, it doesn’t run DP by default. On the contrary, you have to enable it, which downclocks the card. Still, it’s great that it has that possibility. The DP capability is 1/3 of SP, compared to 1/8 on the GTX 580 and 1/24 on the 6xx generation. Since the total number of CUDA cores is significantly higher than on either the 580 or the 680, that works out to a massive 7.6x increase over the 580 and roughly 10x the DP capability of the 680. The numbers show that even when running in DP mode, downclocked, it’s still faster than either the 680 or the 580. When running in SP mode, the Titan’s lead is 20% over the 680, which is a little disappointing considering it has 75% more CUDA cores. So it isn’t scaling linearly, even when comparing Kepler cards.
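
To make those ratios concrete, here’s a rough back-of-the-envelope calculation of where figures like 7.6x and 75% come from. The core counts and clocks below are the published specs as I understand them, and the peak numbers ignore boost clocks and the downclock the card applies in DP mode, so treat them as approximations rather than measurements:

```python
# Rough theoretical peak: cores * 2 FLOPs/cycle (FMA) * clock (GHz) = GFLOPS (single precision).
# Double precision peak is the SP peak divided by the architecture's SP:DP ratio.
cards = {
    # name:      (CUDA cores, shader clock GHz, SP:DP ratio)
    "GTX 580":   (512,  1.544, 8),   # Fermi shaders run at 2x the core clock
    "GTX 680":   (1536, 1.006, 24),
    "GTX Titan": (2688, 0.837, 3),
}

peaks = {}
for name, (cores, clock_ghz, dp_ratio) in cards.items():
    sp = cores * 2 * clock_ghz   # single precision GFLOPS
    dp = sp / dp_ratio           # double precision GFLOPS
    peaks[name] = (sp, dp)
    print("%-10s SP ~%4.0f GFLOPS, DP ~%4.0f GFLOPS" % (name, sp, dp))

print("Titan vs 580, DP: %.1fx" % (peaks["GTX Titan"][1] / peaks["GTX 580"][1]))
print("Titan vs 680, DP: %.1fx" % (peaks["GTX Titan"][1] / peaks["GTX 680"][1]))
print("Titan vs 680, CUDA cores: +%.0f%%" % ((2688 / 1536 - 1) * 100))
```

This lands at roughly 7.6x the 580’s DP throughput and 75% more CUDA cores than the 680; the ~10x figure versus the 680 presumably also folds in the clock drop when full-rate DP is enabled, which this sketch ignores.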

http://www.miikahweb.com/en/blender/svn-logs/commit/54710

This benchmark was done on a Blender version that doesn’t support the Titan’s new shader model. That support was added two days ago, and the 2.66 RC is two weeks old. Despite that, I was expecting much more. Probably an overpriced toy for snobs and nothing else. :confused:

To be clear, once again, the premium you’re paying isn’t for a huge speed increase, it’s for access to a huge pool of memory without having to spring for a Tesla card. If you can’t see why 6GB of vRAM with a powerful processor (or even more if memory sharing works in tandem with Cycles) would be easily worth a couple of grand to a small studio or freelancer, then you aren’t thinking hard enough.

I just noticed that Tom’s Hardware has a Blender benchmark across multiple nVidia GPUs:


If that URL doesn’t survive, it’s in their GTX 760 review, page 19 “CUDA Performance”


Ugh, looks like they kept their flawed benchmark results of the 780. The 780 has been independently benchmarked here, and it’s about as fast as the Titan. Tile Sizes must be properly adjusted!
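
For what it’s worth, both the compute device and the tile size are scriptable, so a fair rerun doesn’t have to rely on whatever defaults the reviewer clicked through. A minimal sketch, assuming the 2.6x-era bpy API that was current for this thread (these preference properties moved elsewhere in later Blender versions):

```python
import bpy

scene = bpy.context.scene

# Point Cycles at the CUDA device instead of the CPU (2.6x-era preference names).
bpy.context.user_preferences.system.compute_device_type = 'CUDA'
scene.cycles.device = 'GPU'

# GPUs generally want much larger tiles than CPUs; 480x270 divides a
# 1920x1080 frame evenly, but experiment with your own card.
scene.render.tile_x = 480
scene.render.tile_y = 270
```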

Love to see that - do you have the link (couldn’t find it)?


I’m commenting on the link BlendyShannon posted, right above.

Now there’s also this new Tom’s NVidia Quadro comparison from 10 days ago:

Sadly, the Blender version they use is still way out of date :[

Here are some of my GTX 570 results:

200x200:


480x270:


Once the official 2.68 gets released, I will surely suggest that Tom’s Hardware rerun the benchmark on every NVidia card they have :slight_smile: That would be golden.
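
If anyone wants to repeat that kind of tile-size comparison on their own card, here’s a rough sketch of a sweep you could run with blender -b scene.blend -P sweep.py (the tile sizes are just examples; nothing here is an official benchmark script):

```python
import time
import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'  # assumes CUDA is already selected in the user preferences

# Tile sizes to compare -- the two from the GTX 570 results above, plus a small CPU-style one.
tile_sizes = [(64, 64), (200, 200), (480, 270)]

for tx, ty in tile_sizes:
    scene.render.tile_x = tx
    scene.render.tile_y = ty
    start = time.time()
    bpy.ops.render.render()  # render the current frame without writing it to disk
    print("%dx%d tiles: %.1f s" % (tx, ty, time.time() - start))
```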

Hi all!

I now have 3 GTX Titans, and they render 4.11 times faster than my old GTX 590 (that card has 2 GPUs).
So I think it works very well, and now I also have more memory.

//W

The guy above seems a wee bit sketchy, but I do have 1 GTX Titan, and although I can’t remember the times off the top of my head, yes it is fast, but only about as fast as some other high-end GPUs, especially a 580 SLI setup, which barely loses to the Titan at half the price.
You are paying purely for the 6GB of memory.