2x GTX 680 4GB vs 2x GTX 580 4GB vs Titan 6GB for Cycles

So I'm getting into architectural visualization, and my single GTX 670 2GB is letting me down in both memory and rendering speed, so I'm considering an upgrade.

At the moment it seems that the cards mentioned in the title are the best options for comfortable 3D work. But because I'm fairly new to Blender, I would like to know which card or cards are the best option for a few years down the road.

I looked at the 2.61 benchmark thread, but it's quite confusing, as GTX 460s are placed really high, so I'm asking here.

P.S. Will I gain anything by upgrading my CPU? Right now I use an i5-2500K.

Moved from “General Forums > Blender and CG Discussions” to “Support > Technical Support”

I would recommend 2x 580s. Nvidia kind of nerfed the compute performance of their cards for the 600 series (not sure about the Titan).

If memory concerns are your problem then Titan is the only way to go. It’s also the fastest card by a decent margin in Cycles tests so far. The only issue is the initial investment.

Actually, I think I'll get the Titan, for the following reasons: I can't get the GTX 580 3GB version here anymore, and the few models that are still around cost more than a Titan, which is absurd. The GTX 580 is two years old, and I don't want to invest in fairly old tech. I also don't have enough space in my PC case for two GTX 580s, as those are 3-slot cards. And 3GB is far too little for my needs; I need at least 4GB.

Therefore I'll sell my GTX 670 and get a Titan, which performs more or less like two GTX 680s.

I would go for one GTX card and a really good i7 CPU; this is the best setup for now and for the future.

In general terms, modern drivers from both AMD and Nvidia do not perform really well with two or more GPUs; there are limits to using multiple GPUs at the same time that often make this kind of setup basically useless and just expensive.

Remember that to be able to push a GPU to its limits you need a good CPU and a really good memory controller; with an i5 you are probably limited with any high-end GTX.

There is also another problem to face: right now, GPU rendering is not a replacement for CPU rendering. With any software, you will often run into the limits of GPU rendering and be forced to switch to CPU rendering, and with this kind of hardware you will be good on both sides.

You should also consider that a good amount of RAM is expensive, and you need a considerable amount of RAM for the entire system as well.

That might be true for DirectX/OpenGL rendering, but for compute tasks performance generally scales close to 100% with the number of GPUs, as each GPU operates independently of the others.
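As a toy illustration of why that holds (a sketch with made-up numbers, not Cycles code): path-tracing samples are independent work units, so splitting a fixed sample budget across devices divides the wall-clock time by the device count.

```python
# Hypothetical numbers: why independent path tracing scales ~linearly.
# Each GPU gets an equal share of the sample budget and runs in
# parallel with no shared state, so wall time is one share's time.
def wall_time(total_samples, seconds_per_sample, n_gpus):
    per_gpu_samples = total_samples / n_gpus
    return per_gpu_samples * seconds_per_sample

one_gpu = wall_time(2000, 0.25, 1)   # 500.0 s on one GPU
two_gpus = wall_time(2000, 0.25, 2)  # 250.0 s on two: ~2x speedup
```

This only models the ideal case (no shared state, no driver overhead), which is exactly the condition being claimed for compute workloads, unlike SLI-style raster rendering.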

Any book about the x86 architecture can prove that you are wrong. Even more, the drivers are managing your “tasks”, so you have to consider the OS and all the software-related components. I haven't seen anything even worth buying yet; the performance of these so-called “high end” GPUs is horrible considering the price, and so is a setup with multiple GPUs.

Remember that CUDA is just a framework; every GPU you can buy contains the same kind of computational units. The only real thing that changes is the number of these units (CUDA cores, shader units, or whatever you want to call them).

Does a CPU bottleneck GPU computing? As I understand it, all the work is done by the video card or cards, so there shouldn't be any bottleneck when rendering with Cycles. A Titan or 2x GTX 680 would get bottlenecked by the i5 in CPU-intensive games, but gaming is not the reason I want to upgrade.

You won’t get bottlenecked for rendering with an i5 or any motherboard made in the last 6 years. All data is copied directly to the GPU prior to render, the CPU is handling next to nothing at render time. Your bigger concern should be future support. Brecht has already expressed a desire to back off on GPU development until tools and hardware can catch up to his vision. There is also heavy debate about whether or not GPUs and CPUs will both become more multipurpose in their roles in the coming years. Many developers are seeing this as a sign that they should relax on trying to create a production-ready GPU renderer as all of their work could be useless in a couple of years.

Yeah… even assuming that this is true, I have a question for you: what makes your GPU work?

I’ve heard from many a source now that the Titan is the way to go. But you might also want to consider upgrading your processor and/or RAM. You can never have too much! Also, the reason the GTX 460 is so high on the Cycles benchmark list is that it is simply one of the most commonly used cards, due to its fairly low cost and very good performance with Cycles.

No need to assume, it is true! With Cycles multi-GPU rendering, each GPU acts like a separate entity, almost like network rendering, and the samples produced are combined together.

The CPU (when using GPU rendering) is more like a manager at a store: they tell the workers (could be one, two, or more) what to do, but don't really do any work themselves.
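A minimal sketch of that "separate entities" idea (toy Python, not Cycles internals; the noise model, seeds, and pixel counts are all made up): each "device" accumulates its own sample buffer independently, and the buffers are merged with a weighted average by sample count.

```python
import random

def render_samples(seed, n_samples, n_pixels=4):
    """Pretend 'device': returns the mean of n_samples noisy estimates per pixel."""
    rng = random.Random(seed)
    buf = [0.0] * n_pixels
    for _ in range(n_samples):
        for p in range(n_pixels):
            buf[p] += 0.5 + rng.uniform(-0.1, 0.1)  # noisy radiance estimate
    return [v / n_samples for v in buf], n_samples

def combine(buffers):
    """Weighted average of (buffer, sample_count) pairs from each device."""
    total = sum(n for _, n in buffers)
    n_pixels = len(buffers[0][0])
    out = [0.0] * n_pixels
    for buf, n in buffers:
        for p in range(n_pixels):
            out[p] += buf[p] * (n / total)
    return out

# Two "GPUs" of different speeds render different sample counts,
# then their buffers are merged into one image:
gpu0 = render_samples(seed=1, n_samples=300)
gpu1 = render_samples(seed=2, n_samples=100)
image = combine([gpu0, gpu1])
```

Because each device only needs its own seed and sample count, adding a second device requires no coordination during rendering, which is why this kind of workload scales so much better than SLI-style raster rendering.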

Instead of assuming that you’re the only correct person out of a sea of educated people contradicting you, you should do some research about GPU path tracing. It is almost entirely self-contained to the GPU, and once the BVH is copied to device memory the CPU more or less sits idle aside from syncing the image buffer. The data transfer itself for GPU rendering is very small as well. You could get away with having a PCIe-x2 bus and still not notice any slowdown since all render calculations are occurring on the GPU and it’s more or less operating as its own entity at that point.
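Some back-of-envelope arithmetic on that transfer claim (all figures assumed, not measured): a one-time upload of even a large scene over a narrow PCIe link is a tiny fraction of a typical render.

```python
# Assumed figures: a 2 GiB scene (BVH + textures), a ~1 GiB/s link
# (roughly PCIe 2.0 x2), and a 10-minute render.
scene_bytes = 2 * 1024**3
link_bytes_per_s = 1 * 1024**3
render_seconds = 10 * 60

upload_seconds = scene_bytes / link_bytes_per_s   # one-time cost
overhead = upload_seconds / render_seconds
print(f"upload: {upload_seconds:.1f}s ({overhead:.2%} of render time)")
# → upload: 2.0s (0.33% of render time)
```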

Tough to decide. The GTX 580 is damn old and only goes up to 3GB, the GTX 680's GPU compute performance is gimped, and the investment needed for a Titan is quite substantial.

Who knows about the 700-series Nvidia cards? Will those be a Kepler refresh or the next architecture, Maxwell?

What are people’s opinions on having two Titans in your machine? Is it a case of having two powerhorses working together like a couple of foul beasts giving maximum render power with minimum render times or is it a case of one too many where one is doing all the work and the other is a wasted, lazy fat slob not doing much?

I ask because I have spotted a system that has two of these brazen hussies as part of a package deal over at the overclockers gaff.

Das link: Overclockers

It costs a fair few shiny limy pounds, but it’s not entirely out of my budget. The only issue that seems obvious to me is that it’s heavily advertised for game playing and not a smidgen of info for 3d rendering.

Two cards is NOT the same as SLI. SLI in fact ruins Cycles performance. If coded correctly, GPU compute will scale almost linearly with additional GPUs, unlike raster capabilities. See Octane for proof.

Buy a Titan if you absolutely need to, but in general it's better to let competition intensify, as AMD has no real say in Cycles yet. Also, there are many features not yet supported on CPU, let alone GPU. And there is a Phi card from Intel that looks sweet. Probably best if you don't jump ship just yet.

The titan is the one to get.

6GB of RAM, for a start. It is supposed to be possible, when adding a second Titan, for the memory of both cards to be used.

Its performance should improve a little more perhaps as new drivers are released.

I would, however, recommend waiting.

I ordered a Titan, then cancelled only just in time. (Lady I spoke to had to contact the dispatch warehouse to see if it had been picked).

Why? Well, I'm of the opinion that cards will appear with more than 6GB, and that 6GB will become more the norm.

Why? Well, the PS4 has 8GB of on-board GPU memory. PC graphics cards will catch up reasonably quickly, imo.

I'm a long-term ATI fan, but I've discounted AMD for my next card. If by some miracle they've fixed their issues, I may reconsider.

I also agree you should go with the Titan. The RAM is more important than the slight performance edge you might(!) get from two GPUs.

A lot of real world benchmarks can prove he is right. The scaling is close to 100% for many GPGPU tasks. You keep repeating a lot of half-truths and semi-related information, but you don’t really understand these things. We should put a warning label on you, telling people that what you say may sound right, but isn’t necessarily true (or applicable).