Nvidia vs AMD professional GPUs

Hey! So it seems the Blender Foundation received the long-awaited Tesla cards yesterday: http://mango.blender.org/production/a-tale-of-teslas/ and they have shown how Cycles performs on them. There was also something about plans to improve Cycles at the OpenCL level.

What I understand is that currently Nvidia holds the advantage over AMD:

  1. GeForce cards - faster CUDA for Cycles but less VRAM onboard (e.g. GTX 580 with 3 GB VRAM)
  2. Quadro - better OpenGL for viewport performance in Blender. Not much else here concerning Blender.
  3. Tesla - uses optimized CUDA for brute rendering power and 2x the onboard video RAM of a normal GeForce (e.g. Tesla C2075 = 6 GB VRAM, but slower than a GTX 580)

So for AMD currently there is :

  1. Radeon HD series consumer graphics (the most powerful being the 7970 in terms of single-precision Gflops, but currently a slow and incomplete implementation for Cycles rendering)
  2. FirePro - the Quadro counterpart? Only with better support for multiple monitors, no?
  3. A Tesla counterpart from AMD? I don't know of such a product so far. Anyone know something else?

Anyway, the question is: what would you get if you had to build a custom workstation (for rendering in Cycles)? There is no AMD alternative to Tesla so far, is there?
Would OpenCL catch up to CUDA in performance in a year, maybe? Is it worth buying the 7970 compared to a GTX 580, assuming that I won't upgrade for 2 years, maybe 3?

From what I hear, OpenCL is a much more desired platform for rendering with open-source software than CUDA. Would this mean we'll see more optimization effort from the Blender side towards OpenCL once the drivers mature?

The other question is regarding Linux performance. It would seem that Linux is better at utilizing the CPU than Windows is. How about the GPU side? Which has better drivers on Linux, AMD or Nvidia?

Finally, my budget would be no more than €3000. What would be your suggestions?

This is entirely dependent on your perspective.

Do you A) wish to reward Nvidia for their technical innovation and effort towards their products?
Or do you B) want to avoid any proprietary technology in favour of an open-source alternative?

You can of course run OpenCL on Nvidia (AFAIK), so avoiding CUDA isn't by itself a reason to choose ATI over Nvidia.

The only other issue would be $$$. The question here is not the cheapest card, but the cheapest card that achieves what you want from it. Look at benchmarks and prices and compare the price per performance.
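To make the price-per-performance comparison concrete, here's a minimal sketch. The card names, prices, and scores below are hypothetical placeholders, not real quotes - substitute actual benchmark numbers (e.g. Cycles render times) and current shop prices:

```python
# Rank candidate cards by benchmark score per euro.
# All figures below are invented placeholders for illustration.
cards = {
    "card_a": {"price_eur": 512, "score": 100},
    "card_b": {"price_eur": 350, "score": 80},
}

ranked = sorted(cards.items(),
                key=lambda kv: kv[1]["score"] / kv[1]["price_eur"],
                reverse=True)
for name, c in ranked:
    print(f"{name}: {c['score'] / c['price_eur']:.3f} points per euro")
```

Note the cheaper card can easily win this ranking even while losing every raw benchmark, which is exactly the point of comparing per euro rather than per card.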

Personally I bought an Nvidia for Cuda support, but also because I’ve had bad experiences with ATI cards and ease-of-use/stability in the past, and simply can’t support poor design.

Intel + Nvidia here exclusively for my work. Although still have an AMD/ATI running our HTPC on a daily basis.

P.S. €3000 would get you a very high-powered computer. May I suggest considering the sweet spot: get two top-end (but affordable) graphics cards, and save the money you would have spent on a super-powerful one that is only 30% faster? I've not looked at Tesla benchmarks so can't be much help there.

There are motherboards capable of running 8 Nvidia cards. A couple of those filled with 3 GB GTX 580s…

Re. VRAM usage, there's just gotta be a way around that. Best case, some kind of swapping. Worst case, splitting scenes into plates or layers, never exceeding 3 GB each.

In my experience, 3D is (almost-) always about workarounds in the end… ;D

If I had the money for two Teslas, I'd rather spend it on CPU/RAM and get a general solution. Do you know how difficult it is to implement a floating-point unit with proper cos/tan/log/exp and other functions? It's still "black magic" with compromises in 2012. GPU vendors claim to support it, but who knows how well it works - remember how long the FPU bug stayed in the Pentium before it was discovered? No doubt the Tesla is very overpriced; it works well only for very special tasks like signal processing. Are you developing an AWACS unit? No? Then get a general CPU and save the money.

Personally, I'm waiting for the 2014+ hardware generation. At least AMD says it will be an almost general-purpose GPU, with proper multitasking and memory-paging support - think of it as a NUMA accelerator. Looking at the current state of the free-software Radeon drivers and OpenCL, there is a big move from OpenCL-only towards a complete universal LLVM backend that will run code from any language.

I also want to say: before buying hardware specifically for rendering, check out Amazon EC2. It's pretty easy to set up a machine and clone it into a virtual renderfarm, and with scheduled usage it's not really that expensive. You'll get a lot of rendering for the cost of an Nvidia GTX 580 or the like… :slight_smile:
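As a back-of-envelope check on the rent-vs-buy question, something like this works. Both figures are assumed placeholders, not actual 2012 prices - look up current card prices and AWS spot pricing before deciding:

```python
# How many instance-hours does the purchase price of a card buy you?
# Both numbers are illustrative assumptions, not real quotes.
gpu_price_eur = 500.0     # hypothetical price of a GTX 580 class card
spot_eur_per_hour = 0.50  # hypothetical GPU spot-instance hourly rate

hours = gpu_price_eur / spot_eur_per_hour
print(f"{hours:.0f} instance-hours for the price of the card")
# If your total render time before you'd upgrade anyway is below that,
# renting wins; above it, buying the card starts to pay off.
```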

I would personally get a Tesla if I could afford it, but I guess it is better to invest in a really good CPU or two and a GTX 580 with 2 GB of VRAM instead.

If one were to dream, I can think of a combination of a Tesla (for rendering) and a Radeon HD 7970 (for viewport performance); I believe it's possible to do that in Blender.

But it's weird that the desktop Tesla is slower than the GTX 580…

Hey I appreciate your input guys !
So the way I see it CPU rendering for unbiased solutions is very slow. Have tried rendering an interior test scene yesterday just for testing to see how much memory would get taken out of my 8 Gb of ram with a resolution of 7016x9934 ( A1 300DPI ). Peak mem = 2938M. That’s almost 3 GB. So a 3GB card would be barely cutting it without textures or many objects. Plus when I tried stopping the render with hitting ESC 3 times it kept on going for at least 1-2 minutes before stopping. This was on my Ci7. So very slow and unresponsive. Not saying that 3 GB isn’t enough I won’t be doing high res often but when I do ?
The only viable alternative would be 2 CPUs on the motherboard. But that would be very expensive and I'm not sure it's worth it.
Anyway, €1500 goes on just the mobo + RAM + SSD + case + cooling + monitor, so that leaves only €1500 for the bread and butter.
Keep the suggestions coming! It really helps!
Also, no one mentioned anything about Tesla alternatives by AMD. Are there any?
What about OpenCL in the next year? I'm thinking the Blender Foundation is more centered on freeing the solutions used by people. So isn't OpenCL a better alternative to CUDA?
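For what it's worth, a peak near 3 GB is roughly what the raw buffer math predicts. A minimal sketch, assuming one RGBA buffer at 32-bit float per channel (the renderer may keep several such buffers plus geometry and textures, so the real peak is a multiple of the single-buffer figure):

```python
# Estimate raw framebuffer memory for a high-resolution render.
# Assumption (not from Blender source): RGBA at 32-bit float/channel.
def buffer_bytes(width, height, channels=4, bytes_per_channel=4):
    """Raw size in bytes of one float framebuffer."""
    return width * height * channels * bytes_per_channel

w, h = 7016, 9934  # A1 at 300 DPI, as in the test render above
gib = buffer_bytes(w, h) / 1024**3
print(f"one RGBA float buffer: {gib:.2f} GiB")
# Two or three such buffers plus scene data lands near the
# ~2.9 GB peak reported above.
```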

When you press Esc, I believe it completes the pass it is currently doing, so chances are it was just finishing the pass and not actually being unresponsive/slow.

Nice topic (must upgrade my VGA card)

I find Blender's viewport really poor compared with other commercial software.
I don't know whether to use a Quadro/FirePro to increase my viewport performance or buy another gaming card (currently I have a GTX 460).

So what other choice is there to stop it in the middle of rendering? There is no stop button that I know of. Plus, it's only acting slow on high-resolution renders on the CPU; at lower res it's responsive.

OpenCL is probably at the bottom of Brecht’s TODO list.

Strongly disagree. It is not Brecht's fault that the AMD OpenCL compiler is of terrible quality. Man, it spent 40+ minutes and ate 14+ GB of RAM inside a strstr() expression the last time I tried it, in Dec 2011. Obviously, the too-clever AMD guy who wrote the string-parser part of the compiler never expected an OpenCL program longer than 4 KB, and used strstr() directly to parse program terms. It is so unprofessional I cannot remember a worse example. The NVIDIA compiler just works, same as the year before.
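The complaint above is essentially about quadratic scanning: calling strstr() once per term restarts the search from the beginning of the source every time. A toy illustration in Python (str.find plays the role of strstr; the "program" and "terms" here are invented for illustration, not AMD's actual code):

```python
def find_terms_naive(source, terms):
    """For each term, scan the source from the start with str.find
    (the Python analogue of C's strstr). Total work grows as
    O(len(terms) * len(source)) - harmless for a 4 KB kernel, but on
    a long program with many terms the cost explodes."""
    return {t: source.find(t) for t in terms}

# Toy example: a repeated "program" and a few "terms" (all invented).
src = "kernel void add(global float* a) { a[0] += 1.0f; }" * 100
result = find_terms_naive(src, ["kernel", "global", "float"])
print(result)
```

A real compiler frontend avoids this by tokenizing the source in a single pass instead of re-searching it per term.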

I’d wait for the next GeForce generation based on the GK110. People complain about the 680 simply because Nvidia mislabeled it, in my opinion. It’s just an overhauled GTX 560 Ti: the GK104, successor to the GF114. The card should have been named GeForce 660, IMO.

The GK104 offers not-so-amazing speed, but already comes in 4 GB versions. So the GK110, with decent single-precision performance and the possibility of 4 GB+, might be a nice card for GPGPU.

And I really don’t know what the problem is. If you need a fast card for GPGPU raytracing, you buy one or more fast GeForces, and the next year you buy new ones. If you only render 100 frames per year it’s obviously a waste of money - then again, you don’t need it, you just want it, and you can also render with the CPU. And if you actually work commercially with the cards, they pay for themselves over and over; otherwise you’re doing something wrong.

If you intend to keep the card longer, I’d get a Quadro with the same number of cores as a GeForce. That way you get the good SP GPGPU performance of a GeForce, a decent amount of memory, and fast OpenGL.

If you intend to upgrade sooner, get one or more GTX 580 3G cards for raytracing and OpenGL.

If you need faster OpenGL, get a cheap FireGL/Radeon for the viewport and a GTX 580 3G for raytracing. I clearly do not recommend a cheap Quadro for OpenGL; compared to a cheap Quadro, a GeForce is still faster in OpenGL.

Personally I’d love to see a GeForce/Quadro hybrid.
I’d be willing to pay some bucks extra for a card that’s fast in DX, OpenGL, and SP GPGPU, with a decent amount of memory.

Thank you for the input guys !

So the general idea for the moment would be:

  1. GTX 580 with 3 GB for fastest Cycles rendering - I was looking at the Gainward Phantom 3 GB model; that's €512 here. It sucks that the prices haven't dropped yet with the GTX 680. I'll wait and see if they do!

  2. A powerful CPU for general-purpose rendering in case scenes don't fit in memory. What recommendations do you have here? Would a Bulldozer be fast in Linux? It's cheaper than Intel. I would also overclock it.

  3. An ATI FirePro for OpenGL performance? Are they better than Quadro price/performance-wise? What models would you recommend?

  4. 16 GB of RAM would be more than enough, I reckon.

  5. For the mobo, would the SR-2 be overkill? I hear it's great for overclocking. Is it also good for future upgrades?

  6. So the case needs to be large. What do you guys think about the Corsair CC800DW? Coupled with a minimum 1000 W PSU.

  7. In terms of cooling, what do you think? Water or air?

If you have lots of money to spend… :evilgrin:

Let's see what we can do…


ASUS Z9PE-D8 WS Dual LGA 2011
Intel C602 SATA 6Gb/s USB 3.0 SSI EEB Intel Motherboard
Dual 2011 socket, 4x PCI-E x16 Gen3, DDR3 2133
Now: $599.99


CPUs ×2

Intel Xeon E5-2687W Sandy Bridge-EP 3.1GHz
20MB L3 Cache LGA 2011 150W
8-Core Server Processor BX80621E52687W

New Intel Xeon Processor

$1,899.99 (Each)


a couple of these…



A couple of SSDs
A large PSU… and a case and 32 GB of RAM.

I think we could do it all for under $10,000 :stuck_out_tongue_winking_eye:

Here is a review of the new Sandy Bridge Xeons…
They have Blender benchmarks as well.


If we’re doing it like that :smiley: let’s throw in this http://www.caselabs.net/ and we could do much better :stuck_out_tongue:

Seriously now, the budget is under €3000! No more playing! :smiley:

P.S. Those Xeons look tasty! :smiley:

Or you could go this route and fill it with 4× 16-core Interlagos CPUs - 64 cores :stuck_out_tongue:


Bulldozer sucks at floating point. Get a Phenom II X6 or an i5 if you're on a budget. You should absolutely get an SSD. I don't think you need a pro-range GPU for the OpenGL viewport if you're just using Blender; there's not much benefit in that. If you are worried about "double-sided" performance on NV, get a secondary midrange AMD card for display and you'll be fine. Don't buy expensive RAM; it doesn't pay off. Air cooling is fine, but don't use the stock CPU cooler (at least for AMD - those really suck; Intel coolers are somewhat acceptable); get a decent cooler for $20-30.

If you intend to keep the card longer, I'd get a Quadro with the same number of cores as a GeForce. That way you get the good SP GPGPU performance of a GeForce, a decent amount of memory, and fast OpenGL.

The Quadro equivalent of the GTX 580 costs over €3000 by itself. For that you could get something like ten Phenom X6 render slaves, or over 6000 hours of dual-Tesla spot instances on EC2.

Bulldozer sucks at floating point

Wasn’t this only a Windows 7 issue? I mean, Win8 and newer Linux kernels would address this, no? I can’t remember where, but there were some benchmarks done on Linux with Bulldozer, and it seems it does much better there than on Win7.

One of the main issues many don't know about, and Nvidia doesn't like to talk about, is that GTX/home-user cards are not designed to run at full load over long periods of time. Try to use them 24x7 at 100% load, and within a few months they just blow up.

Quadro/Tesla cards are picked from the best-quality parts of the silicon wafer after photolithography, and running at lower clocks and temperatures they can work 24x7 for years without a break.