Desktop Build

Hey guys, with the money I received for my graduation party I’ve decided to go ahead and make my PC purchase.

–Check it out here: http://pcpartpicker.com/p/7bWd

But before I do, I just wanted to get a more educated opinion on the build.

I’ve decided that I’m going to go ahead and try to get the GeForce GTX 680. It’s close to twice the price of the last card I was considering, but from what I’ve been learning about it, I think it is worth it.

I asked a while back if I should invest in the Intel i7, and after getting some advice I’ve decided to go that route as well.

Also, do you think that a 650W power supply will be enough to run the computer? I did some homework on it, but wasn’t completely sure if I did it correctly.
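
Roughly, the homework looked like this (just a quick sketch with ballpark wattages, not the exact specs of the parts in my list):

```python
# Rough power-draw homework: sum ballpark TDPs and compare to the PSU rating.
# These wattages are rough estimates, not the exact specs of the parts in the list.
parts = {
    "GTX 680 (graphics card)": 195,   # Nvidia's stated TDP for the GTX 680
    "Core i7 (CPU)":            95,   # ballpark for a desktop i7 of that era
    "Motherboard + RAM":        60,
    "Drives + fans + misc":     50,
}

psu_watts = 650
total = sum(parts.values())
headroom = psu_watts - total

print(f"Estimated draw: ~{total}W of {psu_watts}W "
      f"(~{100 * total / psu_watts:.0f}% load, {headroom}W headroom)")
```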

This is going to be my first build, so any advice for a first-time builder, like things to remember and such, would also be welcome.

Thanks for your time, and I look forward to your replies.

-Sean-

Me, I’d save the $600 from the 680 and windoze and get dual GTX 500-series cards and run Linux. But I’m a lunatic like that :wink:

Hi Inferno, the GTX 680 doesn’t work for Cycles in 2.62 or 2.63 (only with special builds from graphicall.org).
It is really a gamer’s card and is slow in CUDA and OpenCL performance.
Look for a motherboard with 2 or more PCIe x16/x8 slots so you can work with 2 GFX cards.
As (jay) mentions, go for the GTX 500 series.

Cheers, mib.

I would buy a full CUDA workstation: http://www.nvidia.com/object/tesla_wtb.html

So would you recommend something closer to this? http://pcpartpicker.com/p/7dhr

I searched for a motherboard that allowed me to have dual video cards. If I did go with a config like this I would probably only invest in one video card right now and purchase a second later.

Is 650W enough? Take a look at these workstations. If you can extend the build at any time and the motherboard gives you enough power, it’s OK.

Do you know about render farms? Or about brender?

If you want a full workstation you have to pay much more (roughly 4k/7k – 11k). The best solution you can currently get is Nvidia Tesla (CUDA), but it is very expensive =)

Yeah, I’m not wanting to invest that much yet. This is just a personal computer I want for all my graphic design needs. I’ve been using our family computer for a while, and that works fine, but with Cycles I want a better graphics card, and I’d also like the complete freedom of changing up my PC however I want.

The Tesla looks awesome, but again, I am not going to spend that much on it. Also, I found out that my power supply will be more than enough. The second build I added above will probably be the one I choose, minus one graphics card for right now.

Any last suggestions will be welcome. I’ll prolly order by the end of the week.

The best solution would be something that’s slower in Cycles than a GeForce? Not really.
Cycles uses single precision computing, which is not capped in a GeForce. A GeForce has more CUDA cores and higher clocks, thus it’s a lot faster than a Tesla.
On the other hand, the GeForce’s double precision is capped at 1/8th of the Tesla’s; however, double precision is used for complex thermodynamic equations, protein folding and such, not for Cycles.

The advantage though is that you get Teslas with 3-12GB VRAM, but I’d take a Quadro: it has the same CUDA cap as a GeForce, is slightly slower due to lower clocks, has a lot of memory and is fast in OpenGL.

If you don’t use a tool that uses Quadro drivers and don’t need more than 3GB VRAM, buy a GeForce.

Hi Inferno, 650W is barely the minimum for two GTX 560 Ti if it is a good, stable one like the OCZ.
I would like to have such a system. :slight_smile:
It is always possible to undervolt or downclock the GFX cards a little without losing much performance.
I have a GTX 260 undervolted by 10% from the manufacturer, with only a 3% performance loss for rendering.

Cheers, mib.

Right, also most good current workstations for CUDA use Quadro =)

I am not sure if you’re agreeing or being sarcastic…

I quickly did the numbers as I am full of caffeine right now :smiley:
For the non-believers: you can just calculate the performance with simple math, or look up the GFLOPS FMA SP performance:

First some basic conditions:

Normally, double precision performance is 1/2 of single precision performance.
The GeForce has its SP-to-DP ratio capped to 8:1 instead of this regular 2:1.

A GTX580 has 512 “CUDA cores” running at 1544 MHz, 3GB VRAM, and costs around 400 Euro.
A comparable Tesla C2050 has 448 “CUDA cores” running at 1150 MHz, 3GB VRAM, and costs around 1800 Euro.

So each Streaming Multiprocessor of a GF110 contains 32 SPs (shader processors, commonly called unified shaders) and 4 SFUs (special function units); for the GF114/116/118 it’s 48 SPs and 8 SFUs.
Each SP can do one SP FMA (fused multiply-add) per clock cycle, which counts as two floating-point operations, and an SFU can do up to four SF operations per clock cycle.

So the theoretical guesstimation according to Nvidia is: GFLOPS FMA (SP) = shader frequency [GHz] × shader count × 2


Now the math:

GTX580 = 1.544 × 512 × 2 = 1581.056 GFLOPS FMA
C2050 = 1.150 × 448 × 2 = 1030.400 GFLOPS FMA

The numbers nvidia specifies for the cards are:
The GTX580 has a 1581 GFLOPS FMA single precision performance.
The C2050 has a 1030 GFLOPS FMA single precision performance.
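
The same arithmetic as a quick Python sketch (using the clocks, core counts and nominal SP:DP cap ratios from above; vendor spec sheets may round a little differently):

```python
# Rough GFLOPS estimate: shader clock [GHz] * shader count * 2 FLOPs per FMA
def gflops_fma_sp(shader_clock_ghz, shader_count):
    return shader_clock_ghz * shader_count * 2

# (shader clock GHz, CUDA cores, nominal SP:DP cap)
cards = {
    "GTX 580":     (1.544, 512, 8),
    "Tesla C2050": (1.150, 448, 2),
}

for name, (clock, cores, dp_cap) in cards.items():
    sp = gflops_fma_sp(clock, cores)
    dp = sp / dp_cap  # double precision limited by the SP:DP ratio
    print(f"{name}: {sp:.1f} GFLOPS SP, ~{dp:.1f} GFLOPS DP")
```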

That single precision floating point performance is what Cycles, Octane or Luxrender’s OpenCL uses.
Not hard to see that the Tesla only performs at 2/3 of the GeForce’s speed.

It’s different with double precision though.
As is widely known and confirmed by Nvidia, the GeForce’s FMA (DP) is capped at 1/8th of its FMA (SP), while it’s the full 1/2 for Teslas.

The C2050 has a 515 GFLOPS FMA double precision performance.
The GTX580 has a 192.6 GFLOPS FMA double precision performance.

So here the Tesla is more than twice as fast as the GeForce, and on top of that it is made to work 24/7 in tight, hot places with ECC memory doing its thing.

That’s something no renderer needs - only medical, (astro)physical or chemical calculations and generally scientific applications.

So recommending a Tesla for rendering means recommending a card that costs 4.5 times as much as a GeForce but only delivers 2/3 of its performance.

Studios like ILM don’t care though; they simply buy, for instance, 10 Tesla S2050 blades, each offering 4221 GFLOPS and a total of 12GB VRAM per blade (Tesla’s unified addressing), for a total of ~150,000 Euro.
Ten blades give roughly 42,210 GFLOPS, so they’d render 24/7 with the power of almost 27 GTX580s (while having 40 C2050s in them).

So the most economic SP CUDA solution still is the GeForce.

And to conclude it, a Quadro 5000 for instance has the same 1/8th cap as the GeForce.
It has 352 “CUDA cores”, runs at 1020 MHz, also offers 3GB VRAM, and costs 1700 Euro.
Thus its SP performance is ~718 GFLOPS FMA.

So while it’s the slowest card for CUDA, even compared to a Tesla, it has the superior OpenGL hardware operations a Tesla doesn’t have.

Tesla: Slow SP, Fastest DP, no special OpenGL
Quadro: Slowest SP, slowest DP, special OpenGL
GeForce: Fastest SP, slow DP, no special OpenGL

What’s missing?
Exactly: a card that’s decently good in all disciplines, for end users who can’t shell out five-figure sums.

No, it was not sarcastic. Also, great post.
I meant that there are workstations for rendering; Nvidia also recommends Quadro (and Tesla) for workstations: http://www.nvidia.com/object/workstation-solutions.html

But no normal GeForce for gamers - there are differences, which is why they use Quadro for workstations.

http://www.nvidia.com/content/PDF/product-comparison/product-comparison-master-revised.pdf (the Quadro Plex 7000 is currently missing there)
As you can see, newer products have much more power, so it won’t take long until there are Quadro and Tesla solutions faster than the current latest GeForce.

But you are right =) You should start with a normal GeForce, as this is totally enough for rendering =)
I just wanted to mention that there are solutions mainly made for rendering =)

I’m hoping that Nvidia soon releases a card like you mention - good in all disciplines.
Let’s see what Nvidia has for us this year.

Nope.
The Quadro Plex 7000 is not missing, as it is not a graphics card but an external graphics blade containing 2 GPUs, and it costs 15,000 Euro.
And the fastest Quadro is the Q6000 with 448 cores at 1026 MHz -> 919 GFLOPS FMA SP for 3500 Euro.

I am quite surprised all the time by people with their “more powerful” talk without doing any numbers.

Most powerful card of each line:
Quadro 6000 -> 3.08 Euro per GFLOP - 4.08 GFLOP per Watt
Tesla S2050 -> 1.74 Euro per GFLOP - 4.33 GFLOP per Watt
GeForce GTX580 -> 0.25 Euro per GFLOP - 6.48 GFLOP per Watt

So the “most powerful” card is the GTX580, offering most computation power per Euro and per Watt.

The “best card”, although not the fastest, would be the GTX 560 Ti 448:
448 Cores, 1644MHz, 150Watt, 200 Euro -> 1473 GFLOPS
-> 0.13 Euro per GFLOP - 9.82 GFLOP per Watt
Huge disadvantage: only 1.25GB VRAM. If that’s enough, and you buy one of the boards supporting 8× PCIe x16, you get the best price-to-performance ratio.
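
For the two GeForces, the ratios fall straight out of those figures (a small sketch; the 244W for the GTX580 is its commonly quoted TDP and is my assumption, the rest are the numbers from this post):

```python
# (GFLOPS FMA SP, price in Euro, board power in Watt)
# The GTX 580 power figure of 244W is its commonly quoted TDP (assumed here);
# the other numbers are the ones quoted in this post.
cards = {
    "GTX 580":        (1581, 400, 244),
    "GTX 560 Ti 448": (1473, 200, 150),
}

for name, (gflops, euro, watt) in cards.items():
    print(f"{name}: {euro / gflops:.3f} Euro/GFLOP, {gflops / watt:.2f} GFLOP/Watt")
```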

Speed is not everything in my economy; it also has to be cost and power efficient.

This said, the GTX680 is the successor to the 460/560 and is really awful in CUDA.
The GK110, being the real successor to the 500 series, should be a beast again, and unless there’s major speed capping from Nvidia the GeForce will once more be the fastest for SP CUDA, especially because the new Teslas and Quadros will hit the market quite some time after the GeForce - the professional market doesn’t buy new cards every year :wink:

In other words, even if the next-gen Tesla/Quadros are faster than the current GeForce, there’ll be a new GeForce already.

You are right =) But why do they market it as if a Quadro were better for 3D workstations?

Because it is. The Quadro drivers enable various hardware optimizations on the GPU for OpenGL display.
3D-workstation != render-workstation.
You don’t do productive rendering on a 3D-Workstation. You do a preview render at most, and often enough even the preview is done on a remote machine dedicated to rendering - even Blender supports this with the Netrender addon.
You can work, press F12 for the preview and have 10 machines in the network doing the render while you continue to work on the 3D-Workstation and once the frame is rendered it’s being sent back to your 3D-Workstation.

The Quadro is for brute OpenGL power and for tools that support it via the driver, like Unigraphics NX, Catia, AutoCAD, 3ds Max or Maya. With a Q6000 you can work with millions of polygons, or with solid data shaded, without any lag; you can’t do that with a GeForce or a Tesla.

It’s obviously because Nvidia wants to milk the customers. Need OpenGL power for working? Buy a Quadro; it’s not so good for gaming or for CUDA, though. Need CUDA DP power? Buy a Tesla; great for scientific applications.
Want to play? Buy a GeForce; it sucks, though, for productive OpenGL and DP CUDA, and the memory amount is limiting for rendering.

Nvidia sees the crippling of one single GPU into GeForce, Quadro and Tesla as a service to the customer; according to them, if they left all the features enabled, their graphics card would be unaffordable.
While I completely agree that they have to cover their R&D (and Nvidia is in quite some economic trouble), I don’t agree with how they “split” their products.

We’re just unlucky that we need fast SP CUDA for rendering and fast productive OpenGL.
The best solution would be to buy a FireGL/Radeon/Quadro for the viewports OpenGL and a GeForce for CUDA rendering.

I’d pay the price for a card with fast SP CUDA and fast viewport OpenGL. But the hobbyist market seems too small, and the industry just buys hardware; they have different economic calculations.
Even as a freelancer, if you look at it: if you work 5 days a week and earn, let’s say, 1500 Euro per month, but an investment of 4000 Euro in a Quadro and a GeForce makes you render twice as fast and work 1/5 faster because the viewport stays smooth beyond 500k polys, the 4000 Euro should pay itself back quite fast.
On the other hand, a cheap 400 Euro Quadro is a lot slower in Blender than a 400 Euro GeForce, even though the GeForce has some hardware acceleration disabled that slows it down.

So you either have to invest, or go the cost- and power-efficient way. Nvidia is not offering anything in between - which is sad, especially for Blender users.

Hey everyone, one quick question to add. The motherboard I’m getting can take a max of 32GB of RAM, and Windows 7 can handle something like 192GB. But what I’m wondering is: if I were to max that out and get 32GB of RAM instead of 16, would that be too much? Because I thought I heard once that maxing out your RAM is hard on the motherboard.