Really need sound advice on graphics cards

I was thinking of getting a new graphics card for Blender Cycles, either a GT 440 or an AMD 6670. I am aware that the 6670 has much more raw power than NVIDIA's GT 440, but currently Cycles supports CUDA better than OpenCL. So I am at a loss as to which card is better for Cycles while remaining reasonable for gaming and video editing. These two cards are the ones I narrowed it down to, due to budget constraints and not having a powerful enough PSU. Any benchmark is welcome, and so is any suggestion. The problem is that my 9500 GT is not supported, and I am afraid that if I buy the AMD 6670 (OpenCL 1.1), it may be phased out of Blender Cycles support even before a good driver comes out (Cycles requires an OpenCL version higher than 1.1…)

Hi, don't play with older low-end cards for Cycles, it's a waste of your money. OpenCL < 1.3 is not supported and never will be.
The smallest card is the GTX 550 Ti 2GB for about €100, or an AMD card in the same price segment.
The GTX 550 Ti with 2GB of GDDR5 VRAM is very handy for bigger scenes and big textures, and it has OpenCL 2.1.
Don't toss your money out of the window, please. :slight_smile:

Cheers, mib.

OpenCL 1.1 onwards is supported in the current builds; it is NVIDIA CUDA below compute capability 1.3 that is not supported. The GT 440 has CUDA compute capability 2.1, if that is what you are talking about. The AMD 6670 supports OpenCL 1.1. I am asking about the performance difference between the GT 440 on CUDA 2.1 vs the AMD 6670 on OpenCL 1.1. I know that the GTX 550 is way better than the GT 440, but on some benchmarks its performance lead over the AMD 6670 is not that big. Blender has weaker support for OpenCL for now, hence CUDA cards tend to perform better. However, the AMD 6670 is more powerful than the GT 440, so I want to know whether the AMD 6670 still performs better in Blender Cycles than the GT 440, even though it receives less optimization and support for now.

For power consumption reasons, I want to stay away from the GTX 550 Ti.

OpenCL on AMD is a joke for an unusually big program like Cycles. Maybe it will get better with the latest GCN (HD 7000) cards once a fixed OpenCL driver hits the public, but for now it is very clear: only NVIDIA, GTX 550 or above, with as much RAM as possible, especially if you have big scene meshes. You just cannot render on the GPU if you hit the memory limit (it depends heavily on the scene; there is no easy way to guess it from the polygon count). Also, wait three days: the new NVIDIA GPU (Kepler) will be shown, and maybe some price cuts will follow.

I am asking about the performance difference between the GT 440 on CUDA 2.1 vs the AMD 6670 on OpenCL 1.1.

It is like 0 vs infinity; in its current state, Cycles does not work at all on any AMD card (only a shaderless, cut-down version).

@storm_st, what do you mean by not being able to use the GPU to render if I hit some memory limit?

OpenCL < 1.3 is not supported and never will be.

There is no OpenCL 1.3. The latest supported version of OpenCL on GPUs is OpenCL 1.1 and the latest specification is 1.2. You must be mixing something up.
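For anyone unsure what their own card reports: a quick way to check is to ask the OpenCL runtime itself. A minimal sketch using the standard OpenCL host API (clGetDeviceInfo with CL_DEVICE_VERSION is a real call; the little program around it is just an illustration):

```cpp
// clversion.cpp - sketch: print the OpenCL version each GPU device reports.
// Build (assumes an OpenCL SDK is installed): g++ clversion.cpp -lOpenCL
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &num_devices);
        for (cl_uint d = 0; d < num_devices; ++d) {
            char version[128] = {0};
            // CL_DEVICE_VERSION is a string like "OpenCL 1.1 CUDA" on NVIDIA cards.
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION,
                            sizeof(version), version, nullptr);
            std::printf("platform %u, device %u: %s\n", p, d, version);
        }
    }
    return 0;
}
```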

Blender has weaker support for OpenCL for now

Blender supports OpenCL; it is the AMD drivers that cannot handle Cycles. OpenCL actually works on NVIDIA hardware.

what do you mean by not being able to use the GPU to render if I hit some memory limit?

If your scene does not fit into the memory on the GPU, you cannot use the GPU to render at all. Therefore, the more RAM the merrier.

On topic: wait until NVIDIA releases the new generation of their GPUs; prices will drop. Get at least a GTX 460/560 if you want the best out of your money. Don't buy low-end, they are bad price/performance-wise.

Meaning? The more normal RAM the better, or the more RAM on the graphics card the better, to allow GPU rendering?

@Zalamanda, I mixed up OpenCL with CUDA compute capability.
@swsw, half the power consumption > half the price > double the render time.
I didn't mean to sound rude or anything, but I work with a GTX 260 and it is far too slow to really “work” with Cycles.

Cheers, mib.

For example, say your scene needs 1.5GB of RAM. You could render it with your CPU because you have enough system RAM, but you couldn't render it on a GPU that has just 1GB. You can't use “normal” RAM in addition.
If you buy a GPU for Cycles, get as much VRAM as you can.
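If you want to check in advance how much VRAM is free on an NVIDIA card, the CUDA runtime can tell you. A minimal sketch (cudaMemGetInfo is a real CUDA runtime call; the little tool around it is just an illustration, built with nvcc):

```cpp
// vram_check.cpp - minimal sketch: query free/total VRAM on an NVIDIA GPU.
// Build (assumes the CUDA toolkit is installed): nvcc vram_check.cpp -o vram_check
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    // cudaMemGetInfo reports memory on the currently selected device.
    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess) {
        std::fprintf(stderr, "No usable CUDA device found.\n");
        return 1;
    }
    std::printf("VRAM: %zu MB free of %zu MB total\n",
                free_bytes / (1024 * 1024), total_bytes / (1024 * 1024));
    return 0;
}
```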

Unless you REALLY need a new graphics card right NOW, I'd recommend holding off for now.
I too am in a similar boat, and started a thread about it myself. The responses to it, and my own research, have led me to hold off for a month or two.
New hardware on the horizon, advances with Cycles/AMD hardware… waiting seems to be the thing right now.
… and I'm ‘dealing with’ an old AMD X600 256MB card right now, so I REALLY want a new one…
:cool:

Hmm… How do you know how much RAM a render will take, though? Is it an ‘oh, cool, it worked’ kinda thing or can you actually check this in advance?

Also… even though I'm not absolutely sure about this, it seems you can tell CUDA to use ‘shared memory’, that is, system RAM, when coding, so this might actually be something Brecht & co. can use to bypass the limit… (see the sketch below)
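For what it's worth, what CUDA actually calls this is mapped pinned (“zero-copy”) host memory: the GPU reads system RAM over the PCIe bus instead of its own VRAM. A minimal sketch of the mechanism (cudaSetDeviceFlags, cudaHostAlloc and cudaHostGetDevicePointer are real CUDA calls; whether Cycles could use this without a huge slowdown is a separate question, since PCIe is far slower than VRAM):

```cpp
// zero_copy.cu - sketch: let a kernel read a buffer that lives in system RAM.
// Build (assumes the CUDA toolkit is installed): nvcc zero_copy.cu -o zero_copy
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;  // reads/writes go over PCIe, not VRAM
}

int main() {
    const int n = 1 << 20;
    cudaSetDeviceFlags(cudaDeviceMapHost);  // must be set before other CUDA calls
    float* host_ptr = nullptr;
    // Pinned host allocation that the GPU can map into its address space.
    cudaHostAlloc(&host_ptr, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) host_ptr[i] = 1.0f;

    float* dev_ptr = nullptr;
    cudaHostGetDevicePointer(&dev_ptr, host_ptr, 0);  // GPU-side alias of host_ptr
    scale<<<(n + 255) / 256, 256>>>(dev_ptr, n);
    cudaDeviceSynchronize();

    std::printf("first element: %f\n", host_ptr[0]);  // prints 2.0 if it worked
    cudaFreeHost(host_ptr);
    return 0;
}
```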

And as I have no clue about how much RAM a render takes, it's hard to speculate about what having 512MB, 1GB or 2GB of VRAM actually means; but further down the line I guess this is something that really needs to be solved for rendering volumetrics, which I know from experience is really memory-consuming… :stuck_out_tongue:

I have a GeForce 560 2GB and my monitor is 1920x1200. For Cycles,
when rendering the viewport with a pretty complex interior scene,
my VRAM usage goes up to about 1.2-1.4GB.
And this is without doing fluids or anything else… just regular things…

So… definitely get at least a 2GB card…
Also be careful when looking at VRAM alone, as some cards have 3GB etc. However,
some cards have two GPUs on one card,
so… that's 3GB divided by 2, meaning only 1.5GB available per GPU.

A GeForce 580 is a single GPU, and it comes in 3GB versions…
(though those are about $529 at Newegg; not that cost-effective
when you can get a 560 2GB for $219 with a rebate, and the speed differences in Cycles aren't that big).

You can measure your VRAM usage with a desktop gadget for Windows 7 called “GPU Meter”.

(I would also suggest waiting for the new GeForce cards. I'm happy with my 560,
but wish I had waited; I bet the new ones will be awesome.)

Look at the “GPU Meter” VRAM usage while rendering an interior scene…
(this is a Cycles viewport render at 1920x1200)
This is with a medium HDRI environment image, but not all the textures in the scene are set up yet…
so this scene will probably go up to about 1.7GB, I guess.

1.3GB
http://img220.imageshack.us/img220/971/cpugpumeter.jpg

I'm very happy with my EVGA GeForce GTX 580 Classified.

The fan does race if I push things, though I understand it's a card that can handle it. While the card is the most expensive part of my entire computer, it is SLI compliant, meaning that if I bought another one I would be able to daisy-chain them to speed up jobs even further. On the test render of another thread, this card cut a half-hour render time (CPU in Cycles) down to 45 seconds (on the GPU itself).

This is the kind of question where everyone will have their own favourites.

P.S. Well done for spotting the need for a decent PSU. Oh the stories I could tell.

Nice tip re. the GPU meter. Tnx! :smiley:

So, has anyone got a clue about coding CUDA, how the memory allocation/sharing works, and whether this is applicable to Cycles and GPU rendering? To be honest, if Cycles is going to be locked to VRAM, 95% of ‘everybody’ isn't going to be able to render scenes bigger than the standard <1000MB… I mean, even today, 2GB and 3GB cards aren't something you just go out and buy; we're talking $500-600 cards. I probably couldn't get one locally here in Gothenburg, Sweden, like tomorrow; I'd bet I'd have to order one…

The thing that Cycles needs to work on is “Building BVH”, making that multi-threaded…
If you noticed, I have an 8-core AMD CPU, and during the Build BVH process it will only use one core…
This needs to be fixed and optimized; it should be smart enough that while one core is building one BVH,
it can send the next object to the next idle core, etc. (a toy sketch of the idea follows below).

Blender knows how many cores the computer has (Blender Internal's auto-detect correctly detects 8 cores, etc.).
Why the BVH build only uses one core per asset is beyond me… (at least people can now check “Cache BVH”,
and that will save you some time if you don't change the underlying geometry).
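To make the idea concrete, here is a toy sketch (not Cycles code; Mesh, BVH and buildBVH are made-up stand-ins) of dispatching one BVH build per object onto idle cores with std::async:

```cpp
// bvh_parallel.cpp - toy sketch: build one BVH per mesh on idle cores.
// Build: g++ -std=c++11 bvh_parallel.cpp -pthread -o bvh_parallel
#include <cstdio>
#include <future>
#include <string>
#include <vector>

struct Mesh { std::string name; int faces; };
struct BVH  { std::string owner; };

// Stand-in for the expensive single-threaded build of one object's BVH.
BVH buildBVH(const Mesh& m) {
    // ... real work would partition m's triangles here ...
    return BVH{m.name};
}

int main() {
    std::vector<Mesh> scene = {{"chair", 12000}, {"table", 8000},
                               {"floor", 2}, {"teapot", 6300}};

    // Launch one async task per mesh; the runtime spreads them across cores.
    std::vector<std::future<BVH>> jobs;
    for (const Mesh& m : scene)
        jobs.push_back(std::async(std::launch::async, buildBVH, m));

    for (auto& j : jobs)
        std::printf("built BVH for %s\n", j.get().owner.c_str());
    return 0;
}
```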

Thank you guys, that cleared up a lot. I shall hold off until the 600 series arrives.

Huh…
I just did a little test, playing around rendering just a basic sphere, no modifiers applied or anything.
I did do a little modeling on it, but the object only has 284 faces.
The scene has a ground plane and 2 plane emission lamps.

With the background set to 0.000 and NO HDRI,
when I render the viewport at maximized size
it uses 1.3-1.4GB, hmmm…
So… if I make the viewport a little less than 1/4 of the size of the screen and render that,
it uses 1GB. Hmmm… weird…
A simple scene uses as much VRAM as that “complex” interior scene with lots of objects and an HDR map.

I have a GTX 465 and don't know very much tech-wise, but I will say it does consume more power. Also, compared to the CPU, this thing is lightning fast: what would take a scene 20 to 30 minutes to render, the GPU does in just a couple of seconds. I did find, however, that when doing constant rendering in the viewport, within a minute or two the GPU had already gone through 5 or 6 thousand passes, so I have to set render limits in the preview section. Another thing is the power consumption: when I got the card, I also had to switch the power supply from 300W to 650W, which is about 100W more than I need for the card. The 465 was $300 and the power supply was another $100, for those who need to upgrade. Well worth the money.

George

I would wait for the next-generation Nvidia cards to come out before considering purchasing a new GPU:

  • Because they will possibly drive AMD 7xxx series prices down
  • They might (although it's a remote possibility) have less crippled OpenGL capabilities than the Nvidia GeForce 4xx and 5xx series cards.