Nvidia vs. ATI - The one millionth thread.

Before I start there are two things I want to mention.
-I understand there are probably a million Nvidia vs ATI topics on the forum, but I want an up-to-date opinion.
-No flame war intended.

I have always been a supporter of Nvidia and a hater of ATI, and I understand ATI used to have lousy compatibility with Blender. But recently I have been beginning to sway towards ATI, purely for the value for money you get. I know Nvidia has CUDA and ATI has OpenCL, which is less widely used, but I think ATI has come up in the world a bit since I last looked.

I may be wrong - please correct me.
TS

Depends. If you are fine with CPU-based rendering only, I would go with AMD cards (AMD bought ATI, by the way); they have better viewport performance than NVIDIA cards in Blender. However, GPU rendering in Cycles at the moment only works on NVIDIA cards. Supposedly AMD is trying to get it working, but nothing so far; it is a driver problem, not a programming problem.
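
For reference, this is roughly how GPU rendering gets switched on for Cycles from the Python console or text editor - a minimal sketch assuming the 2.6x-era preference names (`compute_device_type` in particular), so check them against your own build:

```python
import bpy

# Assumption: Blender 2.6x-style user preferences; the property names
# below may differ in other versions, so verify them in the Python console.
prefs = bpy.context.user_preferences.system
prefs.compute_device_type = 'CUDA'   # only selectable when an NVIDIA card + CUDA driver is detected

# Point the current scene at Cycles and tell it to render on the GPU.
bpy.context.scene.render.engine = 'CYCLES'
bpy.context.scene.cycles.device = 'GPU'

print("Compute device type:", prefs.compute_device_type)
```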

Nvidia supports OpenCL as well; as the name suggests, it’s an open programming language, unlike CUDA. Cycles works fine under OpenCL on Nvidia cards.

wanna render in Blender with GPU today? -> NVIDIA
wanna do more OpenCL compositing, cheaper and faster? -> ATi

Radeon has better compute performance; too bad the red camp doesn’t spend as much dough on the programming scene, nice API docs and good support. Nvidia does, hence there are more papers on CUDA computing…

I’ve always been in the Nvidia corner, and Apple thinks ATi is so good they put it in all their computers, so… I’m saving for a PC/Linux box with at least two Nvidia cards, preferably older GTX 580s. Those do work with Cycles today.

So my tip is to try to find a second-hand 580; it will probably be cheaper than, or cost about the same as, a top-of-the-line brand-new ATi card.

By today’s standards, ATI generally has more solid, well-built, fast GPUs. They fall short in driver support and software-based perks.

Nvidia has so-so build quality but excels in both driver support and software development. There are currently a lot of perks to going with Nvidia, including but not limited to 3D Vision, PhysX and CUDA.

I own and have continued to own both. Depending on what you use the card for, one is more suitable than the other. Though I would argue that Nvidia cards will fit every need, whereas ATI would be fine for basic computer work and gaming (if you do not care about PhysX).

My personal favorite right now is Nvidia cards made by EVGA with lifetime warranty support (depending on the model number you buy). While some of the cheaper Taiwanese and Chinese companies can often push out higher-performing Nvidia cards, they are also more prone to breaking, and getting support from them is even harder. EVGA is perhaps the most solid American company building Nvidia GPUs, and the lifetime warranty pretty much says the rest.

For pure OpenGL performance in the viewport in Blender, ATI works best. With Nvidia cards you will have to turn off double-sided lighting for each mesh object; otherwise performance is on par with an Nvidia card three or four generations older, even if you install a high-end consumer Nvidia card. And some users report hardly any improvement at all, even with this turned off for their mesh objects.
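
If you have a lot of objects, flipping that setting mesh by mesh gets tedious fast. A small sketch like this, run from Blender’s text editor, should do them all in one go - `show_double_sided` is the property name I know from the 2.6x API, so treat it as an assumption and check the tooltip in your own build:

```python
import bpy

# Turn off double-sided lighting on every mesh datablock in the file.
# 'show_double_sided' is the 2.6x-era RNA name of the "Double Sided"
# checkbox under Object Data > Normals (an assumption - verify in your build).
for mesh in bpy.data.meshes:
    mesh.show_double_sided = False

print("Disabled double-sided lighting on %d meshes" % len(bpy.data.meshes))
```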

But as per usual, nothing is clear-cut: with complex scenes, the selection lag in Blender’s viewport becomes absolutely dismal with ATI; with a 1-2 million poly scene, a lag of up to a minute or more may occur. Unless a Blender build with the occlusion patch is used (at least on Windows - no idea if this is also the case on Linux. Does anyone know?) - in that case it is very fast and immediate, and even on Nvidia cards selection speed improves quite a bit.

If CUDA support is required, Nvidia is the way to go. But OpenCL is (potentially) faster on ATI/AMD - and it actually is in certain applications. But again, it depends. For example, all the 3D applications I have tested on my 7970 work really well with OpenGL; Softimage’s viewport is unbelievably fast on it. The exception is Houdini: I’ve experienced crashes and abysmal OpenGL performance (3-4 fps on a 2-million-poly object). They’re working on it, though.
But no GPU-accelerated Cycles for me.

At this point I would dare to state that there is NO ideal graphics card option that works optimally for all applications, at least not at affordable prices.

Your best bet is to either:

  • base your purchase on the applications you use;
  • or buy two mid- to high-end graphics cards: one ATI/AMD for great OpenGL performance in Blender and other apps, one consumer NVIDIA card for CUDA support. Install both in the same machine.

Some people report that they can’t tell the difference. I don’t think anyone has conducted the necessary tests to ensure that OpenGL performance really is better on AMD GPUs once the double-sided issue is accounted for.

> But as per usual, nothing is clear-cut: with complex scenes, the selection lag in Blender’s viewport becomes absolutely dismal with ATI; with a 1-2 million poly scene, a lag of up to a minute or more may occur.

Are you sure this issue is worse on AMD cards? If so, I don’t understand why you would recommend an AMD card for the viewport. As it stands, for Blender I’d still recommend an NVIDIA GPU; you just have to be aware of the double-sided issue.

This would settle a lot of confusion. I’ve searched high and low for information online, but none of the comparisons I found mention Blender.

Even something as simple as recording your average frame rate when pressing alt-A on the same high-poly model would be really helpful.
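
As a rough starting point, something along these lines (run from Blender’s text editor with a 3D View open) gives a number you can compare between cards. It times forced viewport redraws via the built-in redraw_timer operator rather than actual alt-A playback, which I’m assuming is close enough for a card-to-card comparison:

```python
import bpy
import time

# Rough viewport benchmark: force a number of full window redraws
# and report the average as an approximate frames-per-second figure.
ITERATIONS = 100

start = time.time()
bpy.ops.wm.redraw_timer(type='DRAW_WIN_SWAP', iterations=ITERATIONS)
elapsed = time.time() - start

print("%d redraws in %.2f s -> %.1f fps average"
      % (ITERATIONS, elapsed, ITERATIONS / elapsed))
```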

This article seems to benchmark performance in that way…

Besides, I don’t need to see reliable numbers. I just want to see numbers. That would at least be better than the scattered anecdotes we have on this forum right now…

NVIDIA vs ATI is, I think, on the one hand a philosophical question.

However, you can also approach it from a practical standpoint. The fact is that OpenCL rendering will not be as fast as CUDA in the near future. Thus there is no point in going with ATI if you are eyeing OpenCL instead of CUDA and expect the same speed. Sadly, NVIDIA is currently the only answer here.

And besides, what’s the problem with spending $200 on an NVIDIA card that lets you use GPU rendering with Cycles right now? In two or three years, when OpenCL might be better, you can replace that outdated card with whatever is usable at that point in time.

Personally, on my otherwise crappy GeForce 405, turning on VBOs and turning off outlines and double-sided shading has probably doubled my viewport speed - quite a revelation :slight_smile:
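
For anyone who wants to flip the same switches from a script, this is roughly how I’d do it - `use_vertex_buffer_objects` is the 2.6x-era preference name as far as I know, so treat it as an assumption; the “Outline Selected” toggle lives under User Preferences > Interface, and double-sided is per mesh (see the snippet further up):

```python
import bpy

# Assumption: 2.6x-era preference name; if the attribute is missing in
# your build, look it up in the System tab of the User Preferences.
bpy.context.user_preferences.system.use_vertex_buffer_objects = True

# Double-sided shading is a per-mesh setting (see the earlier snippet);
# "Outline Selected" is toggled in User Preferences > Interface.
print("VBOs enabled:",
      bpy.context.user_preferences.system.use_vertex_buffer_objects)
```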

One thing that hasn’t been mentioned is that other software may start adopting OpenCL - Photoshop, for example.

Interesting read about OpenCL and OpenGL, and some benchmarks