AMD vs nvidia vs Intel viewport comparison under linux

Hello,
I recently purchased a new graphics card and wanted to share my experience comparing the new Intel, AMD and nvidia architectures.
My system is based on an Intel i7-3770, and for the past year I was using the integrated HD 4000 graphics, which worked fine for medium-sized scenes with one problem - slow selection (the GL_SELECT problem described in other threads). Recently I started to work with bigger scenes (1.5 million faces), and although I could manage the slower viewport by turning layers off, the selection became unbearable (sometimes a 5 second lag). So I decided to buy a discrete graphics card.
I first bought an AMD card, but had some issues with the driver (not with Blender, though - Luxrender UI, DraftSight with a composited desktop and some overscan problems), so I replaced it with an nvidia card. I am using a linux system (openSUSE 12.3, kernel 3.7, GNOME 3.6 desktop, Blender 2.66), so I don’t know how much of this comparison translates to the Windows world, but the proprietary drivers seem to run at the same speeds on linux and Windows in most game benchmarks.
I was surprised to see that the AMD 7750 card ($120) had the same performance in the Blender viewport as the nvidia GT650 Ti card ($180), which is 50% more expensive. My test scene is a very simple benchmark (3.8 million polygons, using a subdivision surface modifier; I just play the animation and record the minimum frames per second).


Benchmark results:
Intel HD 4000:
wireframe…20 fps
shaded…2.6 fps

AMD 7750:
wireframe…50 fps
shaded…7.7 fps

nvidia 650Ti:
wireframe…50 fps
shaded…6.7 fps


So both cards have roughly 2.5x the performance of the HD 4000. Subjectively, the two cards seemed the same when working on a real scene (nvidia maybe has a slight edge). Selection of objects seems a little faster with the nvidia card (but both are usable, unlike the Intel).
My conclusion is that the AMD Southern Islands architecture has much better price/performance in the Blender viewport than the nvidia Kepler architecture under linux. Other problems with the fglrx driver prevent me from using the AMD card; I had no problems with Blender itself.


My system:
Intel i7-3770, 16GB RAM at 1600MHz

Tested graphic cards:
Intel HD 4000: 16 execution units at 650MHz, turbo to 1150MHz; driver - Intel OSS Mesa 8.0

ASUS HD7750-T-1GD5: 512 stream processors at 900MHz; 128bit memory bus; 1GB GDDR5 at 4600MHz; $120; driver - fglrx 12.104

ASUS GTX650TI-OC-2GD5: 768 stream processors at 928MHz; 128bit memory bus; 2GB GDDR5 at 5400MHz; $180; driver - nvidia 319.17

Hope this helps,

Pol

Adding a sub-d modifier throws the whole comparison off, because that is mainly CPU-dependent - it should either be applied or turned off (see the sketch below). The identical 50 fps in wireframe also makes me suspicious of this test.
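If you want to take the modifier out of the equation without editing every object by hand, here is a minimal sketch, assuming the 2.6x Python API (show_viewport is the modifier’s viewport visibility toggle):

    import bpy

    # Turn off Subsurf modifiers in the viewport for every object, so the
    # benchmark measures raw mesh drawing instead of modifier evaluation.
    for obj in bpy.data.objects:
        for mod in obj.modifiers:
            if mod.type == 'SUBSURF':
                mod.show_viewport = False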

According to the PassMark test, the numbers are:
HD 4000: 479 points
AMD 7750: 1579 points
nvidia 650 Ti: 2682 points.

From this test the HD 4000 is much slower than either one, and the 650 Ti outperforms the 7750 by almost a factor of two.
So it looks to me as if your test of the higher end cards was CPU-limited, or driver-limited.

And did you turn off double-sided for the mesh? If not, it will have a detrimental impact on the nvidia card’s performance.
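If the scene has many objects, double-sided can be switched off for everything at once. A minimal sketch, assuming the 2.6x API (Mesh.show_double_sided; the property was removed in later versions, so verify in your build):

    import bpy

    # Disable double-sided lighting on every mesh datablock in the file.
    for mesh in bpy.data.meshes:
        mesh.show_double_sided = False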

Conclusion: we cannot conclude anything based on this test. Too many unknowns, and the sub-d modifier should be turned off.

Can you post the file you used to test?

Speaking of which, what’s the reason subsurf performs so poorly when at the bottom of the stack? Shouldn’t there be a ‘static’ switch somewhere that tells Blender it’s ok to just load it up as a VBO and forget about it? Which I understand is what happens when it’s not at the bottom of the stack. It’s annoying when I have to add a useless modifier to every object in the scene.

There is the possibility for modifiers to override the drawing mechanism, and the Subsurf modifier does that. It has its own drawing routine that predates VBO support in Blender, I believe. There are still problems with VBOs in certain cases (or so I was told; I forget which), so they aren’t enabled by default, either. If you don’t use VBOs but enable “optimal display” in the Subsurf modifier, it will be a bit faster than the default drawing routine (i.e. as if you had another modifier below it).
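In case it saves someone a trip through the UI, both switches are scriptable. A sketch assuming the 2.6x API names (use_vertex_buffer_objects for the VBO preference, show_only_control_edges for “optimal display” - please double-check in your version):

    import bpy

    # Enable VBOs globally (User Preferences > System > VBOs).
    bpy.context.user_preferences.system.use_vertex_buffer_objects = True

    # Switch every Subsurf modifier to "optimal display".
    for obj in bpy.data.objects:
        for mod in obj.modifiers:
            if mod.type == 'SUBSURF':
                mod.show_only_control_edges = True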

I see, thanks for the explanation. I guess there won’t be any improvements to it now that OpenSubdiv is on the horizon. Oh well.

Thanks to all of you for the good advice. The Display Tools add-on seems like a must now.

The benchmark I did is nothing scientific; it’s just a simple test I created two years ago when I wanted to compare two systems. I didn’t want to change the benchmark, because I have data from more systems, most of them very low end. But it seems that for these modern low-to-mid-range cards this benchmark is really useless. The file can be downloaded here, but I doubt it would be useful to anyone.

I tried to do other tests, now just with the GT650Ti, since I don’t have the AMD 7750 anymore.

SHADED:
with subdivision modifier - 6.7 fps
*optimal display was on all the time, but it makes very little difference when it’s off
*VBO turned on; no difference when turned off

with subdivision modifier applied, VBO on - 7.5 fps
with subdivision modifier applied, VBO off - 2.7 fps

with subdivision modifier applied, VBO on, double sided enabled - 7.5 fps
with subdivision modifier applied, VBO on, double sided disabled - 30 fps

WIREFRAME:
as soon as I apply the subdivision modifier, fps seems to hit 60 with this scene - which is the refresh rate of the monitor, so I think it is vsync-limited. (If I understand correctly, vsync can be disabled in the nvidia linux driver, e.g. via the __GL_SYNC_TO_VBLANK environment variable, to measure beyond 60 fps.)

My earlier subjective impression that the 7750 and 650Ti handled my work scene equally was probably right, because double sided shading was on throughout the scene. When I turn it off, the scene gets much faster on the 650Ti.
Good tip with the redraw timer, I didn’t know about that one.
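For anyone else who wants to try it, here is a minimal sketch of a redraw-timer benchmark, run from Blender’s text editor with a 3D view open (DRAW_WIN_SWAP redraws the window and swaps buffers each iteration; the timing includes a bit of Python overhead):

    import bpy
    import time

    # Redraw the window 100 times and report the average frame rate.
    iterations = 100
    start = time.time()
    bpy.ops.wm.redraw_timer(type='DRAW_WIN_SWAP', iterations=iterations)
    elapsed = time.time() - start
    print("average: %.1f fps" % (iterations / elapsed))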