Some might have noticed by now that I am on a crusade against poor OpenGL performance, trying to determine what card could be considered "best" for Blender.
This endeavour started because I wanted to find out whether you can run a Radeon and a GeForce in one system and still use CUDA/OpenCL. You'll read about it in the final report.
Short answer: yes, it works flawlessly (besides color management), and I tested it with Octane, SLG and Lux.
I am still testing OpenGL performance, and in the end I'll hand out a nice PDF report, but I hit a dead end, especially with Radeon cards and my limited knowledge of their settings, not to mention I only have one.
However, with my test scene (download below) I get these current results (an exclusive preview for my report):
GTX 470: 76fps (FW270, W7x64, [email protected], 1920x1200)
GTX 470: 51fps (FW268, W7x64, [email protected], 1920x1200)
GF 9500GT: 67fps (FW270, W7x64, [email protected], 1600x1200)
Quadro FX 1800: 42fps (FW270, W7x64, Ci7-950, 1920x1080)
HD 5850: 41fps (C10.4, W7x64, [email protected], 1920x1200)
GTS 250: 36fps (FW268, W7x64, Phenom2 X4-940, 1920x1080)
HD 5850: 35fps (C10.4, W7x64, Phenom2 X4-940, 1920x1080)
HD 5850: 22fps (C10.4, W7x32, K6-2/5000+, 1920x1200)
HD 5850: 9fps (C10.4, Ubuntu 10.10 x32, K6-2/5000+, 1920x1200)
You can all imagine my surprise that…
…the “crippled” Fermi is the fastest - and faster with the new ForceWare
…the cheapass 9500GT is incredibly fast (60 Euro, passively cooled). I didn't want to test it at first because I thought it would be substandard.
…the HD 5850 performs so poorly - which surprises me.
This first result basically makes fun of me: many know I am one of the advocates spreading the word about Fermi's bad OpenGL performance, yet so far Fermi leads. Compared with the 9500GT, though, it's a ridiculous result.
That's where I need the community's help!
First of all, the testing methodology:
You can either mail me your results to [email protected] if you don't want to post, or, obviously, post in this thread.
- Be sure to turn vsync OFF, else fps will be capped at your screen's refresh rate.
- Be sure to have VBO ON!
- Use the highest resolution possible or your screen's native resolution.
- Use the official Blender builds; if this goes on longer, use v2.57b, it's what I started with.
- Try to use WHQL/stable drivers
If you deviate from those settings because you know your card performs better with other settings, please note it in your report. If you don't know how to do one or any of the above, please don't participate.
The scene is set to cap at 120 fps, which I consider plenty for 370k triangles. If you feel there's more in your card, subdivide the sphere once more, be sure to APPLY the modifier, and run it again. Note the change in the form below as well, and make sure the screenshot shows the number of faces. In this case, since it is an icosphere, faces = triangles.
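As a rough sanity check for reported face counts: a plain icosphere starts from an icosahedron with 20 triangular faces, and each subdivide step splits every triangle into 4, so counts grow by a factor of four per level. This is a small sketch of that math only; the exact count in the test scene depends on how its sphere was subdivided, so don't treat these numbers as the scene's own.

```python
# Triangle count of a plain subdivided icosphere: the base icosahedron has
# 20 triangular faces, and each subdivision step splits every triangle
# into 4 smaller ones, so the count quadruples per level.
def icosphere_triangles(subdivisions: int) -> int:
    return 20 * 4 ** subdivisions

for level in range(8):
    print(f"level {level}: {icosphere_triangles(level)} triangles")
```

So subdividing "once more", as suggested above, quadruples the triangle count, which is why it must show up in the screenshot's face counter.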
What I need is (you can quote this):
Graphic card type:
Graphics driver version:
Number of graphic cards in system:
And a screenshot as "proof", not just the fps.
I'd prefer screenshots like this one, .jpg (100%) or .png, else the font gets hard to read like below:
Just wide enough to see the number of faces in the scene (the forum cuts the image off; right-click/view image or use the horizontal scrollbar at the bottom of the post) and high enough to see the fps, for the PDF.
I need the memory amount, number of cards and mainboard type to determine whether any issues (low memory, PCIe slowdown due to several cards, bad chipset) are responsible for individual, rather odd (fast or slow) results.
And I know it isn't exactly scientific to determine the "best card" this way (if you want, feel free to donate cards so I can test them all in one reference system), but it should be good enough to determine roughly where each card ranks and to help the community out.
I especially want to find out whether the 5850 indeed has poor OpenGL performance too, or whether it is my fault; I did try it in 3 systems, though. It also seems very dependent on the rest of the system: the weaker the system, the bigger the impact on performance, but once you see the scene you'll notice there's not much for the CPU to do.
It is a simple icosphere with an applied subdivision, double-sided on, and a camera flying around it endlessly. GLSL and Textured Solid are enabled. Sometimes it does not open in camera view; press Num0 if not. Start it with Alt+A.
And if you turn off Textured Solid, at least my GTX 470 performs much worse (-30fps).
That's why I said "not very scientific": we would have to benchmark solid view, solid view with GLSL, textured solid, textured view, wireframe… but see the "Goal" section.
Now for the cards I need/want/miss (only a few):
Nvidia Quadro 2000 (this one I need badly to compare against Fermi GeForce)
Nvidia Quadro 2000+ (any Fermi based Quadro - no Quadro FX)
Nvidia GeForce 285
Nvidia GeForce 260
Nvidia GeForce 460
Nvidia GeForce 560
Nvidia GeForce 480
Nvidia GeForce 570
Nvidia GeForce 580
Radeon HD 5850
Radeon HD 5870
Radeon HD 6850
Radeon HD 6870
Radeon HD 6950
Radeon HD 6970
Radeon HD 4850
Radeon HD 4870
If anyone wants to supply "budget" or low-end cards I'll take those too, but I don't really need them, because it's more likely people will upgrade from those cards.
The goal? Not sure. Satisfy my personal interest, find out if Fermi is really that bad, and prevent the weekly "what's the best graphics card for Blender" thread.
The long-term goal, however, if this works out: maybe create a nice .py testing scene that runs through various display modes and CPU performance tests, and build something like the old Blender benchmarking homepage, which sadly is discontinued.
I think it's the perfect method to provide the community with information on hardware upgrades, particularly for Blender, and to single out components that aren't good for it.
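The core of such a .py benchmark could be very small. Here is a hypothetical sketch of the measurement loop only, under the assumption that a draw callback (in Blender this would be a viewport redraw of one display mode; here it is a stub named dummy_draw) is called repeatedly for a fixed duration and the average fps is reported:

```python
import time

def measure_fps(draw, duration=1.0):
    """Call draw() repeatedly for roughly `duration` seconds
    and return the average frames per second achieved."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        draw()
        frames += 1
    elapsed = time.perf_counter() - start
    return frames / elapsed

def dummy_draw():
    # Stand-in for one viewport redraw; a real test scene would
    # trigger a redraw in the current display mode here.
    time.sleep(0.001)

print(f"{measure_fps(dummy_draw, duration=0.2):.1f} fps")
```

Running the same loop once per display mode (solid, solid+GLSL, textured, wireframe…) would give exactly the per-mode breakdown mentioned above.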
I tried one of my production plant prototype models (3.7 million polys) in Blender and Max on the GTX 470 to compare viewport performance:
3dsmax 2011 x64 OpenGL: 0-1 FPS
3dsmax 2011 x64 DX10: 4 FPS
Blender 2.57b x64 OpenGL: 9 FPS