7970 opengl Blender benchmarks and misc Blender opengl findings (warning! LONG read!)

I’m a bit confused, I always thought an Nvidia card was better in Blender because of better OpenGL support at the driver level, or has that changed now?? As I’m about to upgrade, would you still recommend an Nvidia card (like a 570/580) over an AMD card? At least it’s got CUDA support for Cycles (which is a big deal), right?

PS: currently I’m using an AMD HD 3850 (512 MB) which is doing OK. It had some driver issues in Blender earlier, but most of the unbearable bugs have been fixed by now.

This is so frustrating!

  • The NVIDIA GTX 400/500 series has crippled OpenGL performance.
  • The NVIDIA Quadro series is prohibitively expensive.
  • AMD cards have buggy drivers and can’t help with Cycles.

I’m building a new rig soon-- what the fuck am I supposed to buy?

I’m a bit confused, I always thought an Nvidia card was better in Blender because of better OpenGL support at the driver level, or has that changed now??

To make a very long story very short: NVIDIA has, in an act of pure corporate evil, removed double-sided lighting from hardware support in their GeForce (consumer) line of cards in the Fermi (GTX 460+/560+) generation, causing viewport performance in solid mode to become much worse. Presumably they did so in an attempt to force all Blender users to buy their expensive Quadro (professional) cards. In order to restore performance, you must disable double-sided lighting for every object (Object Data -> Double Sided; you must be in Blender Render mode to access this setting).
In the future there will likely be a workaround for this issue built into Blender.
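Until then, the per-object toggle can be scripted instead of clicked. Here is a minimal sketch, assuming the `Mesh.show_double_sided` property of Blender’s Python API (the `bpy` module only exists inside Blender’s bundled interpreter, so the helper is written so it can also be exercised on stand-in objects):

```python
def disable_double_sided(meshes):
    """Clear the double-sided lighting flag on each mesh datablock.

    Works on anything exposing a boolean `show_double_sided`
    attribute; returns how many meshes were actually changed.
    """
    changed = 0
    for mesh in meshes:
        if getattr(mesh, "show_double_sided", False):
            mesh.show_double_sided = False
            changed += 1
    return changed


try:
    import bpy  # only available inside Blender's embedded Python
    print("meshes fixed:", disable_double_sided(bpy.data.meshes))
except ImportError:
    pass  # running outside Blender; nothing to do
```

Paste it into Blender’s Python console (or a Text Editor block and Run Script) and every mesh in the file gets the flag cleared in one go.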

As I’m about to upgrade, would you still recommend an Nvidia card (like a 570/580) over an AMD card? At least it’s got CUDA support for Cycles (which is a big deal), right?

Right now I recommend you wait until NVIDIA pushes out its new generation of GPUs, which should happen within the next two months. You can expect that generation to be significantly faster (especially with CUDA) and that GPU prices will drop significantly.
AMD might at some point provide a driver that is capable of compiling Cycles, but who knows when that’ll happen? CUDA is definitely the more stable solution for the time being.

WOW! Nice trick! This, along with disabling double-sided lighting, increases performance 10x or even more with a basic subdivided cube. 6 million polygons is now possible for me on a new Nvidia card. If only this could be fixed by default in Blender. Looks like Brecht is already onto the double-sided lighting performance issue, so that’s a good sign!

Wgan, Nvidia’s consumer line of cards used to be the best choice for OpenGL, up until the 4xx line of consumer graphics cards was introduced. Similar to your experience, the current crop of AMD/ATI OpenGL drivers seems to be doing quite well running Blender.

In a nutshell:

  • 4xx and 5xx OpenGL drivers are crippled and perform like a two-generation-old card (comparable to a 9800 GTX). Nvidia did this to “protect” their Quadro line.
  • OpenGL performance is great on a current ATI/AMD card (Windows/Mac), unless you happen to be working in textured mode, which is dismal even compared to the crippled Nvidia cards.
  • On AMD/ATI cards, due to the deprecated GL_SELECT method used in Blender, selecting objects becomes extremely slow once you hit semi-high poly counts. This, however, can be mitigated by using a build that includes Psy-Fi’s occlusion-selection patch. In that case OpenGL and selection work extremely fast, even much faster than GL_SELECT on an Nvidia card (yes, this includes Quadro cards).
  • GPU-accelerated OpenCL rendering in Cycles does NOT work yet on AMD/ATI cards. They are working on that, so hopefully at some point in the future it will.
  • You will need a reasonably fast, current Nvidia card to enjoy Cycles to the fullest at this point in time.

So:

  • To get great OpenGL performance on a lower budget: get a current AMD/ATI card and use a build with the occlusion-select patch, OR get an Nvidia GTX 285.
  • To get good Cycles performance: get a current Nvidia card.

For a reasonable price you cannot get both. Choose: either great OpenGL performance or good CUDA performance. Or use Luxrender, which does support hardware OpenCL hybrid rendering on AMD/ATI (but it is still slower than Cycles, being a completely unbiased render engine).

  • Or, if you have lots of money: get a Quadro 4000 or 6000 to sort of get the best of both. But some people here have bought a Quadro 6000, and its price did not merit the performance at all. Actually, OpenGL performance in Blender is no better than on the better AMD/ATI cards.

Some people have managed to get a combo working on their system: a good AMD card to drive OpenGL, and a good 5xx card for CUDA/GPU-accelerated rendering. But from reading up on this, I understand it can be somewhat of a hassle to get working.

The NVIDIA GTX 400/500 series has crippled OpenGL performance.

No big deal! Just disable double-sided lighting and you’re good to go.

I’m building a new rig soon-- what the fuck am I supposed to buy?

One of the new NVIDIA GPUs. I predict they will crush the 7970 :stuck_out_tongue:

So, with all that being the case, my previous post should be read with these future changes taken into account :slight_smile:

Not sure if it can be asked here: is there a mod for the Nvidia 4xx/5xx to turn it into a Quadro like in the old days? And is it legal? (As I recall it was kind of a grey zone back then, but the modded driver really worked well.)

Not sure if it can be asked here: is there a mod for the Nvidia 4xx/5xx to turn it into a Quadro like in the old days? And is it legal? (As I recall it was kind of a grey zone back then, but the modded driver really worked well.)

No, that doesn’t really work anymore. Don’t worry about the OpenGL issue, it’ll be taken care of. Until then you can work around it manually. Apart from the double-sided thing, Blender doesn’t really use any of the Quadro features.

Same here… if Blender can’t utilize 3 GB worth of VRAM properly, I don’t see why I should buy it.
The PC I’m building is mainly for Blender.

Could Blender utilize 3 GB worth of VRAM?

Buy Nvidia…

Not only can it do everything ATI does, it also has CUDA and PhysX.

CUDA is used in tons of software, from Adobe to Blender, Octane Render to 3D-Coat, to video transcoding apps.

PhysX is used in games, as well as in some 3D app viewports, like Maya.

The new Nvidia cards will be coming out in April and will be better than the current AMDs.

Don’t see why not, should come in handy for sculpt mode, certainly. :slight_smile:

And especially the textured view could benefit from the higher shader performance, which at this point is pretty much useless in Blender on ATI/AMD cards for certain scenes such as the goose and the falcon. The game engine, though, is not affected.



Cycles GPU rendering is restricted to scene size <= video RAM. That means you can only GPU-render scenes that fit into your GPU RAM, which is the reason I’ll choose the GTX 580 3 GB. It could be that this restriction will eventually go away, but according to Brecht, not any time soon…
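To get a feel for that limit, here is a back-of-the-envelope sketch (my own rough arithmetic, not Cycles’ actual memory accounting; the 20% driver/framebuffer reserve is an assumption) for checking whether a scene’s textures alone would fit on a card:

```python
def texture_bytes(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed size of one texture in bytes (e.g. 8-bit RGBA)."""
    return width * height * channels * bytes_per_channel


def fits_in_vram(scene_bytes, vram_bytes, reserve_fraction=0.2):
    """Crude check: GPU rendering needs the whole scene resident in
    video RAM, so keep some headroom for the driver and framebuffers.
    The 20% reserve is an assumption, not a figure from Cycles."""
    return scene_bytes <= vram_bytes * (1.0 - reserve_fraction)


# Thirty 4096x4096 8-bit RGBA textures = 30 * 64 MiB = 1920 MiB,
# before counting any geometry or BVH data at all.
scene = 30 * texture_bytes(4096, 4096)
print(fits_in_vram(scene, 1536 * 2**20))  # 1.5 GB card -> False
print(fits_in_vram(scene, 3072 * 2**20))  # 3 GB card   -> True
```

So even a modest texture set can push a 1.5 GB card over the edge, which is the whole argument for the 3 GB variant.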

…there are other solutions. For example, Luxrender uses a hybrid approach that speeds up rendering 2 to 4 times, depending on the scene, whereby both the CPU and GPU are utilized via OpenCL. This method offers some of the benefits of GPU rendering while still being independent of video RAM. The drawback is that it is not as fast or as optimized.

The Luxrender devs state that a pure gpu render approach could increase render power by a factor of 10.

@Rocketman, if I were to build a rig today I’d focus more on rendering than viewport performance. There are a million ways around the OpenGL issues: proxies, reducing subsurf (isn’t there an optimized-view button now?), not viewing everything in full-blown rendered mode. These things have been around for years. I think it’s cool that Psy-Fi chimed in, though; he seems to know his way around OpenGL :slight_smile:

If the game engine code works better, why can’t that code be used in the other parts of Blender? Would it prevent editing?

I’m about to buy a new rig, and with the poor OpenGL performance of the new GTX 600 (GK104) in most 3D programs (also in rendering), I was thinking of the 7990 (mostly for the 6 GB of RAM, for sculpting). But it’s sad to read that it’s of no use in renderers like Cycles, Octane and similar. I’m actually using Maya and mental ray, so at least in the viewport I know it will behave well, but what about rendering with MR? Any feedback on this?

Thanks!