Anyone running AMD 7970/280X/290X?

Just getting back to Blender.

Just built a new rig, and I'm reusing my older GTX 460 (256-bit, 1GB), but I also bought a new Asus 1440p monitor.

I'll be needing a newer GPU sooner rather than later.

And I've heard AMD does better with Blender.

Just wanted some feedback from AMD GPU users. And has anyone had decent luck with Nvidia's Kepler GeForce GPUs, or does Kepler have viewport performance, lag/latency issues…

KS

Wow, no love, lol!

Just for blender, you won’t need a new card. If you get one, any of the above will work fine, but similar Nvidias will, too.
I have never heard of such issues. Plus, if you consider rendering on the GPU (with Cycles), an Nvidia card is maybe better for you. GPU rendering on AMD cards is barely working at the moment, although it is getting better.
GPU rendering tests of a 290x would be interesting btw…

It’s true, without taking some precautions an Nvidia card will run Blender’s viewport noticeably slower (MUCH slower) than an AMD card.

Please read: http://www.blenderartists.org/forum/showthread.php?257323-What-everybody-needs-to-know-about-Blender-Viewport-performance&highlight=zalamander+viewport

I get great OpenGL performance with my 7970 - however, no CUDA GPU rendering. That is why my second video card is an Nvidia 590, which suffices for most of my render tasks.

Nowadays Quadros and FireGL cards seem to perform drastically better again in most 3D apps - because the OpenGL drivers for workstation cards have been optimized (or rather, the consumer-grade drivers have been "downgraded").

That said, I have not seen a real comparison of Blender's OpenGL viewport running on consumer graphics cards versus professional graphics cards. And it depends on the 3D software: for example, in Cinebench R15 I get 87 fps on my 7970 on my aging system with three (large) screens. A system with an [email protected] with a Quadro 4000 ($750 card) runs the same test at 65 fps.

Too many factors, really: it depends on CPU speed, number of cores, the software, and whether custom workstation drivers are available for the 3D software you use (for workstation cards, Max and Maya have heavily optimized OpenGL drivers), and so on.

Check these scores, and decide for yourself:


http://www.cbscores.com/index.php?sort=ogl&order=desc

Outdated, but still quite informative as to how the results can differ wildly depending on the hardware-software combo:

It’s worth noting that these aren’t real-world benchmarks but rather snippets of the respective applications sent in for the SPEC OpenGL benchmark. Both Maya and 3ds Max use Direct3D by default these days. “Professional” OpenGL is getting less and less important, and it’s really by mere coincidence that Blender uses a feature that Nvidia considers “professional”. A modern OpenGL viewport would not use any of those features at all; they’ve been deprecated since OpenGL 3.

People need to stop spreading the claim that OpenGL performance is impacted because cards have been somehow “downgraded”. The real issue is that Nvidia removed actual hardware acceleration for long-deprecated OpenGL calls that Blender just happens to still use by default. We’re long overdue for updated viewport drawing, and luckily it’s in the works.

According to the test I read here, the Nvidia Titan performs a third better in OpenGL 4 than the R290X … but that's due to crappy AMD OpenGL drivers and good ones from Nvidia. On the other hand, the R290X utterly destroys the Titan in OpenCL benchmarks (more than twice as fast!). Game-wise in Full HD, both perform similarly, with the AMD having a tiny edge sometimes. At very high resolutions (2560x1600 and up) the AMD is quite a bit faster.

This sounds attractive: if Blender's renderers (Cycles etc.) can use OpenCL, then this would mean the R290X is a blazing-fast Blender card. But I'm not sure how well Blender supports OpenCL? I'm a newbie there.

But they need to get to 20nm first … the current R290X has a temperature limit of 95°C (wth?? That can't be good) and gets very loud (approx. 5 sone) under full load. And it consumes power … 375W peak measured, 286W in Furmark (Titan: 238W in Furmark, at 2 sone).

(source: c't Magazin 24/2013, page 70 onward; German)

Hi knightsilver, the GTX 460 is still a very good card, and if you use Cycles on the GPU there is no way around Nvidia.
If your mainboard supports it, you can use an AMD card for display and the GTX 460 for Cycles GPU rendering.
If Cycles ever works with OpenCL, you can use both cards.

Cheers, mib.

You can't properly use Cycles with an AMD card yet; you are better off buying a GeForce. And as @m9105826 said, viewport speed improvements are coming.

The reviews rolling in for the 290 really make me wish AMD cards would render with Cycles. It’s a beast in nearly everything, and handily beats the Titan in most benchmarks. Compute included.

I think competition always benefits the end user; unfortunately, this is a niche that I guess is not that important to AMD. Otherwise, you know, they would have actually fixed the problem in these two years.

AMD has continuously disappointed me as a customer, be it drivers or too-hot temperatures on their hardware. Nvidia so far has been consistent, and I don't mind paying more for a product that works how I need it to work.

I'm wanting more memory than my current 1GB (GTX 460, 1GB, 256-bit); I'm now running a single 1440p monitor. The 460 does well with my viewport redraws. I don't care about GPU rendering; I'll build myself a simple server later on for renders/files.

Viewport performance under heavy load is what I'm after. I do game some: the upcoming Star Citizen (if it sees the light of day), Black Mesa/Half-Life(s), Serious Sam(s). Main apps: Blender, GIMP, Sony Vegas or Lightworks.

KS