GTX 680 and Cycles performance… OpenCL instead of CUDA for Cycles?

Okay, at first I was really excited about the GTX 680. I was planning to upgrade to an nVidia GTX 6xx series card this summer and was actually putting a build together. Then I started seeing results like this on its GPGPU performance: http://www.tomshardware.com/reviews/geforce-gtx-680-sli-overclock-surround,3162-13.html

This looks like pretty bad news for those of us wanting a speedup in Cycles. Do you guys think we’re going to see OpenCL support get prioritized in Cycles now?

No one knows how the GTX 680 does with Cycles and CUDA yet because, for the moment, it doesn’t work at all. When it does, we’ll have our first hint. After that, I guess it comes down to drivers and what difference they might make. From what I’ve heard, probably not much difference with CUDA, but they could make a h*ll of a difference for OpenCL…

So, for now, we’ll just have to wait and see… I for one have a very hard time believing Nvidia’s latest card will be slower than their previous one; that just doesn’t make sense, not even if they want to keep the pros on Quadro… So I bet it’s slow just due to poor drivers…

While everyone’s worked up about Nvidia’s new series, I stumbled across a talk by Nvidia representatives at last year’s SIGGRAPH on GPU ray tracing; I haven’t seen it mentioned here on the forum. At some point they just smile at you and sweetly comment that, where Nvidia’s products are concerned, everything related to rendering starts with the letters Q and T, not R or any other. Likewise, when I open Nvidia’s home page and search for words like Rendering or Workstation, I’m immediately taken to the Quadro and Tesla product pages.
So I take it there won’t be rendering for the masses on the next-generation Nvidia cards we’re all so eagerly waiting for here. I understand there are plenty of people who can afford to swap in two or more $600-700 freshly painted PCBs purely for gaming (do you need that for office work? I doubt it; and is gaming even done on PCs these days?), but will that put Nvidia on top of today’s market? I tend to agree with those saying it’s this Q and T segment they’re really interested in. The OptiX they mentioned fits well there too; not so well where Cycles and rendering for the masses are concerned.
As for me, I’ll take a back seat for a while and see whether the Cycles developers end up forcing me to pay up for some particular compute capability number.
Looking at what’s listed under Cycles compatibility, my card is way outdated, but hey, it still works.
So what do you make of OptiX, Kepler and Cycles on the one hand, and Nvidia Quadro, Tesla and, again, OptiX GPU ray tracing on the other?
And what happens IF Kepler, at the hardware level or in the drivers, cripples Cycles (CUDA) and OpenGL even further? Or if CUDA gets buried behind an OptiX wall?
As the Nvidia speaker would say: no, no, no, GeForce, and yes, Kepler too, that’s not meant for rendering! How could you even think it!

If nVidia think they are going to force everyone onto Quadro for rendering, they’re wrong. I think the reality is that nVidia realise OpenCL is the way forward, so they’ll sideline CUDA as a lucrative high-end/pro/scientific thing.

As soon as Blender supports OpenCL, we’ll be able to buy ATi/AMD cards that do our Cycles rendering much more quickly than an nVidia card. It’s just a real shame that Blender was forced to set off on the wrong foot (CUDA) because of AMD’s poor OpenCL support.

What sort of time frame for OpenCL support in Blender are we looking at?

Don’t know, but here’s the problem: http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/OpenCL

And a chart like this shows how far ahead ATi/AMD cards are for OpenCL: http://www.tomshardware.com/charts/2012-vga-gpgpu/15-GPGPU-Luxmark,2971.html

Think I might just get a 560 Ti in my new PC, and wait for the day I can get a stonking AMD OpenCL card for Blender.

The Quadro lines are priced at 4x to 5x what a gaming card costs, but Intel Xeons capable of working alongside other CPUs in dual-socket boards are only 50-80% more than their consumer versions, if I remember correctly; and if not, AMD CPUs are cheap.

I know the RandomControl Arion developers are reporting that the GTX 680 is not as fast as Nvidia says…
I’m wondering: how can 1536 CUDA cores at a higher clock, with more memory bandwidth, be only a little (20%) faster than a GTX 590 at compute?
What’s wrong here, NVIDIA?! Drivers? Or… something more obscure?

Kepler:
http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-GeForce-GTX-680-2GB-Graphics-Card-Review-Kepler-Motion (GTX 680 review, including block diagrams)

vs

Fermi:
http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-Fermi-Next-Generation-GPU-Architecture-Overview

It’s an apples-to-pears comparison; there are real architectural differences. Fermi ran its CUDA cores on a separate “hot” shader clock at twice the core clock, while Kepler runs its cores at the (lower) core clock, so each Kepler core does far less work per second than a Fermi core. On top of that, Kepler’s simpler, more static scheduling hits compute workloads like Cycles harder than it hits games.
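A quick back-of-the-envelope check makes the point. Peak single-precision throughput is roughly cores × shader clock × 2 (one fused multiply-add per core per clock). Here’s a minimal sketch using the published reference specs; clocks are in GHz, and boost clocks and real-world efficiency are ignored, so treat the output as raw ceilings, not measurements:

```python
# Rough peak single-precision GFLOPS: cores x shader clock (GHz) x 2 (FMA per clock).
# Reference specs only; boost clocks and achieved efficiency are ignored.
cards = {
    "GTX 580":          (512,     1.544),  # Fermi: shader "hot clock" = 2x core clock
    "GTX 590 (2 GPUs)": (2 * 512, 1.215),  # dual Fermi, lower clocks per GPU
    "GTX 680":          (1536,    1.006),  # Kepler: shaders run at the core clock
}

for name, (cores, clock_ghz) in cards.items():
    gflops = cores * clock_ghz * 2
    print(f"{name:18} {gflops:7.0f} GFLOPS peak")
```

That works out to roughly 3090 GFLOPS for the GTX 680 against roughly 2490 for a GTX 590 with both GPUs counted: only about a 24% raw advantage, which lines up with the ~20% compute gain people are reporting. Three times the cores doesn’t mean three times the math once the hot clock is gone.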