Nvidia GeForce GTX680 released...

Yeah, the GTX680 cards are out. My local place here has them in stock. :smiley:

Re. the speculation around the GTX680’s being cheaper than earlier cards, well - NO. We’re talking $700-750 for a 2GB card here in Sweden… Not sure I’m willing to buy one at that price, gotta check my books first, hehe… :stuck_out_tongue:

First benchmarks with OpenCL and Luxmark are really bad… the GTX 680 is sometimes even slower than an i7 2600K…
Source: http://www.computerbase.de/artikel/grafikkarten/2012/test-nvidia-geforce-gtx-680/20/#abschnitt_gpucomputing (German article, but the charts are self-explanatory)

MSRP is $500 in the US, a full $100 less than the AMD card it competes with. The benchmarks I’ve seen put it above the 7970 Black Edition card. It is also apparently a CUDA beast… As far as I can tell those OpenCL tests haven’t been substantiated anywhere else yet, but buying an nVidia card, I feel like OpenCL is low on the list of demands anyway. I’ll be picking one up today if I can find one.

First benchmarks with OpenCL and Luxmark are really bad… the GTX 680 is sometimes even slower than an i7 2600K…

However, that page also shows that it can compete fairly well with the 7970 in DirectCompute. Sometimes it’s better, sometimes slower - which is to be expected, since the 7970 still has the higher core count in general - but it is only faster in the simpler benchmarks.
Hoping to find some CUDA benchmarks now…

I’m a bit disappointed by it; reviews say that “the 680 gets absolutely decimated in 64-bit floating-point operations, as Nvidia purposely protects its profitable professional graphics business by artificially capping performance…” during the Luxmark benchmark. Source: tomshw dot com

What does it mean? No performance boost compared to the 500 series, regarding GPU computing? If so, why put in 1500+ shader units?

Any clarification is welcome…

Seems like the 600 series was a disappointment. I was only waiting for it to come out so the prices would drop on the previous cards anyway, so I’m happy either way :stuck_out_tongue:

How can you conclude that it’s a disappointment? The only place it’s lacking (more than likely due to drivers) is OpenCL, which is a non-issue when nearly all GPGPU programs use CUDA anyway.

Luxrender uses OpenCL, VRAY uses OpenCL… :wink:

I would love to hear people’s render times using it with Cycles later on :slight_smile: I think it is safe to conclude that the 680 will work great with games. But who knows what they have done to the hardware in order to increase game performance? I am afraid the CUDA performance and such might not be what we have hoped for. At least some GPGPU benchmarks I have seen have not been promising.

But I won’t buy anything soon anyway. I will wait for the 4GB version; 2GB feels too little for a new graphics card, to be honest. Especially if used for rendering, and I don’t want to upgrade again for a while :stuck_out_tongue:

Speaking for myself, the disappointment was the pure gaming focus and the performance capped to protect the Quadro series. BUT, not being an expert at this, I was precisely asking for clarification about the chance of not getting higher performance in GPU rendering…

I’m only trying to understand whether this series of GPUs will boost performance (a lot) in renderers like Cycles/Octane/Arion…

Where exactly does it say that? That is complete nonsense; no renderer that is supposed to perform fast would use DP, and SLG/Luxmark certainly doesn’t.

What does it mean? No performance boost compared to the 500 series, regarding GPU computing? If so, why put in 1500+ shader units?
I think the other (DirectCompute) results show quite clearly that there is a significant improvement in most cases. I guess it’s safe to say that the OpenCL driver “sucks for now”, but it will likely be improved. Let’s wait for CUDA benchmarks of the relevant renderers before jumping to conclusions.

EDIT: OK, I read the article. It does say what you quoted, but it refers to the SiSoft Sandra benchmarks on the previous page. It just so happens that the Luxmark scores are immediately followed by that statement.

Regardless of its performance in OpenCL (haven’t seen CUDA tests), which is lackluster at best, the 2GB of RAM is a severe limiting factor for its usefulness as a rendering card.

Best wait until they bring out a 4GB card, or go with an AMD 3GB card.

Exactly, capped DP ops don’t mean much for people concerned with rendering.
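For anyone wondering what that “capped DP” figure actually measures, here’s a rough CUDA probe along the lines of what I’d expect those synthetic tests to do: the same dependent FMA loop timed once in float and once in double. The kernel and the block/iteration counts are my own invention, not from any review; the point is just that a path tracer like SLG or Cycles runs almost entirely in single precision, so a lopsided double result says little about render speed.

```
// Toy throughput probe: same dependent FMA chain in float and in double.
// Purely illustrative; block/thread counts and iteration counts are arbitrary.
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void fma_loop(T *out, int iters)
{
    T a = (T)1.0001, b = (T)0.9999, c = (T)0.5;
    for (int i = 0; i < iters; ++i)
        c = a * c + b;                      // dependent chain, limited by ALU rate
    out[blockIdx.x * blockDim.x + threadIdx.x] = c;
}

template <typename T>
float time_ms(T *buf, int iters)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    fma_loop<T><<<256, 256>>>(buf, iters);  // 65536 threads, one output each
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main()
{
    const int n = 256 * 256, iters = 1 << 16;
    float  *f; cudaMalloc((void **)&f, n * sizeof(float));
    double *d; cudaMalloc((void **)&d, n * sizeof(double));
    printf("float : %.2f ms\n", time_ms(f, iters));
    printf("double: %.2f ms\n", time_ms(d, iters));
    cudaFree(f);
    cudaFree(d);
    return 0;
}
```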

I’m not sure I understand this line of thinking. A couple more gigs of RAM aren’t going to suddenly make production rendering possible on GPUs, especially when you have scenes with close to 100GB of texture data alone. I have a feeling that we are still a few years out from seeing GPU rendering be useful in a production sense for the 3D field, unless someone can come up with an efficient algorithm to cache and unload memory from a card without the horrendous bottlenecks involved with current techniques.
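To make the bottleneck concrete, here’s a rough sketch of the usual “cache and unload” approach: texture tiles double-buffered through a small device-side pool, with cudaMemcpyAsync uploading the next tile while the current one is being shaded. Everything in it (the 32 MB tile size, the dummy shading kernel, the two-buffer pool) is invented for illustration; the host-to-device copies it tries to hide are exactly the bandwidth problem I mean.

```
// Illustrative out-of-core streaming skeleton, not a renderer: ping-pong two
// device buffers and two streams so PCIe uploads overlap with "shading" work.
#include <cstdio>
#include <cuda_runtime.h>

const size_t TILE_BYTES = 32u << 20;    // 32 MB per tile (made-up figure)
const int    NUM_TILES  = 8;            // pretend the full texture set is 256 MB

__global__ void shade_with_tile(const unsigned char *tile, size_t n, float *accum)
{
    // grid-stride loop so the launch size stays small regardless of tile size
    for (size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x; i < n;
         i += (size_t)gridDim.x * blockDim.x)
        atomicAdd(&accum[i & 1023], tile[i] * (1.0f / 255.0f));  // stand-in for shading
}

int main()
{
    // Pinned host memory so the async copies can actually overlap with kernels.
    // Contents are left uninitialized; this sketch is about transfer overlap only.
    unsigned char *h_tiles;
    cudaHostAlloc((void **)&h_tiles, TILE_BYTES * NUM_TILES, cudaHostAllocDefault);

    unsigned char *d_buf[2];
    cudaMalloc((void **)&d_buf[0], TILE_BYTES);
    cudaMalloc((void **)&d_buf[1], TILE_BYTES);
    float *d_accum;
    cudaMalloc((void **)&d_accum, 1024 * sizeof(float));
    cudaMemset(d_accum, 0, 1024 * sizeof(float));

    cudaStream_t stream[2];
    cudaStreamCreate(&stream[0]);
    cudaStreamCreate(&stream[1]);

    for (int t = 0; t < NUM_TILES; ++t) {
        int s = t & 1;                               // alternate buffer + stream
        cudaMemcpyAsync(d_buf[s], h_tiles + (size_t)t * TILE_BYTES,
                        TILE_BYTES, cudaMemcpyHostToDevice, stream[s]);
        shade_with_tile<<<4096, 256, 0, stream[s]>>>(d_buf[s], TILE_BYTES, d_accum);
    }
    cudaDeviceSynchronize();
    printf("streamed %d tiles through a 2-tile device pool\n", NUM_TILES);

    cudaStreamDestroy(stream[0]);
    cudaStreamDestroy(stream[1]);
    cudaFree(d_buf[0]);
    cudaFree(d_buf[1]);
    cudaFree(d_accum);
    cudaFreeHost(h_tiles);
    return 0;
}
```

Even with perfect overlap, the streaming rate is capped by the PCIe bus, which is why I don’t see a couple of extra gigs of VRAM changing the picture much.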

Sorry, but you’re disappointed that the GeForce series dedicated to gaming is targeted for gaming? Expecting otherwise is rather escapist and can only lead to disappointment.

All in all it’s a great card, but until 4GB versions are out and CUDA + OpenGL performance has been tested by some beta-buyers, there’s no new card from nvidia in my small world :wink:

I am sure Herb or José can’t resist and will get one sooner or later and make a looong post about it :slight_smile:

And what would be interesting is whether the new Boost feature will also be triggered by “CUDA load”.

2GB is a poor choice for gaming as well. If you play at 1920x1200 with 16:1 AF, 16x AA of any kind and high-quality textures, and on top of that in an engine with shadow maps like Skyrim, you’re doomed. One does not simply make a card with enough power to play at those settings but without enough memory.

I’m not sure I understand this line of thinking. A couple more gigs of RAM aren’t going to suddenly make production rendering possible on GPUs, especially when you have scenes with close to 100GB of texture data alone

What percentage of all 3D being created requires 100GB of texture data? 1%? Generally, for many production houses there is a leaner workflow based on getting a product out ASAP; scenes are optimized for both render times and man-hours. What you are looking at here is a built-in render farm for the large majority of studios.

There have already been several reviews saying that playing at 2560x1920 with all settings maxed out caused no problems with framerate despite only having 2GB.

Perhaps, but I’ve never seen a production scene (with characters, etc.) with fewer than 10 or 12 GB of textures. Even that won’t be seen in reasonably priced cards, if at all, for at least a couple of years. And considering that by its nature GPU rendering can’t share memory load across multiple cards, doing anything other than maybe motion graphics or simple scenes will be a pipe dream for a while yet. It’s a great tool for setting up lights and whatnot, but the simple fact is that when it comes time for the final render, we’ll still be switching to CPU mode until something big changes, be it the software or the hardware.

About OpenCL - wait until they perfect the driver. And were the Nvidia cards ever very fast in OpenCL? It’s pretty obvious they’d rather promote their own standard… I personally don’t care; Cycles is CUDA, PDi is PhysX/CUDA and so is Nucleus.

About CUDA - 1536 cores is three times as many as the GTX580 has, so I’m pretty sure it will be 2-3 times faster in Cycles. OGL will probably be worse than my GTX285…

About VRAM - seeing that CUDA does technically allow for use of the machine’s RAM, I for one will assume that Brecht & co. will solve the scene memory usage vs. VRAM issue as it stands today - otherwise Cycles will be pretty useless in the long run…
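(For the curious, the mechanism I mean is mapped, a.k.a. zero-copy, host memory: the kernel below reads a buffer that physically lives in system RAM across the PCIe bus. It’s just my own toy example, not anything from the Cycles code, and reads this way are far slower than VRAM-resident data - which is why it’s a fallback, not a fix.)

```
// Toy demo of mapped ("zero-copy") host memory: the GPU sums a buffer that was
// never copied to VRAM. Works when data won't fit on the card, at PCIe speed.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void sum_host_buffer(const float *data, int n, float *result)
{
    float acc = 0.0f;
    for (int i = threadIdx.x; i < n; i += blockDim.x)  // strided loop, one block
        acc += data[i];                                // each read crosses PCIe
    atomicAdd(result, acc);
}

int main()
{
    cudaSetDeviceFlags(cudaDeviceMapHost);   // enable mapped host allocations

    const int n = 1 << 20;
    float *h_data, *d_data;                  // same allocation, two address spaces
    cudaHostAlloc((void **)&h_data, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i)
        h_data[i] = 1.0f;
    cudaHostGetDevicePointer((void **)&d_data, h_data, 0);

    float *d_result;
    cudaMalloc((void **)&d_result, sizeof(float));
    cudaMemset(d_result, 0, sizeof(float));

    sum_host_buffer<<<1, 256>>>(d_data, n, d_result);

    float result = 0.0f;
    cudaMemcpy(&result, d_result, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum over host-resident buffer: %.0f (expected %d)\n", result, n);

    cudaFree(d_result);
    cudaFreeHost(h_data);
    return 0;
}
```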

No, you should look more closely at the Luxmark page; I quoted it from that benchmark.

http://www.tomshardware.com/reviews/geforce-gtx-680-review-benchmark,3161-15.html