GTX 1070 or GTX 1080 ?

The ultimate GPU upgrade decision… I’ve been waiting a long time for these cards to upgrade my GTX 660 Ti, and they’re finally here. From a GPU rendering engine perspective, the GTX 1080 seems to be a better deal than the 1070. The 1080 has a lot more CUDA cores than the 1070, so the 1080 looks like the better deal based on cost per CUDA core.

GTX 1080: 2560 (cuda) / $600 = $4.27/cuda
GTX 1070: 1920 (cuda) / $380 = $5.05/cuda

My system:
Xeon W3680 @4.2 GHz
24GB RAM
Quadro 7000
GTX 660 Ti
3 1080p monitors

What do you guys think?

You may need to work on your math skills there; division is not commutative.

380/1920 = about 19.8 cents per cuda core
600/2560 = about 23.4 cents per cuda core
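
For anyone who wants to sanity-check it, here is the same arithmetic as a tiny Python sketch (the prices are just the launch figures quoted above):

```python
# Dollars per CUDA core: divide the price by the core count, not the other way around.
cards = {
    "GTX 1080": {"price_usd": 600, "cuda_cores": 2560},
    "GTX 1070": {"price_usd": 380, "cuda_cores": 1920},
}

for name, spec in cards.items():
    cents_per_core = spec["price_usd"] / spec["cuda_cores"] * 100
    print(f"{name}: {cents_per_core:.1f} cents per CUDA core")

# GTX 1080: 23.4 cents per CUDA core
# GTX 1070: 19.8 cents per CUDA core
```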

Of course, the price per cuda core is hardly the only important factor.

Also, there’s already a thread for those cards…

Oh yeah, I just had a major brain fart. lol. Thanks for the correction, I’d better get some sleep now.

Just my two cents here, but I read that the 1080 has memory twice as fast as the 1070. So I suppose the 1080 is much faster, maybe even twice as fast as the 1070, but it’s not twice as expensive. If that is the case, the 1080 seems to me the logical choice for rendering!

The first question to ask here is: is the 1080 actually faster than a 980 Ti, and how does the percentage saved in render time compare to the percentage difference in price? Today, no one knows how fast a 1080 is.

http://blenchmark.com/gpu-benchmarks

Hi, this Blenchmark is not really helpful, although it was a lot of work to create it.
For example, a single GTX Titan X needs 55 seconds but four Titan X cards need 24 seconds?
Nobody knows which Blender version was used. The GTX 10xx cards need CUDA 8, so how fast is a GTX 980 Ti with CUDA 8?
And so forth.
We really need a benchmark application from Blender.org like Luxmark, Octanebench, Arionbench, Benchwell (Maxwell), and so forth. :smiley:

Cheers, mib

Completely agree.
These Blenchmark results are confusing and contradictory. We need something like Luxmark, with several heavy scenes including slow materials, not simple scenes such as the BMW. It is also important that the application certifies the results, to avoid possible cheating (for example by nvidia/amd fanboys).
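
Until something official exists, one rough do-it-yourself option is simply to time a background render from the command line. A minimal sketch, assuming blender is on your PATH and scene.blend is whatever Cycles scene you pick as your benchmark (both names are placeholders):

```python
import subprocess
import time

BLEND_FILE = "scene.blend"  # placeholder: any Cycles scene you want to treat as the benchmark

# Render frame 1 in background mode and measure the wall-clock time.
# "-b" runs Blender without the UI, "-f 1" renders frame 1.
start = time.time()
subprocess.run(["blender", "-b", BLEND_FILE, "-f", "1"], check=True)
elapsed = time.time() - start

print(f"Render took {elapsed:.1f} seconds")
```

It measures Blender’s startup time as well, and it records nothing about the Blender build, CUDA version, or device settings, which is exactly the information a proper benchmark application would have to certify.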

The 980 Ti supports CUDA 8? I heard it would be exclusive to Pascal and above?

The CUDA 8 SDK/runtime supports older architectures as well. Conversely, older CUDA SDKs don’t support building for Pascal.

However, not all features introduced in CUDA 8 will be available on older architectures. For instance, Unified Memory with system RAM fallback only works on Pascal.
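
As a concrete illustration (a sketch only, with kernel.cu as a hypothetical placeholder source file): CUDA 8’s nvcc can build a single binary that targets both a Maxwell card like the 980 Ti (sm_52) and a Pascal card like the 1070/1080 (sm_61), whereas CUDA 7.5 has no Pascal target at all.

```python
import subprocess

# Build kernel.cu (placeholder file) for both Maxwell (sm_52) and Pascal (sm_61)
# with the CUDA 8 toolchain; older toolkits do not know the sm_61 target.
subprocess.run([
    "nvcc", "kernel.cu", "-o", "kernel",
    "-gencode", "arch=compute_52,code=sm_52",
    "-gencode", "arch=compute_61,code=sm_61",
], check=True)
```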

Moved from “General Forums > Blender and CG Discussions” to “Support > Technical Support”

The GTX 1080 is in short supply these days. Newegg is selling a GDDR5X version of the GTX 1080 by Asus, and it’s in stock!
https://www.neweggbusiness.com/Product/Product.aspx?Item=9B-14-126-110&nm_mc=KNC-GoogleBiz-PC&cm_mmc=KNC-GoogleBiz-PC--pla--Video+Card±+Nvidia-_-9B-14-126-110&gclid=CK_jqNDkzc0CFYqDfgodlm8HsQ

I’ve been rendering on a few different machines lately, using Blender 2.78.

Half of the machines have 1070 cards, and the other half have 1080s.

Using identical .blend files, the 1070s render more than twice as fast. Yes, you read that correctly.

Has anyone else experienced this, and is there an explanation?

Also add the 1070Ti to your list of considerations.

I went with a GTX 1070 last year when I upgraded, but it’s an SLI motherboard with an 800 W PSU, so I have the option of putting a second GPU in. You may want to look at which used GPUs flood the market after the cryptocurrency meltdown, or which ones Nvidia has too many of. My guess is there will be more 1070s they need to get rid of than 1080s, meaning better prices.

I use a 1070 along with an 8-core Ryzen in Blender, and together they make for very fast renders, so fast that you won’t be worrying about paying a little more to knock a few minutes off the render time.

Sounds like a GPU utilization issue, maybe the tile size for the render. Or it could be that some of the systems are making use of the hybrid CPU+GPU render, and the 1080s for some reason are using the GPU only.
To clear up the confusion: in my version of 2.79 I select GPU render in order to use the hybrid CPU+GPU rendering, and I select CPU render to use just the CPU.
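
If you want to check what each machine is actually rendering with, you can inspect and set the Cycles devices from Blender’s Python console. A rough sketch for the 2.79-era API (the 'CUDA' device type and the 256-pixel tiles are just example values, and the user_preferences path changed in later Blender versions):

```python
import bpy

# Cycles compute devices live in the add-on preferences (2.79-era API).
prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'   # or 'OPENCL' / 'NONE'
prefs.get_devices()                  # refresh the device list

# Enable every detected device; un-tick a CPU entry here to force GPU-only rendering.
for device in prefs.devices:
    device.use = True
    print(device.name, device.type, "enabled:", device.use)

scene = bpy.context.scene
scene.cycles.device = 'GPU'          # 'CPU' ignores the GPUs entirely
scene.render.tile_x = 256            # example values; larger tiles tend to suit GPUs,
scene.render.tile_y = 256            # smaller tiles tend to suit CPUs
```

If some of the 1080 machines ended up with an unsuitable tile size, or with a different mix of devices ticked, that alone could plausibly account for a gap of that scale.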