Nvidia announced GTX 1080 and GTX 1070

It doesn’t take an “expert” to predict that a company that has less money coming in than going out will not stay in business indefinitely. If business doesn’t improve from where it is now and there’s no outside investment (or buyout), AMD will go bankrupt.

Whether that happens depends on a lot of factors, not all of which are under AMD's control. For example, even if the Zen architecture ends up competitive with Intel, CPU prices may drop to where profitability is once again too low for AMD to shoulder its debt burden.

Take, for instance, the so-called experts telling you to buy gold because the stock market is about to crash. They go as far as extrapolating trends or drawing conclusions from one- or two-day events, and they almost always end up with egg on their faces.

I’m not sure what your point is. Some self-proclaimed experts turned out wrong, therefore all expert assessment is likely wrong?

Of course there’s a chance that AMD recovers. If you want to bet on that, maybe you should buy AMD stock. Otherwise, don’t buy AMD stock, because considering all known factors, you’re more likely to lose money than not. Some of the best investment advice is on where to not put your money.

At 2560 CUDA cores, the 1080 is going to be significantly slower at rendering 3D scenes compared to the Titan Black (at 2880), let alone the Titan X at 3072. And with 12GB of VRAM, the Titan X still kicks ass. The only advantage is the lower power consumption. So this will be good for gamers, not so much for 3D rendering.

Personally I’m waiting on the Volta range and skipping these gimmicks entirely.

And what benchmarks are you basing that on? :stuck_out_tongue: Also, the Titan X is twice as expensive. I think the 1080 will be decent and the 1080 Ti will be great. Benchmarks will be out on the 17th; I hope someone tests CUDA performance too.

Quit lying to yourselves. AMD loses more and more market share every year, and it's no secret they have looming financial issues. At this point it's a question of whether they can get their next line of GPUs and CPUs out in time, and good enough to at least compete for the mid-range market (still compromising and losing out). I mean, let's face it: put Intel and Nvidia products side by side with AMD's supposedly competing products and AMD loses or barely competes every time. Things like HBM and more cores are great innovations, sure, but they're just not doing it well enough.

They've been on the same bastardized architecture since 2011! Only the most recent products are really different, and they didn't push the envelope like they needed to.

It is likely that Pascal cores are just Maxwell cores at worst, or improved ones at best. Even with 16% fewer cores, the GTX 1080's base clock is 60% higher, and that's just insane. That's why you can see people selling their last-gen cards like there's no tomorrow: these new cards are just that good.

If I had to guess, the GTX 1070 will come very close to the GTX 980 Ti performance and the GTX 1080 will destroy it, Titan X included.
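
A quick back-of-the-envelope check of that core/clock trade-off, using the announced base clocks (a rough sketch, not a benchmark):

```python
# Relative theoretical FP32 throughput: (core count ratio) * (clock ratio).
# Announced base clocks: Titan X ~1000 MHz, GTX 1080 ~1607 MHz.
core_ratio = 2560 / 3072    # ~0.83, i.e. about 16% fewer cores
clock_ratio = 1607 / 1000   # ~1.61, i.e. about 60% higher base clock
print(f"GTX 1080 vs Titan X, on paper: ~{core_ratio * clock_ratio:.2f}x")  # ~1.34x
```

On paper that's roughly a third more raw throughput before counting any architectural improvements; whether Cycles actually sees it is what the benchmarks should show.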

That was just one sample and was never replicated or heard of again. Way to jump to conclusions.

Nvidia has already handed the two cards out to some benchmark sites, and testing is already underway. We'll find out next week when Nvidia lifts its gag order on May 17th. I'm still a little skeptical about the performance claims. If they hold up, which I hope they do, then that pretty much kills off sales of the remaining GTX 900 series inventory, unless they start selling the GTX 980 in the $300 price range. In any case, it's an exciting time.

Just a bit OT: AMD cards are in Macs, PlayStations, Xboxes, supercomputers… so I wouldn't really claim it's the end of them. If anything, they will restructure.
BTW, I hate Nvidia diminishing OpenCL!
Still on 1.2, barely?!?

Prices are excellent, performance per watt as well. The GTX 1080 is gonna crush. I can't wait to see how it's gonna perform with Cycles, LuxRender or V-Ray.

People might end up without cards for a while if they are already selling them, because this was just a press release and the cards aren’t even available yet (some people claim they moved up the announcement on purpose to get the jump on AMD).

@John Lancaster: It is true that AMD made a few notable mistakes in the last few years and is now trying to get back on its feet, but it's not like Nvidia has been immune to missteps or even outright deception (with the difference being that they can get away with almost anything because of their brand's positioning as a "lifestyle"; as a result, a number of people will simply buy their cards anyway out of emotional attachment).

They already restructured, several times. Consoles are really low-margin markets, Macs are low volume and supercomputers are nearly 100% Nvidia. CUDA is that much better than OpenCL on AMD.

At least it works, unlike AMD's implementation, which never really fully worked.

As market availability comes closer those will get harder to get rid of. And it’s not like those selling them don’t have an older backup card or the integrated graphics from their Intel CPU.

Also, they're not jumping on AMD. AMD would need at least a couple of outstanding generations to get rid of its underdog stench. AMD seems to always get the timing wrong for absolutely everything; they either implement something too early or too late. I mean, what's the point of having better DX12 capabilities than Nvidia when there are no games taking advantage of it? Now the next generation has launched and Nvidia has the same capabilities as AMD. Nvidia stripped all the compute capabilities out of the Maxwell architecture and rofl-stomped AMD in gaming scenarios with a pure gaming GPU, while retaining the compute market because AMD can't even fight the ancient Kepler chips.

I don't think that AMD will go bankrupt anytime soon, but the brand is reaching new lows every quarter, both financially and in brand-value perception. And they had better score a home run with Zen and the related APUs or they're in deep trouble.

Just got a Nvidia developer mail with some more details:

It actually was independently verified, and it stayed in the news for a while. As a matter of fact, you are jumping to conclusions by claiming that GF's 14nm process is "more advanced" (whatever that means), presumably based on the fact that 14 is less than 16. The nanometer metric is really just marketing; the actual feature sizes in those processes are all different depending on the vendor and the individual process. Intel's 14nm is smaller than Samsung's!

As I said, AMD is likely to use TSMC's 16nm process as well, simply because GlobalFoundries is smaller and its lack of capacity has caused AMD trouble in the past.

Consoles are really low-margin markets, Macs are low volume and supercomputers are nearly 100% Nvidia. CUDA is that much better than OpenCL on AMD.

AMD doesn't earn on console margin (which might be negative); it earns on the contracts for its semi-custom designs, which have been a decent chunk of its income and will continue to be, since it also won them for the Nintendo NX and the (rumored) PlayStation/Xbox successors.

Considering that consoles might get shorter release cycles and stick to x86 for backwards compatibility, that business might remain an important source of income, since NVIDIA cannot offer a single-chip x86 solution and Intel is not focused on GPUs. That scenario would make a Microsoft buyout all the more interesting.

As for supercomputers, those are also quite low volume (many don’t even use GPUs) and the margins aren’t as great since the GPUs are sold in bulk.

At least it works, unlike AMD's implementation, which never really fully worked.

Go tell that on the LuxRender forum and you will learn a lot about Nvidia's OpenCL driver. I prefer to use Lux as a basis for comparing both vendors' OpenCL drivers and compilers, because it's the most advanced OpenCL GPU renderer that works with OpenCL on both AMD and Nvidia.

Saying that AMD is not far from disappearing is nothing new, and nobody gains or learns anything from such a statement.
Saying that competition is good for everyone is nothing new either. Trying to ignore that is totally beside the point, which is strange.

I don't think that AMD will go bankrupt anytime soon, but the brand is reaching new lows every quarter, both financially and in brand-value perception. And they had better score a home run with Zen and the related APUs or they're in deep trouble.

It's very easy for anyone to think like this; a baby could do the same analysis. It is simply your own thought and the easiest possible bet. It's true, that is the reality today.
What some of us want to tell you is that there is still a possibility for AMD to come back, and that possibility is reality too. There are facts showing it is possible:
they already hold three new markets in the game-console space, and they have Polaris and Zen coming.

The main problem AMD has to overcome is simple: money, money.

They still have the right clever people to do the job (despite the fact that they sacrificed a lot of their workers). But they have lost so many dollars that it is now harder for them to invest properly in R&D, marketing and human resources, and to fight against two giants with lots of money.
We aren't telling you that AMD will not fail for sure.

We said that they can succeed, and that is the best scenario for everyone! So that is our main interest here. The computer industry is not a sport or a religion; it's simply business.

Currently AMD is in the worse situation, but that doesn't mean that using an AMD product is always the worse choice. I have both Nvidia MSI GTX 970s in SLI and R9 390 Nitros in CrossFire.

And in my case, rendering a heavy professional project with Cycles on the Nvidia 970s is 2x and sometimes 5x slower than rendering the same thing with LuxCore on the R9 390s.

The end user must look after their own interest. If that interest lies with Nvidia, I go with them; if my interest is with AMD, I go with them. It's simple math.

Please, can someone tell me how any of us gains or wins if AMD dies?

It is true that a lot is riding on their next generation of products (this would include their Zen CPU architecture and their upcoming Polaris and Vega GPU architectures, all of which come out this year).

Personally, I would like to see AMD really get back into the game (if only to prevent Intel and Nvidia from obtaining monopolies in their respective markets, and to prevent the potential price hikes on anything above the low end that would result).

People who want to help Nvidia get there may not realize that if they do, the company could even do things like yank CUDA support from its GeForce and Titan brands (forcing them to give up on GPU rendering) and they would be powerless to stop it. Forced driver updates, doubling the price of their cards, eliminating the acceleration of certain graphics features, data collection: that too.

If you want to ride the clickbaiting wave and such, be my guest.

I never said that, learn to read.

It earns money on every PS4 sold. Stop trying to reinvent the wheel or twist what I said.

Also, their margins on semicustom are pathetic.

Tell that to Nvidia.

$279M at over 50% YoY growth rate with ~75% margins.

One different test with a different result doesn't invalidate three comparable tests with the same result. In particular, Consumer Reports didn't stress-test the chip; they tested battery life for workloads like loading webpages, where the radio is going to eat up the bulk of the battery. It makes sense as a real-life usage test, but that's not the topic.

I never said that, learn to read.

My bad, it was apoclypse who said that.

It earns money on every PS4 sold. Stop trying to reinvent the wheel or twist what I said.

Yes, but not on the margin, which you said was “low”. I’m not twisting anything.

Also, their margins on semicustom are pathetic.

For one, those are not just for semicustom, but how are they pathetic? Here is revenue over income for NVIDIA (from the article you posted):

Are you mixing up net margin (profit) with gross margin?

Tell that to Nvidia.
http://www.nextplatform.com/2015/05/08/tesla-gpu-accelerator-grows-fast-for-nvidia/

$279M at over 50% YoY growth rate with ~75% margins.

Again, that’s 75% gross margin, which I won’t doubt (a single Tesla GPU costs several thousand dollars), but the HPC/Cloud segment is not identical to the supercomputer market. You can be sure those Teslas that go in a supercomputer don’t cost nearly as much per unit.
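
To make the gross-versus-net distinction concrete, here's the difference with purely made-up numbers (illustrative only, not anyone's actual financials):

```python
# Illustrative only: hypothetical figures to show gross margin vs. net margin.
revenue = 1000             # $M, made up
cost_of_goods_sold = 250   # made up
operating_expenses = 600   # R&D, marketing, etc.; made up

gross_margin = (revenue - cost_of_goods_sold) / revenue                     # 0.75
net_margin = (revenue - cost_of_goods_sold - operating_expenses) / revenue  # 0.15

print(f"Gross margin: {gross_margin:.0%}")  # the kind of figure a "75% margins" quote refers to
print(f"Net margin:   {net_margin:.0%}")    # what's actually left after operating costs
```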

You have no idea what you're talking about. Pascal cores both switch faster and are far more efficient than Maxwell cores. The result is a GPU which offers more for less… The consumer 1080/1070 will use the GP104 chip, while Nvidia still has the GP100 chip, which is being used in the Tesla line of cards and offers HBM memory, more cores, more VRAM and a far wider bus. It's likely they'll use this chip to make a premium Ti/Titan-type card further down the line.

Regardless of any of this, your argument is moot, because a simple comparison of the FLOPS each card is capable of makes it pretty obvious which one is faster. The Titan has around 7 TFLOPS of throughput, whereas the 1080 offers up to 9 TFLOPS. For those of you who don't know, FLOPS stands for floating-point operations per second.
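
For reference, those headline figures are theoretical FP32 peaks derived from core count and clock speed; here's a rough sketch using the published boost clocks (approximate, not measured):

```python
# Theoretical FP32 peak: CUDA cores * clock * 2 (a fused multiply-add counts as two ops).
titan_x_tflops = 3072 * 1.075e9 * 2 / 1e12    # ~6.6 TFLOPS at the ~1075 MHz boost clock
gtx_1080_tflops = 2560 * 1.733e9 * 2 / 1e12   # ~8.9 TFLOPS at the announced 1733 MHz boost

print(f"Titan X:  ~{titan_x_tflops:.1f} TFLOPS")
print(f"GTX 1080: ~{gtx_1080_tflops:.1f} TFLOPS")
```

Actual rendering speed in Cycles won't track the paper numbers exactly, but it shows where the "up to 9 TFLOPS" claim comes from.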

This will prove especially helpful for simulations and scenes with multiple lights, something Cycles will take advantage of with its physically-based rendering system.

However the 1080 and even the 1070 will most definitely be faster than a Titan or Titan X at rendering.

Having less VRAM doesn't mean performance will suffer either; this is a common consumer misconception used to draw buyers into the lower/mid-range cards.
The Titan has a 384-bit bus running GDDR5 memory, whereas the GP100 chip uses a 4096-bit bus running HBM memory. Processing the data quickly is also an important factor, not just how much memory you have.
A 4096-bit bus means the card can push around much larger amounts of data simultaneously, which means it doesn't need as much memory to compensate; it can load in and flush memory with far less overhead. Even in the case of the 1070/1080, they use GDDR5X memory, which is much faster than standard GDDR5, which again means they don't need as much memory… Are you catching on yet…?
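
For what it's worth, here's the back-of-the-envelope bandwidth math behind that argument, using the published bus widths and effective data rates (approximate; the 4096-bit figure is for the Tesla P100's HBM2):

```python
# Peak memory bandwidth = (bus width in bytes) * (effective data rate in GT/s).
def bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

print(f"Titan X  (384-bit GDDR5 @ 7 GT/s):    ~{bandwidth_gbs(384, 7):.0f} GB/s")    # ~336
print(f"GTX 1080 (256-bit GDDR5X @ 10 GT/s):  ~{bandwidth_gbs(256, 10):.0f} GB/s")   # ~320
print(f"GP100    (4096-bit HBM2 @ ~1.4 GT/s): ~{bandwidth_gbs(4096, 1.4):.0f} GB/s") # ~717
```

Note that it's the doubled data rate of GDDR5X, not the bus width, that keeps the 1080 in the same bandwidth ballpark as the Titan X.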

However, from looking at your older posts it appears you use one or more Titans in your system, so I can understand why you'd be butt-hurt after spending all that cash… Deal with it, you're now second best. Welcome to the world of computing hardware; it changes all the time.

yawn

Crap like that simply gets more boring, not less.

I can assure you I have no emotional connection whatsoever to a piece of equipment. It's just a tool that's designed to do a job. Whether it's a GPU or a screwdriver, it's all the same to me. There's no "butthurt" here over an inanimate object, regardless of what it cost me. Not that it's any of your business.

Sometimes the Tom's Hardware website will list Blender performance on video cards, so you can compare. I don't know how often the site runs the tests and updates the results, though.

The real question is: have some of you ever talked to a girl? (Cam girls excluded; pics and inflatable dolls are not valid.)
As usual, a new product is just announced, not even released, and nerds start jizzing and then fighting each other over nothing.
At least wait for specific benchmarks on the software you actually use.
The only relevant teraflop I see is this whole discussion.