Nvidia Titan V

It is slightly confusing, as Titans were usually priced near $1k and marketed at “enthusiasts” (similar to Intel’s high end). Since tensor cores don’t matter to most buyers, this raises the question: will there be another, cheaper Titan, or do we have to wait roughly a year for a 2080 Ti with similar performance at an acceptable price?

Rumor has it that NVIDIA will skip both Volta and HBM for the next consumer cards. There’s no point in having tensor cores in gaming GPUs without API/developer support in games; it would just cannibalize their professional market.

The Titan line is a lame excuse to make people feel better about spending more and more on lesser cards. For example, the upcoming 2080 Ti (dunno the exact name).

Nvidia: “Just $1000 for the new XX80 Ti!”
Consumers: “Such a bargain compared to the Titan! Gimme five!”

Marketing geniuses, I say. And a frigging market filled to the brim with sheep, at least on the gaming side.

This sucker was done with the previous Pascal Titan… imagine training and outputting face data in even less than 5 minutes.

As far as I know, the Titan V has 33% more CUDA cores than the Titan Xp (5120 vs 3840) and runs at 0.81× its base frequency (1200 vs 1485 MHz). So it should be about 1.33 × 0.81 ≈ 1.08× faster from the CUDA-core point of view. It has HBM2, which is 19% faster than the GDDR5X of the Titan Xp (653 GB/s vs 547 GB/s). Even if (although it’s certainly not the case) those two factors stacked perfectly, the Titan V would be about 1.08 × 1.19 ≈ 1.29× faster. And this bench shows the Titan V shooting rays 2.1× faster than the Titan Xp…
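Here’s that back-of-the-envelope estimate as a quick Python sketch, using the spec numbers above (the 2.1× figure is just read off the benchmark graph, so treat every number here as approximate):

```python
# Back-of-the-envelope speedup estimate: Titan V vs Titan Xp.
# Spec ratios are the published numbers quoted above; 2.1x is
# read off the benchmark graph, so all of this is approximate.

cuda_cores = 5120 / 3840   # ~1.33x more CUDA cores
base_clock = 1200 / 1485   # ~0.81x the base frequency (MHz)
bandwidth  = 653 / 547     # ~1.19x the memory bandwidth (GB/s)

compute_only = cuda_cores * base_clock   # ~1.08x
best_case    = compute_only * bandwidth  # ~1.29x if the factors stacked perfectly

observed = 2.1  # measured ray-tracing speedup from the graph
print(f"compute-only estimate: {compute_only:.2f}x")
print(f"best-case estimate:    {best_case:.2f}x")
print(f"observed:              {observed:.2f}x "
      f"({observed / best_case:.2f}x beyond the best case)")
```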

Could their OpenCL implementation somehow use the tensor cores? Or are the new CUDA cores themselves faster? Can someone explain this huge boost at the same power usage and roughly the same process node (12 nm)?

@bliblubli

The frequency isn’t stuck at 1200 MHz; it boosts up to 1455 MHz (1582 MHz on the Titan Xp) if the TDP allows, which is very likely during ray tracing.

There are many architectural differences (bigger caches, improved scheduling) that could play a role here, but the drivers might also simply be better. Still, even if a smarter OpenCL compiler could pack some FMAs into the Tensor Cores, I doubt it would make such a big impact.
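Redoing the arithmetic from the post above with the boost clocks instead of the base clocks (still just a sketch; whether either card actually sustains its rated boost depends on TDP and cooling) still leaves a large unexplained gap:

```python
# Same back-of-the-envelope estimate, but with boost clocks
# instead of base clocks (assumes the TDP lets both cards
# sustain their rated boost, which is not guaranteed).

cuda_cores  = 5120 / 3840   # ~1.33x more CUDA cores
boost_clock = 1455 / 1582   # ~0.92x the boost frequency (MHz)
bandwidth   = 653 / 547     # ~1.19x the memory bandwidth (GB/s)

best_case = cuda_cores * boost_clock * bandwidth
print(f"boost-clock best case: {best_case:.2f}x")  # ~1.46x, still well short of 2.1x
```

So roughly 1.46× is the most the raw specs can explain; the rest of the 2.1× has to come from architecture and/or drivers.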

Dang, that thing is a monster if that graph is correct…

The problem is thermals: it gets hot quickly and throttles its clock speed.

The question is: is the Tesla really worth $10K? The answer is no. Nvidia sells it at 10× the price because they can. They could sell it for 500 bucks and still make a nice profit, but they simply won’t as long as there are no real competitors on the market.

Best Regards!
Albert

@BeerBaron I also heard rumors about this in a podcast: they will probably skip the Volta arch for the next generation of gaming cards. The question is what comes next? I have only seen presentation papers going as far as Volta.

Ampere comes next.

Yeah, that might be a problem… maybe water cooling takes care of that… it was insanely expensive, though…

@Felix_Kütt cool, will be interesting!

A few readers on gaming websites are now predicting the death of AMD as a company because of this new card.

Hahaha, how many of them are going to pour 3,000 dollars into a GPU just to play games? (Though I wouldn’t be surprised if some do, since the only thing that matters to them is how many frames per second they get, never mind that most titles see diminishing returns above 60.)

The Steam statistics show that AMD cards have gone from 24% last year to just 8% now. It all started with the mining craze, and that has led many game developers to ignore AMD entirely.

It’s possible they may not be far wrong, at least from the GPU point of view. It’s not really about spending 3,000 to play games; it’s what this card shows about Nvidia’s R&D and just how far ahead they are from a gaming-card point of view.

Let’s face it: the AMD Vega 64 was supposed to be the next hottest GPU, and in many ways it is, at least from a heat/power-draw point of view. From a gaming-performance point of view, though, it has a hard time matching a 1080 while using around 60-70% more power. It even uses more power than a 1080 Ti, yet the Nvidia card leaves it in the dust for gaming performance.

When the 10 series launched, many were impressed by the tech advances: 30%+ better performance at lower power (i.e. a 1070 outperformed a 980 Ti while using less power). Everyone was expecting/hoping that AMD would respond and match or better the new Nvidia products, yet even after waiting a year or so, AMD has largely failed to.

Now Nvidia showcases the Titan V, and from a tech point of view it looks like they have done it again. There’s every reason to think that a GTX gaming series of cards, based around the same new tech improvements, will be released before AMD even manages to catch up with the current 10 series.

It’s the same issue as with the Radeon Fury: it performs admirably in certain situations, but on the whole HBM just doesn’t pay off for gaming. Developers optimize for low memory bandwidth being available (consoles, etc.), so HBM just drives the price up for little gain.

The RX 480/580, on the other hand, would deliver good gaming value if it weren’t for the price distortions caused by crypto mining. At the end of the day, desktop PC market share may be low, but the profits still add up. AMD’s survival is tied to profits, not market share.

> Everyone was expecting/hoping that AMD would respond and match or better the new Nvidia products, yet even after waiting a year or so, AMD has largely failed to.

The fastest GPU isn’t necessarily the most important one in the portfolio. AMD could respond by just lowering their prices, but they don’t need to, because their GPUs still sell well enough. The same goes for NVIDIA.

> Now Nvidia showcases the Titan V, and from a tech point of view it looks like they have done it again. There’s every reason to think that a GTX gaming series of cards, based around the same new tech improvements, will be released before AMD even manages to catch up with the current 10 series.

Titan V gaming performance isn’t that great at all. Also, it’s a gigantic chip on a new process, very expensive to make. There’s no reason to believe that NVIDIA has any major architectural advantages here.

Not to derail the thread, but there are rumors that the next-generation Navi GPU line will employ Ryzen tech like Infinity Fabric (which, for one thing, would allow a design that’s far cheaper to make, capable of more performance, and easier to upgrade).

Considering how well their CPU business is doing now, that can’t be a bad thing. The monolithic-die approach still seems to be working for Nvidia, but for how long, given that it has been causing difficulty for Intel and that the prices of Nvidia chips seem to keep going up?

If it does as well as its mining capabilities suggest, it will get curb-stomped by Vega.