Nvidia Titan V

That’s not a rumor, but a fact. Navi is a modular design with Infinity Fabric and HBM3 memory. It’s been known from the beginning that it would be a multi-GPU-core-on-a-die solution. Instead of one monolithic core, having 8, 16, 32, or 128 small cores sharing HCC and Infinity Fabric will dwarf anything Vega can do. It will also free AMD up to specialize cores for mining, AR, machine learning, single/double-precision floating point, 16-bit half precision, etc.

The days of one giant GPGPU are over.

Predicting the demise of AMD is pretty funny. You don’t need to have the fastest GPU to win.

The issue is that the only thing many core gamers care about is FPS. That is why, for benchmarks, AMD pushed the Vega architecture as far as it could go (even though a few tweaks in Wattman can cut the power and heat significantly without giving up much performance; see Bliblubli’s posts in the Vega thread).

They don’t care about the overall design, the price/performance ratio, the feature set, or the future potential; they just want to see the frames. To drive that point home, you have gamers (according to their communities) making hardware choices over FPS increases as small as a few percentage points.

Yes, I agree: for enthusiasts, reviewers, and people with $1,000+ budgets, FPS is the most important aspect. However, most consumers are only willing to spend around $200 on a graphics card. The RX 580 is very popular at that price point, and crypto miners are even crazier about it. Also, Intel and Apple are both going to use AMD’s Vega with HBM technology, which I think pretty much solidifies AMD’s future for the next three years. AMD doesn’t need the highest-FPS card at $3K to beat Nvidia. The way I see it, they are doing very well for themselves: CPU sales are up, and GPU sales are up too. Nvidia can build cards faster because it doesn’t depend on HBM supply, but AMD isn’t losing money like last year. In the near future, GlobalFoundries’ 7 nm process node is ahead of schedule and looks to beat Intel’s 10 nm. So I expect Ryzen 2 to beat Intel, and Navi to close the gap significantly with Nvidia’s next GPU. Volta is just too big a chip for high-yield manufacturing.

Also keep in mind that if you’re not doing AI and just need general compute performance, you still get more out of three Vega 64 cards at $1,500 total than from one Volta card at $3K.

I don’t get it: $3K for a 12 GB HBM2 card, when AMD came out with a 16 GB HBM2 card for 750 USD? I’m pretty sure Blender would render a lot faster with four AMD cards than with a single Titan V!
https://pro.radeon.com/en/product/radeon-vega-frontier-edition/

Come on, people. Please try to understand that this is a specialized card for a specialized market that is going to use it for very specialized programs. Most of the code this card is designed for will be written specifically for its architecture, to take advantage of its highly specialized nature.

That’s like looking at a semi truck and wondering why it costs four times more than a Mustang GT, even though they have the same horsepower.

The AMD card is not a gaming card either; it’s a workstation card with 16 GB of HBM2, and at 750 USD, the Titan V can’t be that much better for $3K!

Even more specialized than a workstation card. This card has more in common with the Tesla line than the Quadro line.

Look at this graph:

source: https://www.anandtech.com/show/12170/nvidia-titan-v-preview-titanomachy

The red line that isn’t present in the previous Titan models is tensor-core performance at fp16 precision. It’s a very specialized set of calculations with a lot of potential for neural networks and other very specific workloads. So specific, in fact, that you need to code for it: you can’t just fire up Blender and get a 10,000% performance boost from this card. But if you write code that can use massively parallel fused multiply-add operations, you can see an unprecedented performance boost.
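To put that in concrete terms: on Volta, those tensor-core operations are exposed through CUDA’s WMMA (warp matrix multiply-accumulate) API, and nothing touches them unless the code asks for them explicitly. Here’s a minimal sketch (the kernel name and launch setup are my own, purely for illustration) of one warp multiplying a single 16×16×16 half-precision tile:

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes D = A*B (plus a zeroed accumulator) on a single
// 16x16x16 tile using Volta's tensor cores. A is row-major fp16,
// B is column-major fp16, accumulation is fp32.
// Compile with: nvcc -arch=sm_70
__global__ void tensor_tile_mma(const half *a, const half *b, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);    // start the accumulator at zero
    wmma::load_matrix_sync(a_frag, a, 16);  // 16 = leading dimension of A
    wmma::load_matrix_sync(b_frag, b, 16);  // 16 = leading dimension of B
    // The tensor-core op itself: a fused multiply-add over whole tiles.
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
}
```

You’d launch that with a single warp, e.g. `tensor_tile_mma<<<1, 32>>>(a, b, d)`; a real GEMM tiles the same pattern across thousands of warps. The point stands: Cycles has no such code path, which is exactly why the tensor cores do nothing for Blender out of the box.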

It’s a scientific card, which is cool if you need that kind of precision (medical applications come to mind), but pretty useless for Blender!

Exactly. I could see some photogrammetry programs leveraging those tensor cores someday. The technology has a lot of potential, if you’re willing to code for it.

Researching a higher-end compute card before buying is nearly impossible. Pretty much all reviews are littered with FPS garbage that means absolutely nothing to OpenCL/CUDA users. It’s pretty frustrating, really.

It’s still bizarre to see tech(?) people here referring to frames per second as FPS. FPS stands for First Person Shooter; it always has. Frames per second is “fps”, lowercase, not capitals. Get your sh!t straight, people.

Some Cycles numbers:

http://download.blender.org/institute/benchmark171221/latest_snapshot.html

Nvidia limits data center uses for GeForce, Titan GPUs

Nvidia made a change to how it lets developers use its chips, and some folks aren’t happy

  • The change is meant to restrict the use of Nvidia’s GeForce and Titan graphics cards.
  • Nvidia says its Tesla graphics cards, which are meant for data center use, come with support and other enterprise perks.

And users believed it’s up to them to choose how they use it… Free country? Free market?
Most likely it will be up to the courts to decide…

One year and a half ago I was accused of misogyny…
Good ol’ days. XD

Actually, they should have tested it as an OpenCL device; it’s twice as fast as an AMD Vega 64, at least in Luxmark.

The GP100 is nearly twice as fast on the Barbershop scene, and the 1080 Ti is about 20% faster than the Titan V. Are those benchmarks correct? Has anyone investigated why there’s such a huge difference compared to other benchmark results?

If people find they can use the courts to force government micromanagement of companies, it will open a can of worms that could clog the system for years, if not decades.

The courts have more important cases to work on than this kind of corporate policy, and everyone still has AMD as an option.

I know there are a ton of Blender users who are really upset that the EULA doesn’t allow them to use their $3,000 card in their $2 million data center.

Those licensing restrictions are really irrelevant for anyone here, or anyone using the card for scientific calculations.

They only really affect Google and Amazon.

:rolleyes: With that mindset, the industry should never adopt Blender, but it’s happening.

So is a render farm (or a mining operation) a ‘data center’? Because in essence, it is.
Claiming that an EULA capping the use of the same tech has nothing to do with small or mid-sized businesses seems like an ignorant take. IMHO, that is where it hurts the most.

And a manufacturer shouldn’t dictate what users do with its product once it’s sold.

But, yes… sorry, went off-topic.