AMD's Vega GPUs: information about them is starting to come out

The architecture appears nothing short of insane, considering these highlights.

  • The way these cards handle memory allows them to address a 512-terabyte virtual address space, and that isn’t a typo.
  • The cards appear to significantly outperform the GTX 1080 in Vulkan-powered games (and it may be the same story for games in general, since the final silicon isn’t even finished yet).
  • There was a focus on so-called smart performance: changing how shaders and pixels are handled so that only the information actually needed gets processed.
  • The way memory is handled is also optimized for performance, not least because HBM2 is far faster than the GDDR memory in most other cards, and far faster and far less limited than the original HBM in the Fury cards.

Think that’s enough? AMD’s roadmap shows another notable leap coming in the years ahead with what is currently known as the Navi architecture (though we probably won’t have any concrete information until 2018 at the earliest).

Anyway, the tech writer thought that what is already known is enough to make Nvidia nervous, and hopefully it will land as a solid punch (the harder it hits, the better for consumers).

So, wait for Vega or not?

I hope this isn’t just massive hype from AMD and the product doesn’t turn out mediocre in the end.

Edit: I don’t like hype; even Apple can be a letdown if something is hyped too much, and Apple are masters at hyping. Just show us the hardware and the price. Consumers don’t really care about the magic behind a graphics card, just the performance and the price. It’s a component in a build, not a laptop or a car.

Vega will be out in a few months. I’m assuming it will be priced and will perform similarly to a GTX 1080, in which case you still get better performance from two RX 480s than from one Vega card.

If you game on your PC, one card is always the better idea, and “H1” may mean as late as June 30.

I’ll be waiting, because the RX 480 isn’t enough for me and the GTX 1070 is expensive as hell. I hope the cut-down Vega lands around 300–350€ while matching or beating the GTX 1080.

Frigging miners may get in the way of sane pricing, though.

no cuda, no deal

In some ways I agree: AMD needs to at least fix OpenCL in their drivers before I can get excited. I don’t care about games, and I need to know that it will work for rendering. Otherwise I’m not going to purchase it and will get another Nvidia card. The Octane devs have a CUDA cross compiler for AMD cards, but there are still issues with AMD’s drivers, so they can’t release the code.

For me, at least, it doesn’t matter how fast they are, how much memory they have, or how cheap they are. Until these issues are fixed, I’m not going to buy. Which is too bad; it would be nice to have some competition.

Is it even legal for AMD to develop drivers that allow their cards to run CUDA (and the same goes for Otoy’s reverse engineering of it)?

CUDA is proprietary, closed, and owned by their biggest competitor, so why would Nvidia want to share anything (in a non-crippled form, at least) that could hurt their already large advantage in market share and destroy their chance at a monopoly (since they are already doing just that with other tech)?

I haven’t really heard one way or another in terms of legality, so Octane fans may very well need to be vigilant.

As far as I know, Otoy has done this with Nvidia’s blessing, but I’m not 100% sure of that. CUDA is just a language like any other, so what this cross compiler does is take CUDA code and compile it down so it can run on AMD hardware; that also includes running it on CPUs and maybe Intel GPUs. I suspect the problem is that there are some things Nvidia’s drivers can do that AMD’s cannot.

You know, Nvidia could make a killing by licensing CUDA to AMD. :wink:

This is what Otoy’s CEO said about it, “As I have posted elsewhere, we have a full (Octane) 3.0 build working on AMD (via our cross compiler), so that isn’t technically waiting on 3.1 either, but drivers are huge issue we can’t work around yet - AMD has to solve or else we can’t release and support this no matter the version (they know about this and working on it last I checked).”

They haven’t done that; both have basically written source-to-source translators (a.k.a. “transpilers”). The one by AMD (the Boltzmann project) isn’t even fully automatic, and it doesn’t seem feasible to maintain the CUDA code after translation. Also, “clean-room” reverse engineering is generally deemed legal.

CUDA is proprietary, closed, and owned by their biggest competitor, so why would Nvidia want to share anything (in a non-crippled form, at least) that could hurt their already large advantage in market share and destroy their chance at a monopoly (since they are already doing just that with other tech)?

The CUDA compiler is actually open-source. The runtime is closed-source, but it should be possible to implement a free runtime environment legally (see “GPU Ocelot”, which supports PTX files). It’s just unlikely such an effort will ever reach full compatibility (see WINE). What also protects NVIDIA is the fact that most applications ship multiple CUDA binaries, each built for a specific NVIDIA architecture, instead of PTX files.
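
To make that concrete, here is a rough sketch (my own, not from any of the projects mentioned) of the driver-API path a free runtime in the GPU Ocelot mould would have to reimplement: JIT-loading a PTX module at run time. The file name kernel.ptx and the kernel name "scale" are hypothetical; the PTX would come from something like `nvcc --ptx` run on an `extern "C" __global__` function, and error handling is omitted.

```cpp
// Sketch: JIT-load a PTX module through the CUDA driver API.
// Assumes kernel.ptx exists and exports extern "C" __global__ void scale(float*).
#include <cuda.h>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

int main() {
    std::ifstream f("kernel.ptx");              // hypothetical PTX file
    std::stringstream ss; ss << f.rdbuf();
    std::string ptx = ss.str();

    cuInit(0);
    CUdevice dev;   cuDeviceGet(&dev, 0);
    CUcontext ctx;  cuCtxCreate(&ctx, 0, dev);

    // This is where the closed-source driver JIT-compiles PTX for whatever GPU
    // is present. Apps that ship only arch-specific cubins never hit this path,
    // which is what makes a drop-in replacement runtime hard to use in practice.
    CUmodule mod;   cuModuleLoadData(&mod, ptx.c_str());
    CUfunction fn;  cuModuleGetFunction(&fn, mod, "scale");

    const int n = 1024;
    CUdeviceptr dbuf; cuMemAlloc(&dbuf, n * sizeof(float));
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;
    cuMemcpyHtoD(dbuf, host, n * sizeof(float));

    void *args[] = { &dbuf };
    cuLaunchKernel(fn, n / 256, 1, 1, 256, 1, 1, 0, nullptr, args, nullptr);
    cuCtxSynchronize();

    cuMemcpyDtoH(host, dbuf, n * sizeof(float));
    printf("host[0] = %f\n", host[0]);
    cuMemFree(dbuf);
    return 0;
}
```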

The best argument for going with NVIDIA is the horrible reputation AMD has with drivers. Even if they’re great now (and apparently they’re not), if you don’t see a lot of stuff running solidly on the AMD platform, you’ll be wary of investing in it, especially since they’re a tiny sliver of the professional market. It’s a chicken-and-egg problem.

That’s also another reason why the Boltzmann project is kind of pointless. Why would I migrate my code to depend on something AMD makes, so that I can still compile my code for CUDA platforms? From a business perspective, that’s insane. Otoy’s approach of translating a subset of CUDA to something else makes more sense, because it doesn’t risk breaking what you already have.

True. Let’s hope ProRender is the best solution for AMD, but I don’t think it can keep up with industry standards.

Boltzmann isn’t about building a CUDA compiler; it’s a platform to help port CUDA code (many GPGPU projects have already been written in CUDA) so that investments already made can also be leveraged on OpenCL-capable hardware, and not just AMD’s either.

It just helps convert CUDA to a portable C++ GPGPU alternative that most people say runs just as fast as the original CUDA if done right.
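
To show how mechanical that conversion usually is, here is a minimal sketch of a CUDA-style SAXPY kernel after a HIP port. It’s my own toy example (not Otoy’s or AMD’s code), assumes a working HIP/ROCm install, and leaves out error checking.

```cpp
// Sketch of a CUDA kernel ported to HIP: the kernel body is unchanged,
// and the host calls are a mechanical cudaX -> hipX rename.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // same builtins as the CUDA original
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    float *dx, *dy;
    hipMalloc((void **)&dx, n * sizeof(float));      // was cudaMalloc
    hipMalloc((void **)&dy, n * sizeof(float));
    hipMemcpy(dx, x.data(), n * sizeof(float), hipMemcpyHostToDevice);  // was cudaMemcpy
    hipMemcpy(dy, y.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // was: saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);
    hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0, n, 2.0f, dx, dy);

    hipMemcpy(y.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);                     // expect 4.0
    hipFree(dx);
    hipFree(dy);
    return 0;
}
```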

Otoy have an OpenCL version of their engine but have chosen CUDA as their platform because Apple and Nvidia don’t even try to support OpenCL properly anymore. Hence, in their eyes, supporting OpenCL is a waste of time, as those platforms refuse to keep up to date with the standard.

Rather than moaning about AMD, maybe Nvidia should be pushed to properly support platforms other than CUDA, to give their customers better choice and support.

And if you want to talk about innovation in GPU compute, just look at AMD’s new offerings: a whole new GPU memory system design that lets GPUs address up to 512 terabytes spanning not just graphics memory but SSDs and main system memory, pro cards with 1 TB memory support, etc. That is the way forward; why would AMD at this point even care about CUDA?

OpenCL, IF supported correctly, can do anything CUDA can. For example, Cycles’ OpenCL backend still sticks to OpenCL 1.1 I think, at best 1.2.

OpenCL 2.1–2.2 is a whole new beast that, once supported, is far more the equal of Nvidia’s CUDA, but to keep compatibility with older hardware OpenCL 1.2 is mostly what gets used at this point.
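
As one concrete example of what the 2.x standards add (my own minimal sketch, nothing to do with the Cycles code): OpenCL 2.0 shared virtual memory lets the host and the kernel work on the same pointer, CUDA-style, instead of shuffling explicit cl_mem buffers. The host code below assumes an OpenCL 2.0 capable driver and omits error handling.

```cpp
// Sketch of OpenCL 2.0 coarse-grained shared virtual memory (SVM).
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <cstdio>

static const char *src =
    "__kernel void scale(__global float *data, float factor) {"
    "    size_t i = get_global_id(0);"
    "    data[i] *= factor;"
    "}";

int main() {
    cl_platform_id platform;  clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;      clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueueWithProperties(ctx, device, nullptr, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, "-cl-std=CL2.0", nullptr, nullptr);  // needs a 2.0 driver
    cl_kernel kernel = clCreateKernel(prog, "scale", nullptr);

    const size_t n = 1024;
    // SVM allocation: the same pointer is used on the host and passed to the kernel as-is.
    float *data = (float *)clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);

    clEnqueueSVMMap(queue, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float), 0, nullptr, nullptr);
    for (size_t i = 0; i < n; ++i) data[i] = (float)i;
    clEnqueueSVMUnmap(queue, data, 0, nullptr, nullptr);

    clSetKernelArgSVMPointer(kernel, 0, data);
    float factor = 2.0f;
    clSetKernelArg(kernel, 1, sizeof(float), &factor);
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clFinish(queue);

    clEnqueueSVMMap(queue, CL_TRUE, CL_MAP_READ, data, n * sizeof(float), 0, nullptr, nullptr);
    printf("data[10] = %f\n", data[10]);   // expect 20.0
    clEnqueueSVMUnmap(queue, data, 0, nullptr, nullptr);

    clSVMFree(ctx, data);
    return 0;
}
```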

Here’s a test branch I did a while back that I’m not working on anymore due to accidentally deleting the code (beer can be a distraction).

It’s for OpenCL users:

Denoise branch V4 from Lukas (which, for OpenCL users, wouldn’t compile the kernels even if denoising wasn’t activated, so I built a branch where CPU denoising is available while GPU OpenCL rendering still works, but MAKE SURE DENOISE IS OFF or you get a blue screen of death). As far as I know, once these patches are added, OpenCL Cycles will do everything CUDA Cycles can, but faster, I’ve been told, since SSS on CUDA is slower than on the CPU, yet this is about 20% faster.

Also added OpenCL GPU SSS and volumes, plus approximate AO for lighting after a set GI bounce level.

There’s a compositor anti-aliasing node and a few other goodies. I’ll have to start from scratch, though, as I deleted this branch by accident.

New old build: https://mega.nz/#!FoIiBYoS!qlZaylmDOMRBrjB3Quo2oI4lO5SAXiXK7hWHIbs2mO4

Also, this is a quick and dirty test of AMD’s Radeon Rays intersection API, which powers Radeon ProRender. It’s CRAZY fast:

Except it doesn’t actually use OpenCL. For NVIDIA GPUs, it just translates right back to CUDA. For AMD GPUs, it uses HCC and that only works with a few AMD GPUs, only on Linux, only with a special driver. For that “benefit”, I’m asked to put yet another crappy tool between my code and the hardware, then when AMD stops supporting that stuff (because nobody cared to use it in the first place), I can go “fix it myself” because it’s OPEN. Great deal, eh?

Rather than moaning about AMD, maybe Nvidia should be pushed to properly support platforms other than CUDA, to give their customers better choice and support.

How? By buying AMD GPUs out of spite?

No bud, like I was saying, AMD have a compute platform that’s progressing OpenCL 2.0+, which is very C++-like, rather than OpenCL 1.2, which is more C-like. A big chunk of the reason CUDA has been easier to code for is its more C++ nature, but newer OpenCL implementations solve that.

The Nvidia thing: what I meant was that Nvidia users should canvass Nvidia to better support OpenCL on Nvidia platforms, to give them, the customer, better choice and higher compatibility. Just because they have CUDA shouldn’t mean bad OpenCL support.

But AMD’s stuff is always a hundred bucks cheaper than Nvidia’s. There is no reason why Nvidia would help people buy AMD by offering good OpenCL support on Nvidia hardware. If CUDA support gets bad, then Nvidia users will complain to the developers of the apps… not to whoever writes the drivers for the cards.

I’m talking about the Boltzmann thing (HIP). It doesn’t target OpenCL and it doesn’t let you convert CUDA so that it runs on OpenCL platforms. It is something that by design works more like CUDA, which is not only a C++-like language but integrates into actual C++ code. That’s why HIP needs to integrate with an actual C++ compiler, in this case HCC.

It’s also closer to C++ AMP, which would compile sections of C++ code into something that can be offloaded to the GPU. IIRC, that never ran on top of OpenCL either, but on ordinary shaders.
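
For what it’s worth, here is a tiny sketch of that C++ AMP style (Microsoft-only and since deprecated), just to show the single-source idea that HIP/HCC follow; it’s my own illustration, not code from any of the projects discussed.

```cpp
// Sketch of C++ AMP: a marked-up section of ordinary C++ is compiled for
// and offloaded to the GPU by the runtime (MSVC only).
#include <amp.h>
#include <vector>
#include <iostream>

int main() {
    std::vector<float> data(1024, 1.0f);
    concurrency::array_view<float, 1> av(static_cast<int>(data.size()), data);

    // The lambda body is restricted to GPU-capable code and offloaded automatically.
    concurrency::parallel_for_each(av.extent,
        [=](concurrency::index<1> idx) restrict(amp) {
            av[idx] = av[idx] * 2.0f;
        });

    av.synchronize();                                    // copy results back to the vector
    std::cout << "data[0] = " << data[0] << std::endl;   // expect 2
    return 0;
}
```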

The Nvidia thing: what I meant was that Nvidia users should canvass Nvidia to better support OpenCL on Nvidia platforms, to give them, the customer, better choice and higher compatibility. Just because they have CUDA shouldn’t mean bad OpenCL support.

Why would NVIDIA users care, when there are no OpenCL 2 applications? There’s little incentive to write OpenCL 2 applications in the first place, because that doesn’t increase the target audience significantly over CUDA and because it’s not proven to work well. AMD needs to prove the platform is stable, and it needs to increase its installed base. Like I said, that’s a chicken-and-egg problem.

If I still believed GPGPU platforms were attractive, of course I would hope that OpenCL 2 wins out, but I don’t see it happening.

Pay the extra 100 and get a GPU that actually works. I can’t believe there are so many Radeon apologists on a Blender website, where it has taken so many YEARS to get OpenCL working, even with help directly from AMD.

Read the rest of my comment.

You have AMD GPUs, so why not go for LuxRender’s new LuxCore API? It’s really fast and more mature than most OpenCL renderers (V-Ray / Indigo). I myself use 2x R9 390 every day for serious work on Blender + LuxCore, aka LuxRender.