GPUOpen: AMD's open-source answer to GameWorks?

Since I’m GMT+2 I see it as posted on the 1st. :wink:

Wccftech can read like an April Fools’ joke on a good day, especially when you notice the likelihood that many users don’t read the articles at all in favor of the endless Nvidia vs. AMD slugfest in the comments.

That is why I’ve been making more of an effort not to link to that site at all; there are other sites that might have a more accurate analysis (even if it was sourced from a Wccf story).

Hey Ace (and others).
Indeed, Radeon Rays is meant to be a platform-independent ray tracing library rather than one that locks you to a single vendor. It should be noted that it forms the basis of the ProRender software.

What we announced recently was two things: the Vulkan-EZ library, a high-level wrapper around Vulkan for game engines and viewport-type applications, and a library that uses it for first-hit ray tracing and then ray-traced effects like shadows, reflections, etc. Anyway, this is directly meant to be used in 3D viewports similar to EEVEE, as you can see from the presentation.
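To make the “first hit, then ray-traced effects” idea concrete, here is a toy Python sketch of the second stage: given a shading point (which a real engine would get from rasterizing the first hit), a shadow ray is traced toward the light. This is purely illustrative and assumes nothing about the actual library API; the scene, `hit_sphere`, and `shadowed` are all made up for the example:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return distance t to the nearest ray-sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c              # direction assumed normalized
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Scene: one occluder sphere hanging between the floor and a point light.
occluder = ([0.0, 1.0, 0.0], 0.5)
light = [0.0, 3.0, 0.0]

def shadowed(point):
    """The ray-traced 'effects' pass: cast a shadow ray to the light."""
    to_light = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(x * x for x in to_light))
    t = hit_sphere(point, normalize(to_light), *occluder)
    return t is not None and t < dist

# The first hit would normally come from rasterization; here we just
# pick two shading points on the floor.
print(shadowed([0.0, 0.0, 0.0]))   # directly below the occluder -> True
print(shadowed([3.0, 0.0, 0.0]))   # off to the side -> False
```

The same structure extends to reflections: instead of a shadow ray toward the light, you would trace a ray along the mirrored view direction from the rasterized hit point.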

If you have any questions about this we’re listening :wink:

The driver issues regarding OpenCL were on Blender’s side, not AMD’s. It took AMD writing the new stack for Blender to show them how to write OpenCL optimally and correctly.

Not entirely true. Dig through the bug tracker and the mailing list. A lot of applications were having trouble with AMD’s OpenCL stack; that’s why they devoted time to it. The difference is that with Blender they actually have access to the source code and can therefore make more proactive changes.

Here is a link to the developer of LuxRender complaining about AMD’s OpenCL compiler: http://www.luxrender.net/forum/viewtopic.php?f=34&t=11009

LuxRender is still one of the best OpenCL renderers, but it obviously took some hair-pulling to get it working.

AMD’s shoddy compilers are not Blender’s fault. The fact that it took a team of AMD developers over a year to fundamentally restructure the core Cycles code to work around faulty AMD compilers says more about AMD’s failure than Blender’s.

Well, nobody’s perfect. Also, don’t think even for a moment that Nvidia’s team is any better with their tech; they just hide their mistakes more efficiently, since they are as closed as a pharaoh’s tomb. Yet even they are disappointing when it comes to dealing with higher-level complexity or OpenCL compilers.

Finally, we’re all mere humans making mistakes, and we’re developing gradually.
The main difference is that some turn to war to resolve their problems, while others take responsibility for their actions and fix them in peace.
Can you imagine the whole world as an open, sharing community?
Nah, boring… :ba:

AMD has yet to release this Vulkan wrapper and all the related code for FireRays or ProRender. We were promised ProRender would be open-sourced, yet years later that hasn’t happened. What they actually did was rebrand the Baikal renderer, an experiment by Dmitry Kozlov; when I was testing the ProRender closed beta for Blender, the AMD staff running the beta had no idea what it was.

Now on the GPUOpen page that’s being called Open ProRender? Big fucking mess. I was lucky enough to be sent an AMD FirePro W9100 by Dmitry when FireRays was first released and the experimental OpenCL path tracer he was working on first appeared. I worked with it too, with a real-time renderer in mind from the beginning, so I focused on how real-time rendering could use this mixed with rasterization. I actually had to ask the AMD devs on the ProRender closed beta to take a look at Dmitry’s work, as they, the ProRender devs, had no idea it existed. Now it’s rebranded as ProRender? This was an old video on one card with an old GPU (even if it was a FirePro):

AMD, if you say you’re going to open-source something, then do it. Don’t then realise you got caught out and rebrand something else under that name to try to cover it up. This new rasterization and ray tracing engine with a Vulkan wrapper now seems to be only for AMD partners. All of this is a major mess.

It’s quite simple: they have more money to hire and retain developers.
To quote Ballmer “Developers, developers, developers”.

I’m well aware of the economics… yet even a hundred billion gazillion people haven’t written Hamlet or made alternating current possible; those came from mere passionate individuals :wink: There are far more developed and devoted minds in academic circles, yet we all know where the money flows :rolleyes: What I wrote above is simple: any effort to defend or promote one entity or the other is pointless (same as in politics); there’s no good choice when both are bad. So sometimes the community must intervene.

AMD technically has better ethics, but PC power users have a tendency to gravitate towards what they know will deliver the performance without too much trouble. It’s just that Nvidia has had a better track record with their drivers (even though some versions have come with their own notable issues as well).

Part of it may also be that people optimize and write for Nvidia first and then worry about AMD (since Nvidia has the largest market share). In areas like some GPGPU work, though, AMD really has been in need of improvement.

Disclaimer: I do have an AMD GPU that is several years old now, and it has worked for what I needed it to do (though I hope to upgrade to a new PC with a much better GPU soon).

Oh, so true… and I know I’m part of a minority, as I prefer the complex over the primitive in most cases (note: to simplify, complexity must first be reached). Seeing tech go back to biased engines and techniques, I’m personally left somewhat disappointed.

But still, the advantage in all of this is that artistic skills will again gain traction and value :wink:
For more than a decade I’ve been hearing: “Why don’t my renders (RT) look as good as yours (PT)…?”
“It’s about precision, baby!” (in Austin Powers’ voice)… and it looks like the trend will continue :smiley:

All in all, all good… still. :yes:

Let’s not conflate ethics with marketing. Nvidia does have bigger market share and more cash for R&D. This influences the business decisions they make. AMD has a smaller market share and less available cash. This influences the business decisions they make.

Nvidia is in a good position to push a closed standard: they have a large market share and a ton of cash flow to develop a refined end-to-end solution.

AMD is not in a good position to push a closed standard. They don’t have the market share to pull it off, or the R&D to execute it smoothly, so they have to push open standards. It’s a way of all the littler fish banding together against the big fish.

Don’t delude yourself into thinking that they wouldn’t do exactly the same thing if they had the cash flow that Nvidia does. AMD is not a non-profit or a charity; it is a for-profit company that just happens to be doing a worse job of it than Nvidia.

Well, not knowing there’s shit at both ends is kind of a blessing.
But I wouldn’t go as far as to generalize and say one is better than the other; it’s all about specific needs.

… and fanboyism, coupled with “old habits die hard”, is a kind of heavy sickness.

Blender IS pushing for open standards and should use them.

All this “open” branding is just more would-be Robin Hood marketing. Most of this stuff isn’t particularly interesting, and some of it even requires FirePro hardware. There’s no way to make any money off of it, so of course they can just dump it on GitHub under a FOSS license. Guess what: NVIDIA does exactly the same under the GameWorks brand.

NVIDIA also has a lot of open source stuff that nobody really cares about. Those things that NVIDIA doesn’t open-source are actually valuable or otherwise strategically important. AMD just doesn’t really have that, except maybe their Windows GPU drivers (which of course aren’t “open” at all).

The best-working option generally isn’t an “open standard”, though; it’s DirectX on Windows, Metal on macOS, and CUDA on NVIDIA hardware. Ideally, Blender should use whatever works best. The selling point of the “open” interfaces used to be cross-platform support, but in reality you can end up spending more effort debugging wildly different implementations than writing against the best-supported API (your mileage may vary).

Utter nonsense! By the time AMD planted their developer on Cycles, they were already on (AFAIR) the third iteration of their OpenCL driver stack. That’s when it actually started working with larger kernels. Before that, pretty much everybody who attempted large kernels either gave up on AMD or on OpenCL altogether, or had severe difficulties. The NVIDIA driver was much better.

The idea that a program like Cycles could even be made “optimal” or “correct” is pure fantasy; there are way too many factors in play. The AMD developer split up the kernel so that it would actually compile, but the megakernel generally performed better, at least on NVIDIA hardware. This is an area of active research, and you can easily find anecdotes of megakernels underperforming in other renderers. The choice of language shouldn’t even make a big difference here (most of Cycles’ OpenCL code is used on CUDA as well), but in practice it does, because the compilers are different.
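For readers unfamiliar with the megakernel-vs-split-kernel distinction, the structural difference can be shown with a toy CPU-side Python analogy. Real GPU kernels exist to manage register pressure and compile times, which this sketch cannot demonstrate; the only point here is that one big pass and several staged passes over a “wavefront” of rays compute the same result (all names and the stand-in arithmetic are invented for the example):

```python
def megakernel(rays):
    """Everything in one pass: per-ray, intersect then shade.
    All per-ray state stays live across the whole function, which on a
    GPU is what drives register pressure and kernel size up."""
    out = []
    for r in rays:
        hit = r * 2          # stand-in for the intersection step
        col = hit + 1        # stand-in for the shading step
        out.append(col)
    return out

def stage_intersect(rays):
    # Split version, stage 1: run intersection for the whole batch,
    # writing intermediate state to a buffer between stages.
    return [r * 2 for r in rays]

def stage_shade(hits):
    # Split version, stage 2: shade the buffered hits.
    return [h + 1 for h in hits]

def split_kernel(rays):
    return stage_shade(stage_intersect(rays))

rays = [0, 1, 2, 3]
assert megakernel(rays) == split_kernel(rays) == [1, 3, 5, 7]
```

Which structure is faster depends on the hardware and the compiler, which is exactly the point being argued above: the split form mainly exists so the kernels compile at all on the weaker toolchain.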

To illustrate how big an issue this is, consider that the Octane developers went as far as creating a CUDA transpiler so they could run on the better-developed DirectX/HLSL stack instead of OpenCL. It’s been a while since that was announced, though, so that attempt probably failed.

Either way, the solution can’t be “you get a dedicated AMD developer on your team” just so that your product can work on AMD’s OpenCL stack. You might as well use CUDA (or even OpenCL on NVIDIA) and capture “only” 80-90% of the market.

not all of us play primitive games with primitive toys
and most of the time there’s a need to kick all sides around to get their act straight…
btw don’t get lost in either direction as it certainly leads far from others

No, you play sophisticated games with sophisticated toys.

I’m simply not buying AMD’s “we are the victims of evil business practices, that’s why we failed” narrative here. AMD/ATI has been consistently shipping unreliable software for their generally good hardware since time immemorial. That just doesn’t fly with developers who don’t make videogames. If you make an important videogame, AMD will fix their drivers for you (on Windows, of course). If you develop specialized software for maybe 100 users, you’re out of luck.

AMD has earned their failure, NVIDIA has earned their success. NVIDIA made a bet on GPGPU early on, they made their hardware fit that vision at the cost of making it less competitive for games (which is why Fermi GPUs were so good at GPGPU). Maybe the problems with OpenCL on AMD were actually due to incapable hardware for which no reliable driver could ever be produced, but if that’s the case, that could’ve at least been communicated to the developers who wasted their time on it. Instead, you got opaque failures from compilers running out of memory.

Believe me, there are a lot of people across industry and research who aren’t happy that almost all relevant GPGPU software is either “CUDA only” or “better on CUDA”. At the end of the day, these people need a reliable solution. Maybe AMD is reliable today, who knows. The software has already been written and the hardware has already been installed. AMD now expects you to convert your CUDA code (manually!) with their own translator so that it runs on their (non-OpenCL) compute platform, which only runs on Linux. Good luck with that!
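For context, the translator in question works largely by renaming CUDA runtime calls to their HIP equivalents (e.g. cudaMalloc becomes hipMalloc), leaving the kernel logic itself mostly intact; the “manual” part is cleaning up what the rename cannot handle. A crude toy version of that renaming idea, not the real hipify tool, might look like this in Python:

```python
import re

# Toy, hipify-style source translator: rename a few CUDA runtime
# identifiers to their HIP equivalents. The real tools (hipify-perl,
# hipify-clang) are far more thorough; this only illustrates what
# "converting" means here. The mapping entries below are real API pairs.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
}

def hipify(source: str) -> str:
    # \b guards keep us from rewriting substrings of longer identifiers.
    pattern = re.compile(r"\b(" + "|".join(CUDA_TO_HIP) + r")\b")
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)

cuda_src = "cudaMalloc(&d_a, n); cudaMemcpy(d_a, a, n, cudaMemcpyHostToDevice);"
print(hipify(cuda_src))
# -> hipMalloc(&d_a, n); hipMemcpy(d_a, a, n, hipMemcpyHostToDevice);
```

Anything a textual rename can’t express, such as inline PTX or library calls with no HIP counterpart, is exactly the residue you end up porting by hand, which is the complaint above.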

AMD takes a different approach with OpenCL computing. For best results, pair an AMD GPU with an AMD CPU.