Games for *different* architectures

(justwannapost) #1

…OK, so now that Intel has jumped into the fray with their Xe realtime-raytracing cards, and AMD’s Navi is due in a few months, there are a whole lot of different GPUs out there that one can make games for. Learning all these different architectures is not humanly possible for a single programmer, not even the best programmer in the world! So the way I think it’s going is: we’re now supposed to use engines to create our games, not actually write them from scratch in a code editor or something.

Right, but when I take a look at Armory’s or Godot’s page (btw, I’ve never used them), it says “generates EXEs for Linux just fine”, yet it doesn’t mention which GPU it’s generating the code for!! So does this mean we’ll need 3 separate EXEs of our game, one each for the Nvidia, AMD, and Intel architectures? And if so, how are we going to get them out of the ONE game that we’ve actually made in the engine?!! Of course, each EXE must take advantage of the facilities of THAT architecture, otherwise the whole reason for HAVING 3 of them becomes meaningless, i.e. if Intel’s card has a “better” raytracer under the hood, I expect my game to look better on Intel… right?
So - how does one do this?

Thanks for your help, and let’s celebrate the fact that we’ve now got 3 architectures to pick from, because if those 3 are competing with each other, the only way card prices are going is… down :smile:

(LordRaven) #2

Game engines hide this part from you.
The engine only creates one EXE per platform (Win32, Win64, Linux, macOS). When it runs, it detects which graphics API is present and uses that. Windows: OpenGL/Vulkan/DirectX; Linux: OpenGL/Vulkan;
macOS: OpenGL/Metal.
OpenGL/DirectX/Vulkan/Metal in turn use drivers to access the hardware.
As you can see there are several layers of abstraction:
You -> Game Engine -> Graphics API -> Driver -> Hardware
So you never have to concern yourself with this, because the problem is taken care of by the abstraction layers.
(Game engines usually have dynamic libraries for each API they can load.)
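
To make that concrete, here’s a minimal C++ sketch of how that startup decision might look. The backend names and the `tryInit*` functions are stand-ins I made up for illustration (a real engine would try something like `vkCreateInstance` or creating a GL context), but the pattern is the same: try the preferred API first, then fall back down the list.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Stand-ins for real backend initialisers. In a real engine these would
// attempt to load the API's dynamic library and create a device/context.
bool tryInitVulkan() { return false; } // pretend no Vulkan driver is installed
bool tryInitOpenGL() { return true;  } // OpenGL is almost always available

int main() {
    // Backends ordered by preference; the first one that initialises wins.
    std::vector<std::pair<std::string, std::function<bool()>>> backends = {
        {"Vulkan", tryInitVulkan},
        {"OpenGL", tryInitOpenGL},
    };
    for (const auto& [name, init] : backends) {
        if (init()) {
            std::cout << "Using " << name << " backend\n";
            return 0; // hand this backend to the renderer and carry on
        }
    }
    std::cerr << "No supported graphics API found\n";
    return 1;
}
```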

In short: When you use a game engine, you don’t have to care about GPU architectures. The game engine/graphics API does this for you.

2 Likes
(justwannapost) #3

Right, but… it will take advantage of whatever card the user has, right? Like, any special features or whatever, as I said above…?

(Daedalus_MDW) #4

I would say that, unless you went out of your way, “special” features will be avoided to improve maximum compatibility.

It’s the standards that are prioritized.
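
As a rough sketch of what “going out of your way” looks like underneath (assuming a Vulkan renderer, the Vulkan SDK headers, and that you’ve already picked a `VkPhysicalDevice`; error handling trimmed): you ask the driver whether it advertises an optional extension, and only enable the fancy path if it does.

```cpp
#include <cstring>
#include <vector>
#include <vulkan/vulkan.h>

// Returns true if the GPU's driver advertises the given device extension.
bool supportsExtension(VkPhysicalDevice gpu, const char* wanted) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, wanted) == 0) return true;
    return false;
}

void chooseRenderPath(VkPhysicalDevice gpu) {
    // Hardware ray tracing is an optional extension, never assumed.
    if (supportsExtension(gpu, "VK_KHR_ray_tracing_pipeline")) {
        // ...enable the extension at device creation, build RT pipelines...
    } else {
        // ...fall back to the standard rasterised path every card supports...
    }
}
```

Engines do this kind of capability query under the hood; from your side it’s usually just a quality option in the project settings rather than code you write yourself.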

(justwannapost) #5

Then what is the point of another card? If the game looks the same across all architectures, why should a user opt for Manufacturer B’s graphics card as opposed to Manufacturer A’s?

And how WOULD I “go out of my way”, if I so chose? :smile:

#6

It seems like it’d be the same as with web browsers. 98 percent of the web looks the same whether you’re using Firefox, Edge, Chrome, whatever. That doesn’t stop people, even those who would never notice the 2 percent that’s different, from arguing all day long about which browser they identify with. People love to pick teams.

And if you’re a company releasing browsers or video cards or whatever, there’s usually a reasonably good chance of money in there somewhere, otherwise you wouldn’t jump into the fray. :slight_smile:

1 Like