Vulkan GPU rendering

Using tons of draw calls is not “technically advanced”. You can actually issue more draw calls on consoles than on D3D11; the consoles had low-overhead APIs first. You can also access some hardware features there that aren’t exposed in either D3D12 or Vulkan.

Low-overhead APIs don’t make your GPU faster; they make interfacing with your GPU faster. If there’s no significant API overhead in your game or application, then there’s no performance benefit from using Vulkan or D3D12.
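To make that concrete, here’s a minimal sketch of where the saving actually lives: draw calls are recorded into a command buffer up front, and the expensive validation/translation work is deferred to submission. This assumes the command buffer, pipeline and render pass info were created elsewhere, and that the pipeline’s vertex shader generates its own vertices (no vertex buffers), just to keep the example short.

```c
#include <vulkan/vulkan.h>

/* Sketch: record draw_count draw calls into a command buffer once, then
 * reuse it every frame. cmd, pipeline and rp_begin are assumed to have been
 * created elsewhere; the bound pipeline's vertex shader is assumed to
 * generate its own vertices, so no vertex buffers are needed. */
void record_draws(VkCommandBuffer cmd, VkPipeline pipeline,
                  const VkRenderPassBeginInfo *rp_begin, uint32_t draw_count)
{
    VkCommandBufferBeginInfo begin = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
    };
    vkBeginCommandBuffer(cmd, &begin);
    vkCmdBeginRenderPass(cmd, rp_begin, VK_SUBPASS_CONTENTS_INLINE);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);

    /* Each vkCmdDraw just appends a few words to the command buffer;
     * the driver does the heavy lifting later, at submission time. */
    for (uint32_t i = 0; i < draw_count; ++i)
        vkCmdDraw(cmd, 3, 1, 0, 0);

    vkCmdEndRenderPass(cmd);
    vkEndCommandBuffer(cmd);
}
```

The per-draw CPU cost is tiny and the recorded buffer can be resubmitted as-is, which is exactly why this only helps if API overhead was your bottleneck in the first place.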

This is complete nonsense. Critical security errors are discovered in major free software packages almost every day. Those are usually simple programming bugs (famous example: Heartbleed). A malicious actor could plant one of them and have complete plausible deniability. You’re assuming people actually read and audit all that code (not really); in reality those bugs are often years old. Remember the WannaCry/EternalBlue exploit, which targeted Windows networking (SMB)? It turns out that the FOSS implementation, Samba, had a very similar security hole.

If Linux were already secure, why do grsecurity and SELinux exist for security hardening? The idea that FOSS magically makes software more secure is false and dangerous. The converse (that closed source makes software secure) isn’t true either.

With proprietary software, however, you cannot check what’s in the code… so how do you know the devs didn’t hide a means of spying on you, a way to facilitate viruses, or an OS kill-switch?

How do you know there’s no means of spying in the firmware of the dozens of components in your computer? If you’re concerned, you’d better watch your Ethernet packets…

I don’t like relying on a program that will go away if the group behind it abandons it or ceases to exist.

Fair enough, but graphics drivers aren’t evergreen software. You’re going to buy a new GPU at some point, and it will need new drivers.

You’re more concerned with ideology than functionality, and the world will never bend over backwards to accommodate you. You are limiting yourself as an artist, and as a creator in general. Neither company cares about your desire for “open” standards. They care about pleasing their shareholders.

To be fair to Mircea Kitsune, all of this talk of pragmatism in the name of getting a job done is only relevant if you’re a commercial artist (and even then, it’s still not always relevant). Fine art, on the other hand, is built on ideologies. To that end, I don’t think it’s fair to disparage anyone who takes an ideological stance when it comes to producing their work. Everyone draws their line somewhere. In my case, I’ll make exceptions for situations where options are extremely limited (like some hardware), but I like to consider myself an exclusively open source creative… and I can make a decent living even with that requirement. The location of that line is completely arbitrary and varies from person to person, and it doesn’t do anyone any good to take that lightly.

Nobody is getting disparaged. Everybody should have their ideologies questioned. They’re always full of myths and false ideas.

It’s impossible to use any mainstream computer today without running undisclosed proprietary code. Why draw the line at graphics drivers? It makes no practical sense. What about the BIOS on the GPU that the driver talks to? That’s not going to be FOSS either.

The funny part about people saying you shouldn’t base your decisions on ideology is that even their own decisions are partly influenced by it.

Having a tinge of ideology in the things you do and support is pretty much impossible to avoid, simply because of how the human mind works (unless you are actually part Vulkan or something). There are people out there who claim their knowledge of the facts leads them to believe that Blender is so abhorrently bad that the very thought of using it disgusts them (even today, in the 2.7x era; look at the communities of other programs like Modo and Lightwave for proof), yet here we are asserting that Blender today is worth using for professional work.

The only point I made here is this: if the developers of a popular application or library tried striking a secret deal with a government to implement a hidden function that sends users’ private data to a third party (for example), it would be much harder to do if the source code is available. That’s because if the code is FOSS, you can expect people to see it on a daily basis… it’s only a matter of days until someone takes a look at the files on GitHub, says “what’s this”, points it out to everyone, and the owners of that program lose their credibility forever. In a proprietary program, however, the secret function can be hidden in the code much more carefully… it will only be discovered in the unlikely event that someone decompiles the binary into obfuscated source code, and because of how messy that is, the developers can even then say “you got the wrong idea, it’s just a glitch we didn’t notice”.

I never said that FOSS software is “magically secure”. I do however believe it tends to be more secure, due to the sharing of code and ideas across many more parties, and also because an OS like Linux uses a slightly better structure than Windows. One should wonder why viruses for Windows appear all the time, to the point where everyone has a (cough) proprietary antivirus program installed… yet viruses on Linux are practically a myth, so rare that we aren’t even sure how many exist in the world. Security flaws are present in every OS though, and it would be silly to claim that even Linux is 100% secure.

No fundamental disagreement here… but questions end in question marks. Up until now, all of the responses in this thread have been statements and assertions.

Like I said, everyone has their own line in the sand… and it is certainly arbitrary. If I decided tomorrow that opposable thumbs are evil things and that henceforth I’ll never again paint with my hands, I would absolutely expect people to question me as to why and tell me that they wouldn’t do it that way. However, if I ask for thoughts on how I might be able to paint using my feet, I’d appreciate it if folks gave ideas about that rather than just saying I should pragmatically decide to use my hands. That’s all I’m saying.

In this case, Vulkan probably isn’t the right fit for general-purpose GPU computing (like rendering)… and it seems that most people are also of that mind. That particular method of foot-painting probably isn’t going to work. This discussion could’ve easily ended there (e.g. “Sorry, Vulkan isn’t the right fit. OpenCL is the existing standard, it’s just not quite ‘there’ yet.”).

You’re repeating the same wrong thing in weaker words. What you claim is simply not true; it’s a fantasy. I gave you the example of Heartbleed: a critical security issue inside a very widely used FOSS transport encryption library (OpenSSL) that went undetected for years. There was debate about whether it was an honest mistake or a plant, but there were no serious consequences.
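For a sense of how innocuous such a bug looks in source form, here is a heavily simplified sketch of the Heartbleed pattern. It is not OpenSSL’s actual code, just the same class of mistake: trusting an attacker-supplied length field.

```c
#include <stdint.h>
#include <string.h>

/* Heavily simplified sketch of the Heartbleed class of bug, not OpenSSL's
 * real code: the length claimed by the peer is trusted without checking it
 * against the size of the data actually received. */
size_t build_heartbeat_reply(const uint8_t *payload, size_t payload_len,
                             uint16_t claimed_len, uint8_t *reply)
{
    /* Missing bounds check, e.g.: if (claimed_len > payload_len) return 0; */
    memcpy(reply, payload, claimed_len);  /* may read past the buffer and
                                             echo adjacent heap memory back */
    return claimed_len;
}
```

One missing comparison, in code that was publicly visible the whole time.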

To find some of these errors, reading the code isn’t even necessary. It’s just a bunch of work that somebody has to do. Who is more likely to do it? Unpaid FOSS developers or profit-driven criminals and government agencies?
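One common way to do that work without reading a single line of code is coverage-guided fuzzing. A minimal libFuzzer-style harness looks roughly like this; `parse_record` and `parser.c` are hypothetical stand-ins for whatever library function is being probed:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical entry point under test; in practice this would be a real
 * parsing/decoding function exported by the library you want to probe. */
int parse_record(const uint8_t *data, size_t size);

/* libFuzzer calls this millions of times with mutated inputs and reports
 * any crash, hang or sanitizer violation it triggers. Build with something
 * like: clang -g -fsanitize=fuzzer,address harness.c parser.c */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_record(data, size);
    return 0;  /* inputs that don't misbehave are simply discarded */
}
```

Criminals and agencies run exactly this kind of tooling at scale; whether unpaid maintainers do is another question.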

I never said that FOSS software is “magically secure”.

The process by which FOSS becomes secure, according to you, is magic. It doesn’t really exist.

I do however believe it tends to be more secure, due to the sharing of code and ideas across many more parties, and also because an OS like Linux uses a slightly better structure than Windows.

Instead of believing that fairy tale, you should learn a bit more about how security issues arise in the real world.

One should wonder why viruses for Windows appear all the time, to the point where everyone has a (cough) proprietary antivirus program installed… yet viruses on Linux are practically a myth, so rare that we aren’t even sure how many exist in the world. Security flaws are present in every OS though, and it would be silly to claim that even Linux is 100% secure.

The answer is simple: why would anybody waste their time writing a virus to target the rather tech-savvy 1% of users on Linux (or even the 5% on Mac) when 90% of users are on Windows? Linux also makes it somewhat difficult to actually run programs, but that’s a usability failure, not a security feature. Viruses on Linux are not a myth; they’re just rare.

If there’s a code-execution vulnerability in (for example) your Firefox browser that you run on Linux, there is no mechanism in a standard installation that protects you any more than what is enabled in a standard Windows installation. For something like WannaCry, the only thing that would be more difficult to implement on Linux would be to display the ransom screen. All your files could be encrypted or deleted with simple user privileges.
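A harmless way to see what “simple user privileges” already buy an attacker: a minimal sketch that just counts how many files under $HOME the current unprivileged user could overwrite. Any code execution inside your browser or mail client runs with exactly these rights.

```c
#define _XOPEN_SOURCE 500     /* for nftw() */
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static long writable_files = 0;

/* Count regular files the current (unprivileged) user is allowed to write. */
static int visit(const char *path, const struct stat *sb, int type,
                 struct FTW *ftwbuf)
{
    (void)sb;
    (void)ftwbuf;
    if (type == FTW_F && access(path, W_OK) == 0)
        writable_files++;
    return 0;  /* keep walking */
}

int main(void)
{
    const char *home = getenv("HOME");
    if (!home)
        return 1;
    nftw(home, visit, 32, FTW_PHYS);
    printf("%ld files under %s are writable without any elevated privileges\n",
           writable_files, home);
    return 0;
}
```

Ransomware doesn’t need root for any of that.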

Now, if you want to bother with hardening features (SELinux, etc.) or sandboxing, there are options on both Linux and Windows. Those features can’t be enabled in standard installations because then a lot of programs simply wouldn’t work.
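For the sandboxing side, here’s a minimal Linux-only sketch using libseccomp: it installs a filter under which any later attempt to create a network socket fails. A real sandbox would default-deny and whitelist syscalls instead, which is exactly why many existing applications break when this kind of thing is turned on by default.

```c
#include <errno.h>
#include <seccomp.h>        /* libseccomp; build with: cc sandbox.c -lseccomp */
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    /* Default-allow filter that only blocks new sockets, to keep the sketch
     * short; real sandboxes default-deny and whitelist instead. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
    if (!ctx)
        return 1;
    seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(socket), 0);
    if (seccomp_load(ctx) != 0) {
        seccomp_release(ctx);
        return 1;
    }
    seccomp_release(ctx);

    /* Everything that runs in this process from here on, including an
     * exploited library, can no longer open network connections. */
    if (socket(AF_INET, SOCK_STREAM, 0) < 0)
        perror("socket blocked as expected");
    return 0;
}
```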

There could be exceptions to everything I said; however, I still stick to my belief that the points I made are most often correct. Honestly, it’s common sense… unless someone doesn’t understand how gruesomely difficult it is to detect hidden functionality in compiled binary code compared to clean source code that anyone can read.

If a major but deliberate flaw really went undetected for years in OpenSSL, that is an impressive stunt and someone should really ask a few questions. I would suspect an honest mistake most of all, not because of my “fantasy” but because it’s simply harder to intentionally hide something in plain sight! That’s one of the most important libraries around, and it probably has hundreds of people looking at the code and/or new commits every day. Also ask yourself: if OpenSSL were closed-source, are you sure the flaw wouldn’t have gone undetected for even longer… once again, because you couldn’t see the problem written in the code?

I can in fact offer my own example of open libraries being insecure, though: about one or two years ago, a major vulnerability was discovered in the image-handling library ImageMagick, which apparently made it possible for an attacker to download private files off a server. The site FurAffinity was hit by this, with someone leaking its source code and potentially user passwords, then selling the leaked material on USB sticks at a convention. The site went down for several days, I think, and then every single user went through a grueling password-reset process that required 24/7 intervention from the admins. So yes, I’m aware that attacks and vulnerabilities exist and that not everyone notices everything all the time.

Quite the opposite:

The author of the change which introduced Heartbleed, Robin Seggelmann,[172] stated that he missed validating a variable containing a length and denied any intention to submit a flawed implementation.[11] Following Heartbleed’s disclosure, Seggelmann suggested focusing on the second aspect, stating that OpenSSL is not reviewed by enough people.
(emphasis mine)
A further quote from the same Wikipedia page:

Think about it, OpenSSL only has two [fulltime] people to write, maintain, test, and review 500,000 lines of business critical code.

Just because something is frequently used by millions of people doesn’t magically make volunteer developers appear out of thin air to do unpaid code reviews.

Hidden functionality is a different topic. For someone skilled in reverse engineering, it’s not necessarily that difficult to detect certain kinds of functionality, unless the program is heavily obfuscated (which raises a red flag in itself). And it’s simple to detect hidden functionality once it is activated: is the program trying to open files or sockets? What traffic goes out? Is it encrypted? Ideally, you shouldn’t have to trust a program; it should run with the minimum privileges possible. That’s where both Linux and Windows fail completely in their default configuration, but those broad privileges are required to run lots of real-world applications.

Also, if you’re really concerned about this, you should be concerned about all the firmware in your computer, whose behavior is much harder (if not impossible) to observe.

If a major but deliberate flaw really went undetected for years in OpenSSL, that is an impressive stunt and someone should really ask a few questions. I would suspect an honest mistake most of all, not because of my “fantasy” but because it’s simply harder to intentionally hide something in plain sight!

Bugs usually aren’t found by looking at code; they’re found by running it. Programmers can’t run entire programs in their heads, so they have to do extensive testing (and often they don’t).

In your fantasy world, you must be picturing programmers as responsible experts who employ due diligence at every turn. Nothing could be further from the truth.

That’s one of the most important libraries around, and it probably has hundreds of people looking at the code and/or new commits every day.

You’re wrong there. Lots of critical libraries have only a handful of maintainers at best, including OpenSSL.

Look at this list. Each and every one of those items is an error that can (in theory) cause arbitrary code execution with varying amounts of effort. It happens all the time. If such bugs could be found just by developers looking at the code enough, they wouldn’t ship in the first place.

Also ask yourself: if OpenSSL were closed-source, are you sure the flaw wouldn’t have gone undetected for even longer… once again, because you couldn’t see the problem written in the code?

I’m not arguing the opposite (“closed source makes software secure”); that’s bullshit as well.

I guess there can be such cases of absurdity here too. And I say absurdity because, well, there’s often so much content posted every day on some websites that you ask yourself how so many people manage to spend that much time without running into physical issues… so when a library is used by millions, you’d think there would be dozens of folks eyeballing every new Git commit each day.

Do you see now that whenever you use a computer, you are doing so in the blind trust that somebody, somewhere, has done their homework and not introduced malicious code, regardless of whether it’s open source or not?

Good code is easy to read and hard to write. Bad code is easy to write and hard to read. As a consequence, we end up with lots of bad code and very few code reviews.