Because games and DCCs have different requirements and different design goals. Vulkan shifts the maintenance burden heavily onto the developer: it is very verbose and very unforgiving. It suits projects with limited scope and a single rendering approach (games) very well, but for applications that vary their rendering and data representations substantially (think different viewport modes and different object modes) it can become quite a challenge.
Yes, decent Vulkan code will probably run faster than equivalent decent OpenGL code. But achieving the former takes a lot more work and problem-solving than the latter. Even viewport display will be a quest of its own, let alone a real-time renderer.
Performance is essential for games, but it is also very important for DCCs. Lower CPU usage can mean you get to work with more objects and edit meshes with more vertices. That’s quite important.
The point raised is that, whether OpenGL or Vulkan, Blender still has some of its work done on the CPU, and that is what currently creates the known bottlenecks …
Perhaps it would be more useful to first move everything we can to the GPU …
They could at least leave open the possibility of landing patches currently under active review. If the developer does not respond to critique within a day or so, make 2.83 the earliest target for the patch.
Though there’s nothing wrong with OpenGL for what Blender currently does, it’s at the end of the road. The devs gambled on OpenGL coexisting with Vulkan, and they lost the bet. At the time it wasn’t clear at all that Vulkan would go on to push OpenGL out of pretty much every new application and game.
Whelp, it doesn’t seem like a ray-tracing extension for OpenGL is forthcoming, and we do want viewport ray tracing, so Vulkan it has to be.
Now that’s what I call dedication. @dfelinto is not just working on Blender, he digitized himself to become part of the code in Blender. Move over, Neo!