Based on some other posts, there are different trains of thought regarding whether Blender needs Vulkan or whether OpenGL is enough. So instead of flooding other pages, this is where we can try to discuss the differences.
Based on the Khronos Group:
"Vulkan is a new generation graphics and compute API that provides high-efficiency, cross-platform access to modern GPUs used in a wide variety of devices from PCs and consoles to mobile phones and embedded platforms. "
Based on everything I know, Vulkan is a replacement for OpenGL that is more efficient on new hardware.
From Blender's perspective, the primary goal (based on AMD's funding) is to replace OpenGL in the viewport and potentially Eevee?
Moving everything we can to the GPU is a very different topic.
OpenGL won’t advance significantly. It is from that perspective a dead end. Vulkan is going to advance further as it is the replacement for OpenGL. The investment is going to be necessary at one point and I don’t see a reason why it should be delayed.
Besides that, AMD is literally paying for it. They accepted the funding because it is the way forward; now they have to invest the money into that.
Like I mentioned in the other thread, you seem to have a very narrow (shallow?) view of the subject. Vulkan in itself is not, and cannot be, more efficient. It's an API. It allows you to develop applications that can leverage hardware more efficiently, meaning that an application has to be structured in a particular way. If it isn't, you won't get any benefit at all, and may even regress. And many problems that were solved (i.e. hidden) by the graphics driver with OpenGL will have to be solved instead by application developers - Blender's developers.
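As a tiny illustration of the kind of work that moves from the driver to the application (a minimal sketch, not Blender code; all handles here - device, fence, buffer, memory - are assumed to be created elsewhere): in OpenGL you can call glDeleteBuffers() even while a queued draw still uses the buffer, and the driver defers the release; in Vulkan, destroying a buffer that an in-flight command buffer still references is undefined behavior, so you have to prove the GPU is done with it first:

```c
#include <vulkan/vulkan.h>

/* Minimal sketch (assumed handles, no error handling): free a vertex buffer
 * only after the GPU has finished the last submission that used it. */
void destroy_buffer_when_gpu_is_done(VkDevice device, VkFence last_submit_fence,
                                     VkBuffer buffer, VkDeviceMemory memory)
{
    /* Block until the submission that last used the buffer has completed.
     * With OpenGL, the driver does this bookkeeping for you. */
    vkWaitForFences(device, 1, &last_submit_fence, VK_TRUE, UINT64_MAX);
    vkDestroyBuffer(device, buffer, NULL);
    vkFreeMemory(device, memory, NULL);
}
```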
Blender’s transition to Vulkan, when it happens, will likely take months if not years. Some of the core systems will need to be redesigned and rewritten. Some things that are possible today may become impossible during the transition.
Out of curiosity, is there any difference at all between using Vulkan and simply NOT USING HW ACCEL?? (I keep hearing “low level”, “low level”, “low level” over and over again…)
I agree that it will take some time to implement Vulkan. No question about that.
Still, as stated above, OpenGL will not support RT cores, while Vulkan does (based on what at least NVIDIA is posting). So it is almost a “must” to migrate from OpenGL?
Again, we are addressing just the viewport and potentially Eevee, so having that is definitely a good investment.
As for performance: I'll dig in a bit more, as I've read somewhere that Vulkan can process a lot more than OpenGL, allowing larger scenes in the viewport. I'll put in a link when I find it.
Side note: please let's leave some comments out of this post; “have a narrow (shallow?) view” is not constructive. Can we agree on that?
I was thinking about this while I was writing it… probably once Vulkan is implemented, it will be more natural and functional to transfer functions from the CPU to the GPU…
or rather, to create a parallel system… without breaking the “old” OpenGL-CPU compatibility.
Interesting point. I'm unsure how Vulkan code would be “transferable” between devices, CPU to GPU. That's what OpenCL was designed for. Unless they will migrate this intercompatibility from OpenCL to Vulkan?
I know for sure, because I have read some comments from the devs, that Cycles will not be transferred to Vulkan, at least not in the short term.
Therefore Vulkan will be adopted to benefit EEVEE …
In the early days, I think a hybrid OpenGL-Vulkan system that works simultaneously could be cheap, only “accelerating” those components where it is really worthwhile to use Vulkan… and over time, more slowly, everything that was in OpenGL would be transferred to Vulkan…
Whether NVIDIA brings their extension to OpenGL is up to them; I wouldn't go as far as to state that it will not happen. Neither do I see how it should be a “must” as far as migration is concerned, especially given that it's vendor tech.
Simply skimming through buzz articles won't provide a good perspective on what's actually going on. I would instead recommend watching a few technical presentations. Blender is a very complex real-world application; artificial tests and examples have little relevance here. It's unreasonable to expect Blender to become much faster when it switches to Vulkan. As a DCC, it goes way beyond the scope of Vulkan's intended design. It will still need to (re-)evaluate scenes on the CPU and transfer results to the GPU. It will need to tackle GPU memory management not just for rendering, but for viewport display as well (if you run out of memory when rendering, you can at least switch to CPU rendering; running out of memory while modeling or previewing, though - that's nasty) - i.e. managing resources on the fly while user interaction is unpredictable. In short, Blender isn't a game, and as such, may not benefit as much from Vulkan as far as performance is concerned.
That is not to say that the switch is pointless, of course, as access to multi-device utilization and new tech buzz-words will be a huge boon for sure. It just irks me a little bit whenever I see posts like “Vulkan is more efficient therefore we should use it”. It requires a lot more insight and expertise to even consider decisions like that.
Blender, like many other 3D applications, works very much like a game engine (or the other way around). I am not sure what makes you believe that resource management in game engines is more predictable than in Blender. Could you clarify that?
Err, no. Blender doesn't work at all like a game engine. On the surface it may appear similar, but it's in fact very different. It operates on, and synchronizes between, multiple different data representations (the displayed mesh is not the same as the edit mesh, or the sculpt mesh, or the physics mesh…), while games rely on a very limited set of data representations optimized for memory transfer or rendering - that's where polygon and texture “budgets” come from, for example. Blender needs to track changes for undo (few games boast that, and those that do have stories to tell about how it was achieved - look up Braid, for example) - meaning that even if some evaluation is shifted to the GPU for performance, there still needs to be a transfer back to maintain coherent state. It needs a complex graph to track, calculate and resolve changes between objects (games usually have much simpler graphs). It lets you, at any time, make any change, and propagates that change throughout the scene; whereas a game has a limited set of actions, and most responses to those actions are a single pre-determined response or a small set of them.
In Blender, or any DCC really, you can add thousands of objects at will, load or delete hundreds of textures, completely change half of your scene by just tweaking one setting in a modifier, making a paint stroke, or extruding a single vertex. Just marking a single edge as a UV seam or sharp means partially duplicating and rebuilding geometry for the GPU - i.e. a data transfer (sketched below). The application cannot really anticipate what you will do next. Maybe after your next keystroke a gigabyte of textures has to be written into GPU memory? Or perhaps you just jumped 20 frames of animation back, and now it needs to reconstruct your scene both in memory and on the GPU? Or you just hit “Render”, but half of your VRAM is clogged up by viewport display resources?..
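To make the seam example above concrete, here's a rough sketch (illustrative only, not Blender's actual code; it assumes the vertex array has spare capacity and that an extension loader like GLEW provides the GL 1.5+ entry points):

```c
#include <GL/glew.h>
#include <string.h>

typedef struct { float pos[3]; float uv[2]; } Vertex;

/* Marking an edge as a seam means the faces on each side need different UVs
 * at the same positions, so a shared vertex must be duplicated. One small
 * edit on the CPU side turns into a full re-specification of the GPU buffer. */
void split_vertex_for_seam(Vertex *verts, int *vert_count, int shared_index,
                           const float new_uv[2], GLuint vbo)
{
    /* Duplicate the shared vertex so each side of the seam gets its own UV. */
    verts[*vert_count] = verts[shared_index];
    memcpy(verts[*vert_count].uv, new_uv, sizeof(float) * 2);
    (*vert_count)++;
    /* (The index buffer for the faces on one side would be repointed at the
     *  new vertex here.) */

    /* The buffer changed size, so it is re-created, not patched in place. */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)(*vert_count * sizeof(Vertex)),
                 verts, GL_DYNAMIC_DRAW);
}
```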
Games work differently. Most of them have pre-built scenes and asset management schemes tailored to particular gameplay. Their asset pool is bounded at runtime. They partition and use memory in a predictable fashion. Their shaders are pre-built and usually combined once at load time (or sometimes reconstructed at a convenient time). They will pre-load and recycle assets as needed. They have straightforward simulation progress. Of course there are exceptions, but those are few and far between, and as I mentioned before, quite unorthodox in their implementation.
That's one of the reasons why a game engine integrated into a DCC is not a good idea - it's simply the wrong tool for the job. The more Blender evolved over the years, the more it diverged from the GE, to the point where it just wasn't feasible any longer to try and maintain the interoperation. And the other way around - that's why game editors like those shipped with Unity, UE or CryEngine aren't versatile DCCs.
In both cases, you have a scene graph that needs to prepare the data, which is then sent to the GPU. The graphics API is only used once at least some data is ready. From that point on, OpenGL performs all operations sequentially, while Vulkan allows parallelization.
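For instance (a minimal sketch, under the assumption that each worker thread recorded its own command buffer from its own VkCommandPool, since pools are externally synchronized), Vulkan lets you record draw commands on several threads and submit them in one call, something OpenGL's single-context model doesn't allow:

```c
#include <vulkan/vulkan.h>

/* Sketch: submit command buffers that worker threads recorded in parallel.
 * All handles are assumed to be created elsewhere; no error handling. */
void submit_worker_command_buffers(VkQueue queue,
                                   const VkCommandBuffer *cmd_bufs,
                                   uint32_t count, VkFence fence)
{
    VkSubmitInfo submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = count,
        .pCommandBuffers = cmd_bufs,
    };
    /* One submission covers everything the threads recorded this frame. */
    vkQueueSubmit(queue, 1, &submit, fence);
}
```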
It literally makes no sense to me why Blender wouldn't be able to profit from that.
Because it's far from being as simple as you describe it. All of Blender has to be operational, all the time, not just the scene graph (which, again, is far more complex than what a game will have). The graphics is “used” all the time. You just selected an edge - now it has to be rendered in a different color. You just selected an edge with Ctrl pressed and the option to tag seams - all of a sudden you need to change the mesh on the GPU, which may mean reallocating memory. You moved a vertex - oh, there was an object parented to it, which happens to be affecting some other object's vertex weights. Then you hit Ctrl+Z, but the old mesh data is not in the VRAM anymore, and now you need to re-upload it…
Both GL and VK execute some commands sequentially and some - in parallel. VK gives you more control over execution, but demands, in return, more diligence on your part. You tell the GPU exactly what to do and when, which sounds way simpler than it actually is. GL, on the other hand, attempts to take care of that complexity for you, which sometimes results in unwanted (or unpredictable) performance losses (because it fails to resolve the order, or over-zealously synchronizes, or wants to recompile shaders or reupload textures). Neither is magical; neither automatically guarantees or precludes performance gains. It's not the API in the end that matters the most, it's how you use it, and in no small part either - what for.
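As a small illustration of that “more control, more diligence” trade-off (a sketch with assumed handles, not code from any real driver or application): in OpenGL, writing to a buffer and then drawing from it “just works” because the driver inserts the synchronization; in Vulkan you state the hazard yourself, and getting it wrong means intermittent garbage rather than an error:

```c
#include <vulkan/vulkan.h>

/* Sketch: after copying new vertex data into a buffer, tell the GPU that the
 * transfer write must complete and be visible before the vertex fetch stage
 * reads it. With OpenGL, this barrier is the driver's job. */
void barrier_after_vertex_upload(VkCommandBuffer cmd, VkBuffer vertex_buffer)
{
    VkBufferMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .buffer = vertex_buffer,
        .offset = 0,
        .size = VK_WHOLE_SIZE,
    };
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,     /* after the copy */
                         VK_PIPELINE_STAGE_VERTEX_INPUT_BIT, /* before vertex fetch */
                         0, 0, NULL, 1, &barrier, 0, NULL);
}
```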
Like I said, it’s not that Blender will not profit from Vulkan, it’s just unreasonable to expect significant performance gains from the switch. When most of your frame time is spent just resolving what to change, where, and how (because there was no way of knowing that beforehand, unlike in a game), you’re still bound by these restrictions, and it doesn’t matter that you can potentially push more render commands in that same time period.
In a game, you don’t know beforehand what is going to change! That’s why I also don’t agree with your conclusion.
Games have to be real-time; that's why the graphs are usually simpler and made to perform well. But the evaluation is still pretty much identical.
I don't think you understand what I'm saying. Of course games are not psychic. But most of the time, you do know what's going to change, because it's dictated by the game's logic and its simulation setup. The player is coming near a portal - you start pre-fetching the next level. The player hasn't visited location X for a while - it's a good idea to unload it from memory. The player picked up a gun - it's a fair assumption they're going to shoot, so all the effects related to that need to be present pronto. Games that want stable framerates, or those with network play, often “predict” the physics state on the next frame, or even a few frames ahead, which is also necessary for some effects. And in the end, it's no big trouble to end up wrong if it's just for one frame.
Depsgraph evaluation is also far from identical - you already have the graph, you loaded it with the level, and its structure is only going to change at known control points. You don't calculate complex functions authored by an artist (who isn't necessarily a great programmer), in a slow language, spanning several pages of code, whenever a single bone moves by 0.0001 units, because that's how the artist rigged their model. In fact, you don't usually even have rigs - your animation data is already pre-made; you replay it, not re-evaluate it. And when you do have rigs - they're quite simple.
That's how games are “optimized”: you know what to expect, your use cases are bounded, and you search for the most efficient ways of realizing those use cases. You try to offload as much work as you can to the authoring process, to have more time for simulation and rendering. And that is where an API like Vulkan can shine. When you're working with scenes where literally anything can be edited at any point in time (including rendering code paths), those benefits can and will become more modest.
I have done a bit of OpenGL programming, and when I tried to learn Vulkan it was really a no-go for me. I am not a graphics programmer, nor do I write engine code; this was only for learning purposes.
For example, OpenGL is relatively simple because you only need to manage the state of buffer objects (uploading your data into memory blocks on the GPU); in Vulkan you have to set up the entire rendering pipeline, the swap chains, etc., which is a humongous amount of boilerplate code. I can only imagine someone working on EEVEE taking on all of that Vulkan complexity and figuring out how to make 50,000 lines of rendering code work properly. That simply isn't worth the effort in terms of development time. Within 4-8 years, hardware will have advanced even more, making up for the slowness of OpenGL.
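To show what I mean by the difference in boilerplate (a rough sketch; the GL side assumes a context and a loader like GLEW are already set up, and the Vulkan side is only an outline of the objects you'd have to create):

```c
#include <GL/glew.h>

/* OpenGL: getting vertex data onto the GPU is a handful of calls. */
GLuint upload_vertices_gl(const float *data, GLsizeiptr bytes)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, data, GL_STATIC_DRAW);
    return vbo;
}

/* Vulkan: before a single triangle appears, each of these is its own block
 * of create-info structs and error handling:
 *   vkCreateInstance / pick a VkPhysicalDevice / vkCreateDevice + queues
 *   vkCreateSwapchainKHR + image views
 *   VkRenderPass and a VkFramebuffer per swapchain image
 *   VkShaderModule (pre-compiled SPIR-V) + VkPipelineLayout
 *   vkCreateGraphicsPipelines (vertex layout, blending, etc. fixed up front)
 *   VkCommandPool / command buffers, plus fences and semaphores for pacing
 *   VkBuffer + vkAllocateMemory + vkBindBufferMemory + a staging copy */
```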
For the record, I look at the current state of benchmarks and the performance difference is minuscule; 10-30% doesn't even count as a difference. If we aren't talking about 50%+, it doesn't even count as a better solution in terms of efficiency.
Edit: Let me clarify. You can have flexible, changing assets both in game engines and in Blender. Why should game engines profit more than Blender?
Even though there may be more computation happening in Blender before data is sent to the GPU, it doesn't mean that the rendering code cannot profit from being implemented in Vulkan.
Your point was that “Blender, like many other 3d applications is working very much like game engine”, which it isn't. You simply can't equate one to the other, nor can you expect the same benefits from Vulkan between the two, for reasons already stated above. Please try to at least first understand exactly what those benefits are and how they can even apply to Blender (or not), on a technical level, but even more importantly - what the drawbacks are. “I heard on the grapevine that Vulkan is more efficient” is just hearsay.
In a game, your data is already on the GPU, so are your textures and shaders. You build (or better, load) your Vulkan pipelines, set up your render passes, and start queuing draw calls, occasionally sprinkled with transfer commands. Once in a blue moon you may have to stream in something from disk. But more or less the same happens with an OpenGL game as well. Vulkan’s power comes from minimal interference on the driver’s part, which requires careful setup of the pipelines and correct synchronization of commands by the programmer.
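Roughly, that steady-state loop looks like this (a hedged sketch: all pipelines and resources are assumed to exist already, the command buffer is assumed to be recorded, and error handling is omitted):

```c
#include <vulkan/vulkan.h>

/* Sketch of a game's per-frame loop: acquire a swapchain image, submit the
 * pre-recorded work, present. The expensive setup happened once at load. */
void draw_frame(VkDevice device, VkQueue queue, VkSwapchainKHR swapchain,
                VkCommandBuffer cmd, VkSemaphore image_ready,
                VkSemaphore render_done, VkFence in_flight)
{
    uint32_t image_index;
    vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                          image_ready, VK_NULL_HANDLE, &image_index);

    VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    VkSubmitInfo submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .waitSemaphoreCount = 1,
        .pWaitSemaphores = &image_ready,
        .pWaitDstStageMask = &wait_stage,
        .commandBufferCount = 1,
        .pCommandBuffers = &cmd,
        .signalSemaphoreCount = 1,
        .pSignalSemaphores = &render_done,
    };
    vkQueueSubmit(queue, 1, &submit, in_flight);

    VkPresentInfoKHR present = {
        .sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,
        .waitSemaphoreCount = 1,
        .pWaitSemaphores = &render_done,
        .swapchainCount = 1,
        .pSwapchains = &swapchain,
        .pImageIndices = &image_index,
    };
    vkQueuePresentKHR(queue, &present);
}
```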
Blender doesn’t work like that, nor should it. It doesn’t even redraw your screen unless it needs to. It will perform loads more transfers between the VRAM and host memory. It will be rebuilding pipelines (expensive) on a regular basis. It will be using exotic and expensive render buffers. It will be doing a whole lot of things that a game never will, therefore it will not utilize Vulkan’s strengths as a game would.
Now please, please understand that this means exactly what it says: don't expect great performance improvements from Vulkan in Blender. But in no way does this mean that Blender will not benefit from Vulkan at all.