I think everyone agrees that Blender could do with some more optimization to make things faster, but I’m not sure your argument is 100% the answer.
When processing data, you have a few knobs to tweak to make things faster.
- Process things faster (for example, a faster CPU, or a language that is “down to the metal” like C, C++, or Rust)
- Process more things at once (multi-threading, SIMD, etc.)
- Cache/store more things in memory (for example, baking animation to geometry deformation in memory to skip recalculating the rig; see the sketch after this list)
- Process fewer things (reduce the size of the data structures)
- Process things more efficiently (handle the data in ways that make the best use of the computer’s hardware, only calculate when needed, use data structures and algorithms best suited for certain tasks, etc.)
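To make the caching/baking knob a bit more concrete, here’s a minimal sketch of the idea. The names (`evaluate_rig`, `BakedDeformCache`) are made up for illustration and aren’t Blender API; the point is just that you pay the expensive evaluation once per frame and reuse the result afterwards:

```cpp
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for an expensive rig evaluation that produces
// per-vertex positions for a given frame; pretend the real thing is costly.
std::vector<float> evaluate_rig(int frame) {
    return std::vector<float>(3, static_cast<float>(frame));
}

struct BakedDeformCache {
    std::unordered_map<int, std::vector<float>> baked;  // frame -> baked positions

    // Bake on first request, then reuse: trades memory for repeated CPU work.
    const std::vector<float>& positions_for_frame(int frame) {
        auto it = baked.find(frame);
        if (it == baked.end())
            it = baked.emplace(frame, evaluate_rig(frame)).first;
        return it->second;
    }
};
```

The obvious cost is memory: you’re storing the result for every frame you’ve touched, which is exactly the memory-vs-speed trade-off that knob represents.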
Blender’s optimization issues could be related to any one of those knobs, except perhaps the first: the majority of Blender’s code is already C/C++, so switching to a “faster” language isn’t likely to speed up the processing. At the same time, things like using more of the system’s memory or multi-threading may not make sense for a given task.

For example, let’s talk about the depsgraph. Basically it controls what drives what, and what needs to be updated when (for example, moving an object along its X axis updates where the object is, or manipulating the foot controller in an IK rig drives a bunch of things that probably drive a bunch of other things, and so on). While the depsgraph is multi-threaded, there is a limit to the number of threads that will actively be calculating something, based on what’s currently evaluating in the graph. If you have a machine with 32 cores and you’re only updating one object, you’re likely only ever going to use 1 core. However, something like a path tracer will almost always benefit from more cores, because each traced ray has no dependencies on the others. Also, some things are already extremely efficient single-threaded and can actually get slower the more threads they use, because of the synchronization and scheduling overhead.

Lastly, there are data structures that are really efficient for one thing but not very efficient for another. For example, Blender’s mesh structure (from my understanding) looks great for editing meshes, but it may not be as efficient for something like iterating over every single vertex, due to the heavy use of pointers/list structures vs arrays. So there may be a need for another mesh data model for other tasks, and that also adds performance and maintenance considerations.
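As a rough illustration of that last point (this is not Blender’s real mesh code, just two made-up layouts), compare iterating a pointer-linked list of vertices with iterating a flat array:

```cpp
#include <cstddef>
#include <vector>

// Pointer/list style: convenient for topology edits (insert/remove is cheap),
// but the vertices end up scattered in memory, so a full traversal chases pointers.
struct VertNode {
    float co[3];
    VertNode* next = nullptr;
};

float sum_z_linked(const VertNode* head) {
    float sum = 0.0f;
    for (const VertNode* v = head; v != nullptr; v = v->next)
        sum += v->co[2];  // each step may be a cache miss
    return sum;
}

// Flat array style: contiguous memory, so iterating every vertex is
// cache-friendly and easy to vectorize or split across threads.
float sum_z_array(const std::vector<float>& co) {  // co = [x0,y0,z0, x1,y1,z1, ...]
    float sum = 0.0f;
    for (std::size_t i = 2; i < co.size(); i += 3)
        sum += co[i];
    return sum;
}
```

The linked version is handy when you’re constantly inserting and deleting elements while editing, but every step of the loop can be a cache miss; the flat array keeps the data contiguous, which is much friendlier to the CPU cache and easier to hand off to SIMD or multiple threads.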
So, while I think Blender could do with more optimization (and possibly always will), it’s likely going to be a slow process, because the developers need to spend time figuring out which knob to turn and how that will affect the rest of the code base. Hopefully this gives you a bit of clarity on the optimization issues.