Up to 400% more AMD Ryzen/TR CPU speed with MKL patch

Is Blender's compilation affected too?

In short: Intel's Math Kernel Library checks the CPU's vendor ID. If it is AMD or anything other than Intel, it falls back to plain SSE for its extended functions and skips querying which instruction set extensions the CPU actually supports entirely.

https://www.reddit.com/r/matlab/comments/dxn38s/howto_force_matlab_to_use_a_fast_codepath_on_amd/
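As a purely illustrative sketch of what that vendor check amounts to (this is not MKL's actual code, just the dispatch logic being described), the point is that the vendor string decides the code path before the feature flags are even consulted:

```python
# Illustrative sketch only -- not MKL source code.
AVX2_KERNEL = "AVX2 kernel (fast)"
SSE_KERNEL = "SSE kernel (slow, maximum-compatibility)"

def pick_kernel(vendor_id: str, has_avx2: bool) -> str:
    if vendor_id != "GenuineIntel":
        # Non-Intel CPUs get the baseline path, regardless of what they support.
        return SSE_KERNEL
    return AVX2_KERNEL if has_avx2 else SSE_KERNEL

print(pick_kernel("GenuineIntel", has_avx2=True))   # AVX2 kernel (fast)
print(pick_kernel("AuthenticAMD", has_avx2=True))   # SSE kernel (slow, ...)
```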

The same topic on a German computer site.

From Ned Flanders, who found this problem:

The workaround is, quite simply, based on a debug mode of the MKL.

Through it, one can specify which kind of CPU the library should assume it is running on, which overrides the vendor string query. In this mode, the MKL behaves as if it were running on an Intel CPU of "type 5", i.e. with AVX2 support.

AMD Ryzen and Threadripper CPUs support AVX2. Nothing is being cheated; the whole thing simply gets faster because more efficient AVX2 code is used. These instruction set extensions are standardized.
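For reference, that debug mode is exposed through the MKL_DEBUG_CPU_TYPE environment variable (undocumented, and it may disappear in future MKL releases), which has to be set before MKL initializes. A minimal sketch for a Python process that uses an MKL build of NumPy:

```python
import os

# MKL reads this (undocumented) debug variable when it initializes, so it has
# to be set before any MKL-backed library is loaded -- here, before importing
# an MKL build of NumPy. "5" selects the AVX2 code path that the vendor check
# would otherwise withhold from non-Intel CPUs.
os.environ["MKL_DEBUG_CPU_TYPE"] = "5"

import numpy as np

# From here on, MKL-backed calls (such as this matrix multiply) should use
# the AVX2 kernels even on an AMD CPU.
a = np.random.rand(2048, 2048)
b = np.random.rand(2048, 2048)
print((a @ b).shape)
```

Setting the variable in the shell before launching the application (e.g. `export MKL_DEBUG_CPU_TYPE=5` on Linux, `set MKL_DEBUG_CPU_TYPE=5` on Windows) has the same effect without touching any code.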

Edit: it seems compilers are affected too; see this interesting blog post:
https://www.agner.org/optimize/blog/read.php?i=49&v=t

I'm not at my machine right now so I can't check, but I never saw MKL in the Blender deps. So if memory serves well, this shouldn't be a problem.

Just found an update on this topic, and it looks like NumPy is also affected by this!

It's not a problem of memory. If the MKL library uses only SSE instructions instead of AVX2 on AMD CPUs, then a large part of the CPU's potential goes unused.

Sorry, I was referring to my internal memory, as in "ability to remember correctly".
But as @SteffenD stated, NumPy could very well be a problem.
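Whether a given NumPy install is affected depends on how it was built: the wheels from pip usually ship with OpenBLAS, while Anaconda's default NumPy links against MKL. A quick way to check is to inspect the build configuration (the string matching below is just a heuristic I'm assuming works, not an official API):

```python
import contextlib
import io

import numpy as np

# np.show_config() prints the BLAS/LAPACK libraries NumPy was built against.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    np.show_config()
config = buf.getvalue()

print(config)
print("Linked against MKL:", "mkl" in config.lower())
```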

Exactly.
Here's another quote from ComputerBase today:

However, as long as the master function detects a non-Intel CPU, it almost always chooses the most basic (and slowest) function to use, regardless of what instruction sets the CPU claims to support. This has netted the system a nickname of “cripple AMD” routine since 2009. As of 2019, MKL, which remains the choice of many pre-compiled Mathematical applications on Windows (such as NumPy, SymPy, and MATLAB), still significantly underperforms on AMD CPUs with equivalent instruction sets.

Wikipedia
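For anyone who wants to see how much this costs on their own machine, a crude check is to time a large matrix multiply twice: once as-is and once with the debug variable set. Since the variable has to be in place before NumPy (and thus MKL) loads, a rough sketch saved as, say, a hypothetical bench.py could look like this:

```python
import os
import sys
import time

# Run once as "python bench.py" and once as "python bench.py fast".
# The environment variable must be set before NumPy (and thus MKL) loads.
if "fast" in sys.argv[1:]:
    os.environ["MKL_DEBUG_CPU_TYPE"] = "5"

import numpy as np

n = 4096
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
a @ b
print(f"{n}x{n} matmul: {time.perf_counter() - start:.2f} s")
```

On an MKL-backed NumPy running on a Ryzen/Threadripper, the "fast" run should be noticeably quicker; on an Intel CPU, or with an OpenBLAS build, the two runs should be roughly the same.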

Tbh I'm just dabbling in the intricacies of math in programming and can't really form a picture of how this all relates.
What I got is that the problem runs deeper than just "Blender doesn't use MKL", as I stated beforehand.
Thanks for bringing up the issue. Eager to hear more educated opinions on this.

So I think they managed to slow down Blender for AMD users…

Intel Open Image Denoise uses the Intel® Math Kernel Library for Deep Neural Networks (MKL-DNN):
"Intel Open Image Denoise internally builds on top of Intel® Math Kernel Library for Deep Neural Networks (MKL-DNN)."

And MKL-DNN uses MKL, which runs the slower code path on non-Intel CPUs.

That was sneaky / dirty of them; Blender is used for benchmarking chips all the time. Getting their "run on SSE on non-Intel, but use AVX on Intel" code into Blender is really not cool.

Edit: It might be off at the moment, but there looks to be an option to toggle the use of MKL, probably worth revisiting once OIDN becomes an expected feature in Blender. https://github.com/intel/mkl-dnn/blob/e90caf18a9b89ef10645131ea5cd70b201df8041/cmake/options.cmake#L157