In this video you will see how to run CUDA workloads on your AMD GPU (graphics card) in Blender 4.0, using the Cycles render engine through ZLUDA, a CUDA translation layer developed by Vosen.
You can easily test it and apply it to different software like Blender: ZLUDA acts as a CUDA-compatible layer for AMD graphics cards.
You just need CMD and to type a few commands:
you need to unzip the ZLUDA archive into a folder (renamed to zluda)
you need to change into that directory with cd and run ZLUDA from there (Windows only)
you need to type zluda.exe followed by the path to the software's executable, generally a single .exe file
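As a rough sketch, the steps above might look like this in CMD. The paths are placeholders for wherever you unzipped ZLUDA and wherever Blender is installed, and the `--` separator follows the invocation documented in the ZLUDA 3 releases; check the README of the release you download in case the syntax differs:

```
cd C:\zluda
zluda.exe -- "C:\Program Files\Blender Foundation\Blender 4.0\blender.exe"
```

After Blender starts this way, CUDA should appear as a selectable device under Cycles render devices.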
If you would like to test an alpha version of ZLUDA, there are several releases, from 3.1 to 3.5, from lshqqytiger, a developer who maintains a fork of Vosen's ZLUDA.
I suggest using the ZLUDA 3.5 version because it fixes some problems with modules, in particular on Windows; if you have Linux, some of these versions don't work.
Some improvements, but it crashed once, so I reset the benchmark cache.
I tested in Blender too: rendering times are pretty similar, though loading all the components and elements of the render seems slightly faster.
Some modules may potentially be missing, considering that it's still under testing; in my case I found GPUOpen-LibrariesAndSDKs useful to fix the "hiprtc module not available" error.
If you notice problems with DLLs, check this AMD GPU library website:
The missing file in my case was hiprtc, added from GPUOpen-LibrariesAndSDKs:
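If you hit the same missing-module error, one possible workaround (an assumption based on my setup; the exact folder and DLL file name depend on which HIP SDK version you install, so treat both paths as placeholders) is to copy the hiprtc runtime DLL from the HIP SDK's bin directory into the ZLUDA folder so it can be found at load time:

```
copy "C:\Program Files\AMD\ROCm\<version>\bin\hiprtc*.dll" C:\zluda\
```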
Since it’s all open source now I hope Blender hires Vosen and has him working on ZLUDA full time. It’s amazing how much better it is than HIP. With his genius he could probably help with so many other Blender tools and optimizations too.
I tested only with HIP, because HIP RT isn’t supported in my case; if you try with HIP RT it should improve. As for RAM usage, ZLUDA’s is higher, but it depends on the scene you render; in this case it’s about 100 MB (0.1 GB) more during some parts of the render, but overall it’s similar. I notice more RAM usage in dense scenes, like a world or a big scenario.
I see. Last time I tried HIP RT, the RAM consumption was drastically higher than with regular HIP, to the point that on my regular projects the feature became practically unusable. The speed improvement was certainly not worth the cost. Not sure if this is still being worked on for a “proper release” or if that is how it is supposed to be.
I checked online and there seem to be some problems with artifacts as well when rendering Blender projects with HIP RT, reported around 2-3 months ago; it probably needs a better implementation as HIP development matures.
Can anyone test 6950 XT HIP vs ZLUDA performance? With and without RT if possible.
Also @Gioxyer, the Ryzen 5 5500U does not have a Vega II (second-gen Vega) GPU; it has Vega 7 graphics, which is still plain Vega (first gen). Adding this so people will not get confused.
Hi, I would like to know which part is not very clear; do you mean the part marked Vega VII?
For the first question: on YouTube there are some test comparisons of the 7900 XTX and 7900 XT with ZLUDA and HIP RT, and you can see some issues at the moment.
Radeon VII (also known as Vega II, or second-generation Vega) is a desktop GPU, not a mobile/laptop one. It’s basically a modified server/compute Radeon Instinct MI50.
To my knowledge, reverse engineering is permitted in Europe if you need it to achieve interoperability and the maker of the software or hardware refuses to provide it.
I wonder how long it will be until some antitrust regulator (like the EU) slaps Nvidia so hard they wish the issue had only been about allowing translation layers.