GPU Compute (OpenCL) considerably slower than CPU!

I have an RX 480 AMD video card and a Ryzen 2700X CPU. Cycles renders are more than twice as fast on the CPU (30s vs 1m20s+ on the GPU for a simple scene!). Is this normal behavior due to the OpenCL integration not being optimised in Blender, or am I experiencing some driver issue? The graphics card’s usage seems normal during render, but the clock stays strangely low?

I am also getting kernel loads and “updating device” messages at random when I re-render scenes for which I had already loaded the GPU kernel, without modifying anything in said scene…
To be clear, I am not rendering complex scenes or simulations that would be faster on the CPU. This is a benchmark: just simple geometry and a Principled shader.

I am using 2.8 now, but I had the same problem on 2.79. I have always used CPU rendering because I have more horsepower on my other system anyway, but I would love to know whether it is possible to get a bit of a performance boost on my home machine.


I don’t have an AMD card with performance worth mentioning, so I cannot tell for sure. In any case, yes, OpenCL rendering is not particularly fast (compared to CUDA). Your Ryzen is an excellent CPU for rendering, and an RX 480 is not particularly impressive performance-wise. So I’d say it is quite possible that this is normal behavior.

Hope I can still chip in.

The main issue is the kernel compile that occurs on AMD cards.

One test: render the scene twice and see if you get a speed-up. I usually see a noticeable difference on small scenes during the second render compared with the first, since the second render (be it a second frame or just a re-render) no longer requires the kernel compile.

What type of scene did you use to get those results?

What OS version do you have? What driver version do you have?

Note that with the 17.7 driver on Windows or earlier (Windows by default installs the old driver) there was a bug where the GPU didn’t go full speed. I hit that issue too many times (every time I reinstalled the OS, and sometimes when the driver during an “upgrade” actually got downgraded…).

Still, with newer drivers that is no longer an issue.

Hello, thank you for answering,
Indeed the kernel compiles are the main problem here. throughout the render of an animation they just seem to happen at random ! I also was using a “not quite up to date” driver version at the time of posting this which did make my GPU run at around 30% of its capacity during render, now with the latest AMD drivers i seem to have indeed gained a bit of speed over cpu rendering !
I ran a small test by bumping samples way up on a simple scene with little geometry and simple materials and i get the following:

CPU render time: 00:17.11
GPU with kernel compile: 00:26.16
GPU without kernel compile: 00:13.67

Which is about what I would expect out of this GPU.
However, the kernel compile happens more than once when I am rendering an animation, and the time the GPU saves over the CPU per frame is on the same order as the time a kernel compile costs.
So depending on how often the kernel has to be recompiled, the time spent compiling could catch up with just rendering on the CPU!
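To put rough numbers on that, here is a quick back-of-the-envelope sketch (plain Python, using the render times reported above converted to seconds) estimating how often the kernel can recompile before the GPU loses its advantage:

```python
# Render times from the test above, in seconds.
cpu = 17.11
gpu_with_compile = 26.16
gpu_no_compile = 13.67

# One-off cost of a kernel compile, and the per-frame saving
# the GPU offers once the kernel is cached.
compile_overhead = gpu_with_compile - gpu_no_compile   # ~12.49 s per recompile
per_frame_saving = cpu - gpu_no_compile                # ~3.44 s saved per GPU frame

# The GPU only comes out ahead if a recompile happens less often
# than once every `break_even` frames.
break_even = compile_overhead / per_frame_saving

print(f"Recompile cost: {compile_overhead:.2f} s")
print(f"Break-even: one recompile every {break_even:.1f} frames")
```

With these (scene-specific) numbers, a recompile more often than roughly once every 4 frames would make the CPU the faster choice overall.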

Yes, that is a lot of time for kernel compilation. OTOH, you say it is an animation, and it might be that the devs considered recompiling kernels between frames acceptable. I mean, with the times you report you have subsecond render times per frame, no? I assume typical frames take much longer, which would make the compilation time negligible.

What I forgot to mention: GPUs hate progressive refinement (just in case you didn’t know and have it turned on).

From my understanding, a GPU kernel recompile shouldn’t occur unless something in the scene changed, like adding a new shader that wasn’t there before. Usually when I render I do not see a recompile, even when rendering 500 frames…

You can raise a bug report if you can reproduce the issue (or even share the source file).

Side question: if you can share the file, I can try to render the animation and see if I also get a recompile every few frames.