Cycles - OpenCL

I can’t find a good thread on this, as most revolve around the GTX 10 series and CUDA in general.

I have an AMD FX 8300, an R9 290x and a GTX 680 in my system. The GTX 680 was used primarily for rendering until the AMD folks helped split the kernel to give us good OpenCL support. It’s still there for the rare occasions when I need extra rendering power.

So a few questions, and maybe ignite some more discussions on OpenCL.

Intel and AMD CPUs are supposed to support OpenCL, and even Nvidia supports it. But when I use the extra flags to enable all OpenCL devices, I cannot select all 3 of my devices (FX, R9 290, GTX 680) to render one scene. Are there any plans to implement OpenCL support for all devices?

I know Nvidia suffers when using OpenCL, as they are not interested in supporting it and so aren’t putting development resources into getting it running smoothly (which is a bummer). But having the option to boost rendering even a bit would be helpful.

Side question: on my R9 290x, the shadows look like many “horizontal lines”. I ran Furmark and other GPU stress tests with no issues. Has anyone else experienced this with the latest 16.5/16.6 drivers? Or maybe it’s just my setup that’s messed up.

Any feedback would be greatly appreciated

And I hope that OpenCL development will be improved in the future. I’d even contribute to the Development Fund (www.blender.fund) as long as OpenCL support gets more love.

Thanks
Greg

Have you given LuxRender and Indigo 4 (beta) a test?
Both can use OpenCL.
Both can use all devices, though with Indigo 4, as it’s still in beta, the developers advise using only one device for now.
Both have support/exporters for Blender.

Not really the solution I was looking for. I like Cycles and Blender, so I’d rather support it than other engines.

But luckily I’m not blind, and LuxRender is very, very tempting…

Still, I’d really like to hear feedback from the devs on this: what are the plans for OpenCL and multi-device support, like CPU and GPU rendering in one go?

Last I heard, it was fairly low on the priority list. I think they are trying to incorporate more OpenGL integration for physics sims and such first. After that, I think they would try to rebuild OpenCL support to use version 2.0, which Nvidia still doesn’t support. Now, if AMD tries to target Blender users with their new Polaris GPUs, that might change the story.

However, I don’t think you would get as much performance from using all 3 devices as you might expect. Each GPU prefers a different tile size, and if one device falls far behind (like the FX), that could actually make the process take longer to finish, with the other two GPUs sitting idle.

Hi, my Intel i5 works with Cycles, but I get different results between CPU and GPU.
To check the speed of the i5 I use LuxRender.
My GPU does around 1.5 Ms/s versus about 300 Ks/s from the CPU; your GPUs are faster than mine, and AMD CPUs are not really fast.
You need a special Intel OpenCL driver to get the CPU to work.
The easiest way to check is with LuxMark, the LuxRender benchmark.

http://www.luxrender.net/wiki/LuxMark#LuxMark_v3.0

Cheers, mib

@ Dreaming381

I have to agree with what you wrote, at least for the most part. Though CPU and GPU rendering together might indeed be problematic due to tiles, at least AMD and Nvidia GPUs rendering together should bring tangible results.

But as you said, OpenCL wasn’t much of a priority due to AMD’s compiler having too many issues. It would take a larger partnership between the Blender Institute and someone from AMD, even more so than the few AMD employees who worked to split the kernel so it compiles. Though that was a major step forward.

With regards to OpenGL, that, along with PBR, could bring an interesting approach to “GPU” rendering. In essence it is basically a “game engine” at that point, rendering in real time. Or maybe even the initial pass before Cycles kicks in… So many possibilities.

@mib2berlin

I agree that the CPU doesn’t have as much power as even my dated GPU. Unless we are talking about 22-core Xeons, but not many of us have such systems. For me, the CPU would be helpful for doing some of the passes that the GPU can’t, and again, even a small boost would be helpful.

Since I can see there isn’t much interest in the OpenCL discussion, I’ll let this topic die of natural causes, though I’ll definitely revive it when something moves in this area.

Hi.
Regarding using the two cards together with OpenCL: in my tests the OpenCL split kernel is not working well with Nvidia, and AMD does not work with the big old kernel, so I don’t think you can use both combined.
Also, even if you could use them together, you would have a performance problem due to the large difference in optimal tile size for each card. AMD needs a big tile (one full-frame tile), while that Nvidia card prefers around 256x256.
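To put rough numbers on that tile-size gap, here is a quick shell calculation (assuming a 1920x1080 frame, which is an assumption, not from any specific benchmark) of how many tiles each card would work through:

```shell
# Tile counts at 1920x1080 for the two preferred tile sizes.
# The R9 290x is fastest with a single full-frame tile, while
# the GTX 680 prefers ~256x256 tiles.
w=1920; h=1080
gtx_tiles=$(( ((w + 255) / 256) * ((h + 255) / 256) ))   # ceiling division: 8 x 5
amd_tiles=1
echo "GTX 680 (256x256): $gtx_tiles tiles"     # prints 40 tiles
echo "R9 290x (full frame): $amd_tiles tile"
```

With only a single full-frame tile, the AMD card leaves nothing for a second device to pick up, which is part of why mixing the two cards is so awkward.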

Edit:
For AMD, you could try the buildbot versions in case some errors have been fixed:
https://builder.blender.org/download/

If you work with animations, to harness the power of both cards you could work with two open instances of Blender: one working on a group of frames with AMD OpenCL, and the other instance working on the other group of frames with Nvidia CUDA. This could be useful whenever your scene includes only features supported by AMD OpenCL.
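As a rough sketch of that workflow (the .blend filename and the frame split are placeholders; you would set the compute device in each instance’s User Preferences before launching), the two command lines could look like this. The script only prints the commands, so you can copy them into two separate terminals:

```shell
# Print the two render commands for splitting a 250-frame
# animation between two Blender instances (dry run only).
BLEND=scene.blend     # placeholder .blend file
TOTAL=250             # hypothetical frame count
HALF=$(( TOTAL / 2 ))
# Instance 1: e.g. configured for AMD OpenCL in its preferences
echo "blender -b $BLEND -s 1 -e $HALF -a"
# Instance 2: e.g. configured for Nvidia CUDA in its preferences
echo "blender -b $BLEND -s $(( HALF + 1 )) -e $TOTAL -a"
```

Here `-b` renders in the background, `-s`/`-e` set the start and end frames, and `-a` renders the animation.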

Thanks YAFU for the feedback.

With regards to big tiles on AMD cards, to be honest I only noticed that recently. But I wouldn’t mind sacrificing a bit of render time to get the assistance of the GTX.

Still, with regards to the Nvidia card, is the process for enabling OpenCL rendering the same as it was for AMD? Some environment variable to enable it?

Thanks for the link to the builder. I totally forgot to test the newest builds. I’m running the one from Steam, hihi. Nice to have a counter of how many hours I’ve spent blending :slight_smile:

As for splitting the render across two instances: the issue is that rendering with OpenCL and CUDA, even without HDRI lighting, shows slight differences in the image, so the results are not identical and it would be visible in animations (ever so slightly). Hence I wanted to try OpenCL on the CUDA card to see how that works out.

You can enable it for Nvidia with:

export CYCLES_OPENCL_SPLIT_KERNEL_TEST=1

and on Windows:

SET CYCLES_OPENCL_SPLIT_KERNEL_TEST=1

in a shell and start Blender from there.

Cheers, mib
EDIT: Works fine for me, but slower than CUDA on my two GTX cards.

Thanks for the information, mib.
On my GTX 960 I cannot even render the default cube (352 driver) with the OpenCL split kernel; Blender hangs on the first tile (it works fine in 2.76b with the big kernel). In a recent report I read that there is apparently no interest in giving good support for OpenCL on Nvidia:
https://developer.blender.org/T48076

Hm, I have driver 367.xx, but I would not change yours if everything works well for you.
The comment on the tracker is nonsense; if you have a mixed system with AMD and Nvidia, it should work at least at the AMD level.
Btw, HDR has been implemented for about 2 weeks now.

Cheers, mib

Yes, I think so. Furthermore, starting from the 9xx series, Nvidia OpenCL performance improved a lot compared to previous series (although it is still slower than CUDA). So for me it is important to continue to maintain good Nvidia OpenCL support in Blender, in case you have both AMD and Nvidia cards.

Later I’ll try with a newer driver version in another installation of Linux.

As for splitting the render across two instances: the issue is that rendering with OpenCL and CUDA, even without HDRI lighting, shows slight differences in the image, so the results are not identical and it would be visible in animations (ever so slightly)

That doesn’t sound right. Either you are using a CUDA-exclusive feature in your scene, or you discovered a bug in OpenCL (or maybe CUDA?), or your settings changed between the two renders, or maybe it’s a driver issue. I’m guessing it is an OpenCL bug, because they fixed some bugs in CUDA just before they added features, and those fixes probably didn’t transfer to OpenCL. Does either image match a CPU render?

@Dreaming381 I’ll try to verify that with one of my scenes (same settings, same tile distribution pattern) and compare them… I’ll get back to make sure I’m not totally off.

As for the CPU, the CUDA render looks about the same.

And, as mentioned in my initial post, I see that shadows render badly. They look like horizontal lines rather than being smooth. I will render and post examples tonight.

I hope my GPU is not damaged. But testing it in Furmark/3DMark and others all shows good results, so again I’m unsure. Using 16.6.? drivers (I will confirm the exact driver version when I get home).

Still since I’m the only one reporting this, it has to be specific to my computer. I’ll test it on another OpenCL AMD GPU as soon as I get my hands on the RX 480.

:slight_smile: Thanks mib2berlin. Damn, I like this community :slight_smile:

The issue with shadows rendering as straight lines was fixed by going back to AMD’s Catalyst 16.2.1, for the default Blender 2.77 found on Steam or any other version I’ve tested so far. I will continue testing to see whether it was purely an AMD driver issue and get back.

With regards to pixel accuracy, I’ve tested a small render of a T-Rex I’m working on, and all renders were pixel-identical as far as I could see (CPU / CUDA / OpenCL).

Rendering times using Windows 10 with Blender 2.77 (found on Steam). Unsure if it is 2.77b.

FX 8300 - CPU: 1m15s (16x16 tiles)
FX 8300 - OpenCL: … after 3 minutes I gave up; it wasn’t even halfway. CPU load was also lower than with the default CPU render.
GTX 680 - CUDA: 0m15s (256x256 tiles)
GTX 680 - OpenCL: … after 30 minutes waiting for the kernel to compile, I gave up.
R9 290x - OpenCL: 0m14s (256x256 tiles)
R9 290x - OpenCL: 0m10s (1920x1080, full-frame tile)

So not too bad.
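For what it’s worth, the timings above work out to roughly the following speedups (a quick awk calculation; the times converted to seconds):

```shell
# Speedups relative to the FX CPU render (1m15s = 75 s),
# using the timings listed above.
cpu=75; cuda=15; ocl_full=10
awk -v c="$cpu" -v g="$ocl_full" 'BEGIN{printf "R9 290x (full frame) vs CPU: %.1fx\n", c/g}'  # 7.5x
awk -v c="$cpu" -v g="$cuda"     'BEGIN{printf "GTX 680 (CUDA) vs CPU: %.1fx\n", c/g}'        # 5.0x
```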

I’ll do some more testing of Blender versions vs. AMD drivers with regards to quality and rendering times. I will also use the default BMWx2 render so it is easier to compare on something more complex than just a small T-Rex.

I’m wondering, is there a way to “extract” the OpenCL component of the AMD driver from 16.2.2 and replace it in the latest driver so that OpenCL would still function correctly?

Using yesterday’s Blender build from https://builder.blender.org/download/ with the latest AMD drivers, on my R9 290x under Windows 10, the rendering errors were gone.

I will do more testing, but it is clear that when running 2.77a I get errors, and with the latest build I get no errors. This is with the split kernel.

Can anyone else confirm this? Based on the RX 480 thread, the RX 480 still experienced the error, so I’m eager to see whether older AMD GPUs work.

The latest driver still does not render as it should on my AMD 7850.

Nvidia will support the Cycles SPEC standard. I hope there will be a future OpenCL implementation for Cycles so that AMD cards can also reach similar Cycles rendering speeds.

Lines are fixed on current Buildbot Builds :slight_smile:

Things seem to work now with 16.7.2, also under Windows, plus transparent shadows and HDR support :slight_smile:

Cheers!