Good news for AMD/ATI graphics card owners

By upgrading the drivers to a leaked beta version of what might be AMD’s next official release, others and I have managed to render successfully with OpenCL acceleration. However, that is with the official Blender 2.62 release.

In Blender 2.63, only a few shaders render, and rendering takes a bit longer.

Here are the steps you need to take to get it working and test it yourself.

Get the leaked drivers here: http://www.ngohq.com/home.php?page=Files&go=giveme&dwn_id=1624
Get the official Blender 2.62 build for your OS here: http://download.blender.org/release/Blender2.62/

Install both, then look for the file kernel_types.h, which should be in the folder C:/…/Blender Foundation/2.62/bin/scripts/addons/cycles/kernel/

Open that file and uncomment this line:

//#define __KERNEL_SHADING__

Save it and you’re done. Launch Blender and give it a try! :stuck_out_tongue:
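For reference, “uncommenting” just means removing the leading “//” so the preprocessor actually defines the macro. A sketch of the change (the surrounding contents of kernel_types.h may differ between builds):

```c
/* Before: the line is commented out, so the macro is never defined. */
//#define __KERNEL_SHADING__

/* After: the "//" is removed and the macro is defined. */
#define __KERNEL_SHADING__
```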

By the way, Linux users may need to wait until the end of the month, when Catalyst 12.5 will be officially released.

I’d be happy to hear from anyone who gets it working correctly with Blender 2.63.

I’ve followed the other thread recently, and I wasn’t sure which steps to take to have a go. Thanks!

But how “dangerous” is it to play with this unofficial Catalyst thing?
I mean, I have the feeling that graphics card drivers are heavy, hard to remove, etc. For example, my Catalyst Control Centre never works; it’s impossible to install.

Can I just install the 12.5b on top of what I have?

But how “dangerous” is it to play with this unofficial Catalyst thing?
At your own risk, but I haven’t had any problems.

For example, my Catalyst Control Centre never works; it’s impossible to install.
Well, that was my case too with 12.4, until I upgraded. CCC is back.

Can I just install the 12.5b on top of what I have?
I did that, but you could use atiman to fully remove the old drivers first. To be honest, I’m not quite sure whether the installation manager removes them automatically, but I suppose it does.

OK, you don’t seem worried about it: that’s quite reassuring!
Thanks for the “pioneering” work!

EDIT: installing right now… Man, I don’t like this. It’s just like upgrading a BIOS; it makes me nervous…

So… 12.5 beta installed.
2.62 downloaded, file uncommented, etc.

But: I don’t have the choice of processing unit in the User Preferences, whether in 2.63 or 2.62.
I had CPU or “Barts” before (which I think is the codename of my HD6850 chip).

The driver IS installed (CCC still doesn’t work).
However, I unticked that AMD APP SDK runtime thing during installation. Could that be it?

EDIT: of course it is. I installed that AMD APP SDK runtime and I get the choice back.
I’m rendering… a default cube! Still compiling the OpenCL kernel…
So far my 7 GB of RAM have disappeared! Ouch…

However, I unticked that AMD APP SDK runtime thing. Could that be it?
It could be. I installed the latest AMD APP SDK, with a patch for OpenCL 1.2 support. It’s on AMD’s website.
http://developer.amd.com/sdks/AMDAPPSDK/downloads/Pages/default.aspx

You need AMD APP SDK to use OpenCL!

Not required, and I don’t recommend it; it produces some erratic results. Just install the SDK runtime that comes with the Catalyst 12.5 beta (choose the express installation) and be done with it.

Yes, I’ve come to that conclusion in the meantime. Boy, compiling that kernel is serious stuff! It takes ages and burns through my RAM.
I’ve closed Skype, the gadgets, etc. to free up a few hundred MB, but 7 GB is my absolute limit. Is it reasonable to carry on?

EDIT: kernel compilation done in 742 s (12+ minutes).
Now waiting for my first ever OpenCL samples from my GPU: how moving…

Something I’ve noticed is that compiling runs on one core only. That leaves three (in my case) for Firefox, so I can watch the NBA Top 10 while compiling. Nice touch!

EDIT2: Ohhhh… It worked!
After 24 minutes total, I got 10 samples of the default cube! I’m sorry I didn’t take any picture.
More seriously, the 10 samples were insanely fast, of course. I’m now trying Mike’s BMW.
The bad news is that it seems to re-compile… So 12 more minutes of waiting.
Another Top 10!

Not really encouraging, is it?

Well, I love basketball, so 12 minutes of it are OK !

Soooo, in the meantime, my render has finished.

Here is how things happen:
The first time you render anything, you get the kernel compile step (742 s here); the console tells you it is compiling. Then you have to wait the same amount again (though the console no longer says “compiling kernel”), and then the render starts.
From the second time on, you don’t get the console message, but you still have to wait that same amount before the render starts. In other words, you start from step two.

I’ve spotted the exact moment the BMW render started on my system: 14 minutes 30 seconds in.
It finished at 19 minutes 05. That’s about 4 and a half minutes, on an HD6850.

Better than my Q6600 on the official build (about 10 minutes 30), but not THAT much better than the recent MinGW64 builds (about 6 minutes).

Here is the image:

Thanks for your valuable information.

I don’t think your card is performing as it should. 4:30 minutes seems excessive for GPU rendering. It’s supposed to render that scene at about the same speed as a GeForce GTX 560 Ti (1:30 minutes, if I remember correctly).

http://media.bestofmicro.com/T/D/326353/original/luxmark.png

Well… if you’re implying that it was not a GPU render, I can promise it was. CPU usage was close to zero while the actual rendering took place (a couple of percent here and there, but Blender at 0). So it was purely GPU.

I was disappointed too. But then I saw (in the other thread) that stargeizer got about 7 minutes with his HD5770, so I think my result is coherent. It matches your graph.

Very early days still, so I guess there is much room for improvement on the OpenCL side!
Remember that the 560 Ti uses CUDA, not OpenCL. It’s like comparing apples and pears…

This is cool; I could really use a render speed-up since my CPU is junk :slight_smile:
But I can’t get it working for some reason. I don’t get GPU as a rendering option.

This is what I’ve done to the file.

#ifdef __KERNEL_OPENCL__
define __KERNEL_SHADING__
//#define __KERNEL_ADV_SHADING__
#endif

Is the second line the only one that needed to be changed, or the third as well?

@ Billt Joe: leave the “#” before “define”; only strip the “//”.
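In other words, the edited block should end up looking like this (“#define” intact, with only the “//” removed from the __KERNEL_SHADING__ line):

```c
#ifdef __KERNEL_OPENCL__
#define __KERNEL_SHADING__
//#define __KERNEL_ADV_SHADING__
#endif
```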

I don’t think your card is performing as it should. 4:30 minutes seems excessive for GPU rendering.

Don’t think so.

Maybe I’m wrong here, but the CUDA-compiled kernel code for Cycles is at most 500 KB. The OpenCL-compiled Cycles kernel for my AMD card is around 18,000 KB. There’s a difference there, and it’s noticeable when the card runs it.

I believe the compiler used by ATI (now part of AMD) is still too immature for big kernels like Cycles, and is more optimized for small kernels.

Let’s just hope that AMD can do better at optimizing their drivers/compilers.

Oh, OK. From previous knowledge I thought that “#” meant a comment. However, it still does not work. Are integrated GPUs supported?

@ Billt Joe: “#” can start a comment in some languages, but in C/C++ it introduces a preprocessor directive, not a comment.
As for integrated GPUs, do you mean the ones in laptops, or Intel’s integrated graphics?

I imagine a gaming laptop with a true nVidia or AMD gaming card would work.
But the minimal integrated graphics chips would lack the power anyway (and may even lack OpenCL support?)

Yes, for a laptop. It’s a Radeon HD 3200.

Oh, no. It’s much too weak.
Look here: http://www.videocardbenchmark.net/gpu.php?gpu=Radeon+HD+3200

Compare it to my modest 6850 (top chart)