Why doesn't Blender support AMD?

It's kind of annoying how Cycles GPU rendering only supports NVIDIA when I have an AMD card. I have to use my slow CPU to render.

Well, there are a lot of threads that say different things.
It would be cool if we could get one final answer in here.

I don't think that there is a single GPU renderer (there aren't a lot to begin with) that supports ATI. It's not Blender's fault; it's just something you'll have to live with. You should have gotten an NVIDIA card, assuming you had the option. I don't think ATI targets the same user group as NVIDIA. By the way, a lot of other programs with hardware acceleration only support NVIDIA as well, e.g. Adobe Premiere. Also, don't expect this any time soon, maybe not at all.

The ATI compiler is unable to compile Cycles code. It crashes.

a nice detailed final answer XD

In terms of GPU computation, NVIDIA simply provided the better solution.

OpenCL is not bad, and AMD works well with it, but it is not CUDA.

That's what happens when, for all our talk about free markets and competition, we end up with duopolies:
Intel vs. AMD on the CPU side, NVIDIA vs. ATI on the GPU side.

Also, it seems those involved in OpenCL simply can't get their act together.

What's logical for us consumers is often not logical for companies' marketing departments.

The main reason the compiler crashes and has a hard time with the Cycles code is that the AMD OpenCL compiler doesn't truly support function calls: instead of making a real call, it copies the entire function body into each call site.

Now, Cycles is built out of heaps of functions everywhere; it's basic programming practice to use functions and classes. Changing it to work with AMD would be very messy and full of hacks, something Cycles is specifically trying to avoid. Blender Internal is full of hacks and workarounds, which makes that renderer very hard to work on; that's one of the reasons Cycles was born.
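To illustrate the "no true function calls" point, here is a minimal C sketch (purely illustrative; these helpers are made up, not actual Cycles or OpenCL driver code) of what a compiler that cannot emit real calls has to do:

```c
/* A small helper, as a renderer might use one (hypothetical example). */
static float brightness(float r, float g, float b)
{
    return 0.299f * r + 0.587f * g + 0.114f * b;
}

/* With real function-call support, the compiler emits ONE copy of
 * brightness() and branches to it from every call site. */
float shade_with_calls(float r, float g, float b)
{
    return brightness(r, g, b) + brightness(b, g, r);
}

/* A compiler without true function calls instead pastes the body into
 * every call site (forced inlining). The result is the same, but a
 * kernel with thousands of call sites balloons in size, which is what
 * overwhelms the compiler. */
float shade_inlined(float r, float g, float b)
{
    float first  = 0.299f * r + 0.587f * g + 0.114f * b; /* pasted copy 1 */
    float second = 0.299f * b + 0.587f * g + 0.114f * r; /* pasted copy 2 */
    return first + second;
}
```

Both functions compute the same value; the difference is only in how much code the compiler ends up holding in memory, which is why a heavily function-based kernel like Cycles hits the wall.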

It has nothing to do with the programming that Brecht has done; the OpenCL kernel was working fine on Intel CPUs and NVIDIA cards. There was just no real reason to keep it in for the time being, until AMD gets their fix out.

AMD developers DO know about the problem, and they are working on it. Just don't hold your breath for it any time soon; they have been working on it for a couple of years now.

I wouldn't speculate on why exactly their compiler fails. Their hardware does support function calls, and yet aggressive inlining (i.e. copying the entire function body to each call site) is the default behaviour for any GPU compiler, be it AMD or NVIDIA.

I also wouldn’t say it has nothing to do with the programming of Brecht. If you’re writing a program for the GPU like this, you are absolutely pushing the limits of what’s going to fly. It would be a significant effort to restructure the code to be more GPU-friendly, but it still wouldn’t be a guarantee that it works in the end (or that it works fast enough).

There are a bunch of GPU-based renderers that also work on AMD, but they're not necessarily comparable to Cycles. The majority are based on CUDA.

Would Blender OpenCL support not unlock compute from the Intel GPUs included in modern Intel CPUs, as well as the CPU itself? Prioritizing development effort is understandable. However, if Blender could render via OpenCL across Intel CPU+GPU and NVIDIA GPU, it seems like it would benefit more than just AMD GPU owners.

FutureHack

OpenCL itself is an interesting idea, and it reads like a promising alternative, if only those who work on it were more focused.

If you are interested in marketing and ethnographic research, it becomes quite clear why AMD and NVIDIA do what they do.

It is like Verizon and AT&T. Here in the US they claim to offer the best service while, like the cable companies, overcharging you for an inferior product. In Europe, telco companies face much more competition (lol, because of governmental pressure), and thus you can get the same service there for around 1/4 of the price you would pay here.

Sadly, NVIDIA and AMD both operate globally and simply dominate the market, specifically for gaming, which is the big business for them.
That's where the sales numbers come from. So why innovate when the market is split up and working well?

Remember when NVIDIA was even arrogant enough to sell nearly rebranded old tech as a new product?

That is why I think OpenCL could be very useful, because I could use my CPU and GPU together. But it will never take off, and CUDA will remain the only logical solution for GPU-powered engines.

At this point they have also built up so much market dominance that it will be hard for a newcomer to gain any ground.

I would love to benchmark my high-performing Blender/CUDA single-GPU setup against even a non-optimized "early release" Blender/OpenCL setup using the three available OpenCL compute devices in my system. I picture all the dormant i7/i5 integrated GPUs out there unavailable to Blender users because OpenCL support was yanked, and it seems a shame. Sorry to the original starter of this thread for taking it off topic. For you, my advice would be to investigate SmallLuxGPU, which appears to have good AMD OpenCL support.

Intel IGPs are almost useless for compute tasks. You can try them in LuxMark; AFAIR they deliver less than 10% of the CPU performance. Together with the multi-device overhead, there isn't any benefit to using them at all.
Cycles does (or at least did) work on CPU OpenCL drivers, and it's even a tad faster than regular CPU mode in gcc-based builds (at least in the tests I did). To enable it, however, you have to edit the code yourself. It might be worthwhile to support that, considering the inferior CPU performance of the official Windows builds (which have to use MSVC instead of MinGW/gcc).

It’s not really comparable.

There would no doubt be multi-compute-device overhead, and Intel's GPU performance is miles behind gamer cards. Of course you are correct; I'm just thinking of Blender users without good cards.
I stumbled across this OpenCL benchmark site: http://clbenchmark.com
Some interesting figures from poking around:
CLBenchmark “Raytrace” Points
114,057 - NVIDIA GTX 590
38,630 - Intel HD4000 GPU
26,153 - Intel Core i7-3820
64,783 - Intel CPU+GPU Total
57% - Intel Total of GTX 590
Naturally, a CLBenchmark "Raytrace" score doesn't equal good Cycles rendering. One interesting figure from the site is that the GTX 590 scores lower than a GTX 670, which comes in at 151,366 points.
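The Intel total and the percentage quoted in the list are easy to re-check. A throwaway C sketch (the helper names are my own; the scores are the CLBenchmark figures from the list above):

```c
/* Sum of the two Intel device scores: 38,630 (HD4000 GPU) plus
 * 26,153 (Core i7-3820 CPU). Hypothetical helper for checking the
 * forum numbers, nothing more. */
int intel_total(int gpu_score, int cpu_score)
{
    return gpu_score + cpu_score;
}

/* Percentage of `part` relative to `whole`, rounded to the nearest
 * whole percent using integer arithmetic. */
int percent_of(int part, int whole)
{
    return (part * 100 + whole / 2) / whole;
}
```

With the quoted scores, `intel_total(38630, 26153)` gives 64,783, and `percent_of(64783, 114057)` rounds to 57, matching the "57% of GTX 590" figure in the list.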

I’m not convinced of the veracity of these numbers or their relevance in regards to Cycles performance. The Luxmark results paint a quite different picture (although I’m not sure about their reliability, either). The HD4000 at least may not be quite as bad as I thought.

The GTX590 seems to score so low because the available data refers to single GPU mode.

I was surprised to see that the HD4000 GPU scores higher in OpenCL than the CPU it's coupled with, but I guess that makes sense given GPU vs. CPU performance generally. Intel "Haswell" CPUs releasing this summer include an HD4x00 that increases performance even further. Anyhow, hopefully developers will consider liberating the often-forgotten GPUs irrespective of when AMD gets their act together.

As far as I was able to read between the lines, a 3D chip is not just a 3D chip.

The 500-series models render faster because of the hardware that CUDA can use, in contrast to the 600-series models.

A 3D card can be optimized for OpenGL/DirectX math, but that's not what we use with CUDA, so a fast gamer card can be quite slow at compute math.

And I think all Intel is trying to do with the IGP is provide just the tools you need to run games and play movies. They also need to consider heat production, etc., so I have the feeling those chips will be aimed more at the media-consumer market.

Moved from “General Forums > Blender and CG Discussions” to “Support > Technical Support”

Not entirely true in regards to Premiere: http://www.engadget.com/2013/04/06/adobe-premiere-pro-windows-opencl-support/

And just like magic, the new Intel 5200 GPUs are announced:

If you believe the marketing slides, 3x the performance of existing Intel GPUs potentially puts OpenCL performance (~115,890 CLBenchmark points) in the range of decent NVIDIA 6xx/5xx cards. I'm guessing Intel knows how to write an OpenCL SDK compiler.
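For what it's worth, the ~115,890 figure is just three times the HD4000's score quoted earlier in the thread. A trivial check (helper name is mine):

```c
/* Project a CLBenchmark score from a base score and a claimed
 * speed-up multiplier, e.g. the HD4000's 38,630 points times the
 * marketing claim of 3x. Hypothetical helper, not a real benchmark. */
int projected_score(int base_score, int claimed_multiplier)
{
    return base_score * claimed_multiplier;
}
```

`projected_score(38630, 3)` gives 115,890, which is where the estimate above comes from.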