Nvidia now has us between a rock and a hard place

Cheap or not cheap, programmers want standards. We use C/C++ because it is an ISO standard. We use OpenGL because it is an open standard. We must use OpenCL because it is an open standard. But Microsoft, Apple, and Nvidia love playing monopoly too much, and they never use standards. And that is where the problems start.

If Nvidia used standards, we would be talking about card performance right now, not APIs.

I know, you already explained that you would teach developers a lesson by enabling competitive OpenCL rendering. Maybe also fix and maintain AMD’s drivers while you are at it; they could use a helping hand.

3 Likes

If you have any questions, then ask.

Are you a graphics programmer?

Lottery numbers would be nice.

Uhhhh, look around at the GPU-enabled production path tracers on the market. They all offer their full feature set on CUDA only. People don’t refuse OpenCL for fun. It’s just garbage. There are things vital to full-featured Monte Carlo path tracing for film that you simply cannot do with the language spec as it exists, and there appears to be no effort on their part to improve its standing.

I run development at a mid-sized software company. If I go to my boss and tell him a language or tool isn’t suited to a certain task, he says okay. There’s a reason Solid Angle, Chaos Group, and many others require Nvidia GPUs for their renderers, and it’s not because they like pissing away potential customers for the hell of it.

7 Likes

It’s amazing how many people fall into the trap of thinking the minuscule market for GPU-based 3D renderers, which is currently dominated by CUDA, is in some way representative of the wider GPU compute market.

OpenCL has an enormous market share and simply is not garbage or a hot mess; otherwise it wouldn’t be used so widely across the scientific, medical-research, and financial industries, and it is also the leading API in mobile applications. When I last looked a few years ago, OpenCL adoption was outstripping CUDA by something like 6:1 and was projected to overtake CUDA usage very quickly. If you wanted a GPU development job, you had far more choice if you knew OpenCL.

What you’re actually seeing is how effective nVidia’s marketing is and how it has captured the mind-share. nVidia’s marketing is much better than their GPUs.

1 Like

Lots of jumping the gun here. Let’s all wait until the specs and prices are officially released.

Sources please?

Try StreamHPC and other GPU development sites. That’s where I found the information.

09-11:
https://streamhpc.com/blog/2011-06-22/opencl-vs-cuda-misconceptions/

11-20:


1 Like

And what does Intel plan to use for its Xe GPUs? CUDA? :crazy_face:

I don’t care about 3, 4, 5 or 6 letter acronyms.
Tell me why I should give a fuck!
I care about performance, efficiency, and flexibility. Value for the money I spend.
I live in the here and now. If I need a GPU RIGHT NOW, should I just buy the inferior product that gives me disadvantages?
Why?
Even if Nvidia costs 200 bucks more, if it’s 20% faster or more efficient, the cost is amortised after a couple of jobs. Time equals money.
Not to mention that there are no problems with OpenCL on Nvidia; I am not locked out of anything, and sometimes OpenCL even runs better on Nvidia systems - so from my perspective there is literally NO incentive to buy AMD.
The only argument you guys have against Nvidia is based on ideology, and I don’t give a flying monkey shit about ideology. I am not gonna shoot myself in the foot to demonstrate an ideological talking point.
Yes, capitalistic monopolies suck, Nvidia is a shitty, greedy company, yadda, yadda…
They still produce the better product that gives me advantages over the competition - so…

2 Likes

That source is from 2011… anything that’s no more than 1 year old?

In fact this is an art forum, isn’t it?

1 Like

If you are interested in rendering stuff with EEVEE, you absolutely don’t care about CUDA.
And now, with OIDN, you can render with Cycles in CPU mode without giving a shit about GPU rendering.
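(For anyone curious what that looks like outside Blender, here is a minimal sketch of pushing a noisy render through OIDN’s C++ API. The buffers and image size are placeholders, and a real integration would also feed the albedo and normal passes for better results.)

```cpp
// Minimal sketch (not Blender's actual integration): running a noisy render
// through Intel Open Image Denoise (OIDN) from C++. The 1920x1080 size and
// buffer contents are placeholders.
#include <OpenImageDenoise/oidn.hpp>
#include <vector>
#include <iostream>

int main()
{
    const size_t width = 1920, height = 1080;
    std::vector<float> color(width * height * 3);   // noisy beauty pass (RGB)
    std::vector<float> output(width * height * 3);  // denoised result

    oidn::DeviceRef device = oidn::newDevice();     // default (CPU) device
    device.commit();

    // "RT" is OIDN's generic ray-tracing denoising filter.
    oidn::FilterRef filter = device.newFilter("RT");
    filter.setImage("color",  color.data(),  oidn::Format::Float3, width, height);
    filter.setImage("output", output.data(), oidn::Format::Float3, width, height);
    filter.commit();
    filter.execute();

    const char* errorMessage;
    if (device.getError(errorMessage) != oidn::Error::None)
        std::cerr << "OIDN error: " << errorMessage << std::endl;
    return 0;
}
```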

Cycles, whose GPU rendering was initially designed around CUDA, provides a way to do it with OpenCL.
The situation could be different. There could be no GPU path at all.
External renderers are allowed too. You can render a Blender scene with LuxCoreRender or AMD’s Radeon ProRender.

In fact, the situation has never been as good for doing fast rendering on a large variety of hardware.

So, no, the latest powerful and costly NVIDIA RTX card is not a requirement to work with Blender.

And if you are only doing 3D printing from Blender sculpts, rigging characters, doing VR stuff, or making 2D animation, you can live without caring about rendering at all.

There will always be a large diversity of Blender users with different hardware setups, because there is a large variety of Blender uses.

1 Like

But this is a programming issue, not an art one. You are talking about APIs, not the art style of a painting. This is like a plumber debating CUDA and OpenGL’s technical specifications. If you don’t know the subject, you should stay silent, not argue like a football fanatic.

About 10 years ago, Nvidia cards could run Cycles’ big OpenCL kernel while AMD cards had problems handling such a big kernel and never managed it. Then, after years of ATI users complaining, AMD got involved, designed better cards, and helped the Blender developers with the Split Kernel work. So ATI users were able to use Cycles with OpenCL, but always with various driver-related problems. Now the same thing is happening with Eevee: ATI users also tend to have problems due to the poor quality of their OpenGL drivers.
You should also take into account the facilities Nvidia offers developers: an easy API/SDK for integrating its products.
So it is not the fault of Nvidia or the Blender developers.

I am using an AMD card and I have not hit any OpenGL problems. I also use an Nvidia card at my workplace.

Secondly, if a problem is driver-based, the Blender developers say so and send a bug report to AMD or Nvidia.

But today’s problems are mostly not driver-based; they mostly come from the Blender programmers. For example, 2.83 has many, many render bugs, especially for OpenCL, while 2.82 did not have nearly as many. That is because they changed a lot of things in the render engines and did not test the OpenCL path as thoroughly as required. They are now trying to fix these bugs.

Good to know. But I remember that during Eevee development the developers complained about AMD’s OpenGL implementation, and more than once they had to implement hacks to work around the problems. I also remember AMD users here having issues that were only reproducible on ATI cards.
So if you say so, good to know that it works fine now.

1 Like

If you use Nvidia-exclusive commands, of course you will hit problems. If you write a portable program, then you must not use exclusive commands.
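A minimal sketch of what that means in practice, in OpenCL host code: query the device’s extension list before relying on a vendor-specific feature, and fall back to the portable path otherwise. The name cl_nv_compiler_options is just one example of an Nvidia-only extension; everything else here is standard OpenCL.

```cpp
// Minimal sketch: check a device's extension list before relying on a
// vendor-specific feature, so a portable fallback can be taken elsewhere.
#include <CL/cl.h>
#include <cstdio>
#include <cstring>

bool device_has_extension(cl_device_id dev, const char* ext)
{
    char exts[8192] = {0};
    // CL_DEVICE_EXTENSIONS returns a space-separated list of extension names.
    if (clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, sizeof(exts), exts, nullptr) != CL_SUCCESS)
        return false;
    return std::strstr(exts, ext) != nullptr;
}

int main()
{
    cl_platform_id platform;
    cl_device_id device;
    if (clGetPlatformIDs(1, &platform, nullptr) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS) {
        std::fprintf(stderr, "no OpenCL GPU device found\n");
        return 1;
    }
    if (device_has_extension(device, "cl_nv_compiler_options"))
        std::printf("Nvidia-specific extension present; a vendor fast path is available\n");
    else
        std::printf("sticking to the portable code path\n");
    return 0;
}
```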

For example, when I wrote a Direct3D program, my FPS suddenly dropped. I searched for why this happened and found it: one Direct3D function was killing performance. I wrote my own replacement function and my FPS increased more than 10x. Programming is like this. You always meet problems, and you always produce solutions for those problems.
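The post doesn’t name the Direct3D function involved, so here is a generic sketch of that measure-then-replace workflow; library_call() and my_replacement() are hypothetical stand-ins for the slow API call and the hand-written substitute.

```cpp
// Generic sketch of the measure-then-replace workflow described above.
// library_call() and my_replacement() are hypothetical stand-ins.
#include <chrono>
#include <cstdio>

volatile long sink; // prevents the compiler from optimizing the loops away

void library_call()   { long s = 0; for (int i = 0; i < 100000; ++i) s += i; sink = s; }
void my_replacement() { long s = 0; for (int i = 0; i < 10000;  ++i) s += i; sink = s; }

// Time an operation by averaging over many iterations.
template <typename F>
double time_ms(F f, int iterations = 1000)
{
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        f();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count() / iterations;
}

int main()
{
    std::printf("library call:   %.4f ms/call\n", time_ms(library_call));
    std::printf("my replacement: %.4f ms/call\n", time_ms(my_replacement));
    return 0;
}
```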