Nvidia now has us between a rock and a hard place

Nvidia is stuck on an inferior node this next generation (the 8nm Samsung node for every card, even the Titans). The cards consume a lot of power by all accounts, the top-tier gamer card will still have the same 12 GB of VRAM, and if you try to overclock that card you're talking well beyond 400 watts and a room heater. And the techies are predicting that AMD Navi will roflstomp Nvidia this generation… However, we all know here that for Blender use, Nvidia CUDA cards are an absolute must.

AMD Navi will be on 7nm EUV, with cheaper pricing and better power efficiency. But we can't enjoy that in Blender, and it won't have RT cores…

Sorry if this is a bit of a ramble, I'm just paraphrasing tech news I'm listening to.

I sense the prices won't be good going forward if a single company has a monopoly on the productivity/professional market, i.e. Nvidia with CUDA and RT/Tensor cores.

Nvidia will raise prices to squeeze bigger profit margins out of us lot, so they can keep up with their R&D spending and other outgoings.

The Blender team must change its approach. They only face one direction (Nvidia) and pass over the other (OpenCL). OpenCL is an international standard; CUDA is a proprietary, single-vendor API. Blender should put the same amount of work into each. AMD cards are not bad; Blender's OpenCL code is bad. They are simply not working enough on OpenCL.


It's a trap and Blender has fallen into it. Nvidia was too good for too long and we all forgot about OpenCL… Yes, Nvidia will now exploit its monopoly on the professional market and anything that relies on CUDA or RT cores. The gamers will switch to AMD this time around.


And the ironic side of this situation: Blender is open source, OpenCL is an open standard, CUDA is closed source, yet Blender doesn't work with OpenCL (well enough) and works with CUDA. Where is the open-source policy? They are orphaning this open technology.


Pretty much the whole rendering industry was thrilled when OpenCL was announced and then sobered when AMD drivers came into play. It has been 10 years since then. The OpenCL stack is pretty much a patchwork, and AMD drivers are the main culprit.
There is also a super simple solution to your problem regarding GPU limitations: buy a Threadripper/Epyc system.


OpenCL does not equal AMD. OpenCL is like OpenGL: an open, international standard.

If AMD drivers are problematic, that is AMD's problem, not OpenCL's. And Nvidia drivers are not great either. I used Nvidia cards for several years.

If the Blender team spends 100 hours on CUDA, then they should spend 100 hours on OpenCL. That's all we want, nothing more.

AMD is the only relevant reason to optimize a renderer for OpenCL. If their OpenCL drivers suck, it makes the whole endeavour pointless. And this is why every relevant renderer runs better on Nvidia than on AMD. This whole AMD OpenCL debacle has been going on since about 2011, and AMD has made no real strides since then to make their cards a viable option for anything that isn't mainstream gaming or bitcoin mining.
I find this discussion rather pointless, to be honest, as these days you can easily buy beefy CPUs in consumer land and not worry about GPUs at all.


Because they are not experienced with OpenCL. If you build your architecture around CUDA and only later think "maybe one day we'll add OpenCL support", of course that render engine works better with CUDA than with OpenCL. I am a programmer, and in the programming community the saying goes: it's never a bad computer, it's always bad code.

Then please share your knowledge with the whole industry, but start with Cycles please :wink:
The devs are already eagerly waiting…


Then monopoly days and overpriced cards await us.

Everyone must use Nvidia. Heil Nvidia!

It's easy to spot people who aren't developers. OpenCL is hot garbage compared to CUDA. That's why CUDA rules GPU compute coding: Nvidia made an excellent API that is easy to use and does what you expect it to. No one can claim the same for OpenCL.


By that logic we should never use Blender's Python scripting. Or Java, or C#, etc., because they're too bad and too restrictive. These are not realistic excuses. I never liked DirectX, it's too bad for my taste, but I use it. Why? Because I must. This is a professional field, not a playground. Programming is the art of solving problems, not the art of crying when you meet them.

Oh come on! Are you really serious? Not trolling? Sure-sure?


Yes, I'm serious. If I talked like you do at any company, they would fire me immediately. When I work a job, they expect work from me, not excuses. Welcome to real life!

The usual programmer rant: wanting to use what he likes, while not noticing that Nvidia cards are better precisely because they have a better-programmed API…
Pay those guys the money they deserve, as you're paid for good code.


If one thing is better than another (as you say), should we never look at the other?

Then you should never complain that Blender is losing users.

If you want more users, if you want support on more platforms, then you must work on those platforms. You can't just say "blah blah, Nvidia is better", etc.

Have you never worked a job at a company? When a task comes to you, do you say "I don't like this, this isn't better than that, so I can't program it"?

I prefer OpenGL, because I find it better than DirectX. But I must use both. I can't say "OpenGL is best, so I'll never use DirectX." If I don't use it, the program won't work properly on some cards and some systems. This is not a choice, it's a necessity.

And you've apparently never heard of monopolism. Why doesn't everybody use Maya? Maya is better than other 3D software. So why do you use Blender? "Pay those guys the money they deserve, as you're paid for good code."

Here's what could easily happen in the near future.

Nvidia has a bad generation or two over a span of 5 years; its cards are more power-hungry and lower-performing than the competition (in games).

Nvidia loses its gaming-card revenue to AMD, Nvidia shareholders complain, so it cranks up the price (potentially 2-3x) on all its products to squeeze profits from professional/creative markets and users.

We Blender users are stuck because of the CUDA and RT core architecture.

Blender's growth slows dramatically, as 90% of new potential users are part-time gamers who have all switched to AMD for its superior gaming FPS…

When young gamers try Blender, they wonder why half the features don't work and why their render times are so long compared to the pros on Nvidia hardware.

And in case you haven't noticed already, AMD has significantly increased its prices as it begins to dominate Intel. This generation's new Threadrippers are potentially already out of the price range of Blender hobbyists and semi-professional freelancers… If Intel continues to screw up and AMD gains a monopoly, expect serious price increases.

And IMO Intel is already making huge blunders with its big.LITTLE-style architecture, essentially going back to quad cores with 4 extra tiny Atom-like cores… The problem is that those little cores don't support AVX-512 or the other powerful floating-point instructions that render engines rely on.


The 4870/5870/7970 cards used to be really competitive with Nvidia cards in gaming, with a lower TDP and a much lower price point. It was at that time that GPU rendering started to become a thing, and only on Nvidia, because AMD/ATI drivers were too bad even though the hardware itself was capable.

As for the rest, I neither see Blender's growth tied to "young gamers" nor do I think Threadripper products are overpriced. If anything, these chips are a bargain.

I saw some websites floating the rumour that Nvidia is currently busy talking third-party GPU manufacturers into raising their prices for the upcoming GPU generation.

I haven't seen anything from AMD in years, though, that would justify the expectation that they'll somehow stomp the competition. Wasn't Navi already supposed to be the big improvement? The 5700 doesn't look like much of a winner if that's the best that came of it.
Anyway, this seems like a small company taking on two behemoths. They may have started to corner one market, but do you really expect them to dominate another field too, and so soon?