AMD's ProRender Plugin for Blender

doesn't work.

I have never seen such an unprofessional attitude. Don't worry about your renders; work on your manners. With your current attitude, no one will hire you anyway…

An unprofessional attitude is offering a product or service that simply does not work as advertised.

Thanks for your concern but I run my own business and my order book is full of satisfied returning customers. Because when I say I’m going to do something, I do it.

The clue is in the name, ProRender. P.R.O. It's supposed to be a reliable professional product. If you want to stand applauding woefully buggy software, then you deserve the second-rate experience it currently offers.

If you want to stop the black square bug, turn adaptive sampling off.

Turn on Separate Viewport and Preview devices.

In the viewport render settings, set the Noise Threshold to 0.00. This will stop the image disintegrating. (A scripted version of these three tweaks is sketched below.)
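If you'd rather apply those three tweaks from a script, here is a minimal sketch using Blender's Python API. Caveat: the `scene.rpr` property names are assumptions read off the UI labels above, not verified add-on identifiers, so check the real data paths with Python tooltips enabled (or "Copy Full Data Path") before relying on it.

```python
# Minimal sketch of scripting the workarounds above with Blender's Python API.
# CAUTION: the `scene.rpr` property names below are guesses based on the UI
# labels, not verified add-on identifiers.
import bpy

scene = bpy.context.scene
rpr = scene.rpr  # ProRender settings hang off the scene when the add-on is enabled

# 1) Avoid the black-square bug: disable adaptive sampling.
#    (Assumed name; the add-on may instead gate this on the noise threshold.)
rpr.limits.use_adaptive_sampling = False

# 2) Use separate viewport and preview devices (assumed name).
rpr.separate_viewport_devices = True

# 3) Stop the viewport image disintegrating: zero the viewport noise threshold
#    (assumed path).
rpr.viewport_limits.noise_threshold = 0.0
```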

I seriously doubt that, if you're stressing over free software when you can buy any renderer for about $500. And I will tell you when I am applauding.

Seems an odd standpoint to take on a Blender forum. Again, the fact that ProRender is FREE is not the issue.

I want to extract my company from its reliance on Nvidia, because Nvidia has an abusive relationship with its customers. The quickest way to $2000 GPUs is to keep buying Nvidia.

I want to see that ProRender is production ready before I buy several workstations worth of Big Navi GPUs later in the year.

The whole reason for ProRender to exist is to give customers a reason to buy AMD GPUs for 3D rendering. AMD aren't doing this for a laugh or out of generosity.

So you want to get out of an abusive relationship by being abusive to the talented people who are making you a free tool so you can escape? And don't be so sure they are making ProRender to get Nvidia customers; that is an assumption on your part. There are many reasons to write these tools, and they had no obligation to share them with the public.

I’m not being abusive to anyone.

Come back when you understand the difference between objective criticism of a product and being abusive towards people.

I don’t think you’re very clever so our conversation is over.

Well, Fortune 500 companies disagree. And I will decide when my conversation is done, not you.

:rofl: :rofl: :rofl:

Jog on, pal.

I have constructed a very simple scene for testing Adaptive Sampling now that it appears to work; however, I'm getting odd results.

With the Noise Threshold set to 0.05, I get a noisier image with one GPU than when I use both GPUs. The one-GPU render, while noisier, is only 2–3 seconds slower than the two-GPU render. Surely, with a noise threshold of 0.05, the noise in the image should be the same regardless of the number of GPUs used, and the only difference should be a longer render time with one GPU?

I have also noticed some blotchy artefacts when using adaptive sampling. Around the edge of the cube below you can see stair-stepped regions which are lower in noise than other areas. In the two-GPU render the noise is much lower, and so the difference between the regions is less noticeable.

In the image below the noise threshold is set to 0.01, and there are easily visible areas of blotchy graininess. FYI, the Intel denoiser has been designed to work best with a more uniform noise or grain profile. I find the sample noise of ProRender far more aesthetically pleasing than the coarser grain of Cycles, and I would only rely on a denoiser if a client demanded it, so I'm going to keep adaptive sampling off, as the blotchy grain has a far less natural look.
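For what it's worth, here's why I expected the threshold to control the final noise level: adaptive sampling normally keeps adding samples to a pixel until its estimated noise falls below the threshold, so the stopping rule depends only on the noise estimate, not on how many devices produced the samples. A toy illustration of the idea in Python (a generic sketch, not ProRender's actual implementation):

```python
# Toy illustration of adaptive sampling: keep sampling a pixel until its
# estimated noise drops below the threshold. Generic sketch, not ProRender's
# code; it shows why the final noise level should be set by the threshold
# alone, regardless of how many GPUs contributed samples.
import random
import statistics

def render_pixel(sample_fn, noise_threshold=0.05, min_samples=16, max_samples=4096):
    samples = [sample_fn() for _ in range(min_samples)]
    while len(samples) < max_samples:
        mean = statistics.fmean(samples)
        # standard error of the mean as a simple noise estimate
        err = statistics.stdev(samples) / (len(samples) ** 0.5)
        if mean > 0 and err / mean < noise_threshold:
            break  # converged: estimated noise is below the threshold
        samples.append(sample_fn())
    return statistics.fmean(samples), len(samples)

# Example: a noisy "light" whose true value is 1.0
value, n = render_pixel(lambda: random.gauss(1.0, 0.5))
print(f"estimate={value:.3f} after {n} samples")
```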


You are correct. This is a bug.
There will be a fix posted in a day or so.

I’ve been battling this one out with the Devs for a while now. The latest patch I have seems to fix it for mGPU.

What you're seeing is that Adaptive Sampling is turned off internally for mGPU, hence the less noisy image and the longer render time.

If you do the test on one GPU with Adaptive Sampling ON and then OFF, you'll see the difference working.

Thanks @Gelert for that explanation.

The patch before this current one definitely behaved as you describe. I could never get the Adaptive Sampling percentage value to appear during rendering with multiple GPUs.

So far ProRender's multi-GPU performance is disappointing compared to Redshift, which scales almost linearly with additional GPUs. I do wish I could send a single frame to each individual GPU and CPU in the system during a render; one crude way to approximate that is sketched below.
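As far as I know ProRender doesn't offer per-frame device assignment, but one crude approximation is to farm whole frames out to parallel background Blender instances. In the sketch below, the file name and the idea of pinning each instance to a different device are hypothetical; only the frame-distribution side is shown.

```python
# Crude sketch of distributing whole frames across parallel background Blender
# instances. How each instance gets pinned to a different GPU/CPU is the
# hypothetical part (the add-on would need to expose per-instance device
# selection); this only shows the frame farming with subprocess.
import subprocess

BLEND_FILE = "scene.blend"  # hypothetical file name
FRAMES = range(1, 9)        # frames 1..8
MAX_PARALLEL = 2            # e.g. one job per GPU

running = []
for frame in FRAMES:
    # `blender -b <file> -f <frame>` renders a single frame in background mode
    running.append(subprocess.Popen(["blender", "-b", BLEND_FILE, "-f", str(frame)]))
    if len(running) >= MAX_PARALLEL:
        running.pop(0).wait()  # simple throttle: wait for the oldest job

for job in running:
    job.wait()
```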


Edit: I can confirm everything Gelert says is correct. Even though with multiple GPUs the renderer does say it's Adaptive Sampling, it actually isn't. This explains the poor multi-GPU performance. So with a bit of luck a new patch will unlock quite a bit more performance, which is always good news!!

That's the point I kept making to the Devs. Multi-GPU performance isn't disappointing at all; it actually scales very well. It's just that mGPU was switching back to non-adaptive sampling, so it appeared to go slower with 2 GPUs than it did with 1 GPU with Adaptive Sampling, even though as a user you left the settings the same. (That's the problem with bugs!!)

Turn off Adaptive Sampling and test with 1 GPU and then 2 GPUs with the same settings, and I promise it's faster with 2 GPUs. (A rough timing harness for this comparison is sketched below.)
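For anyone wanting to reproduce that comparison, here's a rough timing harness; run it once per device configuration, selecting the active GPUs in the add-on preferences between runs. The adaptive sampling property name is an assumption, as noted earlier in the thread.

```python
# Rough timing harness for the 1-GPU vs 2-GPU test suggested above.
# Run with: blender -b scene.blend -P time_render.py
# Pick which GPUs are active in the ProRender add-on preferences between runs.
import time
import bpy

# Assumed property name (see the earlier caveat); adjust to the add-on's real
# data path before relying on this.
bpy.context.scene.rpr.limits.use_adaptive_sampling = False

start = time.perf_counter()
bpy.ops.render.render(write_still=False)  # render the current frame in memory
print(f"Render took {time.perf_counter() - start:.1f} s")
```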

They are getting there with mGPU and Adaptive Sampling but it’s not quite perfect yet.

I was updating my findings as you were typing.

I totally agree: once this adaptive sampling bug is fixed, performance should be very good indeed.

Builds are updated on AMD.com with fixes for Hybrid and adaptive sampling: https://www.amd.com/en/technologies/radeon-prorender-downloads
The zip file will be named the same, but once you unzip it, it should report version 2.3.4 rather than 2.3.1.

Thank you for your swift turnaround on this issue.

The update has virtually halved render times and I can now render out with the Hybrid modes! Thank you.

To be able to render out with the medium hybrid mode will be an absolute godsend.

This doesn’t work for you? What GPU/OS?

It seems to me that the AMD cards scale well but the NVIDIA cards don't.

NOTE: I am still somewhat surprised by what the RX 5600 XT (180 W) pulls off versus the RX 5700 XT (241 W). The difference is there, but it isn't that big.

Mac Pro 2010, macOS Catalina (Metal)
1x RX 5600 XT: 2:39 (first run)
1x RX 5600 XT: 2:01 (second run)

Mac Pro 2010, Windows 10 (OpenCL)
1x RX 5600 XT: 3:06 (first run)
1x RX 5600 XT: 2:53 (second run)

Mac Pro 2010, Windows 10 (OpenCL)
1x GTX 1070 Ti: 2:59 (first run)
1x GTX 1070 Ti: 2:26 (second run)
2x GTX 1070 Ti: 2:02

Mac Pro 2013, macOS Catalina (Metal)
1x AMD FirePro D500 3 GB: 5:01
2x AMD FirePro D500 3 GB: 2:53

Sapphire RX 5700 XT

macOS Catalina (Metal)
First run: 2:19
Second run: 2:07

Windows 10 (OpenCL)
First run: 2:18
Second run: 2:16
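For reference, here's quick arithmetic on the scaling implied by those times (using the second runs where available; ideal scaling with two identical cards would be about 2.0x):

```python
# Quick check of the multi-GPU scaling implied by the times posted above.
def seconds(t):
    """Convert an 'm:ss' time string to seconds."""
    m, s = t.split(":")
    return int(m) * 60 + int(s)

runs = {
    "2x GTX 1070 Ti": ("2:26", "2:02"),   # 1-GPU second run vs 2-GPU run
    "2x FirePro D500": ("5:01", "2:53"),  # 1-GPU vs 2-GPU
}
for name, (one_gpu, two_gpu) in runs.items():
    print(f"{name}: {seconds(one_gpu) / seconds(two_gpu):.2f}x speedup (ideal ~2.0x)")
# -> roughly 1.20x for the 1070 Tis and 1.74x for the D500s, which matches the
#    observation above that the AMD cards scale better here.
```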

This is also an active area of development for the RPR core renderer. Hopefully it will only get better!