Mac: M3 - *Hardware accelerated RT (Part 1)

Not sure, I didn't try Octane.

I'm rather hoping the next RenderMan will work on ARM, especially since they made their own AI denoiser.

LuxCoreRender unfortunately still doesn't work either.

MoonRay looks like it will be Linux-only once it's released.

Looks like at the moment Octane and Redshift are the only alternatives.

1 Like

Yeah. I used to work with LuxCore, RenderMan and KeyShot too.

LuxCore development seems to have come to a bit of a halt lately. Not much activity on the forum and the Discord. :neutral_face: I read something about an imminent major announcement. Maybe they've sold the renderer to Autodesk for $100,000,000,000. :stuck_out_tongue_winking_eye:

RenderMan is a very powerful renderer, but Mac support kinda sucks from what I've read to date. No Metal or Apple Silicon support.

KeyShot is also a great renderer, but they've gone over to the subscription dark side, and it already was too expensive and not integrated in Blender. I own a full license and two ZBrush version licenses, but for slowly aging non-subscription KeyShot versions. :unamused:

So yeah, Cycles, Octane and Redshift are the three best options for advanced Mac rendering. Redshift is also on the subscription dark side, so that leaves Cycles and Octane. And as you mentioned, Eevee Next is an interesting prospect. I'm eagerly looking forward to the implementation of Screen-Space GI.

1 Like

I'm using a 6900 XT 16 GB AMD GPU for rendering in Cycles and it works great. If you search some of my previous posts from a few months ago, I posted some render times and I was very pleased with the results. It blows away anything that Apple Silicon can do at the moment speed-wise.

3 Likes

I've heard that Blender supports GPU rendering for Radeon cards now, but how does it compare to OptiX?

If I had to guess? Close to CUDA speeds, so about 50% of OptiX; of course it depends on which card you compare it to.

His tests were why I was nearly tempted to buy an old Mac Pro and put a 6900 XT in :wink:

2 Likes

Yeah, I have no idea about OptiX, but there are obviously some hefty optimizations going on in the code, since Nvidia seems to have thrown a lot of resources at Blender. Pretty much every single Developer's Meeting has an Nvidia representative in attendance… the same can't be said about Apple.

2 Likes

I took the time and tested the same scene on 3 machines (all Blender 3.2.2).
1800 samples, 1920 x 810 px

For 1 frame:
RTX 3080 Ti: 1:43 min (OptiX, of course)
RTX 3080 Ti: 3:01 min (CUDA)
M1 14 core: 27:57 min
M1 32 core: 14:42 min

3.3 beta from today:
RTX 3080 Ti: 1:34 min (OptiX).
M1 14 core: not sure; it has an interesting shadow issue that is not visible on the 32 core. I blame the Ventura beta? But it's only in the 3.3 build; 3.2.2 and 3.4 are fine.
M1 32 core: 13:09 min

I guess they are both faster in 3.3, but 93 seconds saved on the 32 core is not bad :wink:
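
For anyone who wants to sanity-check the speedups, here's a quick Python sketch of the arithmetic, using only the 3.2.2 times posted above (the labels are just my shorthand for the machines, nothing official):

```python
# Quick sanity check of the speedups from the 3.2.2 times posted above.

def to_seconds(mmss):
    """Convert an 'mm:ss' time string to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

times_3_2_2 = {
    "RTX 3080 Ti (OptiX)": "1:43",
    "RTX 3080 Ti (CUDA)": "3:01",
    "M1 14 core": "27:57",
    "M1 32 core": "14:42",
}

baseline = to_seconds(times_3_2_2["RTX 3080 Ti (OptiX)"])
for machine, t in times_3_2_2.items():
    print(f"{machine}: {t} -> {to_seconds(t) / baseline:.1f}x the OptiX time")
```

So the 32-core M1 lands around 8.6x the OptiX time, and the 14-core around 16.3x.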

This is where I think power efficiency is all nice and fine, but that is horribly slow.
Change that at least a bit for the desktops, please, Apple.

This is where the only solution I can think of is for Apple to re-enable eGPU support. There is simply too much ground to catch up on with Nvidia when it comes to this stuff, and they're about to release the new 40xx line very soon, which will make that gap even wider.

The way I see it, AMD GPUs are the only way out of this jam.

I agree, especially if the 4060 Ti is faster than the 3080. Who knows how that will stack up.

I am just scared that Apple will be too "proud" to bring back eGPUs or even AMD GPUs in the future Mac Pro (even though I still hope they do at least the latter).
Actually, I'm not sure if they are too proud or too control-hungry, or both.
Just remember this… "In what parallel universe?" is what I think at the moment.

Oh, never mind, we might have an idea…
https://videocardz.com/newz/nvidia-geforce-rtx-4060-ti-rtx-4060-get-first-rumored-performance-claims

1 Like

Oooh mmh interesting.

Metal Backend

  • A new version of the patch was uploaded today.
  • Some notes still haven't been addressed, but the main idea is there. We discussed accepting the patch, as the leftover notes are mostly code style. It is faster to do these in master due to the time it takes (manual steps) to upload a new version of the patch to developer.blender.org.

ref.: https://devtalk.blender.org/t/2022-08-29-eevee-viewport-module-meeting/25640

I guess that will be in tomorrow's build.

4 Likes

I'm using Macs for almost 90% of my work. I haven't had to render any final image or sequence on my computer for ages; I usually hand that off to someone else and they handle all that stuff. And from what I know, rendering is not done on any one computer with a 2080 lol :slight_smile: … it usually takes a farm of servers to pump out the animations we work on. And for the most part it is almost always done on CPUs because of the RAM requirements. :wink:

The only thing I'm rendering on my machine is playblasts, and even my M1 mini is handling that fine.

Once the Metal viewport is done and dusted, my hopes for GPU optimization will lie with companies using the different Apple cores (i.e. ML & GPU) for simulation work. Not too concerned with render speeds…

I'm pretty much over the whole "what card are you using" being the dominant question when someone posts a single render online, lol.
Now that more and more companies are making Apple Silicon ports, getting to stay in the macOS environment through my entire workflow is waaaaay more appealing to me than the three minutes I would have saved on an image I worked on for three weeks. :slight_smile:

3 Likes

Using an M1 Pro 13-inch, and I have a Windows tower running an Nvidia 3070 that I access as a remote desktop. Rendering and Unreal Engine are too slow on the Mac, but everything else flies!

5 Likes

Apple adheres to their Retina display standard, which on desktop is about 218-220 ppi. Apple for the most part uses higher resolution to increase pixel density at any given scale (5K is double 1440p, for example, and 4K is double 1080p). So unless you want to deal with extremely tiny text and icons etc., you have to pay attention to how macOS scales everything. For example, if I choose 1440p as my resolution on macOS, what macOS is doing is creating a 5K buffer in the background and then downscaling it to 4K, so you are giving up a bit of performance. Not a big deal for the most part, but it's a performance hit nonetheless.

As you can imagine, if you are doing 3D work, having a 5K-6K buffer that then downscales to fit your screen is going to affect performance. If you only run native 4K with no scaling then there is no issue, but most people don't do that. So like I was saying, if you want an ideal monitor on macOS without resorting to scaling and all that entails, 1440p is the way to go.
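
To put rough numbers on that overhead, here's a minimal sketch, assuming the standard resolutions (5120x2880 for 5K, 3840x2160 for 4K, 2560x1440 for the "looks like 1440p" mode):

```python
# Rough pixel math for the macOS HiDPI scaling described above:
# for a non-native "looks like" resolution, macOS renders a 2x backing
# buffer and then downscales it to the panel's native resolution.

def backing_buffer(looks_like):
    """The 2x backing store macOS renders for a 'looks like' mode."""
    w, h = looks_like
    return (w * 2, h * 2)

panel_4k = (3840, 2160)          # native 4K panel
looks_like_1440p = (2560, 1440)  # the scaled mode from the post

buf = backing_buffer(looks_like_1440p)  # (5120, 2880) -- a 5K buffer
extra = (buf[0] * buf[1]) / (panel_4k[0] * panel_4k[1])
print(f"Backing buffer: {buf[0]}x{buf[1]}, "
      f"{extra:.2f}x the pixels of the native 4K output")
```

That works out to roughly 1.78x the pixels actually shown on screen, which is the rendering work you're giving up.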

I don't do a lot of 3D on my M1 Max (I do a lot of audio work), so the performance hit doesn't affect me at all unless I'm running really heavy sessions, and even then it's better than the Intel machines with an iGPU.

3 Likes

Does anyone else find it strange that Apple has done 2 Boot Camp updates recently?

What are they up to :thinking:

1 Like

They realized they will fail with Apple ARM because of scaling issues
and plan a transition to AMD x86 CPUs and GPUs over the next 5 years?
I heard through the grapevine that Michael Jones has been called on to lead a
new team tasked with porting Ventura as a desktop on top of Ubuntu …

3 Likes

Haha, good one. We need a laugh reaction for posts, not just a heart :slight_smile:

1 Like

If it is horribly slow, then it might not be power efficient after all. Taking more time to finish a job means more time spent consuming energy, which matters if the time increase is significant.
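
As a back-of-the-envelope illustration of that point (the wattages below are made-up guesses purely for the sake of the math, not measured figures):

```python
# Energy = power x time: a slow-but-frugal device can still burn more
# total energy per frame if it takes long enough. Wattages here are
# illustrative guesses, NOT measurements.

def energy_wh(power_watts, seconds):
    """Total energy in watt-hours for a render of the given duration."""
    return power_watts * seconds / 3600

# Using the Blender 3.2.2 times posted earlier in the thread:
fast = energy_wh(power_watts=350, seconds=103)  # RTX 3080 Ti, 1:43
slow = energy_wh(power_watts=60, seconds=882)   # M1 32 core, 14:42

print(f"Fast GPU: {fast:.1f} Wh per frame")  # ~10.0 Wh
print(f"Slow GPU: {slow:.1f} Wh per frame")  # ~14.7 Wh
```

With those guessed numbers, the "efficient" machine actually uses more energy per frame, precisely because it runs so much longer.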

Indeed, I wonder if Apple has any idea how to narrow the performance gap to Nvidia's hardware. Even if Apple increases the number of GPU cores in its monstrous M2 Hyper Ultra chip by several times, performance in ray tracing will still be poor, and quite pathetic compared to multi-GPU configurations.
I'm a bit worried about the rumours that the new Mac Pro is only going to have one PCIe slot. That would mean that Apple is unable to produce a 'real' workstation processor. It would be even worse if the Mac Pro had no RAM expansion slots. That would be a total embarrassment.
On a slightly positive note, Apple realised that the M1 Ultra was far too weak for the Mac Pro and postponed its release, introducing the Mac Studio instead in the spring.
Apple's M1/M2 laptops are generally great, but the real technological challenge is the Mac Pro. I'm rooting hard for Apple, but I have big doubts that they can deliver.
There will always remain the option of a crap PC connected via remote desktop.

4 Likes

Apple has always said (since the trash can dilemma, 2017 or earlier?) that
they ask their artists and want to see what exactly the real bottlenecks are;
in one case it was just a GPU driver issue …

"We won't just throw more raw power on it."
Which made me afraid of something like my Mac Pro 2.1 cheese grater
never coming back from Apple.

That is exactly what they did with Apple ARM:
creating a new architecture by avoiding all the typical x86 bottlenecks.
Plus, they optimized their OS and APIs.

And Apple expects all 3rd-party software to do the same optimizations.

Therefore I think Apple still isn't really interested in comparing themselves
with Nvidia and such at all. They ignore raw power completely.

But as expected, the 3rd-party software optimization part may not happen
as fast or as effectively as Apple would like.
And to me it looks like Apple's Blender engagement showed that even their own
OS and APIs, and maybe even their hardware, aren't as capable and optimized for
3D as Apple may have believed so far.

1 Like

But Apple has a great advantage over Nvidia with "unified" memory.
So it could take a huge chunk of the creative and even gaming market if they had competitive GPUs/code.

2 Likes