Mac: M2 Ultra - *VR (Part 2)

Ideally both :D, competition is always best for customers.

Nvidia, not having to compete, got away with a lot of BS. Apple entering their playing field is great, since AMD doesn’t seem capable of competing.

4 Likes

So true. What Nvidia added with my RTX 4070 was a joke.

2 Likes

That is interesting.

The Ultra option is quite pricey - but then so is the RTX 4070 Ti / 4080.

As a plus you also get more CPU cores :wink:

  • M1 Max Mac Studio: 31 sec
  • PC with RTX 4070 Ti: 23 sec
  • Test file: 1.3 GB industry file

That was an interesting discovery: with such a heavy file the time difference shrinks, possibly because of the data preparation that happens before the scene is sent to the GPU.
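For anyone who wants to reproduce this kind of comparison, here is a minimal timing sketch using Blender's Python API. It assumes Cycles is the active render engine with a GPU backend already enabled under Preferences > System; the sample count is an illustrative placeholder, not the setting used for the numbers above.

```python
# Minimal render-timing sketch. Run from Blender's Python console, or headless:
#   blender -b scene.blend -P time_render.py
import time
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'   # or 'CPU' for a comparison run
scene.cycles.samples = 100    # illustrative value, not the setting used above

start = time.perf_counter()
bpy.ops.render.render(write_still=False)
print(f"Render finished in {time.perf_counter() - start:.1f} s")
```

Note that this times the whole job, including the scene preparation and BVH build, which is exactly the phase that seems to favour the unified-memory Macs on heavy files.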

7 Likes

Just out of curiosity (and boredom, I admit), I’m compiling data from Blender Open Data, considering Blender versions 3.3 to 3.5, and from Geekbench’s single-core, multi-core, and GPU charts.

Two things are apparent:

  • The per-generation increase in performance is substantial; looking at the M1 Max and M2 Max, for instance, it varies between +67% and +76% across the two variants offered in each generation.
    This is of course not a per-core variation, but a variation in what Apple sells in the same price range.

  • The optimization of Metal, and the resulting increase in performance per Blender version, is staggering: the 32-core M1 Max went +5% between 3.3 and 3.4, and +20% between 3.4 and 3.5; the 38-core M2 Max went +4% between 3.3 and 3.4, and +27% between 3.4 and 3.5.

Currently, the first M2 Ultra result on Open Data is already ~50% higher than the score of my 2060 Super (which, while not current, is usually good enough for me), while the Geekbench GPU result places it between the 3090 and the 4090.

Seen in a different way, my PC workstation, 32 GB of RAM plus an RTX 2060 S, cost me ~2,200€ between 2015 and 2018; a Mac Studio with a 38-core M2 Max and 64 GB of RAM would cost 3,139€ today.
While the latter may currently be slightly weaker in Cycles rendering (~1917 points vs the 2060 S’s 2301 points), it surpasses my workstation in every other way.
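As a quick sanity check on those relative figures, here is the arithmetic using only the scores quoted in this thread (the 3412.29 figure is the first M2 Ultra METAL result, posted further down):

```python
# Rough sanity check of the relative differences quoted above,
# using only the scores mentioned in this thread.
rtx_2060s = 2301       # Open Data score quoted for the RTX 2060 Super
m2_max_38 = 1917       # Open Data score quoted for the 38-core M2 Max
m2_ultra  = 3412.29    # first M2 Ultra METAL result (posted later in the thread)

print(f"M2 Max vs 2060 S:   {(m2_max_38 / rtx_2060s - 1) * 100:+.0f}%")  # ~ -17%
print(f"M2 Ultra vs 2060 S: {(m2_ultra / rtx_2060s - 1) * 100:+.0f}%")   # ~ +48%, i.e. the ~+50% above
```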

With this, the info from the recent WWDC, and the new attention on porting Windows games, I’m convinced there’s a real engineering effort and commitment from the devs here, and I expect the M3 will be a blast.

6 Likes

I found similar. On large real-world projects, it’s faster to render on my M1 Max laptop than my 2080 Ti tower. The M1 Max is definitely faster in the preparation phase of the render before sampling starts, which has more to do with CPU performance, but even once it gets to the sampling itself, the M1 Max edges it out. :slight_smile:

2 Likes

It is clear that Apple knows what they are doing and works to a long roadmap.

I have to say I am impressed with their development and security.

Switching everything to their own chips makes sense logistically, but it also means they now have to do it all, from CPU to GPU - to me that’s quite some pressure.

Also, we very often compare the Apple GPUs to high-end cards, which ignores the mid-range and low-end GPUs many people also have.

So for gaming it makes sense to push this tool now, because gaming performance is finally usable.

Saying goodbye to Intel was quite a bold move, but looking back I have to say I hit no limitations and everything went smoothly or got better.

3 Likes

Currently on Open Data: Apple M2 Ultra (GPU, 76 cores), METAL, 3412.29, 1 test :slight_smile:

4 Likes

I see that the first M2 Ultra Open Data score is up, a really sizable jump over the M1 Ultra. Another telling note is the gap between the M2 Ultra and the M2 Max compared to the M1 Ultra and M1 Max: the scaling seems to be much better.
That’s with just the Metal filter applied…
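For what it’s worth, the Max-to-Ultra scaling on the M2 side can be roughly checked from the two scores quoted earlier in this thread; the M1 figures aren’t in the thread, so they would have to be pulled from Open Data separately for the generation-to-generation comparison.

```python
# How well the 76-core M2 Ultra scales over the 38-core M2 Max,
# using the two scores quoted earlier in this thread.
m2_max_38 = 1917       # 38 GPU cores
m2_ultra_76 = 3412.29  # 76 GPU cores

speedup = m2_ultra_76 / m2_max_38
efficiency = speedup / 2.0  # twice the GPU cores
print(f"Ultra vs Max: {speedup:.2f}x, {efficiency:.0%} of perfect 2x scaling")
# ~1.78x, i.e. roughly 89% of ideal scaling for a doubling of GPU cores
```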

5 Likes

When I increase the sample count to 500, the PC is at 25 seconds and the Mac is at 40 seconds.

So yes, the Mac is very fast at preparing the scene, and when not using many samples it ends up roughly equal to a decent PC GPU combo. The PC I use is a rack PC (server) on my university network.
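That pattern (competitive at low sample counts, falling behind at high ones) is what you’d expect if the fixed preparation cost and the per-sample cost differ between the two machines. Here is a minimal sketch, assuming total render time is roughly linear in sample count, for splitting the two out of a pair of timed runs; the sample counts and times in the example are made-up placeholders, not the measurements from this thread.

```python
# Separate the fixed scene-preparation cost from the per-sample cost,
# assuming total time ~= prep + samples * per_sample (a simplification:
# it ignores denoising, caching effects, etc.).
def split_costs(run_a, run_b):
    """Each run is a (samples, seconds) pair from the same scene and device."""
    (s_a, t_a), (s_b, t_b) = run_a, run_b
    per_sample = (t_b - t_a) / (s_b - s_a)
    prep = t_a - s_a * per_sample
    return prep, per_sample

# Hypothetical example values, just to show the shape of the result:
prep, per_sample = split_costs((64, 30.0), (512, 50.0))
print(f"prep ~{prep:.1f} s, ~{per_sample * 1000:.0f} ms per sample")
```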

Apple is slowly getting closer and closer to Nvidia. :smiley:
The question is: is that even Apple’s goal?


:face_with_monocle:

2 Likes

Yup, Gen 2 and they are getting closer. Who knows what happens if/when RT cores ever come into play. They may even be pushing 3090 numbers if that ever happens.

1 Like

I’m afraid Apple wants to get its 3D glasses ready for now :smiley:
By the way, my RTX 2080 Super in the HP PC has now been overtaken.
That’s great! :dizzy:

  • Only the price for the box would simply be too high! For that money you get a top machine with an Nvidia card. Apple!!!

In Cycles, of course, the gap is still there, but RT cores will come, I’m sure; it makes sense even if we don’t consider the professional rendering applications. Apple could not commit to pushing games to their lineup without some kind of answer to what Nvidia has done in the last few years.
And the timeline makes sense too, from the introduction of Apple Silicon to now.
If anything, I’m more concerned about supply-chain stability (and the rest of what is happening in the world).

3 Likes

In other news, reports that the M3 chip is being prepared for 2024 Macs are starting to appear online.
That said, @Midphase was totally correct and I was wrong about when the M3 chip would show up.
I was certain that, with the M3 going into production in February, we would see it this year, but I guess not.

4 Likes

Nvidia isn’t standing still, though. Sure, by the time the M3s roll around in about a year or so we will likely have RT cores added, but at that point Nvidia will be boasting about their 50xx series of GPUs, which might have some even newer tech that we’re not aware of.

I think Apple is going to be playing catch-up to Nvidia for quite some time, and I suspect that Apple is OK with that. They’re mostly fine with not being the top dog when it comes to graphics performance.

For me, it just comes down to whether one of these new M2 machines provides a compelling reason to plunk down the cash or if I’m still better off sticking with my Hackintosh.

5 Likes

I’ll root for Apple Vision Pro less for that specific product and more for them to keep pushing into spatial computing. Pushing 23M pixels of 3D content @ 90 fps as a first-gen baseline means they’re going to want to keep investing in their GPUs. :slight_smile:

5 Likes

I guess next year Apple will be here? :face_with_head_bandage:

I think with RT cores it would make quite the jump upwards. Without them (and without the potential use of the Neural Engine to clean up the noisy rays), it’s like comparing a software rasterizer on the CPU to the same thing done in hardware by a GPU.
Just like the old days when GPUs first started to make an appearance.

3 Likes

In fairness, literally anyone who isn’t Nvidia isn’t going to look any better on a graph like that. :wink:

And I’d argue that, at least for Cycles, Metal has felt less buggy than AMD’s HIP/OpenCL. I had a Vega 64 for three years prior to my M1 Max, and it didn’t even work in Cycles for a large chunk of its lifespan.

I should break it out again and see how it’s doing with Metal on macOS these days, though.
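If you do, a quick way to see what Cycles actually picks up is to list the compute devices from Blender’s Python console; a small sketch below (on a Mac the backend would report METAL, and for the Vega on Windows/Linux it would be HIP, assuming a recent Blender build):

```python
# List the compute devices Cycles can see, and which ones are enabled.
# Run from Blender's Python console or with: blender -b -P list_devices.py
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.get_devices()  # refresh the device list

print("Backend:", prefs.compute_device_type)  # e.g. 'METAL', 'HIP', 'CUDA', 'OPTIX', 'NONE'
for dev in prefs.devices:
    state = 'enabled' if dev.use else 'disabled'
    print(f"  {dev.type:6s} {state:8s} {dev.name}")
```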