Mac: M2 Ultra - *VR (Part 2)

AMD has fast unified memory in their APUs for PlayStation and Xbox. Do they have some deal with Microsoft and Sony forbidding them from using this in their PC APUs? It’s kind of weird that they only use it for consoles.

1 Like

That is a weird one because, as we all know, AMD makes CPUs and they also make GPUs. What they don’t make is an OS. So maybe AMD would need to collaborate with Microsoft to make a version of Windows that is optimized for future AMD SoCs, x86 or ARM?

I am not sure this would need many OS changes. Also, the Xbox already runs some version of Windows, and Linux they could adapt themselves.

1 Like

That’s definitely an exciting idea. AMD has the tech. They are also a customer of TSMC. It would be really cool to see what they could do with non-console SoC tech. The Qualcomm SoC is still two years away (2025), and it’s already looking like it will fall short of the M3 variants, even with its clocks cranked to the max like they were for those early benchmarks.

1 Like

A 4090 already does 8-9 seconds in Classroom, which is half the time of a laptop 4070, so Classroom is already showing less of an advantage than the more comprehensive Blender Open Benchmark, where a laptop 4070 takes 3x the time.

1 Like

Besides that, it really is embarrassing in general …
I welcome an upcoming shitstorm over default-base-memory-gate,
and hopefully about memory and SSD upgrade prices too.

I remember the antennagate shitstorm: we got free bumpers,
and antennagate-free iPhones from then on.

1 Like

RAM and SSD upgrade “inflation” also occurs in PC laptops, but not at Apple’s level. For my upgrade I bought a 1 TB SSD and 32 GB of RAM for about 70 euros; if I had gone to Lenovo, it would probably have been 150 or more.

The outdated part is that the default settings take too little time to complete. If it takes 9 seconds to render the scene, other factors such as loading the scene make up a significant proportion of the render time. The next generation comes out and it takes 4 seconds to render; the one after that, 2 seconds. At that point, what are we measuring? The card barely has time to stretch its legs.

Run the scene at 2000 samples and double the resolution, and then you can better visualize the spread of hardware. There would be no confusion over results that are only seconds apart. Longer render times could also highlight possible shortcomings in thermal design, etc.
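
If anyone wants to reproduce that quickly, here is a minimal sketch of how those heavier settings could be applied from Blender’s Python console (the 2000 samples and 200% resolution are just the values suggested above, not an official benchmark preset):

```python
import bpy

scene = bpy.context.scene
scene.cycles.samples = 2000               # raise the Cycles sample count from the default 300
scene.render.resolution_percentage = 200  # double the output resolution
```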

2 Likes

When rendering a single frame, there is no need to increase the number of samples. Simply enable ‘Persistent Data’ in the Render/Performance options and run the render once before the target render. Unfortunately, this is probably beyond the cognitive capacity of most YouTubers.
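
For those who prefer to script it, a minimal sketch of that procedure via the bpy API, run from the Python console with the scene open (the property path corresponds to the Persistent Data checkbox):

```python
import bpy

bpy.context.scene.render.use_persistent_data = True  # Render Properties > Performance > Persistent Data
bpy.ops.render.render()  # warm-up render: scene sync, BVH build, texture loading
bpy.ops.render.render()  # second render reuses the persistent data; time this one
```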

MacBook Pro M2 Max (38-core), Blender 4.1.0 Alpha:

Classroom with Persistent Data ON
MetalRT ON
49.39 s - 300 samples (6.07 samples/s)
8:13.12 - 3000 samples (6.08 samples/s)

MetalRT OFF
49.50 s - 300 samples (6.06 samples/s)
8:12.46 - 3000 samples (6.09 samples/s)

At around 1000-1200 samples the fan noise becomes audible, but there is no throttling.
Interestingly, I found that MetalRT is not at all slower than the default Cycles kernel, so I ran tests on another demo project:

Monster Under The Bed with Persistent Data ON
1:08.93 (MetalRT ON)
1:09.36 (MetalRT OFF)

So it seems that MetalRT should perhaps be enabled by default for all Apple Silicon machines, not just the M3 line.

4 Likes

Now that everyone has gotten the message “Never order an MBP with 8GB only!!”,
inquiring minds might try to order a 16GB version.
Comparing prices on the internet here in Germany shows the following:

Prices change all the time, and some major discounts may happen later on Black Friday, but the pattern has been stable over the last few days:
a 16GB M3 is basically as expensive as the 18GB M3 Pro.
There is no reason to order the plain M3 at all, except in an iMac or a MacBook Air once those are released (in 2024).

I will replace my MacBook Air M1 8/256GB with an MBP M3 Pro 18/512GB on Black Friday or early next year.
I don’t really need the speed, because animation rendering will be done on a PC with RTX power anyway, but for everyday work, including some Blendering, I want the Apple system I’m used to, even if it slightly exceeds what I can afford.
(Congrats Apple your upselling strategy worked once again)
Dammit!!

3 Likes

I gave your settings a quick spin on a laptop with an RTX 3060.
RT on - 4:48
RT off - 9:04

2 Likes

You were referring to these settings in the quote below? The classroom one with 3000 samples?

If that’s the case, alright: the M2 Max, ignoring ray tracing for now, is in the ballpark (almost a minute faster).
I think we could anchor ourselves on these settings. It’s only one specific scene, but those with a 3070, 3080, 40X0, or M3 could, time permitting, benchmark it too.

Let me also suggest not skipping the 300-sample run, as it can bring to light how long it takes before the rendering actually starts and/or short-burst behavior… that’s why, for some of my tests a few months ago, I would even start the timer with Blender still closed.

I had a question on this: do the render statistics start the timing as soon as the render window opens, or when it actually starts rendering?

Yes, the 3060 had done 3000 samples. I also then ran 300, and the card scored 29 seconds with RT and 54 seconds without RT.

Interestingly enough, I then ran the 3000-sample test with an RX 6800 XT, and it managed a nearly identical time to the 3060 with RT on. The AMD GPU had no RT assistance, since I don’t want to mess with the drivers while there are active projects.

The render time count starts as soon as the window opens; that is, BVH building etc. is also counted in the time, unless persistent data is used, as titanuum mentioned.

Just for the record, all of my tests were run with persistent data.

1 Like

It is not only the price for upgrades.

As soon as you go for BTO options, delivery times can go up significantly,
and it often means you have no trial period during which you can return
the machine if you don’t like it after testing it.

And some discounters (Mediamarkt?) usually offer base configs only.

1 Like

In fact, there is no need to render with more than 300 samples (or the default number of samples for other projects) if the Persistent Data option is enabled and you measure the time of just the second render, which is evident from the results above.
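
As a sketch of that “time only the second render” idea (assuming Persistent Data is already enabled as in the earlier snippet, and keeping the default 300 samples):

```python
import bpy
import time

bpy.context.scene.cycles.samples = 300  # the Classroom default
bpy.ops.render.render()                 # first render: includes scene sync and BVH build

start = time.perf_counter()
bpy.ops.render.render()                 # second render: approximately pure path tracing
print(f"Second render: {time.perf_counter() - start:.1f} s")
```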

I think there is no point in using CUDA on a PC and reporting those results. Nobody, apart from old farts, uses CUDA anymore. The situation is quite different for Macs: MetalRT works on all Macs, but hardware ray tracing is only enabled for the M3 family (with more than 8 GB of RAM). This is definitely a more elegant solution than that schizophrenic CUDA/OptiX split from Nvidia.
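
For reference, this is roughly how the backend gets picked in the Cycles preferences via Python. The MetalRT toggle lives in the same preferences panel, but its exact property name has changed between Blender versions, so it is left to the UI here (a sketch, not a definitive setup script):

```python
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"  # "CUDA" (legacy) or "OPTIX" on Nvidia, "METAL" on Apple silicon, "HIP" on AMD
prefs.get_devices()                  # refresh the list of detected devices
for device in prefs.devices:
    device.use = True                # enable all detected GPUs

bpy.context.scene.cycles.device = "GPU"
```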

1 Like

Regarding CUDA/OptiX: sure, there is no point in using CUDA anymore. The run was done just out of interest.

1 Like

I also bought an M3 Pro over a MacBook Air (16GB M1). In short: for working it is a bit better, but mostly because of the speakers and the 2 extra GB of RAM (compared to the M1 Air, which I am now selling). Both the M1 and the M3 Pro are crazy fast at loading meshes and materials, and for modeling I feel no difference (apart from the hand resting position).
For rendering, the M3 Pro with RT ON is crazy fast, even 2x faster than the M2 Max.
I made a post about it here:

4 Likes

Yes, I know. My main point was that the CUDA results might suggest that the M2 Max is faster than the RTX 3060. I mean, sure, it’s faster in many cases, but definitely not in Cycles.

Reading this thread is as difficult as reading a sentence of any random book in a huge library :))

I feel like all Mac people are on this singular thread, like sheep, talking :))

I could not yet find any posts about Virtual Reality :)) not even gonna bother.

You’re right, but try posting anything regarding Macs in another thread. The admin will probably move it into this thread straight away. It makes no sense, but that’s the way it is.