Mac: M2 Ultra - *VR (Part 1)

At last, the rumors about a more powerful Apple Silicon iMac are starting to surface. :+1:


I still find it a bit strange, unless it does something different :thinking:

Here is what Maxon said about the M1 Max

Is Blender swapping VRAM, or is it some magic?

I think we already talked about this earlier in the thread. It's good texture compression, not some penalty-free swapping magic.


I have no doubt that Redshift is correct in their assessment, but 24GB seems like a loooot of RAM to leave on the table for system tasks, especially if you then have to leave RAM entirely and start using swap.


Its quoted FP32 compute performance of 10.4 Teraflops is higher than that of some previous-gen Nvidia desktop GPUs, including the GeForce RTX 2080 gaming card and the Quadro RTX 4000 workstation card.

In addition, the 64GB of unified memory available to an M1 Max inside a maximally specced 16-inch MacBook Pro exceeds the on-board memory of even the current top-of-the-range Nvidia RTX A6000.

In the case of the MacBook Pro, that memory has to be shared between CPU and GPU, but it still potentially increases the size of the 3D scenes that can be rendered inside graphics memory.

According to Apple product line manager Shruti Haldea, the 16-inch MacBook Pro can “work with scenes and geometry the latest pro PC laptops can’t even run”.
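For what it's worth, the 10.4 TFLOPS figure quoted above matches the usual theoretical-peak arithmetic: ALU count × clock × 2 FLOPs per cycle (a fused multiply-add counts as two operations). A minimal sketch, assuming the commonly reported (not officially confirmed in this thread) specs for the 32-core M1 Max GPU, 4096 ALUs at roughly 1.27 GHz:

```python
# Theoretical peak FP32 throughput in TFLOPS.
# One fused multiply-add per ALU per cycle = 2 floating-point operations.
def peak_fp32_tflops(alus: int, clock_ghz: float) -> float:
    return alus * clock_ghz * 2 / 1000  # GFLOPS -> TFLOPS

# Assumed M1 Max numbers: 32 cores x 128 ALUs = 4096 ALUs, ~1.27 GHz.
print(round(peak_fp32_tflops(4096, 1.27), 1))  # ~10.4
```

Keep in mind this is a theoretical ceiling; actual renderer throughput depends on memory bandwidth, occupancy, and the API in use.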


5X faster with the Pro chips and DaVinci Resolve.

Support for Apple M1 Pro and M1 Max

  • Hardware accelerated Apple ProRes on Apple M1 Pro and M1 Max.
  • 120Hz support on Apple M1 Pro and M1 Max for smoother UI and playback.
  • Faster DaVinci Neural Engine performance on macOS 12.
  • Native HDR viewers on supported Mac hardware.
  • Native full screen mode on Mac.

A 120Hz timeline is going to be pretty cool, I must admit. :nerd_face:


Does Blender have HDR capabilities? Because it would be cool to render on the MacBook and view the image in HDR.


Nice summary of the information so far:


I keep seeing compute comparisons between the M1 Max and existing GPUs. But is that the whole story? How much influence will the OS and graphics API have on render times? Could it theoretically be that scenes render faster on an M1 Max than on an RTX 3080 (not mobile) on Windows or Linux?


I think the results can vary a lot from app to app. We’ve seen before that the GPU in the M1 can behave “weird”. There is no other architecture like this on the market. For some applications, or even individual operations, fast unified memory will be a critical factor. The next few weeks will be crazy.


I do not understand a word but nice to see anyway.

At 5:50: compared next to the old 16-inch, the new display with the notch looks sooo much cleaner. OK, I’ll give it to them, the notch was a great idea. :nerd_face:


I’m still half and half on it. It’s nice until you run a program that has lots of menu entries in the top bar, then it looks kinda sloppy.

Agreed, but I think developers are going to design with it in mind now. I’m wondering how Resolve is going to rework its layout.


File. Edit. Window. Render. Help. / Layout \ / Modeling \ / Scul \ nOtCh / pting \ / UV Editing \ / …


The macOS menu bar is the only thing that gets rendered in the region shared by the notch. Blender’s own menu will always sit below it. Even in full screen, that strip is blacked out, creating the same forehead bezel as the previous design.


Blender can take better advantage of this. Before the notch, you either had some work area taken up by the menu bar, then dead space, then the workspace tabs.
Or you went full screen and missed out on the menu bar.

Now, we can have the menu and full screen without losing real estate in Blender.

Current situation for Mac users:
Have the menu bar but lose lots of screen real estate.

Or go full screen and miss out on the menu bar.

The new layout would allow that menu bar to stay up top while in full screen! :slight_smile:

And thinking about it now, this could work for DaVinci Resolve too, if they set up their layout like Blender’s.


Does anyone know if Blender will run native on the M1 at some point?


It already does, and has been native for several months.
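If you want to confirm for yourself that the build you're running is native rather than translated by Rosetta 2, one quick check (an illustrative sketch, not an official Blender feature) is to ask Python for the machine architecture from Blender's Scripting console: a native process reports `arm64`, while a Rosetta-translated one reports `x86_64`.

```python
import platform

# Run this in Blender's Python console (Scripting workspace):
# "arm64"  -> the build is running natively on Apple Silicon
# "x86_64" -> the build is being translated via Rosetta 2
print(platform.machine())
```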


Oh… I thought I’d just read someone mention a benchmark, but that was run through Rosetta…

Does anyone have benchmarks on the native code on the new chip?

Thanks for any answers. I may buy one of these, so I’m trying to understand a bit ahead of time, since I’ll be using Blender on it.