This is anachronistic lore. macOS has used the canonical sRGB transfer function (i.e. the sRGB OETF) across the board for years now, even on its late-2015+ DCI-P3 displays. The 1.8 power function is long since dead; it was dropped in Mac OS X 10.6 (Snow Leopard), circa 2009.
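For the curious, a quick pure-Python sketch (not code from macOS or Blender, just an illustration) of how the piecewise sRGB OETF differs from that legacy 1.8 power encoding:

```python
# Compare the piecewise sRGB encoding with the legacy pre-10.6 Apple
# 1.8 power encoding, at a few linear-light values.

def srgb_oetf(linear):
    """Piecewise sRGB encoding per IEC 61966-2-1."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

def legacy_mac_oetf(linear):
    """Pre-Snow Leopard Apple encoding: a pure 1.8 power function."""
    return linear ** (1.0 / 1.8)

for v in (0.01, 0.18, 0.5, 1.0):
    print(f"linear {v:4.2f} -> sRGB {srgb_oetf(v):.4f}, 1.8 power {legacy_mac_oetf(v):.4f}")
```

The two curves agree at 0.0 and 1.0 but diverge everywhere in between, which is why imagery prepared under one assumption looked wrong under the other.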
To make things (hopefully) clear…
The default Blender colour management configuration is a Lovecraftian shibboleth, filled with poor terminology and broken transforms, among other problems.
When one selects a “Display”, one is selecting the colorimetric response of the Display listed. In the Filmic configuration, this is very clearly listed as sRGB 2.2 power function hardware. This means that all views are designed for that particular idealized display.
Sadly there is quite a bit of confusion on the part of the developers, and the terminology in the official configuration, as well as its grasp of OCIO's design, leaves a bit to be desired. Hence the vague "sRGB" and "Default" titles show up in the configuration, which lead down the confusing paths we can see above.
The TL;DR is that under OCIO, "Display" selects the idealized display colorimetry and response of the type of display in question. For Filmic, this is an idealized sRGB display with a 2.2 power function in hardware. All Views and Looks are designed for that idealized display. If one's particular display deviates from that (e.g. Apple P3 or otherwise), one would require alternate Views and Looks designed for that display.
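A small sketch of why the mismatch matters. The EOTFs here are purely illustrative (an idealized 2.2 power display versus hardware that happens to decode with the piecewise sRGB curve); the point is that the same code value yields different emitted light, most visibly in the shadows:

```python
# A value encoded for an idealized pure 2.2 power display, when decoded by
# hardware with a different EOTF, emits different linear light than the
# View's author intended.

def encode_for_22_display(linear):
    # Inverse of the idealized 2.2 power hardware response.
    return linear ** (1.0 / 2.2)

def eotf_22(code):
    # The hardware response the View was designed for.
    return code ** 2.2

def eotf_srgb_piecewise(code):
    # A hypothetical mismatched display that decodes per IEC 61966-2-1.
    if code <= 0.04045:
        return code / 12.92
    return ((code + 0.055) / 1.055) ** 2.4

for linear in (0.002, 0.01, 0.05, 0.18):
    code = encode_for_22_display(linear)
    intended = eotf_22(code)            # what the View author assumed
    actual = eotf_srgb_piecewise(code)  # what the mismatched hardware emits
    print(f"{linear:.3f}: intended {intended:.5f}, actual {actual:.5f}")
```

In the deep shadows the mismatched decode emits roughly double the intended light, which is exactly the sort of drift that alternate Views and Looks would need to compensate for.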
[Happy to answer any other questions. I am a little leery of the term "gamma", as it tends to confuse folks more than it should, and worse, it is misused all over. For those interested, to cut through the loads of rubbish out there: the transfer function of a hardware display is a matter of encoding needs, not perception. The transfer function in typical sRGB display hardware is a pure 2.2 power function at the hardware level. It is a means of compression that cheats the encoded values in a mostly imperceptible manner, so that less bandwidth is needed to get something to the display. The values end up nonlinearly encoded on the software side and are fed to the hardware, which decodes them back to the physically linear light emission your display is capable of. Many people get confused and think the values are nonlinearly encoded for our perceptual system, which is a horrible misunderstanding; the nonlinear encoding is always undone back to physical linear output, and the perceptual link is simply a compression / bandwidth hack / design optimization.]
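The bandwidth-hack point can be demonstrated in a few lines. This is my own illustrative sketch, assuming 8-bit quantization and the pure 2.2 power hardware described above: quantizing in the nonlinear domain spends code values where they matter, so the decoded linear result survives far better in the shadows than quantizing linear light directly.

```python
# Quantize to 8 bits in the nonlinear domain (encode with the inverse 2.2
# power, quantize, decode in "hardware") versus naively quantizing linear
# light, and compare what comes back at low linear values.

def quantize(value, bits=8):
    levels = (1 << bits) - 1
    return round(value * levels) / levels

def roundtrip_nonlinear(linear):
    # The actual pipeline: nonlinear encode -> 8-bit code -> hardware decode.
    return quantize(linear ** (1.0 / 2.2)) ** 2.2

def roundtrip_linear(linear):
    # The naive alternative: 8-bit code values hold linear light directly.
    return quantize(linear)

for v in (0.001, 0.005, 0.02):
    print(f"linear {v:.3f}: via 2.2 encode {roundtrip_nonlinear(v):.5f}, "
          f"linear 8 bit {roundtrip_linear(v):.5f}")
```

A linear value of 0.001 quantized directly in linear light collapses to code zero and is lost entirely, while the nonlinearly encoded path recovers it almost exactly; that is the entire reason for the encoding, and perception only enters as the yardstick for which errors go unnoticed.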