Congratulations on falling into a rabbit hole!
There is no such thing as “HDR” to begin with, unless you mean a type of display or the “.HDR” file format encoding. The confusion begins with conflating two very different models under one horribly overloaded term. That’s the first obstacle to overcome. There are two models:
- Scene referred.
- Output / Display / Device referred.
The first describes a series of radiometric ratios as given in a scene, extending from some infinitesimally low quantity up to some arbitrarily large quantity. There are no magic values like 1.0 in a scene referred model; 1.0 is simply another intensity along the full range of intensities.
The second describes something with concrete, fixed limits: a display, a printer, or a camera sensor, for example. Here 1.0 typically corresponds to the maximum the device can produce. Talk of going “beyond 0.0 or 1.0” is rubbish in this model; the device simply has nowhere for such values to go.
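To make the distinction concrete, here is a minimal numpy sketch. All the values are invented for illustration; the point is only that scene referred data carries open-ended ratios, while a display referred encoding pins 1.0 to the device maximum:

```python
import numpy as np

# Hypothetical scene-referred emissions: pure radiometric ratios.
# 1.0 is not special here; it is merely one intensity among many.
scene = np.array([0.0018, 0.18, 1.0, 46.0, 12000.0])

# Opening up the exposure by one stop doubles every ratio, and the
# result is still perfectly valid scene-referred data.
scene_plus_one_stop = scene * 2.0
assert scene_plus_one_stop.max() > 1.0  # fine for a scene

# A display-referred encoding, by contrast, has hard device limits:
# 0.0 is the display minimum and 1.0 the display maximum.
display = np.clip(scene, 0.0, 1.0)  # all a device can hold
```

Note that the clip at the end is not a view transform; it is just the hard wall of the device range, which is exactly why the two models must not be conflated.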
When you save as an EXR, the software will encode the relevant data as it wishes. In Blender’s case, it is more or less a literal dump of the reference rendering data to the EXR. This is not ideal, but works in limited use cases.
That means that the scene’s colorimetry is preserved exactly as the ray tracing engine generated it: associated (premultiplied) alpha with RGB emissions spanning some extremely low value to some extremely large value.
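A quick sketch of what associated alpha means in practice, with invented pixel values: the RGB already carries the alpha weighting, which is what lets an emission legitimately exceed what its coverage alone would suggest, and it makes the “over” composite a single multiply-add:

```python
import numpy as np

# Hypothetical pixel: a bright emission partially covering the pixel.
# Associated (premultiplied) alpha: RGB is already weighted by coverage.
fg_rgb = np.array([4.0, 2.0, 0.5])  # scene-referred, can exceed 1.0
fg_alpha = 0.25                     # geometric coverage

bg_rgb = np.array([0.1, 0.1, 0.1])  # background emission

# The "over" composite for associated alpha: no unpremultiply needed.
out = fg_rgb + (1.0 - fg_alpha) * bg_rgb
```

This is the convention EXR compositing assumes, and it is why naively “unpremultiplying” scene referred data in display referred software tends to mangle edges and emissions.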
When you open this in “other software”, it’s up to that software to harness the data. Photoshop, being a legacy anachronistic mess, is a display referred application: by default it will only display the display referred minimum-to-maximum encoding range, which is 0.0 to 1.0 in conventional encodings. There’s no magic setting to make Photoshop work correctly. The blanket assumption that your data is always display referred, and the clipping that follows from it, is the source of the “AWFUL”; Photoshop makes no attempt to properly render scene referred data to a display, and if it were to try, it would need an entire pixel management pipeline dedicated to mapping those scene referred ratios to your output context.
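The clipping failure is easy to demonstrate. Assuming a hypothetical highlight gradient from a render, everything above 1.0 collapses to a single flat value once the display referred assumption is applied:

```python
import numpy as np

# A hypothetical highlight gradient from a ray tracer: smoothly
# increasing scene-referred intensities, mostly above 1.0.
highlight = np.linspace(0.5, 8.0, 6)

# What a display-referred application does by default: assume the
# data already fits the 0.0-1.0 encoding range and clip the rest.
clipped = np.clip(highlight, 0.0, 1.0)

# Five of the six distinct graded values collapse to flat white;
# the gradation in the highlight is destroyed, not rendered.
```

That flat, sheared-off white is exactly the “AWFUL” in question.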
Blender takes the scene referred range and prepares it for the display referred output encoding based on the settings you provide for the type of display you are on, along with the creative and aesthetic choices you have made via the view and look combinations. Those transforms cannot be adequately described by a simple power law transfer function. No power law works to compress scene referred ranges down to display referred, and even if one did, it wouldn’t gamut map along the intensity range. A power law maps every value between 0.0 and 1.0 to another value between 0.0 and 1.0, and every value above 1.0 to another value above 1.0; it reshapes the range but never compresses it.
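You can verify the power law claim directly. Here a simple Reinhard curve stands in for the kind of compression a view transform performs; Blender’s actual view transforms are far more involved (and handle gamut mapping too), so treat this purely as an illustration:

```python
import numpy as np

scene = np.array([0.18, 0.9, 4.0, 50.0])  # scene-referred ratios

# A pure power law maps [0, 1] onto [0, 1], and anything above 1.0
# stays above 1.0: it reshapes the range but does not compress it.
power = scene ** (1.0 / 2.4)

# A crude global tone curve (simple Reinhard, purely illustrative)
# asymptotically squeezes the entire open-ended domain into [0, 1).
reinhard = scene / (1.0 + scene)
```

The power law output still carries out-of-range values destined for the clip, while the compressive curve brings every intensity into the display referred range.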
So there you have it. Use better software like Fusion, Nuke, Houdini, etc. and understand what the encoded data represents and how it is managed to your display or output!
If you need further help, or have any questions, unleash them here.
PS: The sRGB specification itself avoids the worthless term “gamma”. It is overloaded, rather meaningless, and doesn’t properly describe the encoding transfer characteristic of a display. The specification was made in 1996(!); we would all do well to heed its advice.