OpenEXR export, but standard dynamic range?

I’ve grown really interested in using OpenEXR for my general render exports (I love the versatility and the space savings), but my biggest issue so far has been the general lack of information about its HDR handling. Once rendered, an EXR sequence is HDR by default, meaning any encoding software will try to squash the color range down to standard colors, and it always looks AWFUL. I’ve tried a few quick fixes such as gamma correction, but they never look identical to a straight PNG export.

Help me out here: how does one encode an EXR sequence to another format so it plays back correctly on SDR devices?

It really depends on the external “encoding software” you are considering.
You can think of OpenEXR exports as precise floats, instead of simple integers (uint8 or uint16) like in a .png. When exporting to an 8-bit .png, Blender scales the 0-1 range to 0-255 and clips everything outside it before saving. If you choose 16-bit, it scales 0-1 to 0-65535 and clips the same way.
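For instance, here’s a rough numpy sketch of that scale-and-clip idea (the pixel values are made up for illustration):

```python
import numpy as np

# Hypothetical float image data: linear values, which can exceed 1.0 in a render.
pixels = np.array([0.0, 0.25, 1.0, 4.2], dtype=np.float32)

# 8-bit path: clip everything outside 0-1, then scale to 0-255.
png8 = np.round(np.clip(pixels, 0.0, 1.0) * 255.0).astype(np.uint8)
# -> [  0,  64, 255, 255]  (the 4.2 is lost to clipping)

# 16-bit path: same idea, scaled to 0-65535 instead.
png16 = np.round(np.clip(pixels, 0.0, 1.0) * 65535.0).astype(np.uint16)
```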

But going from float to uint8 or uint16, there are a lot of different (other) ways to do it.
It’s up to you to figure out how your external software works and how to tune it. (Maybe it scales from min to max instead of clipping, etc…)
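For example, a min-to-max scaling (continuing the made-up numbers above) lands the same pixels on different codes:

```python
# An alternative way: scale from the image's own min to max instead of
# clipping, so nothing clips but the mapping shifts from image to image.
lo, hi = pixels.min(), pixels.max()                       # 0.0 and 4.2 here
png8_minmax = np.round((pixels - lo) / (hi - lo) * 255.0).astype(np.uint8)
# -> [  0,  15,  61, 255]  (compare with [0, 64, 255, 255] above)
```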


Also, you could try to clamp and normalize your render before feeding the .exr output node.
That way, if you use a standard “normalize” option in the other software, it should land on exactly the same final result.
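Continuing the numpy sketch above: once the data is clamped to 0-1 before it hits the EXR, an external min-to-max normalize agrees with the plain scale-and-clip (provided the clamped image actually touches 0.0 and 1.0):

```python
clamped = np.clip(pixels, 0.0, 1.0)              # -> [0.0, 0.25, 1.0, 1.0]
lo, hi = clamped.min(), clamped.max()            # -> 0.0 and 1.0
external = np.round((clamped - lo) / (hi - lo) * 255.0).astype(np.uint8)
# -> [  0,  64, 255, 255], identical to the clip-based png8 above
```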


See you :slight_smile: ++
Tricotou

Clamp and normalize?
I do recognize normalize as a compositing node but clamp? Is that in render settings?

If clamping and normalizing imply compositing, which nodes?
If it implies color management, I can’t change the transform from Filmic, as this is a photorealistic project for work; everything else is default.

By clamp he means enabling this option on the mix node:
[screenshot: the “Clamp” checkbox on the Mix node]

It keeps values from going over 1.
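If you prefer setting it up from a script, here’s a minimal sketch using Blender’s Python API (node and socket names as found in the default compositor tree; adapt to your own setup):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

render = tree.nodes['Render Layers']       # present in the default tree
composite = tree.nodes['Composite']

mix = tree.nodes.new('CompositorNodeMixRGB')
mix.use_clamp = True                       # the "Clamp" checkbox shown above
mix.inputs['Fac'].default_value = 0.0      # pass the first image through

tree.links.new(render.outputs['Image'], mix.inputs[1])   # first Image socket
tree.links.new(mix.outputs['Image'], composite.inputs['Image'])
```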

Thanks, I’ll give it a try :slight_smile:

Congratulations on falling into a rabbit hole!

There is no such thing as “HDR” to begin with, unless you mean a type of display or the “.HDR” file format encoding. The confusion begins with conflating very different models under that horribly overloaded term, and untangling them is your first obstacle. There are two models:

  1. Scene referred.
  2. Output / Display / Device referred.

The first describes a radiometric series of ratios as given in a scene, extending from some infinitely low quantity up to some infinitely large quantity. There are no magic values like 1.0 in a scene referred model; 1.0 is simply another intensity along the full range of intensities.

The second describes something with concrete fixed limits, like a display, a printer, or a camera sensor. Here 1.0 typically maps to the limits of the device, and talk of going “beyond 0.0 or 1.0” is rubbish.

When you save as an EXR, the software will encode the relevant data as it wishes. In Blender’s case, it is more or less a literal dump of the reference rendering data to the EXR. This is not ideal, but works in limited use cases.

That means that the scene’s colorimetry is preserved as the ray tracing engine generated it; associated alpha with RGB emissions from some extremely low value to some extremely large value.
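You can verify this yourself. A small sketch with the OpenEXR Python bindings (the file path is a placeholder):

```python
import OpenEXR
import Imath
import numpy as np

exr = OpenEXR.InputFile("render.exr")       # placeholder path
dw = exr.header()['dataWindow']
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1

# Read the red channel as 32-bit floats; scene referred data routinely
# runs far past 1.0, confirming nothing was compressed on export.
pt = Imath.PixelType(Imath.PixelType.FLOAT)
r = np.frombuffer(exr.channel('R', pt), dtype=np.float32).reshape(height, width)
print(r.min(), r.max())
```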

When you open this in “other software”, it’s up to that software to harness the data. Photoshop, being a legacy anachronistic mess, is a display referred application, and by default will only display the display referred minimum-to-maximum encoding range, which is 0.0 to 1.0 in conventional encodings. There’s no magic to make Photoshop work correctly. Its clipping assumption, that your data is always display referred, is the source of the “AWFUL”: it doesn’t attempt to properly render the scene referred data to the display, and if it were to try, it would need a whole pixel management pipeline dedicated to rendering those scene referred ratios to your output context.

Blender takes the scene referred range and prepares it for the display referred output encoding based on the settings you provide for the type of display you are on, along with the creative and aesthetic choices you have made regarding the view and look combinations. Those transforms cannot be adequately described by a simple power law transfer function. In fact, no power law transfer function works well to compress scene referred ranges down to display referred, and even if one did, it wouldn’t gamut map well along the intensity range. All values between 0.0 and 1.0, when rolled through a power law, yield values between 0.0 and 1.0 on output; values above 1.0 stay above 1.0.
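A quick numeric illustration of why (Reinhard’s x/(1+x) here is just a toy contrast, not what Blender’s view transforms actually do):

```python
import numpy as np

x = np.array([0.18, 1.0, 4.0, 16.0])       # scene referred intensities

power = x ** (1.0 / 2.2)    # -> [0.46, 1.0, 1.88, 3.53]; still clips above 1
reinhard = x / (1.0 + x)    # -> [0.15, 0.5, 0.8, 0.94]; fits the display range
```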

So there you have it. Use better software like Fusion, Nuke, Houdini, etc. and understand what the encoded data represents and how it is managed to your display or output!

If you need further help, or have any questions, unleash them here.

PS: The very sRGB specification itself avoids the use of the worthless term “gamma”. It is overloaded and rather meaningless, and doesn’t properly describe the encoding transfer characteristic of a display. The specification was made in 1996(!), so we would all do well to heed the advice.
