Preserving HDR information in EXR output from Blender in an 8-bit PNG?

The only way to get the full dynamic range out of a render or bake is to save it as a float EXR from Blender. Not even saving a 16-bit PNG will keep all the information in the highlights. But many applications call for a format other than EXR, such as PNG or even JPEG.

So is there a way to compress the full dynamic range information from the EXR into an 8-bit PNG using GIMP? So that, basically, all the HDR info in the EXR file can be seen in an 8-bit PNG, with the highlights not blown and the shadows not crushed. In video, Log does basically that: it squeezes a dynamic range that Rec. 709 normally couldn’t handle into Rec. 709.

Does anybody know how to do that with EXR to 8-bit PNG in GIMP? Again, I’m aware of the limitations of 8-bit PNG, but this is what I need to output to. So my question is really how to compress the dynamic range to fit within the 8-bit limit.
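To illustrate the idea described above (a log curve squeezing a wide linear range into a 0–1 encoding), here is a minimal Python sketch. The curve shape and parameters are illustrative only, not any particular camera's or Blender's actual log transform:

```python
import math

def log_encode(x, stops=14, mid_gray=0.18):
    """Map a linear scene value into [0, 1] along a log curve.
    `stops` is the total dynamic range to squeeze in (illustrative)."""
    lo = mid_gray * 2 ** (-stops / 2)   # darkest value we care about
    hi = mid_gray * 2 ** (stops / 2)    # brightest value we care about
    x = min(max(x, lo), hi)
    return (math.log2(x) - math.log2(lo)) / (math.log2(hi) - math.log2(lo))

def log_decode(y, stops=14, mid_gray=0.18):
    """Inverse of log_encode: recover the linear scene value."""
    lo = mid_gray * 2 ** (-stops / 2)
    hi = mid_gray * 2 ** (stops / 2)
    return 2 ** (y * (math.log2(hi) - math.log2(lo)) + math.log2(lo))

# A linear value of 8.0 (far above 1.0, i.e. a blown highlight in a
# naive 8-bit encode) still lands safely below 1.0 on the log curve,
# and round-trips back through the decode:
encoded = log_encode(8.0)
print(round(encoded, 3), round(log_decode(encoded), 3))
```

The point of the sketch is just the mechanism: bright values that a linear 8-bit file would clip are pulled inside 0–1, at the cost of an image that looks flat until the inverse curve (or a viewing LUT) is applied.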


I don’t know about GIMP; Google doesn’t return much information about its OCIO support. But you can always save your PNG using Filmic Log in Blender.

Yes, I have saved the PNG using Filmic. But if you save it out as an 8-bit PNG, it’s all lost. The only way to keep the whole range from Filmic is to save it as a 32-bit float EXR. Unless I’m missing something.

The Blender compositor has a node called Tonemap that you can use to remap all the 32-bit info, but you will definitely lose the HDRI lighting capabilities. Tell us more about your other software; maybe there’s another solution.


This is impossible. An 8-bit color space is one thing; data compression is another.

As I mentioned, we have the same problem in video. This is why Log was created: to address that very same problem, fitting 14 stops of dynamic range into a 7-stop bucket. So I was hoping the same could be done here.

I have no way around the 8-bit PNG. It must be that or JPEG, which is even worse.

This seems to be the answer, or at least there’s a good chance it could be. I’ve been trying to find tutorials on YouTube about that but haven’t so far. So if anybody knows of any good YT tutorials, I would love to see them. :wink:

Thanks again!

I once looked around the web for tonemapping information and found some very confusing posts and videos; it seems like no one really knows for sure how to do it. The simplest ways I found were using Blender’s Tonemap node with the Rh Simple default parameters, and HDR Toning in Photoshop with Equalize Histogram. The second one had better results. You’re probably avoiding proprietary software, but Photoshop has some tools for RAW photography files and can deal with 32-bit inputs, though if the file isn’t converted to 8 or 16 bits, half of the features are unavailable. If you’re starting to work with high-quality images, you should give Natron or Nuke Non-commercial a try.


You are missing something :slight_smile:
Filmic is a tonemapper. Your EXR is saved without tonemapping; it’s the raw data. It doesn’t care what you pick in Color Management, unless you apply a Tonemap node in the Compositor to “burn it in”.
Only 8/16-bit formats apply the tonemapper from Color Management. Have you tried saving out a PNG using Filmic Log, not the simple Filmic?
Filmic supports a 16-stop dynamic range; I think Filmic Log supports even more, if you adjust your exposure down a bit.
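As a back-of-the-envelope check on why those stop counts matter: stored linearly, an 8-bit file can only distinguish about log2(255) ≈ 8 stops between its smallest and largest non-zero code values, while a 16-stop range is a 65536:1 contrast ratio. Plain arithmetic, no Blender involved:

```python
import math

# Contrast ratio between the brightest and the darkest non-zero
# code value that an 8-bit *linear* encoding can hold:
linear_stops = math.log2(255 / 1)
print(f"linear 8-bit: ~{linear_stops:.1f} stops")   # ~8.0 stops

# A 16-stop dynamic range, expressed as a linear contrast ratio:
filmic_ratio = 2 ** 16
print(f"16 stops is a {filmic_ratio}:1 contrast ratio")
```

This is why a nonlinear curve (log or a tonemapper) is needed: it spends the 256 code values unevenly so the wide scene range survives.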


Yes, this is what I want: to burn the dynamic range into the bake. But the reason I output to EXR is that I couldn’t tonemap it to PNG. If there is a way to tonemap the dynamic range to PNG, then I will just do that inside Blender and be done. If I can squeeze more dynamic range into an 8-bit PNG using the Tonemap node, please tell me how. :slight_smile: I couldn’t find any useful tutorials on YouTube.

You got me curious, so I did some testing. These are the results:

False Color:

For still images I would stay with Photoshop HDR Toning, and for animations I would use the Tonemap node. I don’t know for sure, but I think Filmic for Blender is more of a LUT than a device transform that converts gamma curves. This needs technical research.




Interesting results. Tonemapping seems to definitely help. But how to use it? :slight_smile:

Go to the Compositing tab, enable Use Nodes, then Add (or Shift + A) > Color > Tonemap. Any nodes placed between the Render Layers node and the Composite output will apply your changes to the animation.


Ok thanks. I will have a play with the node.

So basically you only use the Key value, and that controls what gets squeezed in? And you control only the highlights? As in, you can bring only the highlights down from clipping to fit the 8-bit range? Or are there also controls for the shadows? I see that there are more controls under R/D Photoreceptor.

32-bit EXR can store values below 0 (which is definitely an issue) as well as above 1. The Tonemap node will bring those values inside [0, 1] while retaining the best look and pixel values you can get. Afterwards you can add a Color Correction or RGB Curves node and adjust your black level. The R/D Photoreceptor mode is a complete mystery to me.
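For intuition about what the Rh Simple mode is doing mathematically, here is a sketch of the classic Reinhard operator, x / (1 + x), which maps any non-negative linear value smoothly into [0, 1). This is my reading of the technique the node is named after, not Blender's exact implementation; the clamp of negatives is my own assumption:

```python
def reinhard(x):
    """Classic Reinhard tonemap: compresses [0, inf) into [0, 1)."""
    x = max(x, 0.0)          # EXR can store negatives; clamp them out first
    return x / (1.0 + x)

# Out-of-range HDR values land back inside the displayable range:
for v in (-0.2, 0.5, 1.0, 8.0, 100.0):
    print(v, "->", round(reinhard(v), 3))
```

Note how the curve never quite reaches 1.0, so even extreme highlights keep some separation instead of clipping to white.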

Honestly, I’m not sure what you are trying to do here … Shouldn’t Blender save a tonemapped PNG if you choose the Filmic View Transform and the Look that works best for you in Color Management? You get what you see in the render view. If you use EXR, this tonemapping step is skipped, and you can use the Filmic OCIO config in an outside app to tonemap it to the same look you have in the render view. What exactly do you need?


I’m with you on that one. Blender saves tonemapped images for 8/16-bit formats other than EXR.
I don’t see the point in applying Tonemap to the EXR, except that the Tonemap node has different options than the Colour Management panel.

Your scene doesn’t even have enough dynamic range to show the advantage of using Filmic. Place strong light sources in the frame to see its effect.

If by LUT you mean a grading LUT, then no, Filmic is a tonemapper. Honestly, you can do whatever you want with a LUT; a Look Up Table can hold many different types of information.


Fairly confused here – but, here’s the bottom line. First, you need to get your “final cut” video to OpenEXR format. Then, you need to create separate blend-files whose sole purpose is to create appropriate “deliverables,” such as PNGs or MOVs.

These blend-files – one per “target” and fully customized for each – will necessarily use the node-based compositor, to apply tone-mapping and perhaps other data manipulation at your artistic discretion, in order to “make the necessary decapitation as bloodless [-looking …] as possible.”

I say “artistic” because these are going to be very-artistic choices: there is by definition no direct path from one to the other, because the target is (by design) not capable. You’re going to have to make the final PNGs or MOVs “look as good as they can, according to you.”

Fortunately, your data source is brimming with detail.

You’re going to be dealing with, not only “color gamut” issues, but also “compression.” A key objective of both of these formats is to create small files that cheap hardware can use to produce good-enough images. “LUTs” are a fundamental part of that scheme.
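To make the “LUT” remark concrete: a 1D LUT is just a sampled curve applied per channel with interpolation. A tiny hypothetical example (the table values below are made up, not from any real LUT):

```python
def apply_lut_1d(x, lut):
    """Apply a 1D LUT to a value in [0, 1] with linear interpolation."""
    x = min(max(x, 0.0), 1.0)
    pos = x * (len(lut) - 1)        # position in table coordinates
    i = int(pos)
    if i >= len(lut) - 1:
        return lut[-1]
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

# A made-up 5-entry contrast curve:
lut = [0.0, 0.15, 0.5, 0.85, 1.0]
print(apply_lut_1d(0.375, lut))  # lands halfway between entries 1 and 2
```

Real display pipelines use the same mechanism with far more entries (and 3D LUTs for color crosstalk), which is how cheap hardware applies complex transforms quickly.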

And – you’re just going to have to decide. The computer cannot do this for you. Just try to make the output look as “decent” as you can.

Same test with the HDRI. I’ve often used Filmic, and I’m glad someone created it. Yet from testing, Filmic tries to get the same results as scene-referred camera log footage, and behaves like a LUT. The point of this post is preserving information when going from 32-bit to 8-bit; log footage preserves light information using less space than RAW. The Tonemap node even retains part of the blue sky.
