Render layer vs. image node - not the same result

Hello everyone,
I am new to compositing in Blender.
I have set up a simple pipeline for Glare adjustment in the compositor. I am working with an image sequence, so I rendered one frame to test the setup. After the render I plugged the result straight from the Render Layers node, and it looked fine.

When I tried to repeat it with the complete rendered sequence (.jpg) loaded through an Image node, the Glare was barely visible. As a new user I cannot attach screenshots for reference, so I hope this explanation will do.

Is it that the freshly rendered image has more data than a saved .png or .jpg, even though there was just a Combined pass?

Thank you very much
A.

Do you have the Compositor checkbox turned on in render settings?

Yeah, I have the Compositing checkbox under Post Processing in the Output properties active. I didn't find any Compositor node under the render settings.

Yes, the fresh render does in fact have more data stored than a PNG.

The difference is what’s called the dynamic range. A PNG file has a maximum brightness it can store. If you go into the image output settings, you will see a setting called “color depth”. The PNG format offers a choice between 8 bits per channel and 16 bits. The 8-bit option is a standard image file, like you would see on the Internet, while the 16-bit option has a better brightness range, which means a black-to-white gradient would be more detailed and would better survive color correction.
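Here is a minimal sketch (plain Python, not the Blender API) of what that loss looks like. The `quantize_8bit` helper is hypothetical, just a stand-in for saving a linear float value into an 8-bit channel:

```python
def quantize_8bit(value):
    """Simulate saving a linear float value into an 8-bit channel (0-255)."""
    clipped = min(max(value, 0.0), 1.0)   # anything brighter than 1.0 is lost
    return round(clipped * 255)

def dequantize_8bit(code):
    """Load the 8-bit channel value back into a float."""
    return code / 255

# A rendered highlight can easily be several times brighter than display white.
hdr_pixel = 5.0
stored = dequantize_8bit(quantize_8bit(hdr_pixel))
print(stored)  # 1.0 -- the "5x overbright" information is gone
```

A float format would simply store 5.0 as-is, which is why the raw render behaves differently from the saved file.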

However, if you save the image in the OpenEXR format, you get the “full float” option (32 bits per channel). This can store any level of brightness, no matter how bright, and can contain the raw output of the render unaltered. The trade-off is that the file is larger, and image editing apps like Photoshop won’t be able to fully display the brightness range. If you then load that kind of file in the compositor, it will behave the same as the Render Layers node.

Even better, there is also the multilayer EXR format, which additionally includes every render pass, so it’s basically the exact data that gets sent to the compositor, just saved as a file.
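If you want to set this up from a script rather than the UI, a sketch using the standard Blender Python API (bpy) might look like this (run inside Blender; the codec choice is just an assumption to keep file sizes down):

```python
import bpy

# Configure the render output as multilayer EXR: one .exr per frame,
# containing every render pass at full float precision.
settings = bpy.context.scene.render.image_settings
settings.file_format = 'OPEN_EXR_MULTILAYER'
settings.color_depth = '32'    # full float, no brightness clipping
settings.exr_codec = 'ZIP'     # lossless compression to save disk space
```

The same properties are what the Output Properties panel writes to, so this is equivalent to picking OpenEXR MultiLayer / Full Float in the UI.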

The reason your glare is barely visible right now is that the effect relies largely on pixels that are too bright to be displayed properly on screen.
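To make that concrete, here is a hypothetical stand-in for a threshold-style glare (the real Glare node is more sophisticated, and the 1.0 threshold is an assumption): it only picks up brightness above the threshold, so once a JPG save has clipped everything to 1.0, there is nothing left for it to find.

```python
def glare_contribution(pixels, threshold=1.0):
    """Total brightness above the threshold -- a crude proxy for glare strength."""
    return sum(max(p - threshold, 0.0) for p in pixels)

render_pixels = [0.2, 0.8, 5.0, 12.0]              # raw float render, overbrights intact
jpg_pixels = [min(p, 1.0) for p in render_pixels]  # 8-bit save clips to 1.0

print(glare_contribution(render_pixels))  # 15.0 -> strong glare
print(glare_contribution(jpg_pixels))     # 0.0  -> barely any glare
```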

Another use for images with high dynamic range is HDRI lighting. These are 360-degree images that can be used as the background of a 3D scene; because they store a large range of brightness, they can accurately replicate the lighting conditions of the real-world location where they were captured.


I had a suspicion it had something to do with .exr. I tried multilayer half float, but there was not much of a difference. Now I plugged in a 32-bit .exr and the result really is like it was straight after the render. I hope the whole sequence will process in a decent time and won't consume my entire storage space.

Thank you!
A.


OpenEXR is specifically intended to be a data file: it captures the exact floating-point numbers produced by the rendering process, with no loss. (Yes, the files are big, but that’s beside the point.) The format was originally designed by Industrial Light & Magic, while the “multilayer” extension came from the Blender Foundation and was later adopted into the standard. There is one file per frame.

Use EXR files throughout your pipeline until you come to the final stage of producing “deliverable files” – image files and movie files for display on the target hardware. Make the “final renders” as OpenEXR, then create one blend-file for each deliverable.