Multilayer EXR file size chart

I did some tests with the different EXR compression methods, using half-float multilayer output. Here are my results for the lazy.

The bottom two are the lossy codecs, which give the best compression. What is interesting is that B44 and B44A are also lossy, yet they produced the biggest files.

These images were saved after denoising was applied.
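
If you want to reproduce the size comparison outside of Blender, here is a minimal sketch, assuming the classic OpenEXR/Imath Python bindings (`pip install OpenEXR`) plus numpy. The layer names and codec list are placeholders; random noise is the worst case for every codec, so feed it real render passes for meaningful numbers.

```python
# Sketch: write the same half-float layers with several EXR codecs and
# compare file sizes. Assumes the classic OpenEXR/Imath Python bindings
# and numpy; codec availability can vary between builds.
import os
import numpy as np
import OpenEXR
import Imath

W, H = 1920, 1080
rng = np.random.default_rng(0)

# Placeholder "passes" -- swap in real render data for a meaningful test.
layers = {
    "Combined.R": rng.random((H, W)),
    "Combined.G": rng.random((H, W)),
    "Combined.B": rng.random((H, W)),
    "Depth.Z":    rng.random((H, W)) * 100.0,
}

codecs = {
    "ZIP":   Imath.Compression.ZIP_COMPRESSION,
    "PIZ":   Imath.Compression.PIZ_COMPRESSION,
    "PXR24": Imath.Compression.PXR24_COMPRESSION,
    "B44A":  Imath.Compression.B44A_COMPRESSION,
    # DWAA/DWAB may also be exposed, depending on the bindings version.
}

half = Imath.Channel(Imath.PixelType(Imath.PixelType.HALF))
for name, codec in codecs.items():
    header = OpenEXR.Header(W, H)
    header["compression"] = Imath.Compression(codec)
    header["channels"] = {ch: half for ch in layers}
    path = f"codec_test_{name}.exr"
    out = OpenEXR.OutputFile(path, header)
    # Half-float channels expect 16-bit data, hence the float16 cast.
    out.writePixels({ch: d.astype(np.float16).tobytes() for ch, d in layers.items()})
    out.close()
    print(f"{name:>5}: {os.path.getsize(path) / 1e6:.2f} MB")
```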

Please feel free to share your EXR results and tips here.

I once did the same test in Nuke with a sample raw file from Panavision, and PIZ was the best choice for lossless. You should compare quality after compression against the original file using a difference operation in a compositor; Blender's compositor Mix node has that operation.

We need to test these further, especially with full floats and additional passes.

PIZ seems to be a middle-ground solution (I have been using it for now), but in my test the ZIP variants performed better. I am rendering thousands of frames, so even a couple of megabytes of difference per frame makes a big impact on storage and on further processing during the edit and FX stages.

As far as the data within the EXR file is concerned, you should always use lossless compression because the whole point of the format is that “exactly what goes in is exactly what comes out.”

But I do find that ZIP compression generally works quite well for these files. In my experience they shrink to less than half their original size, and of course it is lossless.

However, external hard drives and SSDs with ridiculously large capacity have also become “dirt cheap” at any office-supply store. 🙂

I don’t really care how “large” an EXR file might be. The only thing that I care about is that it is a self-describing archive of exact numeric data streams.

Well, that becomes an issue when one needs to render 100k images, some of which might not be as important as others, so in my view it is good to have the lossy option.

One thing I am trying to figure out is the read performance of these various methods. While the compression ratio is valuable, the time it takes to read and expand the data in memory also matters for compositing and editing. I would like to hear your experience if you have tested such a setup.
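
I have not measured After Effects or Nuke, but for a rough stand-alone number here is a sketch I would start from, again assuming the classic OpenEXR/Imath Python bindings. The file paths are placeholders for the same frame saved with different codecs; note the OS file cache hides disk speed after the first pass, so this mostly measures decompression cost.

```python
# Sketch: time a full decode of every channel into memory for the same
# frame saved with different codecs. Assumes the classic OpenEXR/Imath
# Python bindings; the file paths are placeholders.
import time
import OpenEXR
import Imath

FLOAT = Imath.PixelType(Imath.PixelType.FLOAT)

def time_full_read(path, repeats=5):
    """Best-of-N wall-clock time to decode every channel of one EXR."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        exr = OpenEXR.InputFile(path)
        for name in exr.header()["channels"]:
            exr.channel(name, FLOAT)  # forces the actual decompression
        exr.close()
        best = min(best, time.perf_counter() - start)
    return best

for path in ("frame_zip.exr", "frame_piz.exr", "frame_dwaa.exr"):
    print(f"{path}: {time_full_read(path) * 1000:.1f} ms")
```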

I used DPX in the past, but I want to move to multilayer EXR. However, there is one issue with Blender’s implementation: the memory and read/write overhead of Blender’s EXRs seems to be worse than in other applications, and this applies mainly to multilayer EXR.

In this test, PIZ has the lowest size among the lossless compressions. For multilayer files with technical passes like vector, UV, normal, and depth, results may vary from file to file: a city scene would have a larger depth pass, vector would be bigger for a car chase, etc. This experiment doesn’t apply to render engines that can export deep data.

This was done in Nuke using a Merge node set to the difference operation against the uncompressed file. To spot lost data, the gamma of the difference was increased by 3.
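
For anyone who wants to run the same check without a compositor, here is a minimal sketch, assuming the OpenEXR/Imath Python bindings and numpy. It prints the per-channel max and mean absolute difference, which is the numeric equivalent of the gamma-boosted difference view; the file names are placeholders for the original and the recompressed frame.

```python
# Sketch: numeric version of the "difference + gamma up" check. Assumes
# the classic OpenEXR/Imath Python bindings and numpy; file names are
# placeholders.
import numpy as np
import OpenEXR
import Imath

FLOAT = Imath.PixelType(Imath.PixelType.FLOAT)

def read_channels(path):
    """Return every channel of an EXR as a float32 numpy array."""
    exr = OpenEXR.InputFile(path)
    header = exr.header()
    dw = header["dataWindow"]
    w = dw.max.x - dw.min.x + 1
    h = dw.max.y - dw.min.y + 1
    data = {name: np.frombuffer(exr.channel(name, FLOAT), dtype=np.float32).reshape(h, w)
            for name in header["channels"]}
    exr.close()
    return data

ref = read_channels("frame_uncompressed.exr")
test = read_channels("frame_dwaa.exr")

for name in sorted(ref):
    diff = np.abs(ref[name] - test[name])
    print(f"{name:>16}  max {diff.max():.6g}  mean {diff.mean():.6g}")
# Lossless codecs (ZIP, ZIPS, PIZ, RLE) should print exactly 0 everywhere;
# DWAA/DWAB, B44/B44A, and Pxr24 on full-float data will show small residuals.
```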

Three years old and I can’t find the answer either. MultiLayer EXR can be pretty heavy, especially when using Cryptomatte. I’d love to find a read/write speed comparison as well.
This is also why you may want a lossy codec and 16-bit half float instead of full float; Cryptomatte still works. MultiLayer EXR is really the only option without spending time in Blender’s compositor setting up separate file outputs. But writing those files has a performance cost, so your effective per-frame time goes up. If you’re using Eevee you might have a 1-second render and a 3-second save, so slow saves can add a lot of overhead.
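
For reference, here is a minimal bpy sketch of that setup: multilayer EXR at half float with a lossy codec and the Cryptomatte passes enabled. The property names are from recent Blender releases and may differ in older versions; the output path is a placeholder.

```python
# Sketch: configure the render output discussed above via bpy.
# Property names follow recent Blender versions; adjust for older ones.
import bpy

scene = bpy.context.scene
settings = scene.render.image_settings

settings.file_format = "OPEN_EXR_MULTILAYER"
settings.color_depth = "16"   # half float instead of full float
settings.exr_codec = "DWAA"   # lossy; use "ZIP" or "PIZ" for lossless

view_layer = bpy.context.view_layer
view_layer.use_pass_cryptomatte_object = True
view_layer.use_pass_cryptomatte_material = True

scene.render.filepath = "//renders/frame_"  # placeholder output path
```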

Based on some size tests I did a while back, holding just the render plus both material and object Cryptomattes, Pxr24 had the best size-to-quality ratio. But at this point I’d accept a somewhat larger file if it let After Effects run a bit faster.

Does anyone have data on which codecs give decent read-back speeds in After Effects? I’m not so interested in Nuke yet because, although it may be the VFX standard, it likely won’t be in our budget unless we grow enough to have dedicated VFX users (likely never).