Good Practice and Compositing Workflow

Not if they know what they are doing; Adobe products do not work in this capacity. Adobe's software also has extremely crippling pixel management issues as of 2018. A couple of hacks have attempted to resolve the issue (no pun intended), but they have merely postponed it.

The problem stems from the architecture Adobe set forth way back in the dark ages: it is display referred. The EXRs you are delivering are scene referred, much like what a higher-end camera can deliver via a log encoding. Given that none of the Adobe products offer a means to apply a camera rendering transform to properly render the scene data, the results end up blindly lopped off at the display-referred limits. There is no proper way around this; only hacks and data mangling make it feasible at all. Even after doing all of that, you bugger up the data for compositing, as it is no longer scene-linear.
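A minimal sketch of the lop-off, with assumed values: scene-linear renders routinely contain values well above 1.0 (a sun disc, a hot highlight), a display-referred pipeline simply clips them, while a camera rendering transform (a toy Reinhard-style curve here, not Blender's Filmic) compresses them smoothly instead.

```python
def naive_display_clip(v):
    """What a display-referred pipeline effectively does: hard clip to [0, 1]."""
    return min(max(v, 0.0), 1.0)

def toy_camera_transform(v):
    """Toy Reinhard-style highlight compression, for illustration only --
    not any shipping camera rendering transform."""
    return v / (1.0 + v)

# Mid grey up through a hot source, in scene-linear units.
scene_linear = [0.18, 1.0, 4.0, 16.0]

clipped = [naive_display_clip(v) for v in scene_linear]
rendered = [toy_camera_transform(v) for v in scene_linear]

print(clipped)   # 4.0 and 16.0 both collapse to 1.0 -- highlight detail is gone
print(rendered)  # the hot values remain distinct on their way to the display
```

The point is not the particular curve; it is that *some* transform must sit between the scene data and the display, or everything above display white is destroyed.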

2018+ compositing software keeps, or at the very least gives you a mechanism via nodes to keep, the reference space pixel values in one state (scene-referred linear) while isolating the camera rendering transform (display-referred nonlinear). This means the math holds together in the reference space with no unwanted nonlinear breakage, and the image ends up “looking correct” on the display output side.
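That separation can be sketched in a few lines, with assumed pixel values: do the math (an average, standing in for a blur) on scene-linear data and encode for display last, versus doing the same math on already display-encoded values. The sRGB OETF below is the standard piecewise encode.

```python
def srgb_oetf(v):
    """Standard sRGB display encode; applied only at the output stage."""
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

a, b = 0.1, 1.0  # two scene-linear pixel values

# Correct order: composite/filter in linear, view transform at the end.
linear_first = srgb_oetf((a + b) / 2)

# Broken order: the math runs on display-encoded (nonlinear) values.
encoded_first = (srgb_oetf(a) + srgb_oetf(b)) / 2

# The two results disagree -- averaging nonlinear values skews the energy.
```

Every filter, blur, and blend suffers the same skew when the view transform has been baked into the working data, which is exactly the breakage being described.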

This extends all the way down into the core of the Adobe products, where even the most basic blending mode algorithms assume display-referred rubbish. To this day those garbage algorithms are also in Blender, permitted to exist out of a lack of awareness that they do not work on scene-referred data. It’s up to the community to get them removed.
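As a concrete illustration (assumed values), take the classic “screen” blend mode, defined as 1 − (1 − a)(1 − b). The formula only makes sense when both inputs live in the display-referred [0, 1] range; feed it a scene-referred value and it falls apart.

```python
def screen(a, b):
    """Display-referred 'screen' blend: only meaningful for inputs in [0, 1]."""
    return 1.0 - (1.0 - a) * (1.0 - b)

print(screen(0.5, 0.8))   # ~0.9 -- sane, both inputs within display range
print(screen(0.5, 4.0))   # 2.5  -- nonsense once b is a scene value above 1.0
print(screen(0.5, -0.5))  # 0.25 -- the "lighten" mode now darkens the result
```

Every blend mode built from the same [0, 1] assumption (overlay, soft light, and friends) breaks in the same way on scene data.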

Because of the daft way Blender integrates pixel management, EXRs ignore all colour transformations. While this is problematic on a number of levels, the data is “as it is in reference” when saved to an EXR.

The reason I asked about exposure is that, if it is set via the CM panel, the data remains in its original state in the EXR. If you were compensating for an overexposed image via the CM panel, that would explain why the image looks “overexposed” in AE. However, it sounds like the garbage software is simply dumping your rendered data “as is”, and the hot sources are all being clipped. That is not overexposure; that is garbage software at work.
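A hedged sketch of why the CM-panel exposure leaves the EXR untouched: exposure in scene-linear terms is just a multiply by 2 to the power of the stop adjustment, applied on the way to the view; the values on the way to disk never see it. The numbers below are assumptions for illustration.

```python
def view_exposure(v, stops):
    """Exposure as applied in the view path: multiply by 2**stops.
    This never touches the scene-linear data written to the EXR."""
    return v * (2.0 ** stops)

render_value = 8.0                         # hot source; stays 8.0 in the EXR
viewed = view_exposure(render_value, -3)   # shown as 1.0 in the viewport
```

So a -3 stop CM adjustment makes the viewport look tame while the file still carries the full 8.0, which is exactly what a display-referred app then clips.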

AVI RAW will bake the view transform into the encoding. That is nonlinear all the way down, and will result in broken overs and the like. For this project, it is what it is; AE is a smouldering dumpster fire of hot garbage. You would be wise to introduce your friend to Fusion, as there is no better time to learn than on a project where it doesn’t matter. For the record, Resolve has some nasty broken bits too, and only Fusion stand-alone allows you to work properly. The issue with Resolve is its internal home-brew colour management system, and how Fusion was integrated by bending it around that system. The result is nonlinear complexity all over again.
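The “broken overs” can be shown in a few lines, with assumed premultiplied values: the over operation, out = fg + bg × (1 − alpha), is only valid on scene-linear data, so compositing footage with a baked nonlinear encode (a gamma-2.2-ish stand-in below) gives a different, wrong answer.

```python
def over(fg, bg, alpha):
    """Porter-Duff 'over' for premultiplied values; valid on linear data only."""
    return fg + bg * (1.0 - alpha)

def encode(v):
    """Stand-in nonlinear view encode (gamma 2.2-ish), as baked into AVI RAW."""
    return min(v, 1.0) ** (1.0 / 2.2)

fg, bg, alpha = 0.2, 0.5, 0.5

correct = encode(over(fg, bg, alpha))        # composite in linear, view last
baked = over(encode(fg), encode(bg), alpha)  # composite on baked pixels

# baked != correct: semi-transparent regions and edges pick up wrong values.
```

The mismatch shows up in practice as fringing and energy shifts wherever alpha sits between 0 and 1.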

Fusion Standalone, on the other hand, works excellently.
