Blender color management and compositing problem

hello, i have a question about color management in fusion:

what should I do if I have a blender render in exr, and I want to blend that with stock footage and pngs in srgb color space?

i know I should convert the stock footage to linear first, then blend it with the blender render. BUT after the compositing I have to convert the result to filmic srgb by adding an OCIOColorSpace node at the end, and then problems appear: the stock footage looks washed out. if I convert everything to regular srgb instead, the blender render is overexposed and oversaturated.

so what I usually do is convert the stock footage to srgb and the blender render to filmic srgb separately, then combine them at the end. that way I get the correct color, but the limitation is that I can’t do any complex blending between them, because they are all in srgb space by then and all the srgb problems will appear if I try to do anything to them.

so what should I do here?

What does this mean? Are you using Fusion for compositing?

If you care about color precision, then never use Filmic. Use Standard with the “None” look instead.

What kind of blend are you trying to do?

So anyway, in both Fusion and Blender Compositor, we can decide the color space of each footage. We have to select the correct one. And the final render will convert the result to the output color space. So there are input color space and output color space.

yes im using fusion

if I don’t use filmic, the dynamic range is low, right?

blending like add, overlay, etc., also general compositing effects like blur or glow (all of which will give artifacts in srgb, like dark halos, clipped highlights, etc.)
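To illustrate the artifacts mentioned above, here is a rough Python sketch (using the standard sRGB transfer functions, not Filmic) showing how an additive blend done on sRGB-encoded values overshoots compared with the same blend done in linear light:

```python
def srgb_to_linear(c):
    # Standard sRGB piecewise decode (IEC 61966-2-1)
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Inverse: encode linear light back to sRGB
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Two light sources, each contributing 0.2 in linear light.
a_lin = b_lin = 0.2

# Correct: add in linear light, then encode for display.
correct = linear_to_srgb(a_lin + b_lin)                 # ~0.665

# Wrong: encode first, then add the sRGB-encoded values.
wrong = linear_to_srgb(a_lin) + linear_to_srgb(b_lin)   # ~0.969

print(correct, wrong)  # the sRGB-space add comes out far too bright
```

The same mismatch is what produces the dark halos and blown highlights when blurs, glows, and blend modes run on display-encoded pixels instead of linear light.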

yes, what I know is to do the compositing in linear space, so I converted every srgb asset to linear (pngs, jpgs, etc.), then I can blend them with the blender render with no issue. the problem is that after the compositing I have to transform from linear space back to srgb, and the filmic srgb OCIO config will make my srgb assets look washed out. so my question is: how do I do it properly?

No. Filmic will try to “normalize” the values to between 0 and 1. Standard will keep the values as what they should be, in compositing and programming. But on an sRGB viewport/monitor, any value higher than 1 will be displayed as 1 and any value lower than 0 will be displayed as 0.
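To make that distinction concrete, here is a toy sketch. The Reinhard-style curve is only an illustrative stand-in for a "normalizing" view transform, not the actual Filmic curve:

```python
def display_clamp(v):
    # Standard: out-of-range scene values are simply clipped by the display
    return min(max(v, 0.0), 1.0)

def toy_tonemap(v):
    # Illustrative Reinhard-style curve: smoothly compresses [0, inf) into
    # [0, 1). A stand-in for a "normalizing" transform, NOT the real Filmic.
    return v / (1.0 + v)

highlight = 4.7  # a bright scene-linear value from a render
print(display_clamp(highlight))  # 1.0  -> all detail above 1.0 is lost
print(toy_tonemap(highlight))    # ~0.825 -> highlight detail is retained
```

This is why a Standard view looks clipped on a high-dynamic-range render, while a normalizing transform keeps the highlights readable.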

I don’t remember Fusion color management being as complicated as that. You just need to select the proper color spaces for the general input, the specific inputs and the output, then the software will just do everything. That’s the case for the Resolve built-in Fusion. Not sure about the standalone.

so this means that I shouldn’t use filmic when exporting? or will it always export linear?
but if I don’t use filmic the viewport will look extremely bad, so how do I do proper lighting if I don’t know exactly how it will look after compositing?

so in Resolve, if you use filmic srgb as the output, srgb assets like pngs won’t appear washed out?

i might have just figured this out: when linearizing an srgb file, use an OCIOColorSpace node and set the input to filmic srgb and the output to linear, instead of just removing the curve with a Gamut node. then the srgb file will get tone mapped to match the filmic setting.

i think this is the proper way to do it.

You might want to adjust the lighting again. If the final output is sRGB, and meant to be viewed as sRGB, then the artist should manually adjust the lighting and materials to sit somewhere between 0 and 1 in the first place. Using filmic is not recommended.

Not sure, but if filmic is designed to normalize values, then the assets might get “washed out” if any pixel exceeds the value of 1 in the composition. Me and other people I know never use filmic in such a way, so I’m not sure.

Do you have to do that? Is there any documentation on whether we have to manually linearize sRGB assets in a Fusion composition? Is it a requirement of the OCIO workflow or something?

You have two options.

  1. Encode to an sRGB file, then load and set the encoding to sRGB and composite.
  2. “Cheat”. The cheating method isn’t ideal because we don’t know the actual emission levels of your stock encoding. But you could in theory use a transform that sort of matches the look of the stock photography: pick the close-ish Filmic contrast, then make a transform that is essentially a to_reference using the file encoding for the contrast, with a ColorSpaceTransform that goes from Filmic Log to Linear.

#1 will give you a composite relative to the light emissions from your display, where #2 will combine the light levels of your render with the simulated / cheated light levels of your stock photograph as though they were in the same sort of light level range of your render.

I’m sure someone here can mock up the stanza required for #2. The limitation is sadly in the stock encoding not properly giving you the required light ratio data.

I see you found the “cheat” method.

If you need a “better” cheated match, pick the contrast that loosely matches the stock photography, and then reference the matching contrast file in the LUTs folder. It will look something like…

  - !<ColorSpace>
    name: Filmic sRGB
    family: display
    bitdepth: 32f
    description: |
      Cheat Stock Photography Base Contrast
    isdata: false
    allocation: lg2
    allocationvars: [-12.473931188, 12.526068812]
    to_reference: !<GroupTransform>
      children:
        - !<FileTransform> {src: filmic_to_0-70_1-03.spi1d, interpolation: linear, direction: inverse}
        - !<ColorSpaceTransform> {src: Filmic Log, dst: Linear}
    from_reference: !<GroupTransform>
      children:
        - !<ColorSpaceTransform> {src: Linear, dst: Filmic Log}
        - !<FileTransform> {src: filmic_to_0-70_1-03.spi1d, interpolation: linear}

Replace the relevant contrast, or do up one for each, on the spi1d line.

so there is no proper way to implement filmic in a compositing workflow?

if I export srgb then I guess I’ll have to do all the color and lighting in post instead, and everything in the viewport will just be a rough estimation, right? that’s a bit counterintuitive imo

also, I don’t understand what that code is. where should I put it?

if I have multiple assets that have vastly different looks, then I guess I should match all the assets to the render and then use the cheat method, is that correct?

also I wonder if ACES is going to fix this, or will it have the exact same problem as filmic?

There is. This is not the issue you are facing.

Stock photography has permanently lost the light information required to properly apply image formation such as Filmic. This is not a software fault. This is the problem with the random stock photography.

Renders and any other information, including DSLR captures in the camera encoding, can all work properly. It is only random stock photography that cannot properly communicate light emission levels.

Stop using stock photography then. Find suitable imagery with proper light emissions that can be properly decoded. EXRs and dot HDR files typically will contain proper light emission ratios.

They live in the OpenColorIO configuration file located in release/datafiles/colormanagement. However, if this is alien to you, best advice is to avoid tweaking the files.

This is the problem with your stock photography. No one can help that.

This is not a problem with Filmic nor ACES nor anything. This is a fundamental misunderstanding as to what imagery ready for consumption is, and what light data ready for manipulation is.

TL;DR: Stock photography can not, and will never work properly in compositing.

got it,

someone suggested to me that if I use gain to scale down the render file to 0–1 and then do the compositing after that, I won’t have any problem. do you think this is a viable solution?
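For reference, here is a quick sketch of what a uniform gain does, assuming an illustrative set of scene-linear pixel values: it preserves ratios, but when the peak is high it crushes everything below it, which is why it is not equivalent to a tone curve:

```python
# Illustrative scene-linear pixel values:
# deep shadow, midtone gray, diffuse white, hot highlight
pixels = [0.05, 0.18, 1.0, 8.0]

# Uniform gain: scale everything so the brightest pixel lands at 1.0
peak = max(pixels)
gained = [p / peak for p in pixels]
print(gained)  # [0.00625, 0.0225, 0.125, 1.0]
```

The midtone of 0.18 ends up at 0.0225, nearly black. A gain avoids clipping, but at the cost of darkening the whole image; a tone curve instead compresses only the highlight region.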

Sometimes I kind of have to use stock photos or video because I don’t have the time to make them myself. I’m interested in that better cheating method you showed me with that code. can you tell me a bit more about how to implement it? thanks

unrelated question:
so if a stock photo loses its light information permanently, can I scale up the srgb gamut from the stock image to a wider gamut? does it work like that?

No. That’s dumb.

Ask yourself what you are trying to achieve and you’ll see why it’s a non-solution.

If you are forced to use broken assets, it is what it is. There’s nothing “better” than simulating / guessing / hacking the encoding to pretend it’s radiometric-like as per the discussed solutions.

The best cheat is outlined above. You’d have to learn how to edit the config.ocio file to add all of the contrasts. Back it up and experiment. Easy to restore it if you screw up.

By adding all of the contrasts, you can compare how a render looks to the encoding of the stock photo, and come up with the best hack guess.

The light volume is the issue. At 8 bits, it’s a horrible source and may even start falling apart if your hack guess is off by too much. If you get it close, it should barely hold up.

That is not really how it works, as the information is long gone and the bit depth is ridiculously reduced. It just is what it is. The hack guess workaround approach is about as good as it gets with garbage stock photography. It should fare slightly better than compositing it all in the display range. Or it might not! Depends on how things line up.