How can I correct the color offset applied to standard (sRGB) image inputs when Filmic color management is enabled?

Filmic makes input sRGB images appear faded and desaturated, and I want to find a way to compensate for that color transform offset while compositing with Filmic color management enabled.

How can I make them look the same as they do when standard color is enabled, using compositing nodes?

Here’s a common example of why I need to know. I’m using Blender’s compositor with Filmic color management enabled so that I have plenty of control over the color range. The client hands me some images that are sRGB and asks me to mix them into the Filmic-rendered image. Not a big deal, since I won’t be tweaking their brightness as much as the render’s. The client NEEDS the sRGB inputs to look correct, but the Filmic treatment of those images is washed out and dull, and looks odd next to the somewhat more vibrant renders.

For various reasons there are many people asking this, and many people replying with tomes of info without actually answering the question. So I’m hoping the following guidelines will make the difference in understanding.

This is what I DO know:

-Filmic looks ALMOST like standard, but with flexible color ranges for heavy compositing.
-Using sRGB in compositing doesn’t make sense because its color range is tiny in comparison.
-People do it anyway, and we have to adapt in order to survive as artists.

What I NEED to know to make my deadline:

-The best compositing node to make it happen
-The values I’ll input into said node.

What I don’t need, but won’t mind hearing once I at least have the final render in progress:

What is the specific range of Filmic, and how does that compare to R, G, B, A.
At the end of the day, I’m trying to get something done, not write a research paper.

Right now, all I have is my sRGB inputs coming into a Gamma node with a setting of 1.363, which I’m assuming could be more accurate. People suggest the ASC-CDL settings on the Color Balance node, but I have no idea what exact settings will work, because I don’t know the exact mathematical difference between the Standard and Filmic transforms.
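For what it’s worth, even plain sRGB isn’t a single power curve: the official transfer function is piecewise, with a linear toe, so any one number typed into a Gamma node can only approximate a real transfer curve. A minimal Python sketch of the standard formulas (not Blender code) showing how a pure 2.2 gamma drifts from true sRGB, especially in the shadows:

```python
def srgb_encode(x):
    """Official piecewise sRGB transfer curve (linear -> display)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def gamma_22(x):
    """Pure power-law approximation often used in its place."""
    return x ** (1 / 2.2)

# The two agree only roughly; they diverge most in the shadows.
for v in (0.001, 0.01, 0.18, 0.5):
    print(v, round(srgb_encode(v), 4), round(gamma_22(v), 4))
```

This is why any single Gamma-node value is already a compromise for plain sRGB, before Filmic even enters the picture.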

Best answer I can imagine would have a chart involving map range conversion info to and from every popular colorspace and transform setting. But honestly, you could give me a digit followed by a decimal and a couple more digits and a place to put them and I would be perfectly happy if it worked.

-S


You can’t, at least not in the same project. This would require a color space or LUT node to apply the Filmic LUTs prior to merging in the sRGB components, which Blender cannot do. It can only apply them as an output transform. I guess you could comp the render first (using Filmic), save it to a 16-bit TIFF, then load that in a new project using the sRGB transfer curve as your output transform (that’s the regular sRGB mode).

Your problem originates in the fact that your render is scene-referred, but the graphics you’ve been handed to integrate are already display-referred. Filmic is designed as a scene-referred to display-referred converter, so you have to apply it first, then comp in your display-referred graphics.

What is the specific range of Filmic, and how does that compare to R, G, B, A.

It captures values from 0 to ~16.3, see the chart on this link for the false-color view: https://sobotka.github.io/filmic-blender/

Anything above 16.3 is clipped to white, and colors desaturate as channels approach the top of the range, to prevent one channel from clipping before the others. This prevents the gross hue shifts you get with the plain sRGB transfer curve, where one channel clips and the other two start “catching up”. For values below 0.5 or so it behaves very similarly to the sRGB transfer curve, which makes it easy to learn if that’s what you’re familiar with. (Unlike, say, the ACES sRGB ODT, which darkens and boosts contrast compared to plain sRGB.)
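For reference, the ~16.3 ceiling falls out of the log encoding Filmic uses before its contrast curve: a range of roughly -10 to +6.5 stops around middle grey 0.18, and 0.18 × 2^6.5 ≈ 16.29. A rough Python sketch of just that log shaper (illustrative only; the real transform applies a contrast LUT on top of this, so it is not the exact look):

```python
import math

def filmic_log_shaper(x, mid_grey=0.18, low=-10.0, high=6.5):
    """Normalize scene-linear x to 0..1 on a log2 scale around middle grey.
    The stop range matches the published Filmic Blender config; the
    contrast curve that follows it in the real transform is omitted."""
    stops = math.log2(max(x, 1e-10) / mid_grey)
    return min(max((stops - low) / (high - low), 0.0), 1.0)

print(filmic_log_shaper(0.18 * 2 ** 6.5))  # ceiling (~16.29) maps to 1.0
```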

Best answer I can imagine would have a chart involving map range conversion info to and from every popular colorspace and transform setting

That’s not really possible to distill into a chart, hence why OpenColorIO exists.


When you say it’s not possible, I have to admit I find that difficult to believe, so please bear with me.

My understanding was that there is a fixed floor and ceiling to the effect Filmic has in shifting the color channels of an sRGB input, and that I simply have to figure out that one formula and reverse it exactly.

I’m not sure, but it sounds like you’re saying that the scene, camera, and monitor weigh in on the effect that Filmic has on sRGB images? So that per project, per scene, per camera, and per monitor I have to have a different conversion formula?

Is this similar to the issue with Z-depth, in that normalization shifts the overall position as well as the scale of the value ranges?

Do we have the same inconsistency converting between sRGB and other view transforms?

-S

Ok after digging a lot on my own I have a slightly better comprehension.

It’s similar to Linear because you have more colors to work with, and yet it’s applying the kind of adjustments that could (in theory) be done in the compositor. Namely, there’s a curve to push the values towards an sRGB-like view space, and a desaturation based on lighting power that takes care of values blowing out and over-saturating. There’s more to it than that, but that’s the gist I’m getting from watching videos explaining it. Every aspect of it that I’ve learnt about so far could surely be done during compositing with linear color space enabled.

With Linear, it seems to be more like a blank page where you can go in many more directions. You only have to gamma correct to 2.2 to fix RGB inputs, and that’s always been the case, even when I was using Modo. I like to make my pipeline as simple as possible, and that’s what Filmic PROMISES, but in its current form, it does not deliver on said promise. So working in Linear is going to be smart for me for reasons of flexibility, compatibility, and speed.

If I can just figure out how to reproduce Filmic using Linear as a starting point, I might just be able to achieve my goal of seamlessly integrating the sRGB colorspace with Filmic and ALSO have Linear support by default.

I will keep researching this until I fully understand the filmic process.

Did some more research and uncovered more good news covered in bad news.

The good news is that the Blender team WANTS to have a node that will convert between color spaces defined in the OpenColorIO configuration.

This is in the official development wiki:

“* The compositor should get a compositor node to convert between two color spaces defined in the OpenColorIO configuration.”

Annnd if we check out the ‘Open’ ColorIO Wiki you see what appears to be bad news:

“Caveot 2: We are not distributing the code that generates the luts from their underlying curve representations. While we hope to publish this eventually, at the current time this process relies on internal tools and we don’t have the spare development cycles to port this code to something suitable for distribution.”

I really hope that doesn’t affect the speed with which this open issue is resolved:

https://developer.blender.org/T68926

And I have to wonder whether this includes conversion from linear to filmic.

Surely…?

-S

Go back to the client and try to explain how they can help to improve the end result. Ask them to also provide the raw photography or log footage and use that in the comp.

It will be a much better result than mangling an already mangled .jpg

First of all, Linear isn’t a color space; it’s an attribute of a color space (having a linear transfer curve). Color spaces also have primary colors and a whitepoint. The space people commonly refer to as “Linear” is actually linear sRGB, i.e. the sRGB primary colors and whitepoint (D65), but with a linear transfer curve.

Second, Filmic isn’t a working space; it’s only an output transform. Your render and composite happen in linear sRGB whether you use Filmic or not. Filmic is only applied when saving an image or displaying it on screen. You don’t have more colors to work with compared to linear sRGB, because you’re still using linear sRGB! The difference is this:

With the classic sRGB linear workflow, the output transform (the thing that gets run when saving/displaying an image) grabs values from 0 to 1.0 and applies the sRGB transfer curve.

With Filmic, everything up to displaying or saving the image is exactly the same. But when you go to save an image, instead of grabbing 0–1.0 values and applying the sRGB transfer curve, it grabs values from 0 to ~16.3 and applies its own curve with fancy highlight rolloff and desaturation. The result is that your highlights look a lot nicer (hopefully).
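You can see the “washed out” offset directly from these two descriptions. The Python sketch below is illustrative, not Blender’s actual code: the Filmic stand-in is only the log shaper (stop range assumed from the published Filmic Blender config), without its contrast curve. A display-referred white (scene-linear 1.0 after the compositor’s sRGB-to-linear conversion) hits display white under the Standard view, but lands well below white under Filmic:

```python
import math

def standard_view(x):
    """Standard view: clip scene-linear to 0..1, apply the sRGB curve."""
    x = min(max(x, 0.0), 1.0)
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def filmic_shaper(x, mid_grey=0.18, low=-10.0, high=6.5):
    """Rough stand-in for Filmic's log shaper (contrast curve omitted)."""
    stops = math.log2(max(x, 1e-10) / mid_grey)
    return min(max((stops - low) / (high - low), 0.0), 1.0)

# Scene-linear 1.0, i.e. pure white in a pasted sRGB graphic:
print(round(standard_view(1.0), 3))   # 1.0 -> display white
print(round(filmic_shaper(1.0), 3))   # 0.756 -> noticeably below white
```

That gap between 1.0 and ~0.76 (before the contrast curve even runs) is the faded look the pasted graphics get.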

So you can’t invert the Filmic LUT as an input transform and get your comp to work, because Filmic isn’t the working space. You’re approaching this problem as if it were (maybe unknowingly). Yes, if you have a working space mismatch, you can apply some transformation to get your inputs into the working space so they can be merged with renders that are already in it. But since Filmic just changes the output transform and the comp still works in linear sRGB, this isn’t a relevant solution.

Your problem isn’t a working space mismatch; it’s that your sRGB graphics are being hit with a LUT (the Filmic output transform) that they did not need to be hit with, because they were already in an output space.

I’m not sure, but it sounds like you’re saying that the scene, camera and monitor weigh in on the effect that filmic has on SRGB images? So that per project, per scene, per camera and per monitors I have to have a different conversion formula?

No, and I’m not sure where you got that from my comment, but I guess I did write it at 23:30 when I was about to go to bed…

To Organic:

Part of being professional is knowing when not to force an issue.

Recall the term “clients from hell”. When I was a freelancer, my experience was that clients don’t care about technical details, and would sometimes lose interest immediately, even if it was important.
The closest I ever came to a client understanding color management from the beginning was ONE guy who wanted something printed in CMYK, and he was the head of a design agency.

At the same time, they have a tendency to want to shape the product while reducing the bottom line. You can ATTEMPT to educate a client, but there’s a limit if they’re to save face, and some people have short limits. Usually the client wants you because they don’t have ample time to learn.
If it comes down to losing the job, most artists know the correct way forward is to accept what can’t be helped. The best thing you can do sometimes is “flag” the issue, to cover your butt.

The same is kinda true now that I’m a corporate artist. As it is, I have to chase down co-workers over bad practices that outweigh this one. I see more mistakes than average because I’m a senior generalist. In an office setting, being “that guy” doesn’t win promotions; you have to pick your battles, or people won’t like working with you.

I told my boss AND the technical lead that we shouldn’t be using sRGB inputs… Whether they understood, I’m not 100% sure, since I’m still handed sRGB inputs on a regular basis. Maybe they got it but didn’t care, for various reasons. I refuse to force the issue by relying entirely on that mindset.

Sad but true, but with all this in mind I hope you’ll be more willing to work alongside that assumption.

J_the_Ninja:

Thank you for replying. There are some really helpful points there.

Let me make sure I understand, in my own words: the base value (0) stays the same, while the whitepoint is scaled from 1 to 16.3, and this is to provide more headroom to accommodate higher contrasts that are less likely to flatten out. After this, the curve is applied to closely resemble realistic film capture conditions.

That sounds like something that could be done in the compositor, am I wrong?
Isn’t this a job for a Map Range node followed by an RGB Curves node?

I think I recall someone saying that the Curves node is limited to 0–1 conversion; is this why people are asking for film-base curve nodes?

-S


. . . and yet you keep pounding away on it :smiley:

You have it backwards, and you’re missing some of the basic concepts. It’s a display transform. Filmic tonemaps scene-referred render data your monitor can’t display into display-referred data it can display.
https://blender.stackexchange.com/questions/46825/render-with-a-wider-dynamic-range-in-cycles-to-produce-photorealistic-looking-im


You are wrong.

It isn’t.

We can’t give you a list of numbers with nodes and where to put them. You can’t fix it by changing the brightness and contrast, though you’re welcome to try. The ASC CDL node is probably your best option.
The answer to your question is here -


Thanks for sharing the article, I’ll read through the entire thing. I do appreciate your time finding that.

But I can see why so many of these discussions die. It’s frustrating talking to people who speak down to you, and even more frustrating when people assume your problems are non-existent.

“. . . and yet you keep pounding away on it :smiley:”

Well, now you’re about to find out why I’m done “pounding away” here.

A = fitting sRGB images into the editing pipeline; B = insisting that whoever I work with deliver me linear/Filmic or nothing.

I asked for A; you gave me B, because despite the fact that I have to deal with A on a regular basis, you think my problem is solved by forcing B on everyone. I then asked you AGAIN for A, and you returned with B again, but with more passive aggression. If that’s not pounding away at a dead issue, then my name is Ton Roosendaal.

So because you insist it’s your way or the highway, I have to take the highway, unfortunately.

“You are wrong.”

It would be correct to state that you provided no explanation for that.

I’ll figure this out somewhere else.

-S


Sounds like you should have spent more time reading as opposed to mashing keyboards.

What part of “It cannot be done” is hard to explain?

But carry on, feel free to keep trying to derive colour from a greyscale image encoding, because that’s exactly the analogy of what you are trying to accomplish here.

If you take some time to think it through the solution becomes obvious. Blender applies the display transform to the data as a last step before displaying it.

You would have to revert the transform before it is applied. To do that, you would have to extend the range of your already-compressed sRGB data from 1 back up to 16.3, but that data has been destroyed. So the solution to your problem A, “I don’t have enough data”, is actually B: “Get more data”.
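The non-invertibility is easy to demonstrate: once highlights are baked into a 0–1 display file, distinct scene values collapse to the same number, and no node setup can separate them again. A trivial sketch:

```python
def clip_to_display(x):
    """What happens when scene-linear data is baked to a display-referred file."""
    return min(max(x, 0.0), 1.0)

# Two very different scene values collapse to the same display value:
assert clip_to_display(2.0) == clip_to_display(8.0) == 1.0
# No downstream node can tell them apart afterwards -- the data is gone.
```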
