Colour Science (Was: Blender Filmic)

This thread was opened with a specific focus on some of the more “spooky” concepts that many folks might shy away from. Frankly, if someone gets spooked by threads like this one, then it is quite likely that they should stick to the defaults and have fun with them.

Again though, the concepts that Filmic leverages are already present in Blender. The issues with curves, blend modes, etc. have all been there for a long time.

Hopefully Filmic helps folks to engage with the concepts in a more tangible way, and as a result, elevate their understanding through experimentation.

Anyone that equates scene referred versus display referred, how view transforms interact with the data, why using a view matters, and other such solid foundational knowledge with quantum mechanics isn’t going to make it very far in imaging. That is, the concepts aren’t nearly as complex as quantum mechanics. There are some questions that pop up that dive deeper into the subject matter, but even then, if they are built atop a solid core understanding, I am confident there isn’t a single person in this entire forum who wouldn’t be able to understand them.

The core low level fundamentals are all exceptionally clear and exceptionally simple to grasp. The issue, in my experience, is accumulated cruft and broken mental models from years of usage in other software. That’s it! If one manages to tear down the broken ideas, the rest comes very easily.

Don’t ever be afraid to ask.

Some applications are crap. That’s the bottom line. Photoshop is one seriously archaic piece of imaging software and it is always going to be a nightmare with regard to (ironically) scene referred photographic images. On the other hand, Resolve is an extremely capable application, and it should be relatively easy to integrate with view transforms such as Filmic or others.

No. There are an infinite number of logs out there, and each requires its own very particular transform to bend values back into the scene referred domain correctly. Their encoded state is unique to their particular needs, and there is no way to assert that any particular log will align with the values nor the range of values required. In particular, there are three SLogs, and their ranges don’t align with Filmic’s ranges.

In your case however, you already have an image with typical scene referred light levels in an EXR. That means you are good to go in terms of the data, and all that you require is a means to apply the view transform in the other application. Without knowing more about the application in question, I can’t offer you too much help. For Agent, the colorist used Resolve and was having issues with the EXR pipeline. As a result, I suggested to use TIFFs at 16 bit, and handed them a properly formatted LUT for use in Resolve that delivered a perfect 1:1 off of the Filmic Base Log Encoding. This worked out well for the contexts at hand.

TL;DR: I would need to know more specifics about the application you are attempting to utilise Filmic within. Some smaller studios have adapted the OCIO configuration for use in VRay, Nuke, and Houdini, to name a few.

Yes it is. Not at the higher level, but at the lower level of the encoded values. I formatted the original layout to be a base log encode with pairings of the contrasts. This permits a pixel pusher to use a variety of different applications, including those that are strictly display referred.

That pairing of the transforms however, means that the encoded values from the log, when rolled through the desaturation and contrast, won’t match what someone sees in Blender or some other OCIO enabled application. The variance could be quite extreme.

There is a pretty solid answer over at the BSE, that was largely a gathering up of information on the part of Cegaton. I’d encourage you to read it as it has quite a bit of information in it that you will likely find very useful. While it is technically under my handle, that answer couldn’t have been possible without Cegaton.

The TL;DR version here is that Slope is a basic multiplier. That is, if you think of scene referred exposure values as being multipliers, you can quickly see that Slope, when used alone, is in fact perfectly analogous to exposure! Try matching up some values with the exposure setting in the Color Management or Film panel, and test the results. You should be able to get 1:1 values with either Slope or exposure.

How you use it is up to you. It is certainly handy via the False Colour view to set exposure values in the data, or one could use the Film panel to bake the results into the actual values. When used in conjunction with the Power and Offset controls however, it is a much more powerful tool.
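To make the multiplier point concrete, here is a tiny numeric sketch. It assumes an exposure control that works in stops (a multiplier of 2 to the power of the stop value), which is how Blender’s Color Management exposure behaves as far as I know, so verify against your own build:

# Slope is a straight multiplier on scene referred values, while an
# exposure control in stops multiplies by 2**stops. 0.18 is just an example.
scene_value = 0.18
exposure_stops = 1.5

slope = 2 ** exposure_stops                     # the Slope that matches +1.5 stops
via_exposure = scene_value * (2 ** exposure_stops)
via_slope = scene_value * slope

print(via_exposure, via_slope)                  # identical results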

Great question.

In fact, the above discussion about using the CDL in an OCIO configuration is specific to certain contexts. You can quickly and easily use the CDL within Blender via the Color Balance node.

Note that within Blender, it is fundamentally impossible to apply the CDL onto anything but the scene referred data. It is sadly a bit of an oversight regarding needs. One can certainly try to mangle up the scene referred data and apply some horrible combination of math nodes, a power function, etc. and then apply the CDL, but that is no replacement for having a perfect 1:1 via a colour transform node. The priority should be an OpenColorIO transform node to solve every issue immediately, with longer term goals featuring tighter UI integration.
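To make that concrete, here is a minimal sketch of the per-channel slope / offset / power maths that the CDL describes, with values borrowed from the example stanza later in the thread. This is the textbook ASC CDL formula rather than a copy of Blender’s internal code, so make sure the Color Balance node is set to its ASC-CDL correction formula rather than Lift/Gamma/Gain when comparing:

# Per-channel ASC CDL: out = (in * slope + offset) ** power, applied to
# scene referred values. Saturation, the fourth CDL control, is omitted here.
def cdl_channel(value, slope, offset, power):
    out = value * slope + offset
    return max(out, 0.0) ** power  # clamp so the exponent stays well defined

pixel = (0.18, 0.18, 0.18)
slope = (1.200, 1.166, 1.000)
offset = (0.00, 0.00, 0.00)
power = (1.0, 1.0, 1.5)

graded = [cdl_channel(v, s, o, p) for v, s, o, p in zip(pixel, slope, offset, power)]
print(graded)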

Deeper into your question, which is an excellent one and I hope more folks get to the point where the question makes sense, I would highly encourage you to read the above example of creating a yellow cast on an image. It will reveal how the order of operations has a significant effect on how transforms impact imagery in its particular states.

Always remember that transforms are radically different based on their order.

You could, but by and large that would be rather self-defeating on a number of levels.

Scene referred data is by far the most ideal and forward looking approach to manipulations. Trying to stay there for as long as possible is a prudent suggestion. It will keep things much simpler for moving between applications, and certainly make mastering and encoding to final deliveries much simpler.

There is one particular edge case where one would require a quick trip into the post-view transformed data state, but even then, that can be applied on scene referred ratios. That edge case is outlined above, and sadly there is no easy way to achieve it without an OCIO node in Blender. Hopefully more folks with solid questions such as these will help to put the pressure on it as a significantly needed feature.

Great stuff. Keep those questions flowing. You help not only myself, but also plenty of other folks.

In my opinion, this would be the way to really improve on the initial filmic implementation (and finally allow the broken features to be thrown away).

  • A CDL panel in the color management tab that allows for post-scene color transforms while the render is going. (I looked at the LUT files and found you would need to download an external program to make a custom one; you really can’t expect artists to do that if all they want or need is brighter highlights and less saturated blacks.)
  • Controls to tweak the saturation curve like making it more consistent between whites and blacks (the current implementation significantly desaturates the whites and saturates the blacks for instance, even when the albedo values of the materials get changed to be more realistic).
  • That adaptive color picker idea (or at least a picker that can more accurately show filmic-like color values under a standard light level if possible).
  • Application of Filmic principles in all areas of Blender and not just the render engines (the color picker as mentioned for starters, but also in other places that use color).

If these four things were to be in place, it would ease the workflow and allow Troy_S’ culture to really grow.

Troy, I’m wondering if you are in talks with the devs about the broken mental models. I think most users here are artists or hobby users, so if you have so much knowledge about this color science stuff, I guess talking with the devs about the broken things, and ideas for better solutions, is the way to go?
I mean, how can we users know what is broken? I guess we, or most of us, can’t. All the nodes and math are calculated under the hood. Only the devs can know what’s wrong, or someone with coding knowledge who looks in the source code and additionally knows the colour related stuff, or at least finds some buggy behaviour that comes to the bug tracker.

I have a question about LUTs: is it possible to convert any free LUTs, from Resolve for example, into a Blender Filmic LUT, like the old CM Filmlooks but with the advantage of Blender Filmic? Anyway, thanks again for Filmic, it’s very useful, especially with very high dynamic light ranges versus the old CM.

Edit: or what about a second LUT slot? I.e. use the first slot for a basic contrast, and the second slot to add a filmlook / style, whatever is possible?

As outlined in other portions of this thread, no. Transforms must be designed to work with specific ranges, with specific value encoded points, and specific primaries etc. Can you arbitrarily apply some random transform? Possibly. Will it achieve anything more than randomly generating CDL values? No.

See above. If you practice working with the CDL and record the values for a look you like and would like to re-use, I am reasonably certain that Mandalorian or someone else can give you the assistance to convert it into a custom look, as outlined previously in this thread.

Cool, that’s very good to know. Thanks.

Perfect! I kept meaning to ask about that but forgot when it came time to post. Just tried it and it works perfectly. Thanks.

For reference, here is the updated stanza taking this into account:

- !<ColorSpace>
    name: CDL SunFix Test
    family:
    equalitygroup:
    bitdepth: 32f
    description: |
        CDL transform test.
    isdata: false
    allocation: lg2
    allocationvars: [-12.473931188, 12.526068812]
    from_reference: !<GroupTransform>
        children:
            - !<AllocationTransform> {allocation: lg2, vars: [-12.473931188, 12.526068812]}
            - !<FileTransform> {src: desat65cube.spi3d, interpolation: best}
            - !<AllocationTransform> {allocation: uniform, vars: [0, 0.66]}
            - !<CDLTransform> {slope: [1.200, 1.166, 1.000], offset: [0.00, 0.00, 0.00], power: [1, 1, 1.5]}
    to_reference: !<AllocationTransform> {allocation: lg2, vars: [-12.473931188, 4.026068812], direction: inverse}

Took a few reads, but this paragraph finally clicked. Seems like a very important point to understand and is really cool that it’s possible to do things that way.

Good points. I really want to adopt an ACES workflow for that very reason. When working for a small vfx studio and with others on some freelance projects, color spaces and transforms have been a big problem that no one really knew how to solve, not even the ‘colorist’ we worked with, so this sort of information is wonderful to have now.

I really appreciate you doing the work to get it out there. The early versions of Filmic came out just when I needed it on a freelance project, and prompted me to really start learning what color is all about. This trip has been great thus far, and very enlightening. Excited to keep going. And I do hope others, such as those asking other great questions in this thread, take the dive and start messing around with this stuff. Trying things out has helped me learn so much faster than simply reading what other people have tried.

Awesome, I will look into this. It would be perfect for some of the things I’m trying to do right now.

When I get the chance I’ll do something like that and post a link back here. Would be awesome to help others cause this is a really fun road to travel down.

Speaking of which, what’s the next stop on the road? My current end goal is to be able to take some raw camera files through to scene linear (ACES I think is what I’m aiming for, since they have IDTs?), and from there create transforms for both the camera files and my 3d renders so they match perfectly color wise. So I think what I’m looking for is how to go about making those custom transforms once I know the primaries and white point. If so, where would I start, and if not what else should I focus on right now?

And thanks for walking me through all this. It’s very enjoyable and informative.

With what you know now you can craft your own custom transforms to decode any footage correctly.

Essentially you have to take the transfer function of the encoded footage and decode it, then convert the primaries to your destination reference. I can step you through how to do this with Filmic if you are wanting to get your sea legs and feel less like you are just mashing buttons when / if you flip to ACES.

Given you already have experience with scene referred work and Filmic, it may be the least cognitive burden to simply stick with Filmic for the testing.

To attempt this, you would ideally need the encoded footage in a still format. If it is a codec, it would need to be decoded “as is” to a still format. Then identify how the camera encoded the footage. This would be focusing on the transfer curve (e.g. SLog1, SLog2, SLog3, LogC, etc.) and the colour primaries (e.g. SGamut, Arri Wide Gamut, etc.).
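As a rough sketch of those two steps (undo the transfer curve, then rotate the primaries), here is roughly what it could look like with the colour-science Python library. The S-Log3 / S-Gamut3 choice, the pixel values, and the exact function names are assumptions on my part, so treat it as a starting point rather than a recipe:

# Rough sketch: decode hypothetical S-Log3 / S-Gamut3 footage to scene linear
# and rotate the primaries to the REC.709 reference that Filmic uses.
# Requires: pip install colour-science
import numpy as np
import colour

encoded = np.array([0.41, 0.38, 0.35])  # hypothetical encoded pixel, 0..1

# Step 1: invert the camera transfer function to recover scene linear light.
scene_linear = colour.log_decoding(encoded, function="S-Log3")

# Step 2: convert from the camera primaries to the reference primaries.
rec709_linear = colour.RGB_to_RGB(
    scene_linear,
    colour.RGB_COLOURSPACES["S-Gamut3"],
    colour.RGB_COLOURSPACES["ITU-R BT.709"],
)

print(rec709_linear)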

OK, thanks Troy. I have read up on the OpenColorIO site and it’s interesting stuff, but it’s a bit confusing which curves are used and how they are built (I guess with tools like the FCurve Editor from SPImageworks).

I have to agree with Ace Dragon, a better color picker would be neat. Maybe with a mode selector, i.e. a linear mode, a display referred mode, and maybe an sRGB to linear mode, etc.

Sounds like a plan to me. Having some guidance for this would be great, and I’ll see if I can figure out some of the steps for this before I come back.

Makes sense. Is there an image online that would be good for this test that others who come here could use (maybe one from the ACES github?), or should I use one of my own for this?

Starting with your own, assuming you know all of the details, is probably ideal, as you will then be able to see every single step all the way through.

It would be great to see an example of the process; please keep posting about it!

An example of this process would be great. Please keep posting!

I’ve been reading over as much as I can, still digesting a lot. I’m amazed at Troy’s patience. Thanks for educating and contributing, it certainly goes beyond Blender alone!

This discussion, particularly with ACES, raised a point that never occurred to me before: that not all scene-referred (and linear) color spaces are the same.

When we talk about the scene-referred domain, ACES has a gamut that covers (and goes beyond) all visible colors, whereas the sRGB gamut is limited by design (even when talking about scene-referred linear colors). Is that true?

So, let’s say we are intending to render for an HDR display or a wide gamut, and/or to integrate CG from Blender with footage from a different colorspace. We’d want to render/comp (e.g. do all adjustments) in a scene-referred domain that is consistent, and ACES seems to solve this problem. For simplicity, let’s worry about the RRT/ODT and display device later. How do I tell Blender that my scene-referred domain is ACES?

I can load or modify the OpenColorIO provided ACES config in Blender, but I’d like to make sure I understand the theory first.

I think more insight into my question is provided here and here.
Still reading, but it seems that (unless you’re using a spectral renderer) the renderer will “blindly” multiply RGB values. Therefore it doesn’t particularly matter what color space your scene-referred colors are in: the renderer isn’t aware of them as real world colors, only as R, G, and B.

It would be up to the user to make sure they are providing ACES (or whichever color space) RGB values to the renderer and transforming the results correctly for display. When we provide or read RGB data to and from the renderer, it’s super important that we keep track of the color space. Particularly for wide-gamut or color space conversions.

To expand on my question from earlier. For any input color, let’s say an OpenEXR texture, I would make sure the color space (gamut, whitepoint) is ACES as well as my corresponding RRT/ODT.

The mistake I would have made previously would be to assume all linear EXRs would automatically work correctly: but that’s not true. Color space still matters in a linear EXR. I would have to be sure to author the EXR in the ACES color space before rendering.

Haha, did I just learn something, or is that all wrong?

The problem with using wide gamuts for rendering is that each gamut has its own RGB triplet that describes the same color. For example, a pixel value of 0.9, 0.2, 0.2 in sRGB is maybe only 0.5, 0.1, 0.1 in some wide gamut. Now, when you cast a light with strength 1.0, 1.0, 1.0 on an object with this diffuse color, the primary bounce will have the original RGB value, which, when transformed to your final view gamut, comes through as the same color. But all secondary bounces are the product of self-multiplication, and the proportions between the color channels and the overall brightness level will be different.

For example, the second bounce between two objects side by side will have the RGB value:

sRGB: 0.9, 0.2, 0.2 × 0.9, 0.2, 0.2 = 0.81, 0.04, 0.04
wide gamut: 0.5, 0.1, 0.1 × 0.5, 0.1, 0.1 = 0.25, 0.01, 0.01

In the case of the wide gamut, the color becomes more saturated and less bright than the sRGB version with each bounce, and the result, when converted back to the view space, will be different, even though we started from the same color in both sRGB and the wide gamut.
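To see the divergence with real primaries rather than made-up triplets, something along these lines could be tried with the colour-science library (the ACEScg choice and the albedo value are just illustrative assumptions):

# Sketch of the bounce divergence: express one albedo in linear REC.709 and
# in ACEScg, simulate a second bounce by squaring each triplet, then bring
# the ACEScg result back to REC.709 for comparison.
import numpy as np
import colour

rec709 = colour.RGB_COLOURSPACES["ITU-R BT.709"]
acescg = colour.RGB_COLOURSPACES["ACEScg"]

albedo_709 = np.array([0.9, 0.2, 0.2])
albedo_ap1 = colour.RGB_to_RGB(albedo_709, rec709, acescg)

bounce2_in_709 = albedo_709 ** 2
bounce2_in_ap1 = colour.RGB_to_RGB(albedo_ap1 ** 2, acescg, rec709)

print(bounce2_in_709)  # second bounce computed with the REC.709 basis
print(bounce2_in_ap1)  # second bounce computed with the ACEScg basis, shown in REC.709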

The main question here is whether the difference is objectionable. If not, no problem. But if it is, bad luck. The reason sRGB seems to work is pure coincidence, I believe; there is nothing special about the sRGB gamut in this regard.

You are correct that using such a wide gamut presents big problems for calculating light interactions.

ACES (AP0) is really intended for archiving footage, as mentioned earlier.

The better solution seems to be open to discussion. Perhaps using REC.2020 primaries.

EDIT: ACES AP0 is best for archive. The AP1 primaries or ACEScg colorspace is a possible solution.

Correct. The renderer simply sees three intensities of light, for the most part. There are some edge cases where an absolute color needs to be calculated to achieve certain results, but by and large this is a correct inference.

Correct. This is the role OCIO serves and automates to an extent via additional input from the operator and / or inference based on filename, metadata, etc.

Only if you were using ACES AP1. You would want to align the reference space values. If your camera shot on Camera FooBar primaries, at 5000k white point, and your reference was REC.709 primaries at D65, you would need to adapt your primary lights from Camera FooBar at 5000k to REC.709 at D65. Remember that Filmic currently uses REC.709 rendering lights. The reason was that it served as an introduction to the concepts, and changing rendering reference lights might have been too large of a cognitive leap to serve the purpose.

Indeed you learned something.

I would add that we should always remember that “wide gamut” isn’t a singular thing; there are many, many gamut volumes larger than our base REC.709 coloured lights. Further, remember that a gamut different from REC.709 isn’t simply more saturated with the same colour hues; the actual colours of the basis primaries are radically different in most cases.

Not sure what you see as a coincidence here.

In terms of whether or not the difference in the resultant mixed colours is objectionable, the results speak for themselves. Have a peek at the Steve Agland demonstration to see just how objectionable it can be. For this reason, ACES defines two sets of primaries. AP0 for archival, and AP1 for manipulation. REC.2020 also yields decent results. Thomas Mansencal did a quick comparison on the various existing colour spaces out there as well.

I agree completely. If or when I need to incorporate footage from a different colorspace, I’ve now learned a lot more about how and why that needs to happen, and looking closer at OCIO configs is much less intimidating now.

My original question was delayed (I’m new here), but you guessed correctly and that answers my question.

Still curious why the google-group mailing list discussion and other articles don’t mention ACES AP0 vs AP1 and instead seem to write off ACES for CG entirely. I think I’ve got more reading to do.

Once you wrap your head around scene referred versus display referred encodings, you get so much for “free” when mixing footage and CGI. Much like using an HDRI to illuminate your scene and blending it with the background scene referred plate.

This is why log encodings are so important; they are the secret sauce that permits one to decode the footage back to the scene referred domain.
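As a tiny illustration of that secret sauce, here is a sketch of a log2 style encode and its exact inverse, reusing the range from the allocation vars in the stanza above. It only shows that such an encoding is losslessly reversible; it is not the actual Filmic log curve:

# A log2 style shaper: squeeze a huge scene referred range into 0..1 and
# recover the original values exactly with the inverse transform.
import math

LOG2_MIN, LOG2_MAX = -12.473931188, 12.526068812  # range from the earlier stanza

def encode(scene_value):
    """Scene referred linear -> 0..1 log encoding."""
    return (math.log2(scene_value) - LOG2_MIN) / (LOG2_MAX - LOG2_MIN)

def decode(encoded_value):
    """0..1 log encoding -> scene referred linear."""
    return 2.0 ** (encoded_value * (LOG2_MAX - LOG2_MIN) + LOG2_MIN)

for value in (0.18, 1.0, 16.0):
    print(value, encode(value), decode(encode(value)))  # round trips exactly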

Because ACES has been an evolving standard. When it was originally conceived, it was an archival / interchange protocol. Eventually, that came to cover grading and other forms of manipulation and rendering. At that point, the primaries became an issue and an additional set were developed within the ACES paradigm.

AP0 is simply the archival / interchange set, which covers the entire spectral locus, while AP1 are the tuned set for rendering, manipulation, and grading.

Makes sense. I’m sure I’ll have some blender-specific questions soon.

Thanks again for all the help, your plan is working. I used to think “linear workflow” and going to/from the sRGB Transfer Function was the whole story. Then I started working in VFX and learned about log spaces and still always wondered why it was so hard to get to scene-linear in compositing. (Answer: because the camera’s scene linear color space and CG’s scene linear were not the same).

I’m not currently working with camera footage (but I do have to be aware of authoring HDR content), and reading through the 10+ pages and various articles has been well worth it. Haha, now I think it’s time to go re-calibrate and profile my monitors.

As a general rule of thumb, think of it as alignment; aligning the input to the magic cauldron mixing pot known as the reference space. In particular:

  • What is the transfer function? Does the transfer function cover nonlinear to display linear? Scene linear?
  • What are the primaries? What is the colour of the reference space primaries? What is the colour of the input primaries? Are they the same?
  • What is the achromatic colour? As above, is the reference space white / achromatic axis the same as the input?
  • What is the intensity anchor point? Is the footage scaled to the same middle grey point as the reference? (See the small sketch after this list.)
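As a minimal sketch of that last alignment step, assuming a hypothetical piece of footage whose decoded middle grey lands at 0.26 while the reference anchors middle grey at 0.18:

# Align the intensity anchor: scale decoded scene linear footage so its
# middle grey matches the reference middle grey. Both grey values here are
# hypothetical examples.
REFERENCE_GREY = 0.18   # middle grey in the reference space
FOOTAGE_GREY = 0.26     # middle grey measured in the decoded footage

EXPOSURE_SCALE = REFERENCE_GREY / FOOTAGE_GREY

def align_intensity(pixel):
    """Scale one scene linear RGB pixel so middle grey lines up."""
    return [channel * EXPOSURE_SCALE for channel in pixel]

print(align_intensity([0.26, 0.26, 0.26]))  # -> approximately [0.18, 0.18, 0.18]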