Colour Science (Was: Blender Filmic)

Moved the thread to avoid complexifying the original.

ACES was designed to be a more efficient encoding than XYZ, and it is. Its primaries enclose a smaller volume of undefined colour regions, and as such, the encoded values end up representing more valid and meaningful colour combinations. This, in addition to the point I made above regarding manipulation, and likely some other technical reasons, is why I believe they opted away from XYZ. It is also why I stated “incorrect” in reply to your original post; XYZ fails on both the encoding and the manipulation fronts.

No disagreement that camera filters are not purely narrow band.

The filters are a specific colour, and while indeed they are not a physically implausible narrow band, they are gamut restricted and subject to much the same limitations as tristimulus models. That is why, assuming the sensor passes a linearity threshold (the Luther condition, I believe), they can be modelled quite well using the traditional tristimulus primary model and the linear algebra associated with it.

I don’t see much disagreement with Mr. Poynton here, and have exchanged emails with him as well as mutually follow each other on Twitter. I am sure he would comment for further information.


Yay, a color science thread :)!

I really appreciate your insightful responses troy_s. I’m exploring the limits of my current understanding here and find great value in people more knowledgeable than me either validating it or proving it incorrect.

I’m starting to pay a lot more attention to these threads, including those that link to the science. Following the science means more realistic results. Let’s give it everything that’s out there.

Awesome to have this in depth thread here. Should be exciting. :smiley:

At this point, I think I have a basic grasp of what linear is, how/why it’s different from display referred, how primaries define a color gamut and some of the other aspects surrounding these concepts. I’ve been using the filmic tools in blender and have been very pleased with the results I’m getting, especially when it comes to compositing the results into live action shots.

I’m a bit lost on where to go next. Right now I just toggle some settings to get a basic idea of how everything in my scene looks, then handle the rest in comp in what feels like a very imprecise manner and I’d like to have more control.

I’d like to know more about how to create LUTs (how exactly they work, how to create them and how to make sure they’re doing what I want), how to do proper transforms to match one color space to another, and best practices for these operations. Where do I start?

This is a relatively easy one. Rather, this is a very deep hole that is easy to begin falling into…

As kesonmis has pointed out many times, RGB is relative. That means a few things when we ask “How do I transform my data?”

First, let’s revisit what the ISO 22028-1 Standard states as required for an RGB colour space:

  • Explicit definition of the RGB chromaticities. “What colour, using an absolute model, are our red, green, and blue lights?”
  • A clearly defined transfer function. “What is the intensity scale and range for the RGB data value?”
  • White point or achromatic colour. “What is the colour that will appear neutral, typically when red equals green equals blue?”

What the above tells us is that we can’t look at any RGB triplet and deduce the intensity scale, the colour of the lights referenced, nor the colour of the achromatic value from the data alone. The data values alone are utterly meaningless without further information.
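For a concrete example, sRGB answers those three questions roughly as follows:

  • Chromaticities: red at CIE xy (0.64, 0.33), green at (0.30, 0.60), blue at (0.15, 0.06).
  • Transfer function: the piecewise sRGB curve, roughly a 2.2 power function, for display referred data; a render’s scene referred data is simply linear.
  • White point / achromatic colour: D65, at approximately xy (0.3127, 0.3290).

Change any one of those three and the very same RGB triplet describes a different colour.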

So where to start? Know that you can’t apply a transform, via a LUT, formula, or any means, without knowing the above three pieces of information.

Once you know those three, that allows you to define the parameters of your input and output, and create a transformation.

So the best advice is, as I frequently restate, to take baby steps. What would a baby step look like in this instance? I would strongly advise you, to help yourself and likely a huge number of people in this forum, to begin with your own custom looks. That is, now that you know and use the CDL, learn how to bake a CDL transform into a custom look.

Start by taking a reasonably good, mixed colour image render. Slug in a CDL node and grade it using a few tweaks of values. Now record those three values: slope, offset, and power.
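For reference, the ASC CDL applies a single, dead simple formula per channel:

    out = (in × slope + offset) ^ power

Slope scales the values, offset shifts them, and power bends them nonlinearly. Those slope, offset, and power triplets are exactly what you will transcribe into the configuration in a moment.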

As you may or may not know, OCIO has built-in support for the CDL! This means you should be able to open up the config.ocio file, add your own CDL transform with your own values, and then reuse that new look in any scene you wish. Because we already know our data has REC.709 / sRGB primaries direct from the render, that the white point is a well defined D65, and that we are in scene referred linear ratios, we can apply a simple CDL in precisely the same place our node affects the data and (hopefully) get a perfectly identical result… with a little blood, sweat, and tears likely due to a typo or a small bit of learning pain. :wink:

How might you begin this baby step? Assuming you have your three values, and a good TIFF of the image output for reference:

  • Open up the config.ocio file and duplicate the Filmic Log Encoding Base stanza from Filmic. The “stanza” is the entire indented definition. Rename it to “TestingCDL”.
  • Find the proper syntax for a CDL transform via the OCIO documentation. Add the line as the first entry in the children of the “from_reference” definition, and insert your own values (see the sketch below).
  • Watch your whitespace, as OCIO is extremely fussy, and save it.
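As a rough sketch from memory, and not something to paste verbatim, the top of that duplicated stanza would end up looking something like this, with the CDL values being whatever you recorded from your node:

  - !<ColorSpace>
    name: TestingCDL
    # ...all other keys copied verbatim from the duplicated stanza...
    from_reference: !<GroupTransform>
        children:
            - !<CDLTransform> {slope: [1.1, 1.0, 0.95], offset: [0.0, 0.0, 0.0], power: [1.0, 1.0, 1.0]}
            # ...the original children of the duplicated stanza follow here, unchanged...

Only the name and that leading CDLTransform line change; everything else stays as it was.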

You should be able to load your EXR render, select the horrifically labeled “View as Render” option in the UV Image Editor, and apply your custom CDL modified view. It should be perfectly 1:1 with your original CDL node.

When you succeed, you are one step closer to defining your own custom look via a similar approach.

For additional knowledge, find out what happens:

  • If you place the transform in a different position in the transform chain?
  • If you feed the exact same image rendered as a display referred file such as an sRGB TIFF to the exact same view?

If any or all of this post is gobbledygook to you, please post and I will do my best to offer further guidance or explanations. Good luck.

I was thinking the other day, is there some available application or web app or something like that which would allow one to describe spectral power distributions, transmission curves, response curves, etc. and do different operations with them?

It would be an interesting exercise to take, for example, the SPD of a known light source (say, sunlight), a reflectance curve for some material, the transmission curves of a color filter array, and the sensitivity curve of a sensor photosite, and put them all together. And then top that off with a camera matrix + XYZ > sRGB matrix and see what comes out at the other end.
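The arithmetic behind that exercise is nothing more than a per-wavelength multiplication followed by an integration; roughly, for the red photosite:

    raw_R = ∫ S(λ) · R(λ) · T_R(λ) · Q(λ) dλ

where S is the light source SPD, R the material reflectance, T_R the red filter transmission, and Q the photosite sensitivity, with the same done for the green and blue filters. The camera matrix then maps those three raw values toward XYZ, and the usual 3×3 matrix plus transfer function takes XYZ on to sRGB.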

Some time ago I fiddled a bit with a dataset that contained a series of images taken in 10nm steps over the whole visible spectrum. I combined them together using the matrix for calculating XYZ values from the SPD, then converted the XYZ to sRGB to see if I got a match with a photo taken with an ordinary DSLR of the same scene. I got something pretty close and it made me happy as a puppy. The really interesting thing I also tried was unmixing the original images with the light source curve used for shooting the 10nm slices and replacing it with the curve of some other light source. It allowed me to model the resulting image under different lights and see how the final RGB image depended on different lighting.
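In case anyone wants to replicate that, the two steps boil down to a weighted sum over the 10nm bands and a simple per-band relighting ratio, something along the lines of:

    X = Σ E(λ) · x̄(λ) · Δλ        (likewise Y with ȳ and Z with z̄)
    E_new(λ) = E(λ) · L_new(λ) / L_old(λ)

where E(λ) is the captured band image, x̄, ȳ, z̄ are the CIE colour matching functions, and L_old, L_new are the original and substituted light source SPDs. The resulting XYZ then goes through the standard XYZ to sRGB matrix and transfer function.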

https://colour.readthedocs.io/en/latest/colour.colorimetry.html#colour-matching-functions

http://gl.ict.usc.edu/Research/OptimalLED/

http://gl.ict.usc.edu/Research/MultiSpectral/

http://gl.ict.usc.edu/Research/MultispectralIBRL/

Been messing around with this for the past couple of days and I think I’ve got it down. Got to grips with the config.ocio file and how it works in Filmic.

Also tried applying the above steps to a Look rather than a Colorspace in the config. I don’t entirely grasp what it’s doing, but I’m getting closer to figuring it out I think. Going to try some more extreme settings with it to see what happens.

And just to make sure I’m understanding the order of operations here correctly, the CDL works on the scene linear data pre-display transform correct? Meaning that it operates on the RGB numbers before we even know what those RGB numbers mean as they relate to color space?

This is exactly why I suggested to learn how to get 1:1 with a View first. Looks are a special beast. The two things to focus on are the Process Space, and the reference. Looks have a unique facet in that they always return the transform chain to reference. That is, if you set the process space up to Foo, the Look will perform the transform in the order requested to get from the reference to Foo. Once transformed to Foo, the chain within the Look will be applied.

The important thing to realize is that after the Look chain, OCIO will return the data back to reference and then roll the data through the View.

So if you look at the base Filmic Looks, as given in the GitHub branch, you will see that the Process Space is set to be Filmic Log Encoding. This will apply the “from_reference” chain to the original scene referred data. After exiting that chain, the nonlinear contrast transform is applied. Finally, after completing the Look chain, OCIO will roll the data back through the “to_reference” chain to take the data back to scene referred linear, which technically has the nonlinear look baked into the ratios. That data is then rolled through the View, which would apply the “from_reference” chain again. Phew!
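Spelled out as a chain for that base Filmic case, the order is:

    scene linear reference
      -> Filmic Log Encoding from_reference   (into the process space)
      -> contrast transform                   (the Look chain itself)
      -> Filmic Log Encoding to_reference     (back to reference)
      -> View from_reference                  (out to the display)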

The CDL is blind. It was designed to crunch numbers that could be scene referred or display referred. What it does will always depend on where you place the transform. This is why I started this off by highlighting that the data is arbitrary; you have to keep tight control over your protocol in order to keep transforms meaningful. Flip one little detail and the whole transform chain falls apart.

So what, then, would we need to do to achieve a 1:1 output via a Look chain, matching a CDL node slugged in right after the render? Remember that the data would be scene referred linear directly out of a render, so the CDL values must be applied accordingly in the Look.

This is a critical thing to remember; the notion of colour primaries, white point, and linear or nonlinear nature of data is simply protocol. It is up to you to be aware of your data’s state, and properly assert the correct order of operations when creating transform chains.

So now the slightly tricky mystery of solving adding a CDL to a Look begins…

The notion of data state describes the situation nicely. I’d like to add that the state can NOT be deduced from the data itself; it is meta-information that is either carried in metadata (for some file formats) or in the user’s head, but usually it is either incomplete or missing altogether and we have to take our best guess.

No kidding. Sounds really crazy, but I think I’ve got it now. Tried applying it and it seems to have worked perfectly. For clarity, let me see if I can map out what you said.

For View:
Reference –> from_reference transforms

Fairly straight forward, just takes the reference data and applies the specified transforms to that data.

For Look:
Reference –> process_space from_reference transforms –> Look transform –> process_space to_reference

In the Look chain, it uses the process_space to bring the Reference data into the specified View where the tweaks for the Look are then specified. Once done, it rolls the Look back to the reference such that the transforms of the process_space and the Look are applied to the Reference data. From there, the modified Reference data is then presented by the View.

Here’s how I tested this and reached the above conclusion:
To match my reference, I created a Look using Linear as the process_space and put my CDL values in the transform. Using this in combination with the Filmic View resulted in 1:1 match with my reference.

To test further and understand the order all of this works in, I created a Look, using Linear as the process_space, and copied the transforms from the Filmic Log Encoding colorspace. When I use this Look, with the View set to Linear Raw, it matches exactly the Filmic Log Encoding View with no Look. This tells me that everything in the Look (process_space and transforms) is rolled back into the reference space. The resulting reference space is then used for the View, which applies its own transforms. In this case, using the Filmic view on top of this resulted in a very washed out image.

Is this correct? Want to make sure I’m understanding how this stuff is working rather than being wrong and finding out further down the line.

The process_space would need to be Linear. Placing the CDL in the Look transform would then apply it to the Linear scene data. When presented by the View, it then matches exactly to the results from before, as the process chain is essentially the same as having the CDL transform at the beginning of the View transform list with no Look.

I’m really starting to see how tricky these Looks can be. I’ll have to experiment with the premade ones to see what effect adding a CDL has, both before and after the LUTs that are in there now. I have thoughts about how this will go and how a CDL will affect the results, so experimenting will be fun to see if I’m actually understanding the concepts correctly. Crazy stuff, but learning how it all works is so much fun and very enlightening.

I believe you nailed it. The key to remember is that Looks go from the reference to the Process transform, through the prescribed Look chain, back to reference, then onto the View.

The only issue here is that you couldn’t pick the Filmic Log Encoding as a View and use the existing looks as well as your canned Look.

So a gentle nudge… Is there a way? Hint: Have a look at what Brecht did in the mainline 2.79 branch to get Filmic into the sludge of the default configuration.

Nothing hammers home colour transforms like having someone discover how they work for themselves. It is both extremely educational and extremely powerful for someone working on a number of shots and / or look development in a larger project.

I would point out that you have reached a critical point in understanding transforms, and hopefully your knowledge can help others. In addition to this, I would highlight a critical aspect of transforms regarding “pre” and “post” view here that you should be able to understand clearly now.

Most agree that a reasonable view transform requires a convergence point that desaturates high values. The problem here is that no matter how hard you push a value, the output always comes back achromatic, where R=G=B, in the white point colour of the output.

So how then, would we grade a scorching hot sun for example, to have a delicate yellow-golden glow? If we push the yellow / golden values “pre” transform, we always yield achromatic. So what might the solution be? Could you demo such a thing using a simple CDL transform?

It is extremely rewarding, and gives one a much better feel of control over one’s work. Hopefully others will learn from your efforts and take up the torch themselves.

Now onto the questions herein…

PS: It might be beneficial to paste your stanzas for others to learn from.

I was reading the Blender Filmic thread today for news, and saw MartinZ asking for a method to get albedo from photos. I recalled a few pages I found interesting that go in that direction. I think it’s the same method Quixel uses for their Megascans. I saw a video from Quixel a while ago; in it they used a roughly 1m x 1m x 1m box with a camera and LEDs for the lighting inside. Here are the links. Don’t miss the papers on how the albedo, reflectance, and normal maps were created.

http://gl.ict.usc.edu/Research/CPSIR/
http://gl.ict.usc.edu/Research/SpecScanning/
http://vgl.ict.usc.edu/

Edit: found the Quixel video:

https://youtu.be/8alYZgkwClM

And back to topic: if you know how to make custom LUTs, is there a chance to make a LUT converter to read LUTs from Magic Bullet or Resolve or Final Cut etc.? It would be really nice to have if this is possible, wouldn’t it?

Edit: found this ARRI LUT Generator, maybe useful?

https://www.arri.com/camera/alexa/tools/lut_generator/lut_generator/

Gotcha, that makes sense.

I think I’m starting to see what you’re talking about, so you can tell me if I’m going in the right direction.
In the 2.79 color management, there are 2 Filmic views of interest, the Log and sRGB views. Log is the same as standard Filmic, but sRGB adopts the LUT that Filmic used for the Base Contrast Look (I’ll just call this BaseLUT). The BaseLUT is used in Filmic sRGB after a transform from Linear to Filmic Log. So Filmic sRGB becomes the new Filmic Log w/Base Contrast.

As a result, the Base Contrast Look no longer does anything, as that is taken care of in the Filmic sRGB view. And because of this change, all other Looks must change as well. Instead of simply using 1 LUT, they must now include the BaseLUT inverse. As I understand, this subtracts (pardon the terminology) the effect of the BaseLUT from each of the other contrast LUTs to get the results closer to the linear reference. So the Looks will not appear the same as they currently do with Filmic Log until they are viewed with Filmic sRGB. Am I reading that correctly?
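In config terms, I picture each converted Look ending up shaped something like this (file names are placeholders, just to check my understanding):

  - !<Look>
    name: High Contrast
    process_space: Filmic Log
    transform: !<GroupTransform>
        children:
            - !<FileTransform> {src: high_contrast.spi1d, interpolation: best}
            - !<FileTransform> {src: base_contrast.spi1d, interpolation: best, direction: inverse}

That is, the desired contrast LUT followed by the inverse of the BaseLUT, so that when the Filmic sRGB view applies the BaseLUT afterwards the two cancel and only the desired contrast remains.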

To answer the question you posed, I think yes, but it can be messy. One way that might work (haven’t had time to test yet, but I plan to) would be to have the process_space of the Look contain the CDL. So I would make a new color space that has my CDL values, and use that as the process_space for the Looks. That would preserve the ability to use the Filmic Log space as well as the same looks as before that have been changed to use the new process_space. Is that a sound theory?

Agreed. Trying out these little exercises has helped so much to understand what I’ve spent much longer reading about.

Hmm, I have some ideas of how this sun example might work, but I want to try them out before commenting about it. Hopefully I can find some time soon, as it is sure to be very interesting.

Here is what I did to the Filmic Log Encoding color space at the start of my experiments to match my reference image.

  - !<ColorSpace>
    name: Testing CDL
    family:
    equalitygroup:
    bitdepth: 32f
    description: |
      CDL testing view
    isdata: false
    allocation: lg2
    allocationvars: [-12.473931188, 12.526068812]
    from_reference: !<GroupTransform>
        children:
            - !<CDLTransform> {slope: [1.166, 1.166, 1.166]}
            - !<CDLTransform> {offset: [0.00, 0.02, 0.07]}
            - !<CDLTransform> {power: [2, 2, 2]}
            - !<AllocationTransform> {allocation: lg2, vars: [-12.473931188, 12.526068812]}
            - !<FileTransform> {src: desat65cube.spi3d, interpolation: best}
            - !<AllocationTransform> {allocation: uniform, vars: [0, 0.66]}
    to_reference: !<AllocationTransform> {allocation: lg2, vars: [-12.473931188, 4.026068812], direction: inverse}

This is my followup Look (referenced in post #11) using the same CDL values. I did a second version later adding the default transforms from the Filmic Log Encoding color space to test how the chain worked.

  - !<Look>
    name: Testing CDL
    process_space: Linear
    transform: !<GroupTransform>
        children:
            - !<CDLTransform> {slope: [1.166, 1.166, 1.166]}
            - !<CDLTransform> {offset: [0.00, 0.02, 0.07]}
            - !<CDLTransform> {power: [2, 2, 2]}

Note: the above 2 code snippets are from memory since I’m not at the computer these files are on currently. I’ll update them later if needed.

Hmm, I have some ideas of how this sun example might work, but I want to try them out before commenting about it. Hopefully I can find some time soon, as it is sure to be very interesting.

Going back to this, I’ve had a bit of time to experiment. It seems clear to me that, for doing this example golden sun glow, the CDL transform must be post view transform. So I messed around a bit with a CDL to see which direction I needed to push the values, then applied those values post view transform in the OCIO config. Here is the resulting stanza:

  - !<ColorSpace>
    name: CDL SunFix Test
    family:
    equalitygroup:
    bitdepth: 32f
    description: |
      CDL transform test.
    isdata: false
    allocation: lg2
    allocationvars: [-12.473931189, 12.526068813]
    from_reference: !<GroupTransform>
        children:
            - !<AllocationTransform> {allocation: lg2, vars: [-12.473931189, 12.526068813]}
            - !<FileTransform> {src: desat65cube.spi3d, interpolation: best}
            - !<AllocationTransform> {allocation: uniform, vars: [0, 0.67]}
            - !<CDLTransform> {slope: [1.200, 1.166, 1.000]}
            - !<CDLTransform> {offset: [0.00, 0.00, 0.00]}
            - !<CDLTransform> {power: [1, 1, 1.5]}
    to_reference: !<AllocationTransform> {allocation: lg2, vars: [-12.473931189, 4.026068813], direction: inverse}

And here is a comparison of the results of this on the left with standard Filmic log on the right:


Thinking on this: with a correct view transform, there is no way to get these results when performing pre view transform operations without seriously ruining the scene referred data, correct? This sort of highlight coloring can only be done post view transform?

Yes, you would reverse the nonlinear transform in the chosen View such that when it is rolled through the View, it ends up being negated.

I believe you can combine those keys into a single CDLTransform using commas.
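Something along the lines of the following, using the same values from your stanza (untested here, so verify the syntax against the OCIO docs):

            - !<CDLTransform> {slope: [1.200, 1.166, 1.000], offset: [0.00, 0.00, 0.00], power: [1, 1, 1.5]}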

Well done. You solved it.

This is an interesting point. Once you realize that the contrast is in fact scene referred when baked into the data, you see that the ratios are still legitimate. That is, if you were to roll the values back into the scene referred domain, even after a post-View transform grade to achieve a similar effect as you have here, the values hold up in terms of physically plausible intensities.

Why does this matter? With the advent of HDR capable displays everywhere these days, and consumer grade monitors soon, it is important that all adjustments be in the scene referred domain. This makes it absolutely trivial to transform the final data into the Dolby PQ encoding that both HDR10 and Dolby Vision require.

Further still, it’s cleaner from a grading standpoint; try sorting out all of the labyrinthine transforms when folks apply them willy nilly all over the map, versus the ACES approach where there is a specific “slot” for look modifications.

Also note that it is impossible to push an intensity value towards a colour under a desaturation transform.

Great work. It is darn excellent to see this sort of growth after the introduction of Filmic, and it was part of the reason I tried to get it out there into the wild. Now you should be able to help others and share custom transforms around.

Finally, there’s one more aspect of OCIO that is worthwhile to experiment with now that you are adept with custom transforms. That is, you can use environment variables to define both SEQ and SHOT dynamically within the configuration. This allows you, at the tail end of your work and heading into a grade, to have custom transforms loaded per sequence or shot. Additional information is located here.
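As a very rough sketch of the idea, written from memory and not a copy and paste recipe (check the OCIO documentation for the exact keys), the configuration declares the variables and a per-shot grade file is then referenced through them:

    environment:
      SEQ: seq_default
      SHOT: shot_default

    # ...rest of the config...

    looks:
      - !<Look>
        name: Shot Grade
        process_space: Linear
        transform: !<FileTransform> {src: ${SEQ}_${SHOT}.cc}

With SEQ and SHOT exported in the shell before launching Blender, OCIO resolves the file name when the transform is applied, so each shot can carry its own grade without touching the config.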

Terrific work. I’d encourage you to post your demonstrations as a Gist at GitHub or somewhere. It might help a tremendous number of people too frightened to attempt what you have been experimenting with.

The stuff about manually writing in CDL adjustments for things like that sun image makes me wish for a way to tweak it during the render via a visual element in the color management panel (it’s not very accessible if you have to write code in an external text editor and then figure out the correct way to add it to Blender’s color management folder).

If that would happen, then the abilities of the filmic transform would skyrocket without the need for post-processing or using curves (which I know Troy says is broken, but I don’t see any other way of visually making render-time changes to get something like that image).

For starters, it isn’t code. It is basic transcription of values.

Second, given that it is the basis of reusing looks and of per-shot, per-sequence integration utilising environment variables, we can assume the complexity would exceed the needs of someone seeking a simple one-off aesthetic tweak.

Anyone using curves is likely well below the needs of a post view transform, or even aware of why it is required in edge cases; curves are so utterly broken as to be worthless, and if the individuals cannot see why, no degree of higher colour control is going to help them.

No one needs to use Filmic. They can easily ignore everything and carry on happily doing whatever they wish. In fact, I would encourage folks that aren’t interested in the mechanics of RGB to avoid Filmic altogether.

I will state it again loud and clear: Filmic does nothing regarding the underlying concepts that everyone is already faced with every single minute they use Blender. Zero. Zilch. Zip. The sole thing Filmic does is exacerbate the already present issues, and bring certain concepts to the foreground.

So given that so few people bother to dive into the concepts as folks like Mandalorian and a handful of others have, it simply is a non-issue. In fact, I am dead set against even attempting to communicate the issues to people that have no interest.

There will be less than zero progress on any front until a majority of the culture sees the issues and understands them well enough to grasp potential solutions.


You talk about developing a culture, but here’s the thing.

There are large numbers of people trying to get into that culture and trying to make it grow. However, it doesn’t help in the least if the creator responds in a confrontational or even aggressive way.

If you want to grow the culture, then stop ignoring or being dismissive towards suggestions, and stop holding features hostage because of ‘ignorance in the user base’. People are trying to understand, people are trying to make it better, and the best way to make things better in the long run is to listen and have a good dialogue on making the color pipeline better and easier.

I’ve been using Filmic for a few scenes now, and already I have noted that it’s far more effective at compressing highlights and exposing flaws in materials than the standard sRGB transform, I just want to see it continually improved (hence my idea of having a CDL panel in the color management settings so those broken curves can become a thing of the past).

I think a lot of people are interested in Filmic, but I think the threads scared them. Mind you, people come from all kinds of directions and mostly never really bothered about color management, not because it isn’t interesting, but because all they knew about was contrast, brightness, curves, and levels, and they probably thought that was all there is to know. People that heard about Filmic probably visited the threads, but for most it’s far above their heads. I believe most are still interested, but have put their effort to understand it on hold for a while, waiting until someone comes along with PRACTICAL information rather than advanced theoretical info.

Let’s say someone has the question “Should I apply white balance correction before or after the CDL?”, and the answer is very theoretical. Then only people like the ones we see in this thread can keep their interest (in these threads). They will always be interested in Filmic… why not.

Once I understand more of Filmic, I would like to make a thread “Filmic for Noobs”. That will be a thread with practical steps. For deeper theoretical background, they can go here.

What I mean is: Filmic is so good that it should be accessible to everyone. If you want to learn how to drive a car, you don’t necessarily need to know the quantum mechanics of how photons interact with the alloy… etc.

Here are some burning questions of mine (I was afraid to ask).
In Blender I have Filmic and use it as my view. In some other applications I don’t see it. Could I use S-Log instead, or another log, after having imported the image via an EXR file? And is that much different from Filmic? And how do I use the slope of the CDL: do I just bump it up until my eyes are pleased, or is there a guideline for it? Do you put the CDL before the colorspace transform (linear to any kind of log)?
And after the linear-to-log transform, can you then play with the display referred functions like white balance and levels?