Feedback / Development: Filmic, Baby Step to a V2?

Hey folks…

It’s been around a decade since the original tests behind Filmic were done, and given that several folks have been asking for a revision, I thought I’d sink a bit of time into revising things.

This is a configuration to test, and stress test. The design goals are:

  1. Not break things too badly, and provide a more or less “seamless” upgrade path.
  2. Handle extremely challenging albedo / lighting mixtures more gracefully.
  3. Maintain general principles behind the original so that assets and other pre-existing things need not be modified.

So consider this a baby step toward what could be a V2. Maybe.

What I need, though, are breakage reports. For example, OpenColorIO v2 finally added aliases, so proper naming can be used without exploding legacy files. I need a list of the colourspace names that are tripping things up, to see if they can be rectified with some aliases.
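For anyone unfamiliar with the mechanism, OCIO v2 aliases are declared directly on the colourspace definition, and a role can provide the fallback for byte images. The fragment below is purely an illustrative sketch; the names here are hypothetical, not the actual entries from this config:

```yaml
# Hypothetical OCIO v2 fragment: the canonical name carries the proper
# nomenclature, while aliases keep legacy files resolving.
roles:
  default_byte: sRGB  # example role a byte image might fall back to

colorspaces:
  - !<ColorSpace>
    name: Linear BT.709
    aliases: [Linear, lin_rec709]
    description: Open domain linear tristimulus, BT.709 primaries.
```

If an application resolves either `Linear` or `lin_rec709`, it lands on the same canonical colourspace, which is exactly the behaviour being tested here.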

At any rate, feel free to hit me up with a DM or such.


Filmic There Be Dragons Edition.


Just mentioned this in the Spectral thread on devtalk, I think I should bring the discussion here instead.

The alias seems to work for some files, but not all:

And in the classroom scene it doesn’t seem to work at all:


Then when you hit rendered view and move your mouse across the node editor, it changes:

It uses sRGB for everything, so maybe it is just falling back to the role for byte images. So the alias failed here.

Here is the file:

This is great to know. Thank you for investing time.

I’m wondering if you can correct the transform in the nodes and check the result. What transform is it failing for? Is it impacting the output visually?


Not sure what that means, but here is a comparison after I correct the nodes manually:

I guess the woods look a bit dimmer?

Not sure what this means. It only falls back to the default role if the “Input Colorspace” field on the node is empty. I am not sure why, but the classroom scene does not have that behavior.

Also, the View Transform falls back to sRGB OETF by default across all scenes.

That would suggest the textures aren’t finding their appropriate descriptors for colour assignment. It would be good to know what those are, perhaps from the command line.

That’s fine, as Blender’s configuration is a mess, and this is an easy change for the purposes of testing. It’s not so easy changing hundreds of textures so that’s one thing that I’d like to know if it can be solved with aliases.

This is to be expected, as the earlier and later approaches will distort the mixtures of tristimulus values differently. Some of the differences are somewhat tunable, but they come with a price.

The current attempt works toward a smoothness of tonality, without abrupt, sharp transitions. It is also an attempt to yield lower mixture distortions. They will still be present, but it is a step toward “better” handling in the short term, without solving the larger and more complex problems.

For example, it should be easy for anyone to visualize just how distorted the results are by placing a sphere and a plane in the scene, giving both an achromatic albedo of 0.18, 0.18, 0.18, and illuminating the sphere via an indirect bounce off of the plane. Changing the colour of the light, and keeping the indirectly illuminated sphere close to the plane, helps to exacerbate the problems.

What should be expected is that the emission colour of the lamp holds a degree of the “sense” of the mixture it is set to, without collapsing to the familiar digital gaudy cyan, magenta, and yellow. It should also be expected that pure primaries will gracefully attenuate.

That sort of a test, while spinning the “hue” of the colour picker, is a very rapid way for folks to visualize and test the problems.


Here are the command line messages from my fabric cube scene:

Here is the classroom scene:

Not so surprising, I would say, since the two most commonly used texture colourspaces in Blender have been sRGB and Non-Color.

Right, I understand. I only said “dimmer” in response to your question “Is it impacting the output visually?”, and I was not sure whether it was the display transform or the texture space transform. Now it’s confirmed to be just a by-product of changing the display transform.


Excellent. Thank you so much.

I’m just working on a multi-distribution quick testing version that will incorporate aliases. I missed Non-Color, so thank you for investing time in this.

Yes. The qualia discussion is also open here. As I said, I’m sort of focusing on two “better than nothing” approaches that could theoretically work with spectral rendering as well, to avoid the initial “that looks like crap” response, while also being a “better” entry point for creative grading and such in folks’ work.

If there’s something that looks horrifically broken tonality-wise, or something crazy happens with exposures, those are the things that are very important to try and diagnose, similar to your first classroom scene which freaked me out.

Thanks so much again.


I’ve started a central place for testing so that it can work with a majority of DCCs including Nuke etc.

Do not use Looks.

There are two displays, and two views.

The displays are:

  1. sRGB
  2. Display P3

The only view to test is AgX in both.

Do not use the Look listed in Looks.

It can be found here.


I see that this version has aliases:

But it still didn’t work in Blender:

I am guessing that maybe Blender’s OCIO v2 update was not complete, and therefore aliases are not yet supported in Blender? That would be a bummer…



That’s very odd. It should work for those aliases… Is there a string difference or something I can’t see?

I don’t know… I tried adding that line of code to TBD myself yesterday after our conversation, but it didn’t work for me. And now your version didn’t work for me either… That’s why I started doubting whether it was Blender, but Brecht basically confirmed that Blender supports aliases… I don’t understand…

It seems the Punchy Kraken look maps an open domain 0.18 to 0.39 instead of 0.5? Why is that? The regular Filmic High Contrast look can have sharper contrast while keeping 0.18 at 0.5, so why not Punchy Kraken?

I remember you said this for the ACES transform:

And this was in Filmic Blender’s doc:

Five contrast base looks for use with the Filmic Log Encoding Base. All map middle grey 0.18 to 0.5 display referred.

I thought it was a good thing to keep?

Because it’s just a random pass at a jacked saturation / crunched look. No technical considerations.

The default has a consistent mid. Still up in the air.


Oh, okay. My aesthetic taste is still somewhat attached to the 0.5 look, but I guess it is more of an aesthetic choice for a demo look that doesn’t matter that much. I find myself adjusting the “Gamma” inverse power setting in the CM panel to 1.35 to bring it back to 0.5, though.
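As a quick sanity check of that 1.35 figure, assuming the “Gamma” slider applies a simple inverse power (out = in^(1/gamma)) to display referred values:

```python
# If the look lands middle grey at ~0.39 display referred, an inverse
# power ("Gamma") of 1.35 lifts it back very close to 0.5.
mid_after_look = 0.39
gamma = 1.35
lifted = mid_after_look ** (1.0 / gamma)
print(round(lifted, 3))  # ~0.498
```

So 1.35 is roughly the exponent that undoes the 0.39 placement, consistent with the observation above.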

I haven’t had a chance to read the full thread yet, so apologies if this has already been brought up.

For my Arnold for Blender add-on, we had to include a full copy of Blender’s OCIO profile because the current version created some issues with naming. The diff of what we did can be seen here: This might be an Arnold-specific issue, but it doesn’t hurt to bring it up.


CDL is simply a power function, so it would change exposure positions. If you are running the newer Blender, you can actually create your own CDLs in the compositor using the log, the CDL node, and a proper output. Then you can transcribe the three values and slap them in.
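For reference, the full per-channel ASC CDL transfer is slope, offset, and power together; a “power only” grade, as described above, is the special case where slope is 1 and offset is 0. A minimal sketch (not Blender’s implementation, just the textbook formula):

```python
def asc_cdl(value, slope=1.0, offset=0.0, power=1.0):
    """Per-channel ASC CDL: out = (in * slope + offset) ** power.
    Negative intermediates are clamped to zero before the power term,
    as a negative base with a fractional exponent is undefined."""
    graded = value * slope + offset
    graded = max(graded, 0.0)
    return graded ** power

# Identity parameters leave values untouched:
print(asc_cdl(0.18))                     # 0.18
# A pure power term shifts exposure positions (mid grey moves):
print(round(asc_cdl(0.18, power=1.5), 4))  # ~0.0764
```

The three values transcribed from the compositor experiment described above would simply be dropped into `slope`, `offset`, and `power`.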

It’s a valid issue given that XYZ should have been CIE XYZ 1931 or something to remove the ambiguity with geometry etc.


A question about spectral image formation: how does it work with spectral images when the values fed to it are the OCIO-config-specified scene linear space’s RGB triplets, with negative values among them?

I have been wondering about this. Both TCAMv2 and the current AgX sort of work for spectral renders, even after I use a denoise node, which clamps the negative values. I also read in one of the AgX commits that it clamps negatives as well. How, then, does it know that a pixel contains colour that is outside the working space or the display space’s gamut? I once thought it made use of the negative values, converting the pixels to something like the CIE XYZ space first and doing some sort of gamut mapping. But since it doesn’t make use of the negative values at all, how does it do it?

For example, let’s say we have an EXR saved from the spectral Cycles branch, and two of the pixel values in the EXR are BT.709 (0.7, 0.5, -0.3) and (0.7, 0.5, -1). After clamping the negative values, they would both become (0.7, 0.5, 0), so how should it know the original XYZ value for each? Or am I completely mistaken about the process, and the negatives just don’t matter? Or, if future development requires better handling of negative values, would the denoiser become a problem in that it kills the negatives before they can reach the view transform?
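The information loss in that example is easy to demonstrate numerically. Using the standard BT.709 (D65) RGB to CIE XYZ matrix, the two pixels are distinct colorimetric stimuli before clamping, and indistinguishable after:

```python
# Standard BT.709 (D65) RGB -> CIE XYZ matrix.
M = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def to_xyz(rgb):
    return [sum(m * c for m, c in zip(row, rgb)) for row in M]

def clamp_negatives(rgb):
    return [max(c, 0.0) for c in rgb]

a = (0.7, 0.5, -0.3)
b = (0.7, 0.5, -1.0)

# Before clamping, the XYZ coordinates of the two pixels differ...
print(to_xyz(a) != to_xyz(b))                     # True
# ...after clamping, the two pixels are identical; the original
# out-of-gamut information cannot be recovered downstream.
print(clamp_negatives(a) == clamp_negatives(b))   # True
```

This is exactly why a clamp placed early in the chain (e.g. inside a denoiser) forecloses any later per-pixel gamut mapping that would have needed those negatives.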

This is a terrific question. It involves mapping inputs to spectral, and then spectral to the working spaces. It is a tiny bit away, though; quite a few little steps remain to see if this is feasible in the shorter term.

The current approach can be tested against wider-than-BT.709 values. Some of the images in my GitHub Testing_Imagery repository have input values wider than BT.709. While it does indeed clip the RGB, the results still “work” in the shorter term, “Good Enough” sense of balancing the complexity of the approach against the results. I’m sure spectral renders would be equivalent.


This is actually a terrific question that deserves attention in its own right.

What I try to have folks ponder is meaning. Negative values in a tristimulus system are meaningless relative to the “observer” they are currently referring to.

That is, if we have a spectral render and then transform it to sRGB, we have two observers at work: one is the spectral observer, which lives as an array of tristimulus values out in the CIE standard observer projection, and the other is a projection to, say, BT.709. Once we get to BT.709, the negative values are meaningless nonsense. Just as no display can display less than black hole zero, relative to the BT.709 projection the values are meaningless.

Back in the other observer land, the CIE Standard observer, we could say that those values hold meaning. So the question becomes… can we give those values meaning in the smaller working space of BT.709? If so, how would we do this?

This is where the notion of the observer “footprint” mapping comes into play. The math we perform on those values, though, has to be carefully considered, as again, not all operations hold meaning relative to the BT.709 “observer”, if you will.

Likewise, and this idea builds on top of the above concept, we have to be careful about the journey the values take. For example, imagine we are forming an image from the open domain BT.709 tristimulus values to a black and white representation. Those values in the open domain are “inspiring” the image, if you will; we don’t want a literal representation, nor could we even achieve one, due to the display limitations.

When we think about denoising in this context, we quickly realize that we likely want to denoise the image, not the open domain tristimulus values. What matters in terms of “noise” are the formed image frequencies, not the frequencies that exist in the open domain. The same applies to “reconstruction” approaches such as “upscaling”: if we use “linearized” values, we will find that our resampling falls apart, because we are trying to reconstruct the image, not the literal tristimulus points in the buffer. In terms of the image, again, we are interested in things like “where is the middle range of values” relative to an observer observing the image, not the “middle” of the tristimulus values, which is of course not perceptually oriented.
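A tiny illustration of why the “middle” differs between the two states. Assuming a simple 2.2 power function as a stand-in for the display encoding (not the actual Filmic/AgX formation math), the average of black and white in linear light lands well above the middle of the encoded image:

```python
# Averaging black (0.0) and white (1.0) in linear light gives 0.5,
# but 0.5 linear encodes to ~0.73 through a simple 2.2 power stand-in,
# i.e. well above the encoded image's "middle" of 0.5. Any resampling
# or denoising filter that averages in linear light is therefore
# operating around a different midpoint than the formed image's.
linear_mid = (0.0 + 1.0) / 2.0
encoded = linear_mid ** (1.0 / 2.2)
print(round(encoded, 2))  # ~0.73
```

The point is not that either domain is wrong, but that filters aimed at the *image* need to see the image state, not the open domain buffer.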

Hence this leads to a rather controversial (sadly) opinion: we need to consider our information states, and perhaps the “conventional” wisdom of merely dividing things into the rather arbitrary “scene” vs. “display” dichotomy is an insufficient model.

I would strongly encourage folks to consider the idea that the image is a discrete image state, that exists in an interstitial place between the open domain tristimulus data, and the closed domain representation medium.

These are some pretty big questions, but ones that I have had a fair share of arguments about. I’m firmly of the belief that it is prudent to divide our thinking into three categories:

  1. Open domain tristimulus.
  2. Closed domain, formed image tristimulus.
  3. Closed domain image replication / reformulation.

This leads to exactly two positions for operations that may be required:

  1. Open domain tristimulus.
    1.1 Open domain tristimulus manipulations. Think of this as manipulating the “virtual light-like” tristimulus values in front of the camera.
  2. Closed domain, formed image tristimulus.
    2.1 Closed domain, formed image tristimulus manipulations. Think of this as manipulating the image state, such as denoising the appearance of the image tristimulus, or reconstructing to a higher resolution from the image tristimulus.
  3. Closed domain image replication / reformulation.
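The three states and two operation positions can be sketched as a toy pipeline. This is a hypothetical illustration only; the formation curve here is a simple Reinhard placeholder, not the actual Filmic/AgX math:

```python
# Hypothetical sketch of the three information states described above.

def open_domain_op(rgb, stops=0.0):
    # 1.1: open domain manipulation, e.g. exposure on "light-like" values.
    return [c * 2.0 ** stops for c in rgb]

def form_image(rgb):
    # 1 -> 2: image formation from open domain to closed domain.
    # Reinhard placeholder curve, NOT the actual AgX formation.
    return [c / (c + 1.0) for c in rgb]

def image_state_op(rgb):
    # 2.1: operations on the formed image (denoise, upscale) belong here.
    return rgb

def encode_for_medium(rgb):
    # 3: replication for a given medium, e.g. a simple 2.2 power encoding.
    return [c ** (1.0 / 2.2) for c in rgb]

pixel = [1.0, 1.0, 1.0]
formed = form_image(open_domain_op(pixel))
print(formed)  # [0.5, 0.5, 0.5]
```

Under this sketch, the negative-value clamp discussed earlier would sit at or before `open_domain_op`, while a denoiser would ideally sit at `image_state_op`.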

Hope that helps to make my sadly controversial opinion clearer. Under this lens, the “negative” dilemma is one that is firmly located at or prior to 1.1, where we would consider those manipulations as “working space dependent”.