Colour Science (Was: Blender Filmic)

I’ve preferred the Filmic view for a while now, and the first thing I learned was that I had to use more correct albedo values. Meaning, in my case, that my albedo values were way too high. After a while I noticed and realized: “My (PBR) materials appear to have way too strong specular reflections now”, even with the default specular setting of the Principled shader. They look way too strong. Any idea what I should do? For example, in Cynicat’s pro shaders I can use grey instead of white in the glossy shader (lowering the value of the specular reflection).

I am a far cry from an albedo expert. I would suggest trying to roughly match recorded and known albedo values. If you are colouring a surface (aka non-achromatic ratios of RGB albedo), aim to have the values average to the albedo expressed as a luminance ratio. The weightings are roughly 0.2126 red, 0.7152 green, and 0.0722 blue. That is, when you multiply those weights by your RGB triplet, the results should sum to approximately the albedo value, I believe.
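As a quick illustrative check (the numbers below are made up, not measured albedos), a small Python snippet shows the idea:

```python
# Luminance-weighted check of an RGB albedo against a target value.
# The weights are the BT.709 / sRGB luminance coefficients quoted above.
def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

albedo_target = 0.18            # hypothetical target albedo as a luminance ratio
rgb = [0.30, 0.15, 0.10]        # hypothetical reddish surface colour

print(luminance(rgb))           # ~0.178, close to the 0.18 target
```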

The Principled shader’s specular weighting is based on real light interactions. If it’s too strong, your roughness may actually be too low. Also, pay attention to objects around you - a lot of things are shinier than you might think at first. For example, if you ask “is asphalt matte?” most people would say “yes!”. But if asphalt is matte, how does this happen? http://3.bp.blogspot.com/-9HPn5qPF3uw/Us0LlTM0bMI/AAAAAAAAOcc/potOB-YKAgg/s1600/IMGP0609.JPG

A lot of things get very shiny if viewed at a grazing angle with a bright light hitting them. Just something you need to get used to when lighting. If it’s too much of a problem, the “specular” slider on the principled shader can reduce the reflections by pulling back the IOR of the surface.

Worth noting that opaque, breakup, or semi-transparent / translucent gobos are a huge aid here as well. Doubly so when you can make them invisible to camera.

Don’t have examples with me to post right now, but I’ve figured out how to transform between color spaces with matrix transforms in OCIO and use the provided LUTs for the transfer function (both normally and inverted). This does lead to a question, though:

Say my example has a Cineon transfer function. Once I know that, is there a way to decode that in OCIO without a LUT? Or are LUTs the one and only way to do this (in OCIO or other ways)? If so, do I need to figure out how to make LUTs in order to properly decode the transfer function of any footage/still that doesn’t have one provided already?

Also, does it matter what order these operations (transfer function, matrix transform) are performed in? I’ve been going in the order of transfer function then matrix transforms, but haven’t tried the other way around yet.

Remember that transforming primaries requires matching white point colours as well. Also consider that this may, or may not, lead down the path of gamut mapping.

You will need a LUT. 4096 entries for a 1D LUT is sufficient, with a shaper if required. A 65x65x65 cube is sufficient for 3D LUTs. There is discussion about adding expression handling, but that is not currently available.
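As an aside, applying a 1D LUT is just a table lookup with interpolation. A minimal numpy sketch (the curve below is a made-up placeholder, not a real decode curve):

```python
import numpy as np

# Hypothetical 4096-entry 1D LUT mapping nonlinear code values to linear light.
# A real table would be loaded from e.g. an .spi1d file rather than generated.
positions = np.linspace(0.0, 1.0, 4096)
lut = positions ** 2.2                        # placeholder decode curve

encoded = np.array([0.1, 0.5, 0.9])           # nonlinear code values in [0, 1]
linear = np.interp(encoded, positions, lut)   # interpolated lookup per value
print(linear)
```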

For Cineon, there is an existing LUT available via the ACES repository.

Matrix transforms for colour spaces must be done on linear sources, otherwise the linear algebra falls apart. So it is always the transfer function to linearise the source, then RGB to XYZ, then XYZ to RGB. The two matrices can of course be concatenated. Again, make sure you match the white points via a Bradford or similar adaptation matrix.
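For illustration only, using numpy and the commonly published sRGB / XYZ (D65) matrices (a same-space round trip here, so nothing actually changes), the order and the concatenation look like this:

```python
import numpy as np

# Linearise first, then apply the concatenated RGB -> XYZ -> RGB matrices.
sRGB_to_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
XYZ_to_sRGB = np.linalg.inv(sRGB_to_XYZ)

# In a real source -> destination conversion the second matrix would belong to
# the destination space, with a Bradford adaptation in between if the white
# points differ. Concatenate once, then apply to already-linearised pixels.
M = XYZ_to_sRGB @ sRGB_to_XYZ               # identity in this round-trip example

linear_rgb = np.array([0.18, 0.18, 0.18])   # linearised source values
print(M @ linear_rgb)
```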

Recently I’ve been following the posts and discussions about color space and I find them very interesting.
I mainly use Octane render and I’m getting closer to Blender for modeling.
I believe that Octane and Blender (Cycles) behave exactly the same way; that is, if I understand correctly, they both just reason about numbers (RGB), without any consideration of color space.
Therefore, it would be enough to arbitrarily choose an input color space, valid for everything we feed to the render engine, to get a correct result.
The problem is that, if we suppose we are working with (linearized) sRGB for example, we would have to convert all the images that do not use this color space.
That would not be a big problem, except that converting an image, for example from AdobeRGB to sRGB, can produce negative values.
The image, if saved as .exr, keeps the negative values, but Octane, for example, clips all negative values in the loading phase, partially negating the color transformation.
I don’t know if the same thing applies to Cycles, as I don’t know if the render engine would internally have problems at some point in handling negative values.
The alternative would be to convert all input into a huge color space and take that as the reference, but it would become very cumbersome to convert everything into a color space that is not as widely used as sRGB.

While this is more or less typically correct for a path tracing engine, software developers frequently overlook colour / pixel management and make poor design decisions that inhibit it.

Cycles had this issue up until very recently, and a few flaws remain. Octane is very likely completely incapable of proper pixel management, given the available information.

One must always consider their needs before choosing a working reference rendering space. Each space behaves differently and has a not insignificant impact on the output work.

This is true for all work. All input buffers and colour related values must be transformed relative to the working reference space. If there is a mismatch, the ratios do not mean what they were intended to mean.

When a colour encoding is converted from a larger volume working reference space to a smaller volume working reference space, values representable in the original larger volume may become negative in the smaller volume. Specifically, those values become non-data; they cease to hold any valid meaning in the destination because the values represent emission or reflectivity, and negative emission or reflectivity has no physically plausible meaning.

The values must be clipped.
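To make that concrete, here is a minimal numpy sketch (the matrices are the commonly published Rec.2020 and sRGB ones; the pixel value is purely illustrative) of a saturated wide-gamut green landing outside sRGB:

```python
import numpy as np

# Linear Rec.2020 -> XYZ, then XYZ -> linear sRGB (both D65).
rec2020_to_xyz = np.array([[0.6370, 0.1446, 0.1689],
                           [0.2627, 0.6780, 0.0593],
                           [0.0000, 0.0281, 1.0610]])
xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

saturated_green = np.array([0.05, 0.9, 0.05])          # linear Rec.2020 value
srgb = xyz_to_srgb @ rec2020_to_xyz @ saturated_green
print(srgb)                     # red and blue come out negative: non-data in sRGB
print(np.maximum(srgb, 0.0))    # clipped, since negative emission has no meaning
```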

A path tracing engine will only deliver meaningful non-negative emission calculations under physically plausible contexts.

All colour values must always be properly converted and transformed. Anything that doesn’t do so is broken.

It is also quite common to choose a wider working reference space, with the caveat that the working reference space may respond poorly in the calculations, destroying saturation ratios or creating uncanny results. This is especially true if the working reference space uses imaginary primaries that exist outside of the physically plausible spectral model.

Working reference spaces must be carefully chosen based on desired aesthetic output and destination context needs.

Any opinion on this?
https://acescentral.com/t/aces-set-up-f … -ocio/2106

Then, how would I include it with Filmic from git, if that’s possible?

Well, as I understand it, it will become standard. Haven’t tried it with Filmic from git though.

I have a question about original Filmic. First I would like to thank Troy for all the wonderful work.
The difference between his Filmic and the implemented Filmic is huge.

The question is about color space. What option do I use as the input color space for albedo maps in the Image Texture node?

I guess for the BW and Normal maps Non-Colour Data is still the correct choice.

And what exactly does the Sequencer option do in the Color Management tab in Cycles?
Is that regarding the output format of the image?

If the image is an sRGB image, the proper transform would be sRGB OETF. I should do a better job and make a transform titled sRGB texture.
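For reference, tagging a texture as sRGB just tells Blender to undo the standard piecewise sRGB curve on load. A rough sketch of that decode:

```python
def srgb_decode(v):
    """Decode a nonlinear sRGB code value in [0, 1] to a display linear value."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

print(srgb_decode(0.5))   # ~0.214: the linear value behind a mid code value
```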

If the image is an actual greyscale image, it’s a colour. A normal map is indeed data, and is non-colour.

It determines the working reference space of the VSE. This is the magic cauldron state that all images are taken to before math and formulas are applied to them.

So the correct, or should I say better, transform for grayscale images should also be sRGB OETF?
What about when I make multiple grayscale images from a single albedo?
For example, when I connect it through a converter like a color ramp or separate the RGB channels.
Does it make sense to duplicate the image texture, change the transformation to non-color and then plug it into the converters, or do I get the same result from just plugging in the original albedo map?
I know the result looks different, but I’m not sure what it does to the math of the map.

Is there any reason to change the sequencer from sRGB OETF to something else?
I’m asking because I’m having problems with correctly displaying .exr files in Affinity Photo.
I’m saving the .exr as full float multilayer. In Affinity there is the possibility of loading a custom OCIO profile, so I load config.ocio from your Filmic folder. After I apply an OCIO adjustment and set the source to Linear and the destination to sRGB OETF, I get something similar to the Very Low Contrast preset in Filmic, but dimmer, despite using Base Contrast in Blender. If the destination is set to Filmic Log Encoding then I get the None preset with slightly stronger shadows.

Correct.

Separate is definitely correct. Neither the Colour Ramp nor any of the UI on any of the nodes is colour managed, so it makes this a bit of a tricky answer. The ramp will take the post-transformed data and simply mangle it according to the input of the colour ramp. So in the case of the sRGB transfer function, you’d be colour ramping the display linear sRGB values.

Remember that the transform describes the buffer’s encoding to the software. So if the encoding is a linear set of values in a file, non-colour data applies if it is intended to be non-colour linear data. If it is linear colour data, then Linear is appropriate. If it is display referred nonlinear sRGB, then the sRGB OETF applies and Blender will transform it to display linear.

Albedos are normalized colour reflectivity ratios. They will always be colour transforms. Measurements and other such things are always non-colour, and it is worth thinking about the encoded state of the file they are in.

As much as I loved the VSE, it needs to be removed or have serious development to bring it into the proper design of Blender. The colour management and code paths are a complete mess within it, and will result in broken imagery. I can’t stress enough that it should be avoided like the plague for actual pixel output. For simply figuring out which frames to use in the compositor, it’s fine, but for outputting pixels it is a broken mess.

Affinity is awesome in that it supports OCIO. Sadly, last I checked, it didn’t support Looks. There are several methods to work around that limitation. One is to test to see if the looks are honoured if wrapped into a transform, and I can’t remember if they were. The other is to load the EXR and set the view to Filmic Log Base, then use one of the contrast LUTs as a file transform. @Gez is familiar with Affinity and colour, so perhaps he will chime in here.

Correct.
Affinity has the necessary tools, but the implementation co-exists with some legacy ICC crap that will get in the middle. It’s possible to produce the desired result, but you need to pay attention.
First thing you have to decide is your workflow:

  • Do you want to edit your EXRs and keep them scene-referred linear, or…
  • Do you want to produce display-referred output, ready for delivery out of your EXRs?

If it’s the former, you want to use Filmic as a non-destructive view while you edit, and get rid of the transform upon saving, but if it’s the latter, you want the filmic transform baked into the pixels.

Assuming you’re after the latter, these considerations will help you:

  • Keep in mind that there are no looks, so if you’re using Blender’s OCIO you’ll have to settle with Filmic Contrast Base for a quick setup, or re-build the looks using other tools.
  • Affinity has an OCIO view for 32-bit images, but it also has OCIO adjustment layers. If the former isn’t enough (only Base Contrast or Log), then you’ll have to use those adjustment layers instead.
  • Affinity seems to ALWAYS use ICCs for converting from 32-bit float to integers! That’s unfortunate and will screw your OCIO chain unless you take care of it*

With those considerations it is possible to get away with Affinity for scene-referred editing and filmic display output.

*) Now the tricky part (edited for clarity):

  • You can reconstruct a filmic “look” by stacking two adjustment layers: First an OCIO colorspace (from linear to filmic log), and then a LUT (using one of the filmic’s contrast LUTs).
  • If you reconstructed the look manually, don’t forget to turn off any 32-bit preview option you have set, as you’re taking the wheel and producing the transforms yourself via adjustment layers…

At that point everything should look correct, but you still need to convert that image from 32-bit linear to your desired display-referred target (usually 8-bit sRGB for delivery).

Affinity doesn’t let you convert from 32-bit linear to 8-bit sRGB without applying its transfer curve.
So, if you perform the conversion at this point, the result will be wrong, since the result of your stack is already an sRGB-ish image and the transfer curve (the one we usually call gamma) would be applied twice.
To avoid this double-up you need to perform two extra steps:

  1. Apply an extra OCIO adjustment layer, this time from sRGB to Linear (that will bring the result of your layer stack back to a display-linear image). Your image will look darker, but that’s fine.
  2. Flatten your image, so the transforms are no longer non-destructive and are actually baked into the pixels.

Once that’s done, you can convert your image to 8-bit sRGB and the resulting appearance will be correct.

That way you’ll produce a display-referred version of your artwork with the appearance you’d get from Blender’s CM panel. A bit tricky, but it works.
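A tiny numeric sketch of why step 1 matters (plain Python, values illustrative): if the layer stack has already produced display-referred, sRGB-ish values and the 8-bit conversion encodes them again, the curve doubles up.

```python
def srgb_encode(v):
    """Standard piecewise sRGB encode of a linear value in [0, 1]."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1.0 / 2.4) - 0.055

def srgb_decode(v):
    """Inverse of the encode, bringing a display value back to linear."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

linear = 0.18                             # a mid-grey linear value
once = srgb_encode(linear)                # ~0.46, the correct display value
twice = srgb_encode(once)                 # ~0.71, washed out: curve applied twice
fixed = srgb_encode(srgb_decode(once))    # decode first, then let the export encode
print(once, twice, fixed)
```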

Let me know if my description of the process wasn’t clear and you need extra help.


I think after that, if you choose the first option (ICC display transform) in the 32-bit preview panel, the image will look correct and not darker if I’m not wrong…

Yes, that’s correct as long as you re-activate the 32-bit preview after adding the last transform back to linear.

Note, however, that I suggested to turn it off in my previous message to keep it from interfering with the manual transforms chain. Having that preview on during the previous steps would give you an incorrect preview, as some operations would be doubled-up.


Actually I was getting the Very Low Contrast look after applying the OCIO transformation (to sRGB OETF), and not Base Contrast (if I remember correctly, when I used the Filmic implemented in 2.8 it gave me Normal Contrast after OCIO). If I transformed to Filmic Log Encoding the image was really bright and washed out. I don’t know where to go from there, so I just used sRGB OETF.

How do I get the Filmic LUTs in Affinity? Although that shouldn’t be much of a problem, because I could get there via adjustment layers. I saw that the transformation leaves a lot of room in the highlight part of the histogram.

Just out of curiosity, what would be the benefit of workflow where I would keep my EXRs scene-referred?

To be blunt, I’m a little bit lost in how to prepare a scene for post production. The exposure controls are still puzzling to me. Does it matter if I change the exposure in the camera settings or crank up the light values? Does it make any difference for Affinity? I see some artists make their scenes dim and washed out, or overexposed, out of the render engine, but I don’t know why. And what about highlights like sun shining into the interior? From what I’m reading, the sun should be about 15 times stronger than the indirect lighting. Is getting the ratio of the lights right the only thing that matters? Or would it be better to leave everything closer in values and therefore have more control in post?

I will leave the Affinity stuffs for the educated folks above.

All manipulations behave more consistently with radiometric results. Blurs, translations / rotations, flares and glares, exposure adjustments, you name it. They behave more closely to what mixing the actual scene lights would do.

If you increased all lights uniformly, and ignoring quantisation issues with float etc., there is no difference between changing the exposure at the camera or increasing / decreasing all of the lights uniformly. If you change only some light levels, while leaving others, thereby changing the ratios, then the two approaches differ tremendously.

If you are using scene referred data, an exposure adjustment is a multiply, and it should be identical in Blender as compared to Affinity or any other software that can do things correctly.
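As a tiny sketch (numpy, with made-up values): an exposure change of N stops on scene-referred data is just a multiply by 2**N, wherever it happens.

```python
import numpy as np

scene_rgb = np.array([0.05, 0.18, 0.80])   # hypothetical scene-referred values
stops = 1.5                                # exposure adjustment in stops

exposed = scene_rgb * (2.0 ** stops)       # identical to scaling all lights uniformly
print(exposed)
```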

Uh… there are many reasons. If it is creative it is one thing. If it is a mistake, another. Many times, folks use the knowledge they have, for better or worse.

It’s all an aesthetic pursuit, so it’s up to you. Cheating ratios in photography has happened forever, and will continue. Getting the ratios “right” has a ground truth of what is right to you and your goal.

The problem with leaving everything to post is that you squeeze your information. That is, imagine a very flatly lit scene, where the contrast between two values is extremely close together. Now imagine increasing the contrast dramatically in post. You will have stretched the values so far that you will degrade your image tremendously as compared to getting close to the desired output “in camera”. In the former, you are taking limited data and stretching it to the point of breaking, while in the latter you are starting with a full range of data that is close to what you want, and slightly twisting the values.

Try this:

  • Open in Affinity your fancy .exr render made with the awesome Blender

  • Apply an exposure adjustment layer and put the same value that you have in the color management panel in Blender (if you haven’t touched it, this step can be skipped)

  • Apply an OCIO adjustment layer with: linear to filmic log

  • Apply a LUT adjustment layer and choose the contrast you’ve chosen in the color management panel in Blender. Those files (the filmic contrast LUTs) are in the Blender folder (Blender Foundation > Blender > 2.80 > datafiles > colormanagement > filmic)

And it should look the same as in blender.

Here’s what I see in blender:

Here’s what I see in affinity:

Here is the image (jpg) saved from blender:

Here is the image (jpg) exported from affinity:

Hope that helps, and sorry for my bad English; it took me about 2458 hours to write this post :blush:


Thank you! I managed to transform EXR following your excellent guide :+1:
The only thing I would point out is the 32-bit display transform, which should be set to Unmanaged.
By default it is set to ICC Display Transform, and then you get a washed-out image.

It would be great if there was an option inside Blender to turn off the CM exposure affecting the material preview.
Sure, I could leave it at the default value of 0 and crank up the exposure in the Film tab, but that only goes up to 10, which is not enough for interior shots if I want to get closer to real values.
