Blender Filmic

Display referred data has a minimum and a maximum value that relate to a device. Think of it as using the light ratios of your screen. What happens with your display when you go brighter than maximum? You can’t. It is essentially meaningless. Now imagine trapping all of your work within those light ratio limitations.

Scene referred is an entirely different creature. Look out a window on any given day. What is black? What is white? They don’t exist. Just ratios from some low value to some massive value. On another day, or another planet, those ratios could be radically different.

Now think of a camera (device referred) capturing the same scene above. You have to make a calculated effort to figure out what you intend to make white, what you intend to make black, and what you want to sit at the middle of the camera’s recording dynamic range.

In that last context, we undergo a transform that takes us from the scene referred domain (outside the window) to the device / display referred domain (camera / display). That transform is critical to keep clear in your head; there is always a dividing line between the scene referred domain and the display referred domain.

There are good reasons to do all of your work on scene referred (scene linear) data values, hence why the transformation in modern workflows only happens on the view, not on the data until you bake it into an encoded image.

If you manage to get the scene referred versus display referred division in your head, you are well on your way to removing all confusion. From there, it is much easier to build up knowledge, including fully comprehending colour spaces, HDR10 / Dolby Vision technology encoding, etc.
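
A minimal sketch of that division, assuming NumPy is available and using a simple clamp-plus-power curve as a stand-in for a real view transform: the scene referred values (including those above 1.0) are never touched; only the copy sent to the display is bound and encoded.

```python
import numpy as np

# Hypothetical scene referred linear values; note that some exceed 1.0.
scene_linear = np.array([0.02, 0.18, 1.0, 4.5, 12.0])

def to_display(values, gamma=2.2):
    """Illustrative display transform: clamp to the display range,
    then apply a simple power-law encoding."""
    return np.clip(values, 0.0, 1.0) ** (1.0 / gamma)

display_copy = to_display(scene_linear)

print(scene_linear)   # unchanged: [0.02, 0.18, 1.0, 4.5, 12.0]
print(display_copy)   # bound to [0, 1] for the display only
```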


When importing a picture into Blender, it could be that a good picture with a high dynamic range doesn’t fit, because Blender in the default view says: everything higher than 1 = 1. But luckily there is Filmic, and we can have the whole dynamic range in view.

You have the Blender part mixed up. Blender works in a scene-linear manner most of the time; the view transform (which is what Filmic is) is applied to output (a view on a monitor is also output, as is writing out to a file), not to internal processing. The compositor is scene-, not display referred, and happily works with whatever range you throw at it. So are shaders, but it is up to the user to make sense of things and not crank albedo over 1.0, etc.

Internal processing in Blender works with 32-bit float values, and all input data, including your example digital camera image, is first linearized and then gamut transformed to the working space (if necessary, in the ACES case for example). You can load your 14-stop camera image in and map it so that 50% grey maps to a value of 0.18 in scene-linear space. Most likely the highest values in that image land somewhere way above 1.0 and are not clipped in any way. The actual clipping and mapping to display referred space happens only when this data is actually displayed on a monitor or written into a file.
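
As a sketch of that mapping, assuming NumPy and a hypothetical grey-card value measured from the linearized camera data: a single linear multiplier places middle grey at 0.18, and any highlights are carried along without clipping.

```python
import numpy as np

# Hypothetical linearized camera values (0..1 encoded); the grey card reads 0.05.
camera_linear = np.array([0.002, 0.05, 0.4, 1.0])
measured_grey = 0.05

# One linear scale maps the grey card to scene-linear 0.18.
scale = 0.18 / measured_grey
scene_linear = camera_linear * scale

print(scene_linear)  # [0.0072, 0.18, 1.44, 3.6] -- highlights sit above 1.0, unclipped
```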

As troy_s puts it very nicely:

There are good reasons to do all of your work on scene referred (scene linear) data values, hence why the transformation in modern workflows only happens on the view, not on the data until you bake it into an encoded image.


The whole color management thing is not Blender specific, it’s the same with other software (even PS).

As troy said in so many posts, it’s all about learning, and knowledge was never an easy thing (at least for me).

For people like me who are not native English speakers it’s even harder to follow with all this terminology, and though I won’t pretend that I’ve understood everything 100%, I think I’ve gained a thing or two.
On the other hand, I’ve followed the Filmic thread from the very beginning on this forum, on Blender developer and also on the Rabbit hole, and I have changed my workflow numerous times (I still do).

Usually I have to read troy’s posts at least three times to start understanding, but I always find them really interesting and worth the time.


Linear means that the value of a pixel follows the energy at the origin.
Let me explain better: let’s take 3 sensor pixels as an example.
pixel 1 received a luminous energy of 1
pixel 2 received a luminous energy of 2
pixel 3 received a luminous energy of 3
(values are purely indicative and just numerical)
When the device transforms all the data received by the sensor into an image, it applies a function.
A linear function has this form: output = a*input + b
a and b are just coefficients. They depend on the specific function applied and don’t change the linear nature of the function.
So if a=1 and b=0, the image pixels’ values (output) will be respectively 1, 2, 3.
If, for example, a=2 and b=0, the values will be 2, 4, 6.
With a=1 and b=1, the values will be 2, 3, 4.
And so on…
All linear functions are represented as straight lines on an input/output graph.
A linear image is just an image whose pixel values (display referred) follow the original pixel values (scene referred) linearly.
Power functions are not linear functions, so, for example, applying a gamma of 2.2 will break linearity.
The sRGB transfer function (gamma) is not linear; it will make the image nonlinear.
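
To make the point about breaking linearity concrete, here is a small sketch in Python: a linear function preserves the 1 : 2 : 3 ratios of the sensor values, while the standard sRGB transfer function does not (the sensor values are the illustrative 1, 2, 3 from above, scaled into 0..1).

```python
def linear(x, a=1.0, b=0.0):
    # output = a*input + b : ratios between values are preserved when b == 0
    return a * x + b

def srgb_encode(x):
    # Standard sRGB transfer function (for x in 0..1); clearly not a straight line.
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

sensor = [0.1, 0.2, 0.3]  # the 1 : 2 : 3 ratios, scaled into 0..1

print([linear(v) for v in sensor])       # [0.1, 0.2, 0.3] -> still 1 : 2 : 3
print([srgb_encode(v) for v in sensor])  # ratios are no longer 1 : 2 : 3
```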

A stop is an interval of exposure.
It’s nothing fancy. One stop more of light means double the amount of light (or half, if the stop is negative).
A camera sensor receives one stop more of light if it receives double the light.
In Cycles, you can think of adding a stop of light as simply doubling the total energy of the light.
To increase a light by 2 stops you must multiply its starting energy by 2^2.
To increase by 3 stops… 2^3.
In a few words, the formula is… newValue = startValue*(2^stops).
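
A sketch of that formula in Python (names are purely illustrative):

```python
def add_stops(start_value, stops):
    """Exposure change in stops: each stop doubles (or halves) the energy.
    newValue = startValue * (2 ** stops); stops may be negative or fractional."""
    return start_value * (2 ** stops)

print(add_stops(1.0, 1))     # 2.0  -> one stop brighter, double the energy
print(add_stops(1.0, 3))     # 8.0  -> three stops brighter, 2**3
print(add_stops(1.0, -1))    # 0.5  -> one stop darker, half the energy
print(add_stops(0.18, 6.5))  # ~16.29 -> middle grey pushed up 6.5 stops
```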

It’s the Academy Color Encoding System.
Practically, a series of rules for encoding colors. I won’t go further; experts will do it if they want.

Think about scene referred data as the real raw output of Cycles: values have no limits, you’re not constrained between 0 and 1, not even to white and black.
Think about display referred data as the Cycles raw data passed through a series of functions that encode it, to bind it into a display device’s limits.
Now, what does it mean to work with scene referred data? It means working with a wider range of data and with no restrictions, with more precision in math operations.
To make you understand, let’s assume you want to do a simple operation: 0.8+4.
If you do it in the display referred domain, it will return 1.
If you do it in the scene referred domain, it will return 4.8.
You may think that it will be encoded to 1 anyway after the transform to display referred. Yeah, but you don’t have just one value, you have a series of values linked to each other. So a smart view transform with a wide dynamic range, such as Filmic, can be more precise in keeping the relationships between values.
Hope it is clear enough.
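
The 0.8 + 4 example as a tiny sketch in Python: in the display referred domain the values are bound to 0..1, so the result collapses to 1; in the scene referred domain the full result survives.

```python
def display_referred_add(a, b):
    # Display referred: everything is forced into the 0..1 range.
    clamp = lambda x: min(max(x, 0.0), 1.0)
    return clamp(clamp(a) + clamp(b))

def scene_referred_add(a, b):
    # Scene referred: no limits, the ratios between values survive.
    return a + b

print(display_referred_add(0.8, 4.0))  # 1.0 -> information is gone
print(scene_referred_add(0.8, 4.0))    # 4.8 -> full result preserved
```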

Don’t think this way; try to understand by playing.
No one here has a good workflow to suggest, we’re all playing.
What should we advise you?
For the pure white thing, try to summarize as much of the info you have as possible…
Albedo is the ratio between energy returned and energy received.
If you want the white to be whiter you have 2 choices:
Fixed albedo: increase the energy received (light).
Fixed light energy: hack the albedo.
It’s not that the police are right there ready to arrest you.
You know that Filmic has -10 to +6.5 stops (0.18 middle grey reference)?
Wanna hack to get pure white objects with a light energy of 1?
Cool, put 16.29174 into your RGB albedo and hack it. LOL.
(0.18*(2^6.5))
See what happens by playing; if it ruins everything and causes issues… just play and have fun.:smiley:
In the end do what satisfies you, but don’t stop educating yourself towards more correct things, step by step.
If you get angry at every showstopper caused by missing info, you won’t get far.
Good luck.

I would echo both Sunburn and Kesonmis; take your time. Don’t panic. I and a bunch of other peeps are more than happy to help you. It takes a culture.

If you look at the incredible works folks have been achieving, it is worth the investment. You will elevate your work and gain a real sense of control. I promise.

If one starts by trying Filmic and twiddling knobs and mucking around, that is excellent. If you read the origins of it, you will see that the goal was and always has been to expose folks to some important concepts they might have been unaware of. Again, they are hugely important and have a tremendous impact on your work.

Also, bear in mind that display referred applications such as Photoshop are part of the problem; many mental models have been built atop of them. It takes a while to tear down the accumulated cruft.

The ideal way to get a handle on a scene referred workflow is to think of it as three distinct phases:

  • Input.
  • Reference.
  • Output.

Input is your input asset. Anything from a log encoded image to a scene referred EXR.

Reference is the mixing bowl. This is where we do all of the math and manipulations. This must be using linear energy ratios or else we end up with dark fringing, bad math, and broken mixes. In Blender, this is a scene referred linear reference in the compositor and the output from Cycles. It remains scene referred linear until either A) output or B) someone manually deviates from the scene referred linear by adding a node that bends things.

Output. This could be a display or an encoded file. These are frequently nonlinear energy ratios, although in the case of EXRs they could also be scene referred linear. The nonlinear encoding can be used to cram data into a particular file format or into a display range. In Blender, the transform to the nonlinear world happens either via the file saving code (sometimes with the help of OCIO) or via the View / Look transform via OCIO. This is a one way transform, and the values in the reference remain strictly scene referred linear.
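
A rough sketch of the three phases in Python (NumPy assumed); the decode and encode curves here are made-up stand-ins, since the real transforms live in the OCIO configuration.

```python
import numpy as np

# --- Input: decode an asset into the scene referred linear reference.
def decode_log_input(log_values):
    # Hypothetical log curve, purely illustrative for the sketch.
    return 2.0 ** (log_values * 10.0 - 5.0)

# --- Reference: all math happens on scene referred linear ratios.
def composite(plate, render, mix=0.5):
    return plate * (1.0 - mix) + render * mix  # linear mix, no clamping

# --- Output: one-way transform to a display or an encoded file.
def encode_for_display(linear_values, gamma=2.2):
    return np.clip(linear_values, 0.0, 1.0) ** (1.0 / gamma)

plate = decode_log_input(np.array([0.1, 0.5, 0.9]))
render = np.array([0.04, 0.18, 7.0])     # straight from the renderer, unbounded
reference = composite(plate, render)     # stays scene referred linear
display = encode_for_display(reference)  # only this copy is display referred
```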

Don’t give up. As you can already see there are more and more folks able to explain things. Soon you will be one of them.

Colour management literally is the management of your colour. Colour is pixels, and the pixels are your work.

Don’t let the foolish convince you otherwise, and most importantly, keep asking questions.

OK, so say I want to use a RAW photograph as a texture, so it has accurate colors in it. Say it’s in DNG format and an 18% grey card is visible in it. How would I go about it?

It is currently 16-bit and takes up a lot of memory, and I would like to render on a GPU, so I would also like to convert it to an 8-bit image but keep the color and tone ratios accurate. What then?

What I’ve got so far: I use Photoshop, and in Color Settings it is possible to create a custom RGB ICC profile, so I can convert my camera’s color space to linear RGB. I use a gamma of 1 (so linear, right?), and the white point is the same as the temperature of the lighting used when I took the photo, so that all seems logical. I am not sure what the primaries should be, as I suspect I would need to use the same ones Blender uses in its working space:

OK, so say I managed to guess the primaries correctly and save the custom profile and then use Camera RAW utility to output the Raw image to this profile in TIFF format correctly. I load it as a texture.

Now should I use color or non-color data? I suspect I don’t want any additional color management, and the values in the file are linear, so I guess non-color data.

Then, I need to shift the values towards the correct brightness, but I have no idea how Curves or any other color correction nodes work in Blender, so I think: wait a moment, I know what the Math node does, so I decide to separate the RGB and add or subtract the same number from the R, G and B channels until it matches… What should it match? I think if I set it all up to go through emission shaders with a strength of 1, one emitting RGB(0.18, 0.18, 0.18) should match my grey card. Now I am ready to use it, am I not?

So if I want it in 8 bits, I do the same, only outputting 8 bits with Camera RAW. Does any of this make sense?

dcraw -T -4 should deliver a linear 16 bit TIFF. You will need to linearly scale it to align to your current middle grey. Eight bit data is not enough to linearise colour. As such, you must use 16 bit imagery at a bare minimum, encoding depending.

By default, dcraw and other tools that are based on some of the dcraw data will output REC.709 based primaries. These are the same primaries that Blender’s default configuration and Filmic use, so no further adjustments would be required.

Remember that colour versus non-colour refers to the data. If it is an emission such as a background plate, it is a colour and needs to go through the transform chain.

Linear scale or CDL slope will, with no other adjustment, be an exposure shift. Using nodes it would be multiplying by 2^NumberOfStops.
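
A sketch of that linear scale expressed as stops, assuming NumPy and a hypothetical grey-card pixel sampled from the linear 16-bit TIFF: the number of stops between the measured value and 0.18 gives the multiplier 2^NumberOfStops.

```python
import numpy as np

# Hypothetical grey-card value sampled from the linear TIFF (dcraw -T -4 output),
# after normalising the 16-bit integers to 0..1.
measured_grey = 0.36

# How many stops away from scene-linear middle grey (0.18) are we?
stops = np.log2(0.18 / measured_grey)  # -1.0 -> one stop too bright

# A linear scale (or CDL slope) of 2**stops is a pure exposure shift.
exposure_scale = 2.0 ** stops          # 0.5
print(stops, exposure_scale)
```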

I appreciate the answer very much but I am sorry I still do not understand. I imagine it must be frustrating to explain every simple thing, but could you please expand on what exactly you mean by ‘dcraw -T -4’? I am not familiar with the term. Could you suggest any possible software tools or workflows that can do that in simpler terms?

I want to use the textures for say diffuse color in a regular diffuse-glossy mix with Fresnel, not as a background image, so those are reflectance values. In scene space it is not color, is it? Since it’s already linear, Blender should not try to convert it to linear one more time, therefore, non-color data. Isn’t this correct?

Never. I would prefer my investment of time and energy over the past fifteen years save someone else that time and pain.

dcraw is an extremely valuable tool. I would encourage you to learn how to use it. Most commercial software uses some component of it somewhere.

It should be as straightforward as “dcraw -T -4 FILENAME”

See https://www.cybercom.net/~dcoffin/dcraw/ for more information.

This is an interesting question. I believe the following is accurate.

Albedo is a measurement of reflectance based on the three primary lights hitting the shader. That is, 0.5 red is 50% of whatever colour of red light is being cast upon it, relative to the reference space lights. This means that the ratio is in fact linear, but it is also relative to the reference colours. So it would need to be transformed according to the reference.

Remember that colour ratios can be scene referred linear and data can be linear. They are two completely different things. Alpha is a linear and normalized ratio for example. Albedo is also a linear value, but it relates to colour data.

Even though albedo seems like a pure linear data value, it isn’t. It is a ratio of reflectance relative to the reference space reddish, greenish, and bluish lights projected. In this sense, the albedo is bound to the reference primaries, and should be transformed to the reference ratios. This is 1:1 when your source is REC.709 based and the reference is also REC.709, but imagine if our reference were to use different coloured lights? The albedo ratios would then need to be transformed relative to the reference. Hence, setting as linear colour will yield correct results.

TL;DR: In most instances where the shader data is bound to a colour, it likely should be a colour value, not non-colour data.
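
As a sketch of what “transformed according to the reference” can look like in practice, assuming the third-party colour-science Python package is available and picking ACEScg as a hypothetical reference with differently coloured lights:

```python
import numpy as np
import colour  # the "colour-science" package

# An albedo triplet authored against REC.709 primaries (linear ratios).
albedo_709 = np.array([0.50, 0.30, 0.20])

# Re-expressing the same reflectance relative to a different set of reference
# lights (ACEScg / AP1 primaries here) requires a colour space transform.
albedo_ap1 = colour.RGB_to_RGB(
    albedo_709,
    colour.RGB_COLOURSPACES["ITU-R BT.709"],
    colour.RGB_COLOURSPACES["ACEScg"],
)

print(albedo_ap1)  # same reflectance, different ratios for different primaries
```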

I want to use the textures for say diffuse color in a regular diffuse-glossy mix with Fresnel, not as a background image, so those are reflectance values. In scene space it is not color, is it? Since it’s already linear, Blender should not try to convert it to linear one more time, therefore, non-color data. Isn’t this correct?

Linear color data is also not converted; you don’t need to set it to non-color data. I think the problem with albedo values and their relation to a photographed image can, in a simplified way, be illustrated by this schema:

[Attachment 486484: schema showing how the Combined result is built from the diffuse/glossy colour components and the direct/indirect light components]

The Combined result is what you get as a photograph, but the albedo value is something similar to the diffuse and glossy color components. As you can see, there is a lot more going on, and without having very good control over what the other components are, it is almost impossible to deduce the actual albedo values from a combined image. The reference space lighting troy_s wrote about corresponds, in this drawing, to the direct and indirect light components. When we multiply the albedo with the incident light, we get the reflected light.

And on top of that, you have to get to the base Combined result first from your actual image file, which is another set of problems. Think of the Combined result as light entering your camera. But to get the actual image, this color data goes through the lens, through the sensor with its technical limitations, through an analog-to-digital converter and digital data processing, and then it is written to some file, from which we try to undo all those steps to get back to the original Combined, and from there to our albedo component.
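
For reference, the relationship in that schema can be written out roughly as follows; this is a simplified sketch with hypothetical values, following only the diffuse/glossy split in the drawing.

```python
# Simplified: Combined ≈ sum over components of colour * (direct + indirect light)
diffuse_color, diffuse_direct, diffuse_indirect = 0.4, 1.2, 0.3  # albedo-like colour
glossy_color,  glossy_direct,  glossy_indirect  = 0.9, 0.5, 0.1

combined = (diffuse_color * (diffuse_direct + diffuse_indirect) +
            glossy_color  * (glossy_direct  + glossy_indirect))

print(combined)  # 1.14 -- recovering diffuse_color from this single number alone
                 # is impossible without knowing the other components
```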

Regarding the difference between scene referred and display referred, I have found helpful the analogy of a real scene (looking out of a window, for example) and a video camera capturing that scene. What you see on your video camera screen is a limited view of the actual scene (spatially, in dynamic range and in color). By changing the settings of your camera you can change the characteristics of the image, but you will never be able to capture the whole scene in all its fidelity due to the technical limitations of your camera. Your camera screen can never be as bright as the sun, for example, but by using filters you can capture an image of the sun without clipping; only the rest of the scene will be very dark in that case. What the Filmic view transform is, is a very well prepared set of camera settings to capture the scene in the most pleasing and meaningful way. But it will not and cannot be a full representation of the scene, due to the limits of your viewing device (camera screen, monitor, etc.).

In the analogy above, when we speak about the brightness levels in real scene, we speak about scene referred values. And when we speak about color values on camera screen, we speak about display referred values.

It actually is potentially transformed. This is the difference between a colour and a non-colour data value; both may remain in a linear relationship, but only the non-colour would remain outside of OCIO entirely untransformed.

First of all, thank you, Troy_s! Your help has been extremely valuable. I think it is saving me a few years of misery and mistakes already!

I also thought linear color data is likely to be somehow transformed, but I had no idea how, or whether that transformation would in fact be correct. It makes sense that when non-color data is set, it no longer has anything to do with any color management, and it is clear to me now that this is not the desired result. However, I am a bit cautious about trusting any software to just do everything correctly regarding color management, so I want to know for sure and will test it to the best of my abilities before basing my entire workflow on it.

I have had terrible experiences with well known and popular software and color management. For example, as far as I now know, the Chrome web browser, the most popular browser at the moment, always assumes that your monitor’s color space is sRGB and even ignores the Windows settings, yet it is considered a ‘color managed’ browser. That is far from the truth if you use a profiled monitor. There is no documentation about this as far as I know. Things like that make me really paranoid about color management.

Keep it as basic as possible in your mind and it will help to get things clear.

At the risk of oversimplifying, RGB means “three lights”. What we don’t know from an RGB triplet without further information is what colours those lights are. We also don’t know the intensity mapping of the values. When you hear linear / nonlinear / log / “sRGB” in many areas, that refers to the intensity mapping. That is only one half of colour.

The other half is the “what colour are the three lights?” question. So we can say that something is scene referred linear (explaining what the ratios in the values mean) and still not have enough information about the RGB values.

Once we know what the colours are in one source context, we can re-mix the colours of another context to arrive at precisely the same colour. An analogy might be something like a guitar where we might know what fret to play but without knowing the tuning, we can’t play the exact note. If we know the source guitar tuning and the destination guitar, we can change the positions of the notes and get precisely the same song.

In RGB terms, every colour space is “tuned” differently.

Good! You should be.

The only remedy here is to educate yourself well enough to know how to test things and identify issues. You can end up in a worse spot trusting software and some alchemy of knob twiddling.

It isn’t as challenging as some might lead you to believe.

Also remember, colour adjustments for colour critical work, such as display calibration and profiling, are only a small fraction of what falls under the umbrella of colour management. The larger chunk of concepts has to do with much of what this thread discusses.

OK. So are Blender’s internal RGB values used for calculations (I assume it uses RGB, not single wavelengths like Maxwell) just numbers that do not represent anything until we decide to use some sort of color system, like for example the Filmic Log Encoding Base? Only then do we assign meaning to the values. So out of curiosity, what would we assume the three primary colors to be in Filmic? Electromagnetic radiation of wavelengths 700nm, 525nm, 450nm, or the CIE RGB primaries of 700nm, 546.1nm, 435.8nm, or something else?

Or am I wrong, and in order for the calculations Cycles does to be correct, is it necessary to assume the RGB values it uses do represent some sort of real physical lights before color management, and then convert that color space into whatever color space we want to output?

My brain hurts :smiley:


Robwesseling, I don’t think you need to worry about the white being 1.0 so much. What is white? Say I have a sheet of paper in my office and I light it with 4000 lux of light from a few really, really strong lights. Is it white now? Yes, most definitely, it is! It’s blinding my colleagues, as the light on our desks is about 200 lux. But I take the same piece of paper outside to the balcony on a sunny day, and it is now getting 125 000 lux of light. What was “1.0” white inside is now over 30 times “whiter”! There is no white in the scene. It doesn’t matter that much. When you take a photograph, you assign the lightest visible bit to white and the lowest lit bit to black - that’s all.

The result, with white being lower than 1, is meant to be color graded. You can do that in Blender, or using something else, but the idea is to map those values to your display, or to whatever medium you are going to use the renders on. So you (and me as well) should try to learn more sophisticated and more correct techniques of color grading, like with the ASC-CDL. But until then, you can take it to Photoshop and edit it the same way you would edit any photograph that comes from your digital camera, only now, when you adjust the white and black and contrast and everything to your artistic liking, you get a nice and natural gradient of tones across the dynamic range of the picture, as opposed to burned out whites, ugly looking shadows, incorrectly saturated highlights and the strange look you get with the default, where you feel there is something wrong with it but cannot say what it is.