Blender Filmic

How would I inform Blender about what color space the file I want to use as a texture is then?

If it is colour data and linear, it must be set to linear,

How do I do that if it’s a texture? I see no functionality in Blender for it.

Oh, if I set a color with the picker, it acts as if the color is linear when I compare it to a texture. If I set RGB(0.5, 0.5, 0.5), it differs from RGB(127, 127, 127) in a texture set to color data. That’s bothering me a bit.
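
A quick check with the standard sRGB decode formula (nothing Blender-specific) shows where the difference comes from: the picker value is already linear, while 127/255 in a colour-data texture gets decoded through the sRGB curve first.

```python
# Standard sRGB decode (EOTF) applied per channel.
def srgb_to_linear(v):
    """Convert a normalized sRGB value (0..1) to linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(127 / 255))  # ~0.212, what a colour-data texture yields
print(srgb_to_linear(188 / 255))  # ~0.50, so 188 is the 8-bit value matching linear 0.5
```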

I guess it’s safe to convert any 8-bit image to sRGB with external ICC-aware software.

We need OCIO color space profiles for this, but at the moment they are rather scarce in Blender. One way is to copy the colorspace descriptions from some other software’s OCIO config; Nuke, for example, has a pretty good set with a lot of different gamut transform matrices and transfer function descriptions in its ACES v1.x configs.

To set the color space for a texture file you would choose the appropriate profile for the Image Texture node. The confusing part is that you can’t set it in the node block itself: you must select the node and then set it in the properties panel (N key), where there is a Color Space dropdown. Why it works like this is beyond me. For environment textures you can’t set the color space at all (nice and logical, huh?), so you must do the transform beforehand if it is not linear data with rec709/sRGB gamut. And all procedural textures are generated in the working space, which at the moment is a linear space with rec709/sRGB primaries. From this fact it should also be apparent why the linear gradient texture (which literally generates a linear value ramp) is not perceptually linear.
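
If you prefer scripting it, the same dropdown is exposed in Python on the image datablock. A minimal sketch (the available space names depend on the OCIO config in use, and the path is just a placeholder):

```python
import bpy

# Load an image and set its colour space from Python (same as the N-panel dropdown).
img = bpy.data.images.load("/path/to/texture.png")  # placeholder path
img.colorspace_settings.name = 'sRGB'        # colour textures stored as sRGB
# img.colorspace_settings.name = 'Non-Color' # data maps (normals, roughness, ...)
# img.colorspace_settings.name = 'Linear'    # already-linear colour data
```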

I would never blindly trust color space settings read from file metadata, because that is a recipe for crap. Explicitly setting the right profile by hand is the only trustworthy way.

What I was looking for was a way to match a photograph to reflectance values according to a palette that is in the photo and has known values.

So in order to do that I use a linear color space (output the raw file to linear colors via the dcraw utility or a custom ICC profile in Photoshop) and use Photoshop’s levels to shift and scale the value range according to the palette’s known values, then convert the colors to sRGB, which is correctly recognized by Blender. Now I am ready to use the textures and start guessing the surface physical properties, trusting that the reflectance values are as accurate as I can get them to be. Does anybody see any fundamentally wrong concepts here?

I ask you experts this:
Would this be too ignorant a cheat, or totally wrong, as far as adapting existing textures for albedo goes?
If it could be an “acceptable” solution, does it make sense to convert 8-bit textures to 16-bit TIFF with a gamma 1.0 profile in PS, for example, before doing it? I mean, it’s not as if more data appears from nowhere as a miracle.

Thanks.

As far as I understand, the texture is going to be mapped to a much wider range than 8 bits per channel when rendering anyway, so this should not accomplish much more than using extra memory for no benefit. I would like to use 8-bit textures to save memory after I adjust the brightness in 16 bits; I think that should be enough, and 16 777 216 colors seem to be enough for me. It’s just that while working and converting between color spaces, some parts of the dynamic range get compressed or expanded, so quality will definitely be lost there if the texture is 8 bits during that process, as far as I understand.
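
Here is a rough numpy sketch of that last point, just to illustrate the quantisation argument: re-scaling while the data stays stored in 8 bits merges distinct levels, while the same round trip carried in float (or 16 bits) does not. The numbers are made up for illustration.

```python
import numpy as np

levels = np.arange(256, dtype=np.float64) / 255.0         # every 8-bit level

# Compress the range while staying in 8 bits, then expand again:
compressed_8bit = np.round(levels * 0.25 * 255) / 255      # re-quantised to 8 bits
restored_8bit = compressed_8bit * 4.0
print(len(np.unique(np.round(restored_8bit * 255))))       # ~65 distinct levels left

# The same round trip carried in float the whole way keeps all 256 levels:
restored_float = (levels * 0.25) * 4.0
print(len(np.unique(np.round(restored_float * 255))))      # 256
```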

I just noticed that. :smiley: Thank you! I never thought to check there for these settings, I believed they do not exist.

Be careful with this. If the references aren’t identical with identical assumptions, the transforms are arbitrary.

Also, sad fact: Cycles utterly ignores all colour space transforms. This should be a high priority to fix.

Lukas has a patch, but more pressure is required.

Yep, I wouldn’t start messing around with it after reading through the patch thread. The whole color management seems like a mess, but I understand there would be lots of problems from opening that can of worms. I’m not familiar with the Blender source, but it seems it needs a major rethinking of the whole color data handling logic in a unified way. Sacrificing memory might be hard, but because all this byte texture stuff, the different byte buffers for painting etc. are separate and, CM-wise, totally unconnected pieces, the only logical solution seems to be to unify them all in a properly color-managed half-float land, as discussed in the patch thread.

Certainly no expert here.

I believe the problem of converting a monochromatic albedo to a colour can be solved with basic algebra.

If we assume we know the primaries of the reference space lights, the target albedo, and the target colour as chosen via a colour picker, we should be able to solve the problem.

We know that Filmic and default Blender use the REC.709 primaries, whose normalized luminance weights are roughly 0.2126 for red, 0.7152 for green, and 0.0722 for blue. If we use a colour picker to reveal linearized and normalized RGB values, the formula for luminance is these weights multiplied by the respective channels and summed. This will yield a greyscale mapping of the colour in question.

So if we pick a colour using the colour picker in Blender, and examine the RGB values which are linearized, we can probably solve for albedo by finding a simple linear scalar value.

Multiplier * ((0.2126 * RedTarget) + (0.7152 * GreenTarget) + (0.0722 * BlueTarget)) = AlbedoTarget

Solving for the Multiplier, we get:

Multiplier = AlbedoTarget / ((0.2126 * RedTarget) + (0.7152 * GreenTarget) + (0.0722 * BlueTarget))

Assuming you have found a rough albedo value online of your target material and have selected a colour via the picker’s RGB values, you should be able to multiply the red, green, and blue channels by the multiplier and get a scaled albedo colour ratio. You can test this by multiplying the resultant red, green, and blue values by the aforementioned luminance weights and summing them. The result should be your chosen albedo.
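
A minimal Python sketch of the above, with made-up picker values and a made-up target albedo, just to show the bookkeeping:

```python
# REC.709 luminance weights for linearized, normalized RGB.
REC709_WEIGHTS = (0.2126, 0.7152, 0.0722)

def albedo_multiplier(rgb, target_albedo):
    """Scalar that scales rgb so its REC.709 luminance equals target_albedo."""
    luminance = sum(w * c for w, c in zip(REC709_WEIGHTS, rgb))
    return target_albedo / luminance

picked = (0.45, 0.30, 0.20)   # hypothetical linearized picker values
target = 0.18                 # hypothetical albedo found in a table
m = albedo_multiplier(picked, target)
scaled = tuple(m * c for c in picked)

# Sanity check: luminance of the scaled colour should come back as the target.
print(sum(w * c for w, c in zip(REC709_WEIGHTS, scaled)))  # ~0.18
```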

Note this assumes a 6500 K illuminant for the monochromatic albedo value and ignores all the spectral magic that may happen, as hinted at in JTheNinja’s post. We could of course use different weights to calculate for a different illuminant if one were so inclined.

If using a texture, the RGB to BW node should use the correct luminance weights if you want to automate the above formula for an image texture. Assuming it is a normalized linear image (as in, via dcraw -T -4), you could do the above formula with a node cluster and get a reasonable output, assuming I haven’t entirely pooped the sheets on the math.
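
And if you’d rather do it outside the node editor, a hedged numpy sketch of the same idea applied to a whole linear image, using the average of a reference patch (the loader and patch coordinates are placeholders):

```python
import numpy as np

REC709_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def scale_to_albedo(img, patch, known_albedo):
    """img: HxWx3 float linear RGB; patch: a slice of img covering the reference
    chart patch; known_albedo: its published reflectance in 0..1."""
    patch_rgb = patch.reshape(-1, 3).mean(axis=0)     # average colour of the patch
    luminance = float(patch_rgb @ REC709_WEIGHTS)
    return img * (known_albedo / luminance)           # one multiplier, whole image

# Hypothetical usage, with made-up patch coordinates and a made-up loader:
# img = load_linear_tiff("photo.tiff")
# corrected = scale_to_albedo(img, img[100:120, 200:220], 0.18)
```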

I am not sure I understand this completely, but if we use a single value to calculate a multiplier for a texture, then that single value would be correct. In a texture, however, we are dealing with a range of values, so this would not take into account the position of the black point in the range, and values around the tone in question would most likely be incorrect. Am I making any sense or am I missing something?

It was more intended as per your inference: for locating a colour ratio within an image. Of course a photo isn’t terribly wonderful for deducing albedos. In theory, in addition to the math above, it should be possible to scale the entire linear range down to the minimum and maximum albedo, then offset.
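
As a sketch of that scale-and-offset idea, assuming you have guessed a minimum and maximum albedo for the material:

```python
import numpy as np

def remap_to_albedo_range(img, albedo_min, albedo_max):
    """Linearly scale and offset a linear float image into [albedo_min, albedo_max]."""
    lo, hi = img.min(), img.max()
    return albedo_min + (img - lo) / (hi - lo) * (albedo_max - albedo_min)

# e.g. remap_to_albedo_range(img, 0.05, 0.35) for a hypothetical brick texture
```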

ACES are 4 very interesting letters to me now. I wonder how I would get my cheap DSLR’s RAW photos into the ACES color space. I might get an IT8 calibration target; that might be useful. Does anybody here know anything about moving DSLR camera RAW to the ACES color space?

The easiest way is to use dcraw to convert your camera raw image to XYZ and use that to convert to ACES (using Resolve etc.). If using Resolve, you must set your project to use an ACES color-managed setup, not the default YRGB. AFAIK Resolve does not read camera raw images from DSLRs directly, otherwise it would be possible to bring them in without the intermediate step. From there you can export the image as an EXR with the ACES gamut.
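
For the dcraw step, something like this should work. A sketch: -T writes a TIFF, -4 gives linear 16-bit output, and -o 5 selects XYZ output in recent dcraw builds (check your version’s manual); the filename is just an example.

```python
import subprocess

# -T: write TIFF, -4: linear 16-bit, -o 5: XYZ output colourspace (recent dcraw).
subprocess.run(["dcraw", "-T", "-4", "-o", "5", "IMG_0001.CR2"], check=True)
# Produces IMG_0001.tiff with linear 16-bit XYZ data, ready to bring into Resolve.
```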

If you want to calibrate the camera, or produce an IDCT (input device calibration transform), you need to do lots of measurements and whatnot, and this is not a trivial thing. I believe IDCTs are very rarely used; most people are happy enough with the camera-model-specific IDT. The difference between them is that the IDT is an idealized transform for an “average” camera, so to say (an average Canon 5D MkII or Alexa XT or whatnot), and the IDCT is a transform that brings the one and only exact camera you are shooting with into that ideal camera’s space. The order of operations is: image data > IDCT > IDT > (here we are in ACES gamut) > RRT > ODT > ODCT.
The RRT is the reference rendering transform, which brings images from ACES land to an idealized display-referred land (a view transform, basically), and the ODT is the actual output device transform, which makes that image edible for a concrete monitor, projector etc. And as with the IDT, there can be an ODCT, which calibrates one exact device.

Keep in mind that ACES land has two different gamuts, ACES primaries 0 and ACES primaries 1. AP0 extends way beyond real colors and can produce imaginary colors. AP1 is just a wide working space which does not extend beyond visible colors; linear color data with AP1 primaries is marked as ACEScg. The work is usually done in AP1, and AP0 is used for storage and transport. An exr should never contain AP1 gamut data (this is clearly expressed in the specs), but in practice the rule is sometimes ignored. To pull it together, the data moves something like this: ACES AP0 > AP1 > do work here (grade etc.) > AP0 > save to file. This is when we speak about linear color data, but the ACES specs describe log transfer functions as well, and there are two flavors of them, ACEScc and ACEScct, where the difference is in the handling of the toe part of the log curve. Log space is used more by colorists, as color controls respond a bit more logically there than in linear space.
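
If you want to play with the AP0/AP1 round trip outside of Resolve, here is a small sketch using the colour-science Python package (assuming its bundled colourspace definitions and names):

```python
import numpy as np
import colour  # the colour-science package

ap0 = colour.RGB_COLOURSPACES['ACES2065-1']  # linear, AP0 primaries
ap1 = colour.RGB_COLOURSPACES['ACEScg']      # linear, AP1 primaries

rgb_ap0 = np.array([0.18, 0.18, 0.18])
rgb_ap1 = colour.RGB_to_RGB(rgb_ap0, ap0, ap1)   # AP0 -> AP1, do the work here
back_ap0 = colour.RGB_to_RGB(rgb_ap1, ap1, ap0)  # AP1 -> AP0 for storage/transport
print(rgb_ap1, back_ap0)
```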

I am confused. Isn’t Y in CIE XYZ the same as reflectance/LRV/albedo? It is a scene-referred value, not a device-referred one, so this data is not in my camera’s raw file because the camera uses a specific exposure/aperture. The way I am thinking, I need an IDCT to convert the values to XYZ. I was just thinking of looking into ACES, as I might find more information on doing it that way.

Y in XYZ is the luminance component, but it is just as far from albedo as a luminance calculated from an sRGB image. The XYZ space is not magic; there is nothing special about it beyond being a widely used reference space with some actual measurements as the basis for its primary components. You can easily find the 3-by-31 or similar matrices for calculating XYZ values from spectral data, but the spectrum of the light entering your camera is not representative of albedo; it is the product of a lot of components. A color gamut in itself is not scene-referred or display-referred: you can transform an sRGB image into XYZ and it makes no qualitative difference.
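
To make the “nothing special about Y” point concrete: the middle row of the standard linear sRGB/REC.709 to XYZ matrix is exactly the familiar luminance weights. A tiny numpy check:

```python
import numpy as np

# Linear sRGB/REC.709 (D65) to XYZ; the middle row is the Y (luminance) row.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],   # same weights as the REC.709 luminance formula
    [0.0193, 0.1192, 0.9505],
])

rgb_linear = np.array([0.45, 0.30, 0.20])
X, Y, Z = SRGB_TO_XYZ @ rgb_linear
print(Y, np.dot([0.2126, 0.7152, 0.0722], rgb_linear))  # identical values
```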

You are confusing what you want to be in your camera file with what is actually there. Your camera raw file contains a set of numbers that have no objective meaning until you start attaching that meaning to them. If you know how the amount of light falling on the sensor and the produced value are related, you can deduce the light flux. If you know the aperture, you can deduce the actual absolute light levels in the scene. If you don’t know these, you simply have a set of numbers which, you can hope, have at least some logical relation to one another and to the actual scene.

To get from camera raw data to some more easily used RGB space, first a 3x3 camera matrix is applied that takes the values to XYZ space (here dcraw helps you). The camera matrix has little relation to what the filters on the CMOS sensor photosites actually are; the matrix is calibrated to produce the best and most accurate color reproduction the engineers have been able to suck out of that sensor, but it can easily have imaginary primaries, for example. From the XYZ intermediate space, the second step is to transform the data to some RGB gamut like ACES, sRGB, Adobe Wide Gamut RGB etc. These transforms don’t care whether you think of your data as being scene-referred, display-referred etc. Actually, your data is from the start sensor-referred, meaning that the sensor’s noise floor and saturation levels already “window” your data.

Now, to put ACES into the previous context, the ACES AP0 gamut is just another set of RGB primaries with no magical properties. Actually it makes no sense why it was conceived in the first place, as XYZ could have been used just as well. An IDT brings camera raw data to the ACES gamut. It does the same as raw > XYZ and XYZ > ACES combined, but as 3x3 matrices can be concatenated into a single matrix, the IDT just merges them into one transform. An IDCT is a separate 3x3 matrix, or a more complex transfer function, whose purpose is to even out device-to-device variation before applying the IDT. It will not get you any closer to albedos if the data itself contains ambiguity (see the schema I attached some posts ago for the relation between albedo and the light entering your camera).
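
The concatenation point as a short numpy sketch, with identity matrices standing in as placeholders for the real camera and gamut matrices:

```python
import numpy as np

CAMERA_TO_XYZ = np.eye(3)  # placeholder for the per-model camera matrix
XYZ_TO_AP0 = np.eye(3)     # placeholder for the XYZ -> ACES AP0 matrix

raw_rgb = np.array([0.2, 0.4, 0.1])
step_by_step = XYZ_TO_AP0 @ (CAMERA_TO_XYZ @ raw_rgb)  # apply them one by one

idt = XYZ_TO_AP0 @ CAMERA_TO_XYZ                       # or bake them into one matrix
print(np.allclose(step_by_step, idt @ raw_rgb))        # True: same result
```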

Btw, a fun piece of info I once stumbled on that is not very intuitive: almost all digital color cameras are able to capture the full visible spectrum. A camera sensor is not “gamut clamped”; the clamping is applied when moving data from sensor space to some RGB space where we want real colors as primaries, and as the whole visible spectrum does not fit into a three-component RGB space whose primaries are real colors, we have to throw some stuff overboard. The main reason why camera data is usually not handled in a super-wide space is color accuracy: very saturated colors are just off and distorted. It could be remedied by more complicated camera transforms, but usually only a 3x3 matrix is used, and that is constructed to respect the most common saturation levels.

How would I take a material and measure what percentage of light it reflects? Can I do it with a spectrophotometer?

Isn’t ACES supposed to be a universal format that would in theory contain the same color information regardless of the camera used? Aren’t IDCTs supposed to convert sensor-referred data to something as close to scene-referred as possible?