Blender Filmic

As far as I understand, the texture is going to be mapped to a much wider range than 8 bits per channel when rendering anyway, so this should not accomplish much beyond using more memory for no benefit. I would like to use 8-bit textures to save memory after I adjust the brightness in 16 bits. I think that should be enough; 16 777 216 colors seem to be enough for me. It's just that while working and converting between color spaces, some parts of the dynamic range get compressed or expanded, so quality will definitely be lost there if the texture is 8 bits during that process, as far as I understand.

I just noticed that. :smiley: Thank you! I never thought to check there for these settings; I believed they did not exist.

Be careful with this. If the references aren’t identical with identical assumptions, the transforms are arbitrary.

Also, sad fact: Cycles utterly ignores all colour space transforms. This should be a high priority to fix.

Lukas has a patch, but more pressure is required.

Yep, I wouldn't start messing around with it after reading through the patch thread. The whole color management seems like a mess, but I understand there will be lots of problems from opening that can of worms. I'm not familiar with the Blender source, but it seems that it needs a major rethinking of the whole color data handling logic in a unified way. Sacrificing memory might be hard, but because all this byte texture stuff, the different byte buffers for painting etc. are separate and, CM-wise, totally unconnected pieces, the only logical solution seems to be to unify them all in a properly color-managed half-float land, as discussed in the patch thread.


Certainly no expert here.

I believe the subject of a monochromatic albedo converted to a colour can be solved using generic basic algebra.

If we assume we know the primaries of the reference space lights, the target albedo, and the target colour as chosen via a colour picker, we should be able to solve the problem.

We know that Filmic and default Blender use the REC.709 primaries, which are normalized luminance weighted roughly 0.2126 red, 0.7152 green, and 0.0722 blue. If we use a colour picker to reveal linearized and normalized RGB values, the formula for luminance would be these weights multiplied by the respective channels. This will yield a grey scale mapping of the colour in question.

So if we pick a colour using the colour picker in Blender, and examine the RGB values which are linearized, we can probably solve for albedo by finding a simple linear scalar value.

Multiplier * ((0.2126 * RedTarget) + (0.7152 * GreenTarget) + (0.0722 * BlueTarget)) = AlbedoTarget

Solving for the Multiplier, we get:

Multiplier = AlbedoTarget / ((0.2126 * RedTarget) + (0.7152 * GreenTarget) + (0.0722 * BlueTarget))

Assuming you have found a rough albedo value online of your target material and have selected a colour via the picker’s RGB values, you should be able to multiply the red, green, and blue channels by the multiplier and get a scaled albedo colour ratio. You can test this by multiplying the resultant red, green, and blue values by the aforementioned luminance weights and summing them. The result should be your chosen albedo.

Note this assumes a 6500 K illuminant used for the albedo monochromatic value and ignores all the spectral magic that may happen, as hinted at in JTheNinja's post. We could of course use different weights to calculate for a different illuminant if one were so inclined.

If using a texture, the RGB to BW node should use the correct luminance weights if you wanted to automate the above formula for an image texture. Assuming it is a normalized linear image (as in via dcraw -T -4) you could do the above formula using a node cluster and get a reasonable output assuming I haven’t entirely pooped the sheets on the math.
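
Here's a rough Python sketch of the above, nothing Blender specific; the picked colour and the 0.18 target albedo are made-up example numbers:

```python
# Scale a picked colour so its REC.709 luminance matches a target albedo.
LUMA_WEIGHTS = (0.2126, 0.7152, 0.0722)  # REC.709 luminance weights

def scale_to_albedo(rgb, target_albedo):
    """Scale linearized, normalized RGB so its luminance equals target_albedo."""
    luminance = sum(w * c for w, c in zip(LUMA_WEIGHTS, rgb))
    multiplier = target_albedo / luminance
    return tuple(multiplier * c for c in rgb)

# Example: a colour picked in Blender and a rough albedo value found online.
picked = (0.40, 0.25, 0.10)
scaled = scale_to_albedo(picked, 0.18)

# Sanity check: the luminance of the result should be the chosen albedo.
print(sum(w * c for w, c in zip(LUMA_WEIGHTS, scaled)))  # ~0.18
```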

I am not sure I understand this completely, but if we use a single value to calculate a multiplier for a texture, then that single value would be correct; however, in a texture we are dealing with a range of values, so this would not take into account the position of the black point in the range, and values around the tone in question would most likely be incorrect. Am I making any sense, or am I missing something?

It was more intended as per your inference: for locating a colour ratio within an image. Of course a photo isn't terribly wonderful for deducing albedos. In theory, in addition to the math above, it should be possible to scale the entire linear range down to the minimum and maximum albedo, then offset.
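
Roughly something along these lines (a sketch only; the black/white points and albedo bounds are placeholder numbers one would estimate for the material):

```python
def remap_to_albedo_range(value, black, white, albedo_min, albedo_max):
    """Linearly remap a scene-linear value from [black, white]
    to a plausible albedo range [albedo_min, albedo_max]."""
    t = (value - black) / (white - black)
    return albedo_min + t * (albedo_max - albedo_min)

# Example with made-up numbers: darkest patch pinned to 0.04, brightest to 0.85.
print(remap_to_albedo_range(0.5, black=0.02, white=0.9, albedo_min=0.04, albedo_max=0.85))
```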

ACES are 4 very interesting letters to me now. I wonder how I would get my cheap DSLR's RAW photos into the ACES color space. I might get an IT8 calibration target; that might be useful. Does anybody here know anything about moving DSLR camera RAW to the ACES color space?

The easiest way is to use dcraw to convert your camera raw image to XYZ and use that to convert to ACES (using Resolve etc.). If using Resolve, you must set your project to use an ACES color managed system, not the default YRGB. AFAIK Resolve does not directly read camera raw images from DSLRs, otherwise it would be possible to bring them in directly. From there you can export the image as an EXR in the ACES gamut.
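
For what it's worth, once you have a linear XYZ image out of dcraw, the XYZ to AP0 step is just a 3x3 matrix. A minimal numpy sketch, with the matrix values as I remember them from the ACES specs (double-check against the official transforms; chromatic adaptation to the ACES white point is ignored here):

```python
import numpy as np

# ACES2065-1 (AP0) to CIE XYZ matrix from the ACES documentation (from memory).
AP0_TO_XYZ = np.array([
    [0.9525523959, 0.0000000000,  0.0000936786],
    [0.3439664498, 0.7281660966, -0.0721325464],
    [0.0000000000, 0.0000000000,  1.0088251844],
])
XYZ_TO_AP0 = np.linalg.inv(AP0_TO_XYZ)

def xyz_to_ap0(image):
    """Convert an (..., 3) array of linear XYZ pixels to ACES AP0 primaries."""
    return image @ XYZ_TO_AP0.T
```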

If you want to calibrate the camera, or produce an IDCT (input device calibration transform), you need to do lots of measurements and whatnot, and this is not a trivial thing. I believe IDCTs are very rarely used; most people are happy enough with a camera-model-specific IDT. The difference between them is that an IDT is an ideal transform from an "average" camera, so to say (an average Canon 5dMkII or AlexaXT or whatnot), and an IDCT is a transform that brings the one and only exact camera you are shooting with to that ideal camera space. The order of operations is: image data > IDCT > IDT > (here we are in ACES gamut) > RRT > ODT > ODCT
The RRT is the reference rendering transform, which brings images from ACES land to an idealized display referred land (a view transform, basically), and the ODT is the actual output device transform, which makes that image edible for a concrete monitor, projector etc. And as with the IDT, there can be an ODCT, which calibrates that one exact output device.
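
Written as a chain of functions, just to make the order of operations explicit (all transforms here are stand-in callables, not real IDTs/ODTs):

```python
def apply_aces_chain(pixels, idct, idt, rrt, odt, odct=None):
    pixels = idct(pixels)      # this exact device -> "ideal" camera of its model
    pixels = idt(pixels)       # ideal camera -> ACES (AP0)
    pixels = rrt(pixels)       # ACES -> idealized display referred (view transform)
    pixels = odt(pixels)       # idealized display -> a concrete class of device
    if odct is not None:
        pixels = odct(pixels)  # per-device calibration of that exact display
    return pixels
```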

Keep in mind that ACES land has two different gamuts, ACES primaries 0 and ACES primaries 1. AP0 extends way beyond real colors and can produce imaginary colors. AP1 is just a wide working space which does not extend beyond visible colors; linear color data with AP1 primaries is marked as ACEScg. The work is usually done in AP1, and AP0 is used for storage and transport. An ACES exr should never contain AP1 gamut data, as is clearly expressed in the specs, but in practice this rule is sometimes ignored. To pull it together, the data moves something like this: ACES AP0 > AP1 > do work here (grade etc.) > AP0 > save to file. This is when we speak about linear color data. But the ACES specs describe log transfer functions as well, and there are two flavors of them, ACEScc and ACEScct, where the difference is in the handling of the toe part of the log curve. Log space is used more by colorists, as color controls respond a bit more logically there than in linear space.
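
The round trip in numpy terms, with the AP1 > AP0 matrix as I remember it from the ACES reference transforms (verify against the official CTL before trusting the digits):

```python
import numpy as np

AP1_TO_AP0 = np.array([
    [ 0.6954522414, 0.1406786965, 0.1638690622],
    [ 0.0447945634, 0.8596711185, 0.0955343182],
    [-0.0055258826, 0.0040252103, 1.0015006723],
])
AP0_TO_AP1 = np.linalg.inv(AP1_TO_AP0)

def aces_round_trip(pixels_ap0, do_work):
    """AP0 in -> work (grade, comp, render) in AP1/ACEScg -> AP0 out for storage."""
    pixels_ap1 = pixels_ap0 @ AP0_TO_AP1.T
    pixels_ap1 = do_work(pixels_ap1)
    return pixels_ap1 @ AP1_TO_AP0.T
```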

I am confused. Isn't Y in CIE XYZ the same as reflectance/LRV/albedo? It is a scene referred, not a device referred value, so this data is not in my camera's raw file because the camera uses a specific exposure/aperture. So the way I am thinking, I need an IDCT to convert the values to XYZ. I was just thinking of looking into ACES as I might find more information on doing that this way.

Y in XYZ is the luminance component, but it is just as far from albedo as a luminance calculated from an sRGB image. XYZ space is not magic; there is nothing special to it beyond being a widely used reference space and having some actual measurements as the basis for its primary components. You can easily find the 3 by 31 or similar matrices for calculating the XYZ values from spectral data, but the spectrum of the light entering your camera is not representative of albedo, it is a product of a lot of components. A color gamut in itself is not scene referred or display referred; you can transform an sRGB image into XYZ and it does not make a qualitative difference.
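
For reference, that "3 by 31 matrix" step is just the spectrum integrated against the color matching functions; a sketch, assuming you have loaded the CIE 1931 CMFs sampled from 400 to 700 nm in 10 nm steps from a table:

```python
import numpy as np

def spectrum_to_xyz(cmf, spd):
    """cmf: (3, 31) array of the x-bar, y-bar, z-bar color matching functions,
    spd: (31,) measured spectral power distribution.
    Proper scaling/normalization against the illuminant is left out."""
    return cmf @ spd
```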

You are confusing what you want to be in your camera file with what actually is there. Your camera raw file contains a set of numbers that have no objective meaning until you start attaching that meaning to them. If you know how the amount of light falling on the sensor and the produced value are related, you can deduce the light flux. If you know the aperture, you can deduce the actual absolute light levels in the scene. If you don't know these, you simply have a set of numbers which you can hope have at least some logical relation to one another and to the actual scene.

To get from camera raw data to some more easily used RGB space, first a 3x3 camera matrix is applied that takes these values to XYZ space (here dcraw helps you). The camera matrix has little relation to what the filters on the CMOS sensor photosites are; the matrix is calibrated to produce the best and most accurate color reproduction the engineers have been able to suck out of that sensor, but it can easily have imaginary primaries, for example. From the XYZ intermediate space, the second step is to transform the data to some RGB gamut like ACES, sRGB, Adobe Wide Gamut RGB etc. These transforms don't care whether you think of your data as being scene referred, display referred etc. Actually, your data is from the start sensor referred, meaning that the sensor noise floor and saturation levels already "window" your data.

Now, to put ACES into the previous context, the ACES AP0 gamut is just another set of RGB primaries with no magical properties. Actually it makes no sense why it was conceived in the first place, as XYZ could have been used just as well. An IDT brings camera raw data to the ACES gamut. It does the same as raw > XYZ and XYZ > ACES combined, but as 3x3 matrices can be concatenated into a single matrix, the IDT just merges them into one transform. An IDCT is a separate 3x3 matrix, or a more complex transfer function, whose purpose is to even out individual device variations before applying the IDT. It will not get you any closer to albedos if the data itself contains ambiguity (see the schema I attached some posts ago for the relation between albedo and the light entering your camera).
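
A trivial numpy illustration of that concatenation (the camera matrix here is a made-up placeholder; a real one comes from the manufacturer, dcraw, or a calibration):

```python
import numpy as np

CAM_TO_XYZ = np.array([[0.7, 0.2, 0.1],      # placeholder raw -> XYZ camera matrix
                       [0.3, 0.6, 0.1],
                       [0.0, 0.1, 0.9]])
XYZ_TO_AP0 = np.linalg.inv(np.array([[0.9525523959, 0.0,          0.0000936786],
                                     [0.3439664498, 0.7281660966, -0.0721325464],
                                     [0.0,          0.0,           1.0088251844]]))

# Two 3x3 matrices collapse into one; an IDT essentially ships this combined
# matrix (plus a transfer curve where needed).
IDT = XYZ_TO_AP0 @ CAM_TO_XYZ

def raw_to_aces(pixels_raw):
    return pixels_raw @ IDT.T
```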

Btw, a fun piece of info I once stumbled on that is not very intuitive: almost all digital color cameras are able to capture the full visible spectrum. A camera sensor is not "gamut clamped"; the clamping is applied when moving data from sensor space to some RGB space where we want to have real colors as primaries, and as the whole visible spectrum does not fit into a three-component RGB space whose primaries are real colors, we have to throw some stuff overboard. The main reason why camera data is usually not handled in a superwide space is color accuracy. Very saturated colors are just off and distorted. It could be remedied by more complicated camera transforms, but usually only a 3x3 matrix is used, and that is constructed to respect most normal saturation levels.

How would I take a material and measure what percentage of light it reflects? Can I do it with a spectrophotometer?

Isn't ACES supposed to be a universal format that would in theory contain the same color information regardless of the camera used? Aren't IDCTs supposed to convert sensor referred data to as close to scene referred as possible?

You can google albedo or BRDF measurement, you'll get an idea of the methods and devices used. You basically need a known light source and a calibrated camera or some other capture device, plus the angles of the incident and reflected light. More precise BRDF measurements give the reflectance values over the whole range of angles (for each pair of incident and reflected angles). A simple DIY solution would, I imagine, be to make a small ambient light box with uniform lighting and photograph the surface you want to get the albedo for, at a 90 degree angle. To get an actual percentage out of it you probably need to use a color sample with a known reflectance value. Using that you can calibrate your image somewhat.
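
The calibration step at the end is just a ratio against the known patch; a sketch with made-up numbers:

```python
def estimate_albedo(sample_value, reference_value, reference_albedo):
    """Estimate albedo by ratio against a patch of known reflectance,
    photographed under the same uniform lighting (linear pixel values)."""
    return reference_albedo * (sample_value / reference_value)

# Example: the known patch reads 0.42 in the linear image and is specified as
# 18% reflectance; the material in question reads 0.30 (all numbers made up).
print(estimate_albedo(0.30, 0.42, 0.18))  # ~0.13
```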

ACES is a universal reference space, so yes, all source data is converted to it and thus unified as much as possible. But what the data actually means is up to the user. If your data has the property of having linear relation to scene light proportions, it is scene linear. But the fact itself that something is in ACES gamut does not automatically mean that it is scene linear.

An IDCT is for unifying different physical devices (that have slight deviations due to manufacturing etc.) in preparation for the IDT. Nothing more, nothing less. Whether the data you move around is more or less scene referred or not is not related to what the transforms are meant to achieve. It is usually taken that raw sensor data is approximately scene linear, but that is only approximately true (pun intended), and only for a range of values.

Just as a hammer works whether you smash your finger or the nail, color transforms work independently of the meaning you attribute to the data you are pushing around. This is why it is notoriously difficult to achieve an actually numerically meaningful and precise color reproduction: the whole color management can do nothing if you don't have a very good grip on what the data you are working with actually expresses. The CM does not know that and actually does not care. Referring to the hammer analogy once again, the hammer doesn't give a duck if it hits your finger; it is your job to.

I got the idea that it is not. If calibration transforms are used as they are supposed to be used, then it's up to the quality of the equipment. If they are not, then what you get is incorrect ACES. I liked the idea of existing workflows to do what I want to do, but it seems it's meant for equipment that I do not have access to anyway. :confused:

If you make all the necessary measurements, then yes, you can get something objectively trustworthy out of your camera by using an IDCT. But still, there is no such thing as an incorrect ACES. ACES is not meant for science where absolutes can be pinned down. It is for visual image data that in the end will be viewed by human eyes, and as such there is no right or wrong per se. If I draw a 1m line with crayons, it can be straight enough for every viewer, but you won’t build a ruler based on it or start building a rocket using it as an etalon.

A simple DIY solution would, I imagine, be to make a small ambient light box with uniform lighting and photograph the surface you want to get the albedo for, at a 90 degree angle. To get an actual percentage out of it you probably need to use a color sample with a known reflectance value. Using that you can calibrate your image somewhat.

That's what I am trying to do. I just want the 90 degree reflectance; I would work on the BRDF from there. I take photographs using natural uniform light, measuring its white balance. My idea is to have a range of known values for reference. I am not buying a spectrophotometer, but my research shows that scanner calibration targets come with a range of individually measured values.

Well yes, sure, I understand that in practice it's different from theory. But we are talking about the theory, so let's assume we want the standards to be used with as accurate data as possible, as they were meant to be used. And as far as I understand, ACES was meant to have scene referred values in it in theory, and device calibration was meant to be used to achieve that. I don't know. It doesn't matter. I think the important thing is the reference samples with measured LRVs and a way to calibrate to them. I just thought that working in ACES would have some workflows to do that, like for example a way to create those IDCTs.