Blender Filmic

You can google albedo or BRDF measurement; you’ll get an idea of the methods and devices used. You basically need a known light source and a calibrated camera or some other capture device, plus the angles of incident and reflected light. More precise BRDF measurements give the reflectance values over the whole range of angles (for each incident–reflected angle pair). A simple DIY solution would, I imagine, be to make a small ambient light box with uniform lighting and photograph the surface you want the albedo for, at a 90-degree angle. To get an actual percentage out of it you probably need to include a color sample with a known reflectance value. Using that you can calibrate your image somewhat.
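To make the last step concrete, here is a minimal sketch of that calibration idea (all sampled values below are hypothetical placeholders): assuming uniform illumination and pixel values decoded to linear light, the unknown reflectance is just the known reference reflectance scaled by the ratio of patch values.

```python
# All patch values here are made-up placeholders for illustration.
import numpy as np

reference_albedo = 0.18  # known reflectance of the reference patch (e.g. an 18% grey card)

# Mean pixel values sampled from each patch, already decoded to linear light.
reference_pixels = np.array([0.21, 0.20, 0.22])   # reference patch
sample_pixels = np.array([0.070, 0.065, 0.072])   # surface being measured

# Under identical, uniform illumination the recorded light scales linearly
# with reflectance, so the unknown albedo is the known one times the ratio.
estimated_albedo = reference_albedo * sample_pixels.mean() / reference_pixels.mean()
print(f"estimated albedo ≈ {estimated_albedo:.3f}")
```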

ACES is a universal reference space, so yes, all source data is converted to it and thus unified as much as possible. But what the data actually means is up to the user. If your data has a linear relation to the light proportions of the scene, it is scene-linear. But the fact that something is in the ACES gamut does not by itself mean that it is scene-linear.

IDCT is for unifying different practical devices (which have slight deviations due to manufacturing and so on) in preparation for the IDT. Nothing more, nothing less. Whether the data you move around is more or less scene-referred is not related to what the transforms are meant to achieve. Raw sensor data is usually taken to be approximately scene-linear, but that is approximately true (pun intended) only for a range of values.

Just as a hammer works whether you smash your finger or the nail, color transforms work independently of the meaning you attribute to the data you are pushing around. This is why it is notoriously difficult to achieve numerically meaningful and precise color reproduction: all the color management in the world can do nothing if you don’t have a very good grip on what the data you are working with actually expresses. The CM does not know that and does not care. To return to the hammer analogy, the hammer doesn’t give a duck whether it hits your finger; it is your job to care.

I got the idea that it is not. If calibration transforms are used as they are supposed to be used, then it’s up to the quality of the equipment. If they are not, then what you get is incorrect ACES. I liked the idea of existing workflows doing what I want to do, but it seems they are meant for equipment that I don’t have access to anyway. :confused:

If you make all the necessary measurements, then yes, you can get something objectively trustworthy out of your camera by using an IDCT. But still, there is no such thing as incorrect ACES. ACES is not meant for science, where absolutes can be pinned down. It is for visual image data that will in the end be viewed by human eyes, and as such there is no right or wrong per se. If I draw a 1 m line with crayons, it can be straight enough for every viewer, but you won’t build a ruler based on it or start building a rocket using it as an etalon.

A simple DIY solution would, I imagine, be to make a small ambient light box with uniform lighting and photograph the surface you want the albedo for, at a 90-degree angle. To get an actual percentage out of it you probably need to include a color sample with a known reflectance value. Using that you can calibrate your image somewhat.

That’s what I am trying to do. I just want the 90-degree reflectance; I would work on the BRDF from there. I take photographs using natural uniform light, measuring its white balance. My idea is to have a range of known values for reference. I am not buying a spectrophotometer, but my research shows that scanner calibration targets come with a range of individually measured values.

Well yes, sure, I understand that in practice it’s different from theory. But we are talking about the theory, so let’s assume we want the standards to be used with data as accurate as possible, the way they were meant to be used. And as far as I understand, ACES was meant to hold scene-referred values in theory, and device calibration was meant to be used to achieve that. I don’t know. It doesn’t matter. I think the important thing is the reference samples with measured LRVs and a way to calibrate to them. I just thought that working in ACES would offer some workflows for that, for example a way to create those IDCTs.

Well this thread dove deep quickly.

Incorrect.

Reference spaces, when used in CGI or image manipulation, yield different results. Not different in the sense that one is a smaller gamut and another is a wider gamut, but rather that two identical colours, when multiplied, will yield different results under different reference space primaries. It is precisely for this reason that AP1 exists in the first place. XYZ is entirely unsuitable for rendering or manipulation work.
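A small numeric sketch of why (purely illustrative, using the familiar linear sRGB to XYZ matrix as the change of primaries instead of AP0/AP1): a per-channel product does not commute with a change of basis, so multiplying the same two colours under different primaries gives different results.

```python
# Illustrative only: linear sRGB vs. CIE XYZ stands in for two different
# reference primaries; the same argument applies to AP0 vs. AP1.
import numpy as np

# Well-known linear sRGB (D65) -> XYZ matrix.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

a = np.array([0.8, 0.2, 0.1])  # two arbitrary linear colours
b = np.array([0.3, 0.6, 0.4])

# Multiply per channel in sRGB primaries, then express the result in XYZ...
in_rgb_then_convert = SRGB_TO_XYZ @ (a * b)
# ...versus converting first and multiplying per channel in XYZ primaries.
convert_then_multiply = (SRGB_TO_XYZ @ a) * (SRGB_TO_XYZ @ b)

print(in_rgb_then_convert)    # the two results differ noticeably
print(convert_then_multiply)
```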

This link dives deeply into the subject including the original thread with some rock star imaging peeps.

Incorrect. A camera simply has photosite receptors, and those photosites are covered in three coloured filters. Cameras are gamut bound by those three filters.

There are some low level issues with cameras capturing certain colours that can yield crazy results when transformed, but as above, they are limited just as a physical display is.

You can only measure the amount of light coming off of a material using typically available tools. As stated previously, it is a complexly baked series of things, however, not a perfect albedo value.

You can convert any camera’s colour space to any other. ACES isn’t magical here, it simply provides the protocol. You could take a shot and convert to Blender’s / Filmic’s reference primaries easily. No magic. Just need the proper transform.

If you are interested, it might be worth another thread or BSE question.

ACES AP0 is equally useless for any rendering or manipulation work; AP1 is meant for that. But there is zero benefit to having AP0 as the base instead of XYZ. We could just as easily go from XYZ to AP1, or build an RRT on top of XYZ. Actually, XYZ is more efficient storage-wise, as real colors fill a bigger volume of its unit cube. An additional benefit would be the fact that the XYZ gamut is already widely used in cinema distribution masters.

Camera color filters and RGB image primaries are not the same thing. If we traverse the visible wavelengths, we get a distinguishable response from the sensor, meaning we can discriminate all colors inside the spectral locus. The problem is accuracy, not ability. Gamut clipping would imply that out-of-gamut colors plateau one or two channels while the third sits at some floor level. This is not how sensors behave: color filters have continuous transmission profiles with peaks around their respective wavelengths. This information comes from Charles Poynton, who is very well versed in both the digital and the physical realm.
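An idealised toy model of that behaviour (the Gaussian filter peaks and widths below are made up, not real camera data): sweeping monochromatic light across the visible range gives smoothly varying RGB ratios rather than channels slamming into a plateau.

```python
# Idealised sensor model: three overlapping Gaussian colour filters.
import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm, across the visible range

def gaussian(wl, peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Hypothetical filter peaks/widths, only loosely in the right neighbourhood.
response = np.stack([
    gaussian(wavelengths, 600, 50),   # "red" filter
    gaussian(wavelengths, 540, 45),   # "green" filter
    gaussian(wavelengths, 460, 40),   # "blue" filter
], axis=1)

# Normalise each RGB triplet: the ratios vary continuously with wavelength,
# so different monochromatic stimuli remain distinguishable from one another.
chromaticity = response / response.sum(axis=1, keepdims=True)
print(np.round(chromaticity, 3))
```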

Maybe we should make a separate color science related thread, just as troy_s suggested? I personally find color stuff very interesting and would like to keep discussions like this going without pulling Filmic thread off topic.

I second this. Having another thread or post dedicated to the deeper aspects of color would be very useful for those who want to explore more, without confusing people who are just jumping into Filmic and learning the basics. And while extremely helpful, the rabbit hole as it is now takes a long time to go through to find the useful info.

I know I have lots of questions, but they aren’t appropriate for a thread such as this.

Removed post.
Reason: Not contributing to an understanding of how Filmic works.

Moved thread.

You can find the new thread here.

Please ask them.

I have created two node groups that recreate the Filmic Blender color spaces in the Blender compositor.
Filmic Nodes.blend (634 KB)
The contrast node is able to blend between the contrasts troy has designed.
Because the nodes recreate the color spaces, color management needs to be set to linear raw/non-color data so the transforms are not applied twice.
Please experiment, compare with the Filmic color management setup, and ask me any questions you have about the node groups.
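For what it’s worth, the “don’t apply the transform twice” part can also be set from Python. A hedged sketch follows; the enum strings such as 'Raw' and 'None' are assumptions that depend on the Blender version and the OCIO config in use.

```python
import bpy

scene = bpy.context.scene
# Neutralise the display transform so the node-group encoding is the only one applied.
scene.view_settings.view_transform = 'Raw'   # name may differ with a custom OCIO config
scene.view_settings.look = 'None'
scene.view_settings.exposure = 0.0
scene.view_settings.gamma = 1.0
```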

Not terribly useful as far as I can tell. A few points:

  • Not invertible.
  • Doesn’t appear to be a match to the log encoding.
  • Not a match on the desaturation.
  • Not suitable for manipulation as it would take data to the nonlinear domain.
  • Etc.

I guess I am missing the point.

The only appropriate method to achieve a transform would be an OCIO node, which Blender needs but doesn’t yet have.

I don’t see why you would need or want to invert the transform, but I will look into a method.
Using a linear gradient and the histogram, I matched the curves to those of an image with the Filmic transforms applied. They are not an exact match, but they are very close.
The nodes would be put after any linear-domain effects, e.g. Glare. It was mentioned that color grading should ideally be done as a second pass after the footage/image has been put through the color transforms; the node groups eliminate the need for a second pass. They may also be used if the user would like a solid color background, white for example, under an image with an alpha background, which would come out light grey when put through the Filmic transforms. If the background were set to white after the transforms, it would not be turned grey and would have the correct value. The nodes again eliminate the need for a second pass to achieve this.
I will also look into the desaturation of highlights.

Thank you for your quick and helpful feedback.

Glad my response didn’t scare you off.

I believe your scene referred values were off, but I would require the original file reposted to validate that.

If one were grading, the only real method to accomplish this is via an OCIO node.

Incorrect.

This is a misunderstanding of scene referred values versus display referred. White, black, etc. don’t exist until the display referred transform, and in the above case, you are using incorrect scene referred values to achieve display referred white.

I have made a mistake with the node setup in the blend file. The nodes should be swapped around to get correct values.
[ATTACH=CONFIG]488606[/ATTACH][ATTACH=CONFIG]488607[/ATTACH]
As you can see, the histogram curve is near identical to that of the filmic color management.
An OCIO node is not needed if the LUT can be expressed with other nodes, such as this.
It may be cheating and not physically accurate to bypass the Filmic transforms to get a specific color, but in many situations an artist may want a specific color as a background, and having everything go through the Filmic transforms creates awkward situations that can be avoided with a node group that can be bypassed if the artist so wishes. Something like

for example

This is simply false, and Filmic Mark II will be a good example as to why.

Colour transformations must be entirely precise and invertible. If this seems confusing to you, I would suggest reading the PDF at http://cinematiccolour.com
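As a small illustration of why invertibility matters (hypothetical parameters, not Filmic’s actual encoding): a shaper defined as a closed-form pair of functions round-trips exactly, which is what lets it sit inside a pipeline, whereas a curve eyeballed against a histogram has no such inverse.

```python
# Hypothetical shaper parameters, not Filmic's real ones.
import numpy as np

MIDDLE_GREY = 0.18
MIN_EV, MAX_EV = -10.0, 6.5  # assumed exposure range around middle grey

def lin_to_log(x):
    """Map scene-linear values to a normalised log2 encoding."""
    ev = np.log2(np.maximum(x, 1e-10) / MIDDLE_GREY)
    return np.clip((ev - MIN_EV) / (MAX_EV - MIN_EV), 0.0, 1.0)

def log_to_lin(t):
    """Exact closed-form inverse of lin_to_log (for unclipped values)."""
    ev = t * (MAX_EV - MIN_EV) + MIN_EV
    return MIDDLE_GREY * np.exp2(ev)

x = np.array([0.01, 0.18, 1.0, 8.0])
assert np.allclose(log_to_lin(lin_to_log(x)), x)  # round-trips within float error
```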

Does anyone have any idea why False Color hasn’t been implemented in the 2.79 branch?

It is! It’s not a “look” but a “view”.

Indeed! But it made more sense to have it as a Look, since it only shows when the image gets burnt with Filmic. I wonder why they put it there.
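For anyone hunting for it from the scripting side, this is roughly where it lives (2.79-era property names; the exact enum string is an assumption and may differ with a custom OCIO config):

```python
import bpy

# False Color is selected via the View Transform enum, not via Look.
bpy.context.scene.view_settings.view_transform = 'False Color'
```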