This thread was opened with a specific focus on some of the more “spooky” concepts that many folks might shy away from. Frankly, if someone gets spooked by these concepts, then it is quite likely that they should stick to defaults and have fun with it.
Again though, the concepts that Filmic leverages are already present in Blender. The issues with curves, blend modes, etc. have all been there for a long time.
Hopefully Filmic helps folks to engage with the concepts in a more tangible way, and as a result, elevate their understanding through experimentation.
Anyone who equates concepts like scene referred versus display referred, how view transforms interact with the data, or why using a view is solid foundational knowledge with quantum mechanics isn’t going to make it very far in imaging. That is, the concepts aren’t nearly as complex as quantum mechanics. Some questions do dive deeper into the subject matter, but even then, if they are built atop a solid core understanding, I am confident there isn’t a single person in this entire forum who wouldn’t be able to understand them.
The core low level fundamentals are all exceptionally clear and exceptionally simple to grasp. The issue, in my experience, is accumulated cruft: broken mental models from years of usage in alternate software. That’s it! If one manages to tear down the broken ideas, the rest comes very easily.
Don’t ever be afraid to ask.
Some applications are crap. That’s the bottom line. Photoshop is one seriously archaic piece of imaging software and it is always going to be a nightmare with regards to (ironically) scene referred photographic images. On the other hand, Resolve is an extremely capable application, and should be relatively easy to integrate with view transforms such as Filmic or others.
No. There are an infinite number of logs out there, and each requires its own unique transforms to bend values back into the scene referred domain correctly. Their encoded state is unique to their particular needs, and there is no way to assert that any particular log will align values, nor the range of values required. In particular, there are three versions of S-Log, and their ranges don’t align with Filmic’s ranges.
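To make that concrete, here is a minimal sketch of why decoding with the wrong log curve wrecks the scene referred values. The curve and stop ranges below are purely illustrative, not any vendor’s actual encoding or Filmic’s actual parameters:

```python
import math

# Purely illustrative lin-to-log pair -- NOT any vendor's actual curve.
# Each real log (S-Log2, S-Log3, etc.) uses its own constants, so decoding
# with the wrong transform lands values in the wrong scene referred range.

def lin_to_log(x, min_stops, max_stops, middle_grey=0.18):
    """Map scene linear to a normalised 0..1 log encoding."""
    stops = math.log2(max(x, 1e-10) / middle_grey)
    return (stops - min_stops) / (max_stops - min_stops)

def log_to_lin(y, min_stops, max_stops, middle_grey=0.18):
    """Invert the encoding above back to scene linear."""
    stops = y * (max_stops - min_stops) + min_stops
    return middle_grey * (2.0 ** stops)

# Encode middle grey with one set of parameters...
encoded = lin_to_log(0.18, min_stops=-10.0, max_stops=6.5)

# ...decode with the matching transform, and the value round-trips:
print(log_to_lin(encoded, min_stops=-10.0, max_stops=6.5))  # ~0.18

# ...decode with a *different* log's parameters, and the scene value is wrong:
print(log_to_lin(encoded, min_stops=-8.0, max_stops=8.0))
```

The same normalised code value decodes to completely different scene linear values depending on which curve you assume, which is exactly why one log cannot stand in for another.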
In your case however, you already have an image with typical scene referred light levels in an EXR. That means you are good to go in terms of the data, and all that you require is a means to apply the view transform in the other application. Without knowing more about the application in question, I can’t offer you too much help. For Agent, the colorist used Resolve and was having issues with the EXR pipeline. As a result, I suggested using TIFFs at 16 bit, and handed them a properly formatted LUT for use in Resolve that delivered a perfect 1:1 off of the Filmic Base Log Encoding. This worked out well for the contexts at hand.
TL;DR: I would need to know more specifics about the application you are attempting to utilise Filmic within. Some smaller studios have adapted the OCIO configuration for use in VRay, Nuke, and Houdini to name a few.
Yes it is. Not at the higher level, but at the lower level of the encoded values. I formatted the original layout to be a base log encode with pairings of the contrasts. This permits a pixel pusher to use a variety of different applications, including those that are strictly display referred.
That pairing of the transforms however, means that the encoded values from the log, when rolled through the desaturation and contrast, won’t match what someone sees in Blender or some other OCIO enabled application. The variance could be quite extreme.
There is a pretty solid answer over at the BSE, that was largely a gathering up of information on the part of Cegaton. I’d encourage you to read it as it has quite a bit of information in it that you will likely find very useful. While it is technically under my handle, that answer couldn’t have been possible without Cegaton.
The TL;DR version here is that slope is a basic multiplier. That is, if you think of scene referred exposure values as being multipliers, you can quickly see that Slope, when used alone, is in fact perfectly analogous to exposure! Try matching up some values with the exposure setting in the Color Management or Film panel, and test the results. You should be able to get 1:1 values with either Slope or exposure.
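A quick sketch of the equivalence, assuming the exposure slider works in stops (a multiplier of two to the power of the stop count): a slope of s is identical to an exposure of log2(s) stops.

```python
import math

# Slope is a plain multiplier on scene referred values. An exposure slider
# expressed in stops is a multiplier of 2**stops. The two are therefore
# interchangeable: slope s == exposure of log2(s) stops.

def apply_slope(value, slope):
    return value * slope

def apply_exposure(value, stops):
    return value * (2.0 ** stops)

scene_value = 0.18  # middle grey
slope = 4.0
stops = math.log2(slope)  # 2.0 stops

print(apply_slope(scene_value, slope))     # 0.72
print(apply_exposure(scene_value, stops))  # 0.72 -- identical
```

Both paths land on identical scene referred values, which is the 1:1 match described above.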
How you use it is up to you. It is certainly handy via the False Colour view to set exposure values in data, or one could use the Film panel to bake the results into the actual values. When used in conjunction with the Power and Offset controls however, it is a much more powerful tool.
Great question.
In fact, the above discussion about using the CDL in an OCIO configuration is specific to certain contexts. You can quickly and easily use the CDL within Blender via the Color Balance node.
Note that within Blender, it is fundamentally impossible to apply the CDL to anything but the scene referred data. That is sadly a bit of an oversight with respect to grading needs. One can certainly try to mangle the scene referred data with some horrible combination of math nodes, a power function, etc. before applying the CDL, but that is no replacement for a perfect 1:1 via a colour transform node. The priority should be an OpenColorIO transform node, which would solve every such issue immediately, with tighter UI integration as a longer term goal.
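For reference, the ASC CDL is a very small amount of math applied per channel: out = (in * slope + offset) ** power. A minimal sketch, with the handling of negative intermediates being one common choice rather than a mandated one:

```python
# The ASC CDL applies, per channel: out = (in * slope + offset) ** power.
# Blender's Color Balance node offers this math in its ASC CDL mode; on
# scene referred data the input is typically left unclamped.

def asc_cdl(value, slope=1.0, offset=0.0, power=1.0):
    graded = value * slope + offset
    # Negative intermediates have no defined power result; pinning them to
    # zero is a common (though not universal) handling choice.
    return max(graded, 0.0) ** power

# Identity parameters leave the value untouched; slope alone is a multiply.
print(asc_cdl(0.5))                                  # 0.5
print(asc_cdl(0.18, slope=1.2, offset=0.05, power=0.9))
```

Seeing the formula laid bare also makes the next point obvious: slope, offset, and power are each simple operations, and the result depends entirely on the order and the state of the data they are applied to.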
Deeper into your question, which is an excellent one and I hope more folks get to the point where the question makes sense, I would highly encourage you to read the above example of creating a yellow cast on an image. It will reveal how the order of operations has a significant impact on how transforms affect imagery in their particular states.
Always remember that transforms are radically different based on their order.
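A tiny demonstration of the point, using nothing but a multiply (slope) and a power: the same two operations yield different results depending on which comes first.

```python
# Transform order matters: a multiply (slope) followed by a power is not
# the same as the power followed by the multiply.

value = 0.5
slope = 2.0
power = 0.5

a = (value * slope) ** power   # multiply first: 1.0 ** 0.5  -> 1.0
b = (value ** power) * slope   # power first:    ~0.7071 * 2 -> ~1.4142

print(a, b)  # two different results from the same two operations
```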
You could, but by and large that would be rather self-defeating on a number of levels.
Scene referred data is by far the ideal and most forward-looking approach to manipulations. Trying to stay there for as long as possible is a prudent suggestion. It will keep things much simpler for moving between applications, and certainly make mastering and encoding to final deliveries much simpler.
There is one particular edge case where one would require a quick trip into the post-view transformed data state, but even then, that can be applied on scene referred ratios. That edge case is outlined above, and sadly there is no easy way to achieve it without an OCIO node in Blender. Hopefully more folks with solid questions such as these will help to put the pressure on it as a significantly needed feature.
Great stuff. Keep those questions flowing. You help not only myself, but also plenty of other folks.