sRGB is a specification with quite a few facets to it. When most folks use the term “sRGB” they are in fact referring to some specific component of it. This post over at Colour-Science.org does a great job of summing up the loose application of terminology.
Specifically, with regard to the term Andrew used, and the way he simplified things for his audience, he is referring to utilising the sRGB transfer function, or EOTF, as the scene referred to display referred transform. In that context, the statement is correct.
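For concreteness, here is a sketch of the sRGB piecewise transfer function from IEC 61966-2-1. Note the domain: it is only defined over [0, 1], which is part of why it falls short as a view transform for scene referred data.

```python
def srgb_encode(x):
    """sRGB transfer function (IEC 61966-2-1): linear-light value
    in [0, 1] -> nonlinear code value. Piecewise: a linear toe
    below 0.0031308, a 1/2.4 power segment above it."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(v):
    """Inverse: nonlinear code value -> linear light."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4
```

Scene referred values above 1.0 simply have no defined mapping here; they have to be clipped or otherwise compressed first, which is exactly the job a view transform like Filmic takes on.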
In terms of the colour primaries, Filmic also uses the BT.709 primaries that sRGB uses. That is, the colours are correct on most typical displays. Without a proper transform, the colours would be oversaturated and incorrect when viewed on a newer Apple P3 display or any other wider gamut display.
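To see why the primaries matter, one can derive the BT.709-to-Display-P3 conversion from the published chromaticities. This is an illustrative sketch using the standard xy-to-matrix construction, not Blender's or OCIO's actual code:

```python
import numpy as np

# CIE xy chromaticities. BT.709/sRGB and Display P3 both use a D65 white point.
REC709 = {"R": (0.640, 0.330), "G": (0.300, 0.600), "B": (0.150, 0.060)}
P3     = {"R": (0.680, 0.320), "G": (0.265, 0.690), "B": (0.150, 0.060)}
D65 = (0.3127, 0.3290)

def rgb_to_xyz_matrix(prims, white):
    """Build the RGB -> XYZ matrix from primary and white chromaticities."""
    def xy_to_XYZ(x, y):
        # xyY with Y = 1
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.stack([xy_to_XYZ(*prims[c]) for c in "RGB"], axis=1)
    W = xy_to_XYZ(*white)
    S = np.linalg.solve(P, W)  # scale each primary so RGB (1,1,1) hits the white point
    return P * S

M_709 = rgb_to_xyz_matrix(REC709, D65)
M_P3 = rgb_to_xyz_matrix(P3, D65)
M_709_to_P3 = np.linalg.inv(M_P3) @ M_709

# Pure BT.709 red, correctly expressed in P3, is not (1, 0, 0). Sending the
# untransformed value straight to a P3 display stretches it out to the wider
# red primary and oversaturates it.
print(M_709_to_P3 @ np.array([1.0, 0.0, 0.0]))
```

Since both colourspaces share the D65 white, neutral values pass through the conversion unchanged; it is the saturated colours that shift.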
There is a nuanced detail in here worth exploring.
In principle, assuming all light ratios in a scene are identical, yes: the results from a raytracer will be identical at any exposure level, within the quantisation limits of the bit depth. In practice, however, this isn't how a modelling / lighting scenario unfolds. Consider, for example, the Agent project at the institute. There, the folks are lighting their scene based on the feedback from the view transform.
If the view transform results in harsh burnouts and odd colour saturations, the person doing the work will light the scene differently. This may include cheating lights or adjusting the ratios. If we compare that result against work done under a wider latitude view transform, the light ratios stand to be far wider and quite different. As a result, the raytracer is fed different data, and the outputs change.
If a scene is lit solely with an HDRI, and assuming the values are within reasonable quantisation limits of 32 bit float, one wouldn’t need to re-render if they rendered to an EXR. If they lit the scene by hand based on the feedback from the view, there is a very strong chance that they would re-light the scene differently under a more appropriate view transform. Alex Fry covers this in a wonderful SIGGRAPH presentation, with examples. While his presentation is specifically on ACES, the principles and examples apply.
Indeed it is just a transform. But there are a few key things that happen when you perform an operation on the view as opposed to within a node chain. For starters, it keeps your scene referred linear data scene referred linear. That’s important for compositing manipulations such as blending, blurring, painting, etc. Second, by keeping those scene referred linear values intact, it makes mastering much easier for alternate outputs. It also helps in providing a colour managed system for alternate display outputs. That becomes quite a challenge the moment you put a nonlinear transformation in and continue bending the data.
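A toy demonstration of the point, assuming the plain sRGB transfer function as the display transform for simplicity: averaging two pixels (a one-pixel "blur") in scene referred linear light versus after the nonlinear encode gives wildly different answers, and the above-1.0 value is destroyed by the clip the encode forces.

```python
def srgb_encode(x):
    # sRGB transfer function (IEC 61966-2-1), defined only on [0, 1]
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

bright, dark = 4.0, 0.02  # scene referred: the highlight sits well above 1.0

# Blending in scene referred linear: a physically plausible mix of light
linear_mix = 0.5 * bright + 0.5 * dark

# Blending after the display encode: the 4.0 must first be clipped to 1.0,
# then the average happens on bent, nonlinear values
clipped = [min(v, 1.0) for v in (bright, dark)]
display_mix = srgb_decode(0.5 * srgb_encode(clipped[0]) + 0.5 * srgb_encode(clipped[1]))
```

The linear mix still carries the energy of the highlight; the display referred mix has thrown it away and bent what remains, and every further operation compounds the damage.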
Also note that no tonemapper can deal with the Notorious Six without further compositing manipulation. That is, albedos missing one or two complements: albedos of R, G, B, R+G, R+B, and B+G will all break.
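A quick sketch of why, using a simple per-channel Reinhard curve as a stand-in for any per-channel tonemapper: a channel at zero stays at zero, so a pure primary can never desaturate toward white no matter how intense the light gets.

```python
def reinhard(x):
    # Simple per-channel tonemapping curve; stands in for any
    # tonemapper that operates on R, G, B independently
    return x / (1.0 + x)

# An extremely intense pure-red value: one of the Notorious Six
hot_red = [100.0, 0.0, 0.0]
mapped = [reinhard(c) for c in hot_red]

# The green and blue channels remain exactly zero, so the output pins at
# maximum-saturation red instead of rolling off toward white as real
# bright emission appears to do.
print(mapped)
```

Any albedo or light missing one or two complements behaves the same way: the missing channels have nothing for the curve to lift, so the hue skews and clips rather than desaturating.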
It would be speculation to suggest a majority of renders end up in the display referred domain, but I’ll go out on a limb and suggest that they do. In the display referred domain, those transforms do have an impact on the final output and the ratios of pixel values.
Are you certain about that?
The formulas for the ASC-CDL as given in Blender are located in the file:
https://git.blender.org/gitweb/gitweb.cgi/blender.git/blob_plain/HEAD:/source/blender/compositor/operations/COM_ColorBalanceASCCDLOperation.cpp
The formulas for the Lift, Gamma, Gain operation are located in the file:
https://git.blender.org/gitweb/gitweb.cgi/blender.git/blob_plain/HEAD:/source/blender/compositor/operations/COM_ColorBalanceLGGOperation.cpp
Look specifically at how each would handle scene referred data. One might also want to read Josh Pines’ comments at this link.
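For a feel of the difference, here is a sketch using the canonical ASC-CDL formula alongside a common textbook Lift/Gamma/Gain form. The actual Blender implementations are in the files linked above; this is illustrative only, and the LGG form shown is one of several in circulation.

```python
def asc_cdl(x, slope=1.0, offset=0.0, power=1.0):
    # ASC-CDL: out = (in * slope + offset) ** power, clamped at zero
    # before the power. Well behaved for any non-negative scene
    # referred input, however far above 1.0.
    return max(x * slope + offset, 0.0) ** power

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    # A common LGG form. The (1 - x) term assumes x lives in [0, 1]:
    # it anchors the top end of a display referred signal at 1.0.
    return (gain * (x + lift * (1.0 - x))) ** (1.0 / gamma)

# On a scene referred value of 5.0, a positive lift meant to *raise*
# the image actually pulls the highlight down, because (1 - 5.0) is
# negative. The CDL offset raises everything uniformly, as expected.
print(lift_gamma_gain(5.0, lift=0.1))  # lands below 5.0
print(asc_cdl(5.0, offset=0.1))        # lands above 5.0
```

The CDL's slope / offset / power are plain multiplies, adds, and powers with no baked-in assumption of a 0-1 range, which is precisely why it survives scene referred data where LGG-style formulas misbehave.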
Further, there are other good reasons for endorsing an ASC-CDL approach, not the least of which is adding your own custom looks via CDL values into the OpenColorIO configuration. This allows one to use and reuse base grades simply by flipping to a customised look one has created. If one is lighting a nighttime scene across a series of shots, they are able to use the same look they created as an entry point for each of the shots. Etc.
Again, I’d encourage one to look at the formulas behind some of the blend modes.
One can find the canonical formulas in the Adobe PDF specification on page 324. Check the formulas against those in Blender, and then determine whether or not they would work on scene referred data. In particular, consider how some of the functions perform with the value 1.0 in them. Blender StackExchange has a decent answer that highlights the issues in using display referred formulas in a scene referred pipeline.
In the video, Andrew misspoke and suggested Multiply was broken. It is one of the few blend modes that, at last testing, worked. But always test.
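One can test this directly. Screen, per the spec formula, folds over the moment values exceed 1.0, while Multiply remains closed over non-negative scene referred values:

```python
def screen(a, b):
    # Screen blend as given in the PDF specification.
    # The formula assumes a and b live in [0, 1].
    return 1.0 - (1.0 - a) * (1.0 - b)

def multiply(a, b):
    # Multiply blend: non-negative in, non-negative out,
    # regardless of how large the inputs are.
    return a * b

# On display referred values, Screen behaves as intended:
print(screen(0.25, 0.5))      # stays within [0, 1]

# On scene referred values above 1.0, the two (1 - x) terms both go
# negative, their product goes positive, and Screen emits negative light:
print(screen(4.0, 3.0))       # -5.0
print(multiply(4.0, 3.0))     # 12.0, still physically plausible
```

This is the general failure pattern: any blend formula with a `1 - x` term baked in is silently asserting a display referred 0-1 domain, and scene referred data violates that assumption immediately.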
I’ll agree with that. And keep it scene linear, not display linear.