Can’t be done, and you likely wouldn’t want it anyway, for more reasons than are easy to list here.
If folks convince the developers to add an OCIOTransform node, more complex transforms such as this would be feasible, but it currently isn’t on the roadmap.
One from me: I like to use Filmic because it keeps highlights under control. But my scenes may contain computer monitors that have to display important content, depending on the customer. That content would be screenshots in whatever format. If I add such an image via an emission shader (ignoring the glossy on top), the content image is affected by Filmic, and the result is not 100% true to the source. Even if I under- or overexpose severely (I can correct for that), the next person in the chain still wants the content to be identical to the source, no matter how utterly wrong it is.
Imagine GUI content elements in light grey and slightly lighter grey: perfectly separated in the source material, but not under Filmic, where things may blend together. Our solution: render out with a mask and paste the content back in later in Photoshop.
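To see how that happens, here is a toy sketch. A simple Reinhard-style curve stands in for a compressive “filmic” view transform (it is NOT Blender’s actual Filmic transform, just illustrative); two greys that are clearly separated in the source end up much closer after compression:

```python
# Illustrative only: a Reinhard-style curve as a stand-in for a
# compressive "filmic" view transform (not Blender's actual Filmic).

def compress(x):
    """Toy highlight-compressing tone curve, for x >= 0."""
    return x / (1.0 + x)

# Two hypothetical GUI greys, clearly separated in the source material.
a, b = 0.80, 0.90
src_gap = b - a

# After the compressive curve the separation shrinks considerably.
out_gap = compress(b) - compress(a)

print(f"source gap: {src_gap:.3f}, displayed gap: {out_gap:.4f}")
```

The exact numbers depend on the curve, but the shrinking separation in the upper range is the general behaviour of any highlight-compressing transform.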
It’s apples and oranges; once you create something for the display referred domain, you don’t go back. Hence looping things back in is the wrong approach.
It’s wrong because the content is already encoded into an incompatible state. If the need is to use emissions, the file encoding needs to support the original state, which would be EXR.
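A toy sketch of why the encoding matters: quantizing to an 8-bit display range destroys anything above display white, whereas a float encoding such as EXR simply carries the original value. The numbers here are illustrative only:

```python
# Sketch: an 8-bit display-referred encoding cannot carry the original
# scene-referred state; a float encoding (as in EXR) can.

def to_8bit_display(x):
    """Clamp to the display range [0, 1] and quantize to 8 bits."""
    clamped = min(max(x, 0.0), 1.0)
    return round(clamped * 255)

def from_8bit_display(code):
    """Decode an 8-bit code value back to a float in [0, 1]."""
    return code / 255.0

scene_value = 4.7  # a hypothetical emission well above display white
round_trip = from_8bit_display(to_8bit_display(scene_value))

print(scene_value, "->", round_trip)  # the original intensity is gone
```

Once the value has been clamped and quantized like this, no later step can recover the original state; that is the “incompatible state” in a nutshell.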
Looping a display referred “ready for viewing” image back into a light transport system is simply impossible, and somewhat of an erroneous entry point[1].
The more appropriate method to composite scene referred components is to keep the entire pipeline scene referred through compositing, rather than trying to mix and match broken componentry; that will cost a good deal of time and energy for less than acceptable results.
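One way to see why mixing domains breaks: light transport adds energies in the scene referred domain, and the view transform does not distribute over that addition. A toy compressive curve (again a stand-in, not Blender’s actual Filmic) shows it:

```python
# Compositing must happen before the view transform, because the
# transform does not commute with addition of light contributions.

def view(x):
    """Toy compressive view transform (stand-in, not Blender's Filmic)."""
    return x / (1.0 + x)

a, b = 2.0, 3.0  # two hypothetical scene-referred light contributions

correct = view(a + b)        # composite in scene referred, then view
broken = view(a) + view(b)   # view first, then composite display values

print(correct, broken)
```

The “broken” path sums two already-compressed values and lands above display white, which is exactly the kind of result you get when display referred imagery is looped back into a light transport system.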
[1] There are hacks that can work here, to varying degrees to cheat things as best as one can, as long as one understands the limits of the hack. They are certainly worth exploring.
Once again, developers seem to think pixel management is optional. As a result, Grease Pencil is a broken trash heap, with brand new code that is busted. This happens again and again, such as the broken code added to the Curves node for that garbage “film” curve.
This will keep happening forever until the overall comprehension level of the audience improves to prevent garbage code from becoming standard.
A short-term solution would be to export your Grease Pencil frames as a TIFF series, then load them again and set their colour space to “Filmic Log”. This should allow you to composite them as though the broken Grease Pencil output were scene referred.
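If you want to automate that relinking step, something like the following should work from Blender’s Python console. This is a sketch under two assumptions: the Filmic config is installed so “Filmic Log” appears in the colour space list, and the file path is hypothetical. The guard around the import is only there so the sketch can be read outside Blender:

```python
# Hedged sketch: load a rendered TIFF and tag it as "Filmic Log" so the
# compositor treats the display-referred Grease Pencil frame as log-encoded.

try:
    import bpy  # Blender's Python API; only available inside Blender
except ImportError:
    bpy = None  # running outside Blender; sketch only

def load_as_filmic_log(path):
    """Load an image and set its input colour space to Filmic Log."""
    img = bpy.data.images.load(path)
    img.colorspace_settings.name = "Filmic Log"
    return img

# Inside Blender you would call it with your exported frame, e.g.:
# load_as_filmic_log("/tmp/gp_frame_0001.tif")  # hypothetical path
```

The same `colorspace_settings.name` assignment is what the “Color Space” dropdown on an Image node does in the UI.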
So frustrating having to solve problems that shouldn’t exist because developers are completely in the dark regarding pixel management.
It took ages, but even things like colored wireframes and left-click selection as default have made it into Blender. I remember how dead-set the devs were on making sure those things never happened.
I wouldn’t say never on good pixel/color management code as a result.
EXR is a fantastic file encoding. Except that in Blender, EXRs couldn’t be properly managed until about a month ago, when Brecht’s fantastic effort landed.
Sadly, I don’t believe there is an easy way to treat the encoding as Filmic Log (incorrect, but a hack to work around GP’s brokenness here) from an EXR entry point. Try saving as a TIFF from the GP, and loading the file as Filmic Log.