This is an absolutely wonderful bird’s eye / forest through trees video on pixel management.
I’d encourage every single pixel pusher who cares about their work to watch it.
Thanks for posting!
Good to see colour management in a nutshell. It helps keep us from wandering off into other territory.
The only thing I wonder about: in the third part he is using ACES in the pipeline, and I don’t know how to relate that to a pipeline in Blender. My thinking is as follows:
Filmic is a view transform, so we can see the high dynamic range of the scene on our display without clipping and too much colour skew. But we are not going to save what we see on screen (display referred), because that is for us (digital Blender artists) to get an understanding of what is going on in the scene.
So we save our scene in a scene-referred format, for example .exr (where no view transform is applied). Then we feed that into software that has proper colour and pixel management; we keep our scene-referred data (.exr), and from there we can produce mixes for several kinds of devices, much like audio mixers do: for disco, TV, cinema, radio.
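If I read the Blender Python API correctly, the saving step would look something like this (a minimal sketch with bpy; I’m assuming the default colour management config, where ‘Filmic’ is available as a view transform):

```python
import bpy

scene = bpy.context.scene

# The view transform only shapes what we see on screen (display referred);
# it maps the scene's dynamic range onto the display for our benefit.
scene.view_settings.view_transform = 'Filmic'

# OpenEXR stores the raw scene-referred data: the view transform is
# not baked into the file, so nothing is clipped or skewed.
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '32'  # full-float precision
scene.render.image_settings.exr_codec = 'ZIP'

scene.render.filepath = '//render_scene_referred.exr'
bpy.ops.render.render(write_still=True)
```

That .exr would then be the master we feed into the grading software.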
Or is it also an option to save what we see on the screen in Blender (display referred), and apply an S-curve after the Filmic view transform, as shown in the video? (For example, for YouTube clips.)
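In code terms, I picture that option as something like this toy NumPy sketch (the curve shape and parameters are my own guess, not the ones from the video):

```python
import numpy as np

def s_curve(x, pivot=0.5, contrast=1.5):
    """Toy contrast S-curve on display-referred (0-1) values.

    Pushes values below the pivot down and pulls values above it up;
    `pivot` and `contrast` are made-up illustrative parameters.
    """
    x = np.clip(x, 0.0, 1.0)
    return np.where(
        x < pivot,
        pivot * (x / pivot) ** contrast,
        1.0 - (1.0 - pivot) * ((1.0 - x) / (1.0 - pivot)) ** contrast,
    )

# Applied after the Filmic view transform, i.e. on the display-referred image.
print(s_curve(np.array([0.1, 0.25, 0.5, 0.75, 0.9])))
```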
In short, what would such a pipeline look like in practice for Blender?
I wouldn’t try to explain ACES in a single post, as it’s a full protocol, file format encoding, etc.
With that said, we can revisit what the CIE defines as an RGB colourspace:

- A set of colour primaries.
- An achromatic (white) point.
- A set of transfer functions.
So ACES, just like Filmic, defines each of these. In the case of ACES, it actually defines several of each of these, because again, it’s an entire protocol sandwich.
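To make those three ingredients concrete, here is roughly how sRGB and the two ACES working spaces fill them in (the chromaticities are the published CIE xy values; the transfer entries are abbreviated labels, not the full encodings):

```python
# An RGB colourspace = primaries + achromatic (white) point + transfer functions.
# Chromaticities are CIE xy coordinates.
COLOURSPACES = {
    "sRGB": {
        "primaries": {"R": (0.640, 0.330), "G": (0.300, 0.600), "B": (0.150, 0.060)},
        "white":     (0.3127, 0.3290),    # D65
        "transfer":  "sRGB piecewise (roughly a 2.2 power function)",
    },
    "ACES AP0": {
        "primaries": {"R": (0.7347, 0.2653), "G": (0.0000, 1.0000), "B": (0.0001, -0.0770)},
        "white":     (0.32168, 0.33767),  # approximately D60
        "transfer":  "linear, plus log encodings (ACEScc, ACEScct, ACESproxy)",
    },
    "ACES AP1": {
        "primaries": {"R": (0.7130, 0.2930), "G": (0.1650, 0.8300), "B": (0.1280, 0.0440)},
        "white":     (0.32168, 0.33767),  # approximately D60
        "transfer":  "linear (the ACEScg working space)",
    },
}
```

With that said: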
To implement a different pipeline in Blender, one that supports a more 2019-ish, future-facing approach:
What would that look like to an image maker? Some things that would be different:
In terms of software, it means bringing Blender along to support rendering the reference space and interacting with it, even when the image maker is not at an endpoint they can see. That is, if you think about mastering for HDR with a wide gamut, you could feasibly make content in a lower-gamut, lower-dynamic-range context and still see that content rendered absolutely correctly for that endpoint, while at the same time successfully generating content for a higher-quality endpoint.
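As a rough sketch of that idea in code, assuming OpenColorIO v2’s Python bindings and an ACES-style config (the colourspace names below are typical of ACES configs, not Blender’s shipped one, so treat them as placeholders):

```python
import PyOpenColorIO as ocio

# One scene-referred reference rendering, several endpoints.
config = ocio.Config.CreateFromEnv()  # reads the config from $OCIO

# Placeholder names; use whatever the actual config defines.
REFERENCE = "ACES - ACES2065-1"
ENDPOINTS = {
    "SDR sRGB":     "Output - sRGB",                           # lower gamut, lower dynamic range
    "HDR Rec.2020": "Output - Rec.2020 ST2084 (1000 nits)",    # wide gamut, high dynamic range
}

pixel = [1.7, 0.4, 0.1]  # scene-referred linear RGB, free to exceed 1.0

for label, space in ENDPOINTS.items():
    cpu = config.getProcessor(REFERENCE, space).getDefaultCPUProcessor()
    print(label, cpu.applyRGB(list(pixel)))
```

The point is that the reference data stays untouched; only the final transform per endpoint changes.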
Hope that begins to answer some of the questions…