Colour / Pixel Management, Ninja Theory Presentation

This is an absolutely wonderful bird’s eye / forest through trees video on pixel management.

I’d encourage every single pixel pusher who cares about their work to watch it.

Thanks for posting!

Good to see colour management in a nutshell. That helps keep us from wandering off into other regions.
The only thing I wonder about is that in the third part he is using ACES in the pipeline, and I don’t know how to relate that to a pipeline in Blender. My thinking is as follows:

Filmic is a view transform, so we can see the high dynamic range of the scene on our display without clipping and too much colour skew. But we are not going to save what we see on screen (display referred), because that is just for us (digital Blender artists) to get an understanding of what is going on in the scene.

So we save our scene in a “scene referred” format, for example .exr (where no view transform is applied). Then we feed that into software (that has proper colour and pixel management), keep working from the scene referred data (.exr), and from there we can produce mixes for several kinds of devices (much like audio mixers do: for disco, TV, cinema, radio).
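
If I understand it correctly, in Blender terms it would look something like this (property names as I read them in the bpy documentation, so a sketch and not tested code):

```python
import bpy

scene = bpy.context.scene

# The view transform only shapes what we see on the monitor (display referred).
scene.view_settings.view_transform = 'Filmic'
scene.view_settings.look = 'None'

# The render itself is saved as scene referred data: float OpenEXR,
# with no view transform baked into the file.
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '32'
scene.render.image_settings.exr_codec = 'ZIP'
scene.render.filepath = '//scene_referred_'
```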

Or is it also an option to save what we see on the screen in Blender (display referred), and apply an S-curve after the Filmic view transform, as shown in the video? (For example for YouTube clips.)

In short, what would such a pipeline practically look like for Blender?

I wouldn’t try to explain ACES in a single post, as it’s a full protocol, file format encoding, etc.

With that said, we can revisit what the CIE defines as an RGB colourspace (there is a small sketch after the list):

  1. A set of primaries for the reference space. This defines the colours of the “reddish”, “greenish”, and “blueish” lights that the reference space represents. In terms of path / ray tracing, this represents the colours of the three lights we are projecting into the scene for each pixel.
  2. A white point definition. This includes the colour of the light that we are going to label as “white”, and may also give us an indication as to the peak luminance value of the volume.
  3. A set of transfer functions. This defines the mapping of our radiometric intensities to their nonlinear encoding.
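
To make those three pieces concrete, here is a rough sketch of the familiar sRGB / REC.709 case expressed as data in Python. The chromaticities are the published REC.709 values and the function is the standard sRGB encoding; consider it an illustration, not anything authoritative:

```python
import numpy as np

# 1. Primaries: the xy chromaticities of the three "coloured lights".
REC709_PRIMARIES = {
    "red":   (0.640, 0.330),
    "green": (0.300, 0.600),
    "blue":  (0.150, 0.060),
}

# 2. White point: the colour of light we agree to label "white" (D65 here).
REC709_WHITE = (0.3127, 0.3290)

# 3. Transfer function: radiometric-linear intensities -> nonlinear encoding.
def srgb_encode(linear):
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(
        linear <= 0.0031308,
        12.92 * linear,
        1.055 * np.power(np.clip(linear, 0.0, None), 1.0 / 2.4) - 0.055,
    )
```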

So ACES, just like Filmic, defines each of these. In the case of ACES, it actually defines several of each of these, because again, it’s an entire protocol sandwich. With that said:

  1. ACES defines three lights that are quite different from Filmic. Filmic uses the well established REC.709 chromaticities to define the three beams of light fired into the scene, whereas ACES uses a different set of lights. Those different lights will not only yield a much wider gamut volume in the image, but also yield quite different results in some instances (see the matrix sketch after this list).
  2. ACES defines a different white point than Filmic. Filmic uses the well established REC.709 white point of D65, while the ACES white point is essentially a D60 colour.
  3. ACES defines several quite different nonlinear encodings as a transfer function, including an entirely different processing chain to bake the internal reference scene referred ACES values to the nonlinearly encoded ACES formats. Filmic uses its own unique transfer function to map from the scene referred domain to the display referred domain, as well as its own unique processing for additional “sweeteners”.
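
As a small numerical illustration of the first two points: given the xy chromaticities of the primaries and the white point, we can derive the RGB to XYZ matrix of each reference space. The REC.709 / D65 case that Filmic sits on top of and the ACES AP0 / roughly D60 case come out quite different, which is exactly why the same triplet of code values means a different mixture of light in each. The chromaticities are the published values; the construction is the standard normalised primary matrix derivation:

```python
import numpy as np

def xy_to_XYZ(xy):
    x, y = xy
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

def rgb_to_xyz_matrix(primaries_xy, white_xy):
    # Standard normalised primary matrix: scale each primary so that
    # RGB = (1, 1, 1) lands exactly on the chosen white point.
    P = np.column_stack([xy_to_XYZ(c) for c in primaries_xy])
    scale = np.linalg.solve(P, xy_to_XYZ(white_xy))
    return P * scale

# REC.709 lights, D65 white (the reference Filmic assumes).
M_709 = rgb_to_xyz_matrix(
    [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)], (0.3127, 0.3290))

# ACES AP0 lights, the roughly D60 ACES white.
M_AP0 = rgb_to_xyz_matrix(
    [(0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770)], (0.32168, 0.33767))

# A pure REC.709 "red" is a very different mixture of the AP0 lights
# (ignoring chromatic adaptation between the two white points for simplicity).
rec709_red = np.array([1.0, 0.0, 0.0])
print(np.linalg.inv(M_AP0) @ M_709 @ rec709_red)
```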

To implement a different pipeline in Blender, one that supports a more 2019-ish approach looking towards the future, we would need:

  1. A wider set of primaries for the three basis lights.
  2. A different set of transfer functions to accommodate different output contexts such as HDR10 / Dolby Vision, as well as the typical encodings (the PQ sketch below shows one such encoding).
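
To give a feel for point two, here is a sketch of one of those additional transfer functions: the SMPTE ST 2084 “PQ” encoding used for HDR10 style outputs, with the constants from the specification. Absolute luminance in nits goes in and a nonlinear code value comes out, which is a rather different beast from the sRGB style curves:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1 = 2610.0 / 16384.0
M2 = 2523.0 / 4096.0 * 128.0
C1 = 3424.0 / 4096.0
C2 = 2413.0 / 4096.0 * 32.0
C3 = 2392.0 / 4096.0 * 32.0

def pq_encode(nits):
    # Absolute luminance (cd/m^2) -> nonlinear PQ code value in [0, 1].
    y = np.clip(np.asarray(nits, dtype=np.float64) / 10000.0, 0.0, 1.0)
    ym1 = np.power(y, M1)
    return np.power((C1 + C2 * ym1) / (1.0 + C3 * ym1), M2)

# A 100 nit "SDR white" sits at roughly half of the PQ code range.
print(pq_encode([0.0, 100.0, 1000.0, 10000.0]))
```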

What would that look like to an image maker? Some things that would be different:

  1. They would have to consider generating their assets relative to the new light “paints” instead of the older REC.709 light “paints”. These “light paints” would be more saturated and different when compared to the REC.709 lights. I use the reference to paints here because it’s easier for someone to understand that you’d have to mix the different sets of paints differently to achieve identical colours between the two sets. It’s very similar to thinking about reference space lights.
  2. They’d have to consider the destination contexts. For example, a huge number of displays support Display P3 as an output. If you render and encode your new image in a wider gamut, and appropriately follow colour management conventions, this permits all of the people out there with wide gamut display devices, ranging from iPhones to Pixels, to see the image in all of its glory as opposed to the rather crunched down gamut of traditional sRGB work (see the gamut sketch after this list). For animations, it would open up even more elements such as the amazing high dynamic range displays.
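
To make the gamut point a little more tangible, here is the sort of arithmetic involved, using Display P3 with a D65 white as the wider destination. Pushing a fully saturated Display P3 green through the REC.709 lights produces values outside the zero to one range, which is a direct way of seeing that the smaller gamut cannot represent it without crunching. The derivation is the same normalised primary matrix construction as in the earlier sketch:

```python
import numpy as np

def xy_to_XYZ(xy):
    x, y = xy
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

def rgb_to_xyz_matrix(primaries_xy, white_xy):
    P = np.column_stack([xy_to_XYZ(c) for c in primaries_xy])
    return P * np.linalg.solve(P, xy_to_XYZ(white_xy))

M_709 = rgb_to_xyz_matrix(
    [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)], (0.3127, 0.3290))
M_P3 = rgb_to_xyz_matrix(
    [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)], (0.3127, 0.3290))

# A fully saturated Display P3 green, expressed with the REC.709 lights:
p3_green = np.array([0.0, 1.0, 0.0])
print(np.linalg.inv(M_709) @ M_P3 @ p3_green)  # negative R and B: out of gamut
```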

In terms of software, it means bringing Blender along to support the ability to render the reference space and interact with it, despite not necessarily being at an endpoint the image maker can see. That is, if you think about mastering for HDR with wide gamut, you could feasibly make content in a lower gamut, lower dynamic range context, and still see that content rendered absolutely correctly for that endpoint, while at the same time successfully generating content for a higher quality endpoint.
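
In pipeline terms, that boils down to keeping a single scene referred master and deriving every endpoint from it. The following is a deliberately toy sketch of that shape: the SDR path uses a simple stand-in tone curve plus the sRGB encoding (a real view transform such as Filmic does considerably more), and the HDR path maps the master to absolute nits and applies the PQ encoding from the sketch above. The function names and the 203 nit graphics white are purely illustrative choices:

```python
import numpy as np

def pq_encode(nits):
    # SMPTE ST 2084 (PQ), same constants as the earlier sketch.
    m1, m2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
    c1, c2, c3 = 3424.0 / 4096.0, 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0
    y = np.clip(np.asarray(nits, dtype=np.float64) / 10000.0, 0.0, 1.0)
    ym1 = np.power(y, m1)
    return np.power((c1 + c2 * ym1) / (1.0 + c3 * ym1), m2)

def master_to_sdr(scene_rgb):
    # Toy stand-in for a real SDR view transform:
    # simple tone compression followed by the sRGB encoding.
    tone = scene_rgb / (1.0 + scene_rgb)
    return np.where(tone <= 0.0031308,
                    12.92 * tone,
                    1.055 * np.power(tone, 1.0 / 2.4) - 0.055)

def master_to_hdr(scene_rgb, graphics_white_nits=203.0):
    # Toy HDR10-ish path: map scene values to absolute nits, then PQ encode.
    return pq_encode(scene_rgb * graphics_white_nits)

# One scene referred master, two endpoints derived from it.
master = np.array([0.05, 0.18, 1.0, 8.0])  # scene referred intensities
print(master_to_sdr(master))
print(master_to_hdr(master))
```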

Hope that begins to answer some of the questions…
