Sorry in advance for the long post. Starting to think I should write a blog entry about my current understanding/mental model of color science, because there’s a lot I don’t think I can reasonably cover here. (And, specifically, I would love for you to comb over such an entry, Troy, to help tease out areas I may be misunderstanding still.)
No worries! It happens to the best of us. You should see me get riled up about misconceptions around quaternion rotations (they’re not normalized axis-angle, people!).
While LMS as they exist in our eyes are indeed the receptors, that isn’t how XYZ is modelled.
Right, that’s exactly what I meant when I said that XYZ doesn’t directly represent the responses of our cones.
That is, CIE 1931 outlines the approach based on Wright and Guild’s famous experiments, which were indeed conducted with three lights. The CIE generated the Colour Matching Functions from that data, and then transformed it according to a few criteria, including making the XYZ values non-negative. It literally is the modified CIE RGB light data, plus some linear algebra to bend the basis vectors so that Y lines up with luminous flux, IIUC.
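For concreteness, here is a minimal sketch of that "modified CIE RGB plus a change of basis" relationship. The matrix is the commonly quoted 1931 transform; treat the exact coefficients as illustrative rather than authoritative:

```python
# Sketch: XYZ as a linear transform of the CIE RGB colour matching
# data (commonly quoted 1931 coefficients, shown for illustration).
M = [
    [0.49000, 0.31000, 0.20000],
    [0.17697, 0.81240, 0.01063],
    [0.00000, 0.01000, 0.99000],
]
SCALE = 1.0 / 0.17697

def cie_rgb_to_xyz(r, g, b):
    return [SCALE * (row[0] * r + row[1] * g + row[2] * b) for row in M]

# Two properties worth noticing:
# 1. Every coefficient is non-negative, which is how XYZ stays
#    non-negative for realizable stimuli.
# 2. Each row sums to 1, so equal-energy white maps to X == Y == Z,
#    and the middle (Y) row tracks luminous efficiency.
x, y, z = cie_rgb_to_xyz(1.0, 1.0, 1.0)
```

Nothing mysterious underneath: the same three-light data, viewed through a different basis.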
That’s my understanding as well. I think I’m just getting hung up on the “three lights” thing. I’m taking the word “light” too literally. The experiments were indeed conducted with three lights, but that doesn’t make the color space representable with three actual physical lights (I know you know this, I’m just stating it again for continuity). So from my perspective, it’s just an abstract representation of human color response, because it corresponds neither to physical light nor directly to the cone sensitivities of the retina.
The LMS domain you are likely referring to in relation to XYZ is the spectrally sharpened positions for chromatic adaptation.
Well, yes and no. When I was referring to LMS before, I was under the wildly incorrect impression that these models developed as Wright and Guild → LMS spectral responses → XYZ. But as you rightly corrected me, that absolutely isn’t the case, so the model I was thinking of doesn’t exist as such. However, a bit of googling reveals that there are models (the cone fundamentals) that attempt to represent the actual spectral sensitivities of human cones. That’s what I would have been referring to if I wasn’t so foolish.
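To make the chromatic-adaptation use of LMS concrete, here is a sketch of a von Kries-style adaptation through a sharpened LMS space. The Bradford matrix and the D65/D50 white points below are the widely published values, but consider this an illustration, not reference code:

```python
# Von Kries-style chromatic adaptation via the Bradford (sharpened
# LMS) transform. Matrix and white points are the published values.
BRADFORD = [
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
]
D65 = [0.95047, 1.00000, 1.08883]  # XYZ of D65 white, Y normalised to 1
D50 = [0.96422, 1.00000, 0.82521]  # XYZ of D50 white

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def inverse_3x3(m):
    # Adjugate / determinant inverse of a 3x3 matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [
        [(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
        [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
        [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det],
    ]

def adapt(xyz, src_white, dst_white):
    # Go to sharpened LMS, scale each channel by the ratio of the
    # destination white to the source white, and come back to XYZ.
    lms = mat_vec(BRADFORD, xyz)
    lms_s = mat_vec(BRADFORD, src_white)
    lms_d = mat_vec(BRADFORD, dst_white)
    scaled = [l * (d / s) for l, s, d in zip(lms, lms_s, lms_d)]
    return mat_vec(inverse_3x3(BRADFORD), scaled)
```

By construction, adapting the source white itself lands exactly on the destination white, which is a handy sanity check.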
Well… so this is interesting: you actually can realize lights with emission spectra that match the spectral curves of X, Y, or Z (the colour matching functions are non-negative, after all). But (as I understand it) it wouldn’t be especially useful to do so.
I would make a point that it is indeed “especially useful” to do so given that every RGB implementation is mounted directly on top of that XYZ model. Using a MacBook or watching an HDR television? You are using that useful model!
I think this is probably the main point upon which our respective mental models diverge. I don’t think they necessarily conflict, but it’s a different spin on things.
I view XYZ as being one representation of a human colorimetric observer. LMS and CIE RGB are also representations of the same, and though they are encoded (for lack of a better word) differently, they are equivalent. You can define any of the RGB color spaces using any of those models, because they are actually the same model.
So from my perspective, although pretty much every other color space is specified in terms of XYZ (or usually xyY, IIRC, but that’s a minor point), their meaning actually comes from the model of an “average” human colorimetric observer, independent of the space (XYZ, CIE RGB, LMS, or otherwise) used to represent it. If that makes sense.
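To illustrate what I mean by a space getting its meaning from the observer model: given only a set of primary chromaticities and a white point, the RGB → XYZ matrix follows mechanically. The values below are the published sRGB primaries and D65 white; the derivation is the standard one, but this code is a sketch, not a reference implementation:

```python
# Deriving an RGB -> XYZ matrix from primary chromaticities plus a
# white point (published sRGB / D65 values, for illustration).
PRIMARIES = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B (x, y)
WHITE = (0.3127, 0.3290)                                # D65 (x, y)

def xy_to_xyz(x, y):
    # Chromaticity to XYZ with Y = 1.
    return [x / y, 1.0, (1.0 - x - y) / y]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def inverse_3x3(m):
    # Adjugate / determinant inverse of a 3x3 matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [
        [(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
        [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
        [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det],
    ]

def rgb_to_xyz_matrix(primaries, white):
    # Columns are the primaries' XYZ directions, scaled so that
    # RGB = (1, 1, 1) lands exactly on the white point's XYZ.
    cols = [xy_to_xyz(x, y) for x, y in primaries]
    P = [[cols[j][i] for j in range(3)] for i in range(3)]
    scale = mat_vec(inverse_3x3(P), xy_to_xyz(*white))
    return [[P[i][j] * scale[j] for j in range(3)] for i in range(3)]

M = rgb_to_xyz_matrix(PRIMARIES, WHITE)
# The middle row should recover the familiar sRGB luminance weights,
# roughly (0.2126, 0.7152, 0.0722).
```

The same derivation works against any representation of the observer (XYZ, CIE RGB, LMS); only the numbers in the matrix change, not what it means.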
In some instances, even imaginary lights are useful as such. A case in point might be the combination of linear ACEScg under AP1. AP1 are the recommended manipulation lights for ACES, and indeed one of them is imaginary!
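For the curious, a small sketch with the ACES chromaticities as I have them transcribed (so double-check against the spec). AP0's imaginary blue primary is easy to spot from its negative y coordinate alone; AP1's imaginary green primary sits just outside the spectral locus with positive coordinates, so a sign check won't catch that one:

```python
# ACES primary chromaticities (transcribed values; verify against the
# ACES documentation). A negative chromaticity coordinate cannot
# correspond to any physical light.
AP0 = {"red": (0.7347, 0.2653), "green": (0.0000, 1.0000), "blue": (0.0001, -0.0770)}
AP1 = {"red": (0.7130, 0.2930), "green": (0.1650, 0.8300), "blue": (0.1280, 0.0440)}

# AP0's blue primary is imaginary by inspection of its sign:
imaginary_by_sign = [name for name, (x, y) in AP0.items() if x < 0 or y < 0]

# AP1's green primary needs the spectral locus data to flag, since
# its coordinates are positive; it is imaginary all the same.
```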
Absolutely. Again, I think my perspective is just a different way of looking at the same thing. I see both AP0 and AP1 as belonging to the class of “abstract models”, much like I see XYZ. That doesn’t, of course, mean that AP1 isn’t useful for color manipulation. It obviously is!
If you follow Colour Science or such, you’ll see that a few of us have been following spectral upsampling for quite some time. That said, spectral upsampling isn’t the issue here. The XYZ model is indeed essentially a three light model underneath the data.
Ah, sorry, when I suggested the paper again, I was jumping topics. Specifically, I was referring again to RGB rendering just always being wrong from a color standpoint.
RGB spaces–or any color space defined in terms of human color vision–will be useful to greater or lesser degrees for manually manipulating colors, because those manipulations are also in terms of human color vision. But when you step into the realm of light transport, using those same models for lighting calculations starts to break down, because that’s not how light works.
You can argue that for RGB color spaces with physically realizable primaries, you’re treating light as a combination of three box functions on the light spectrum. And that is technically physically plausible for rendering, even though it’s very far from how light spectra in the real world look. But for spaces like ACES AP0 and AP1, the primaries aren’t even physically possible.
Now, that doesn’t mean that RGB rendering isn’t useful. It obviously is. But from a color standpoint it puts you in a world where there often isn’t a “correct” thing to do with the resulting renders any more, there’s just “correct-ish”.
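A toy numerical illustration of that breakdown, using made-up Gaussian sensitivities and spectra rather than real CMF data: capturing a per-channel "reflectance" under one illuminant and then relighting it channelwise (the RGB shortcut) disagrees with doing the multiplication spectrally, because projection to three channels doesn't commute with per-wavelength multiplication:

```python
import math

# Toy observer and spectra (made up for illustration, not real data).
WL = [400 + 10 * i for i in range(31)]  # 400..700 nm, 10 nm steps

def gauss(l, mu, sig):
    return math.exp(-0.5 * ((l - mu) / sig) ** 2)

# Three Gaussian sensitivities standing in for colour matching functions.
SENS = [[gauss(l, mu, 30.0) for l in WL] for mu in (600.0, 550.0, 450.0)]

def project(spec):
    return [sum(s * c for s, c in zip(spec, ch)) for ch in SENS]

flat = [1.0] * len(WL)                       # reference illuminant
refl = [gauss(l, 620.0, 15.0) for l in WL]   # narrow-band reflectance
tilt = [l / 700.0 for l in WL]               # a different, sloped illuminant

# Per-channel "RGB reflectance" captured under the flat illuminant:
R = [a / b for a, b in zip(project(refl), project(flat))]

# Relight under the sloped illuminant two ways:
spectral = project([li * ri for li, ri in zip(tilt, refl)])  # per-wavelength
rgb = [lk * rk for lk, rk in zip(project(tilt), R)]          # RGB shortcut

# The two disagree; the RGB shortcut has baked in an assumption about
# the spectrum that the narrow-band reflectance violates.
rel_err = max(abs(a - b) / a for a, b in zip(spectral, rgb))
```

Real CMFs and real spectra soften or sharpen the error depending on the scene, which is exactly the "correct-ish" situation: there's no single right answer once the spectral information is gone.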
Now granted that I am not in any way a colour scientist, and therefore can’t speak to the various nuances of colour science any more than I already understand,
That makes two of us!
I can speak to attempting to educate folks who push pixels. Under this lens, I can say beyond a doubt that helping pixel pushers understand what they are doing is aided tremendously by reducing the model to a three light model.
[…]
That all absolutely makes sense to me. I’m all for mental models that help people understand things “well enough”. I don’t think artists generally should need to have a deep grasp of color science (any more than quaternion rotations). But they do need a mental model that’s consistent with how it practically works in their domain of usage. So… yeah, high five!