Feedback / Development: Filmic, Baby Step to a V2?

if you allow things to be completely dynamic, that changes a lot about what’s possible…

I think I still have a problem to figure out with my approach when using anything other than the channel average: it’s not quite true that turning off the sigmoid and log turns off everything at the moment. My choice of luminance still affects the result. If I choose the channel average, I get out precisely what I put in, but if I use anything else, the colors end up shifting. That’s where the “too green” from OKL³, and even more so from Y, comes from.
I’d expect no color to change if no tonemapping occurs, but currently that’s only true for one particular choice of setting… hmm…

I do know an easy way to undo that, but it involves simply getting rid of the correction factor that previously fixed Matas.
There is one parameter I could fiddle with that might hold the answer: the direction vector in the derivative term. Maybe a smarter choice there could fix it.

About the saturated look from the BT.2020 EXR:
Interestingly, if you load an image from the folder, you can see how the BT.2020 one looks less saturated vs the BT.709 one.

(image: checker_sat_folder)

Anyway, I expect that after a colorspace transform from BT.2020 to BT.709, I get the same image as the BT.709 EXR.

that’s most likely because the image viewer doesn’t do color management and simply assumes that the BT.2020 one should still be displayed in BT.709

Yes, that might be the reason. I still wonder: if the BT.709 EXR contains the right amount of reflection/albedo data, why should the BT.2020 EXR look different, assuming the same measured color values are stored?
I mean, they can’t be higher if the BT.709 one already contains the measured values?
The only reason I can think of is that the CST does something to the color data that shouldn’t happen. Or the BT.2020 file has higher/different values stored. Or both.

Oh, that’s simple. Consider pure red in Rec.709. It has the coordinates (1, 0, 0).
In Rec.2020, not only will green and blue contain something, so those will be greater than 0; more importantly, the red coordinate will no longer be as bright overall. It will be lower than 1 now.
This is perfectly normal and expected.
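The Rec.709-red example above can be checked directly with the standard BT.709-to-BT.2020 conversion matrix (rounded coefficients as published in ITU-R BT.2087); a minimal sketch:

```python
# Rounded BT.709 -> BT.2020 conversion matrix (ITU-R BT.2087).
M = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def bt709_to_bt2020(rgb):
    """Multiply a linear Rec.709 triplet by the conversion matrix."""
    return [sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3)]

red_709 = [1.0, 0.0, 0.0]
red_2020 = bt709_to_bt2020(red_709)
print(red_2020)  # roughly [0.6274, 0.0691, 0.0164]
```

As described: the red coordinate drops below 1, while green and blue pick up small positive values.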

I figured out now that I can get the exact same look with tonemapping off if I pick white (1 1 1) as my direction vector. This comes at the cost of lower overall saturation post-tonemapping, though.

I made a test. I set the BT.2020 image data to Non-Color, and at the end applied a CST from BT.2020 to sRGB.

vs the bt709

It seems to work; this is what I mean.

I knew that’s what you meant, and I was confident this test would work exactly like that. There is no luminance difference. But if you render BT.2020 images as if they were BT.709, the apparent luminance decreases along with the apparent saturation, because the maximal channel is gonna end up lower.

I now set it up such that without tonemapping there is no visible difference whatsoever (using white as my tangential direction).
Doing so leads to only very very subtle differences with the transform:

OKL³

average:

You’ll probably have to flick back and forth to even realize the differences. Overall, saturation just shifts around a little bit, some things end up more saturated this way, others less. Mostly, blues and reds come out more saturated, greens less. Which makes sense: The “brighter” stuff is gonna get more aggressively reduced by the applied curves.

OKL has darker blue and different red.

I made the same test with the BT.2020 film checker-pad image.

And Aces2065

Without the sigmoid, the bright display is clipping; the colors from the checker pad look OK to me.

This is a pictorial depiction of a colour checker. It is not a colour checker. Try to focus on what I’m attempting to draw attention to, not one’s physicalist idea of what some object “is”.

Be careful. OK* is garbage. I say garbage here because the neurophysiological signals transduce the external stimuli, and somewhere in that there likely (must) be energy conservation. The OK* nonsense is just a pile of rubbish luminance-mapping crap as a model, and as such it will break on the example format I outlined. Better to ditch the garbage outright.

This will continue to toilet bowl around missing the point if the same incorrect language insists on being reused. There is no “highlight roll off”. It’s flatly a bogus construct that avoids the incredibly valuable insight in all of this; how we arrange the energy in the picture, and the subsequent cognitive decomposition.

Keep focus on the relational gradients of the wattage. The neurophysiological signals are not the wattage of the stimuli, but we can at least begin to see some broad patterns emerge.

This has nothing to do with “clipping” etc. Again, I stress, our cognition has no agency to understand a ‘clip’. All we have is the stimuli of the pictorial depiction, and the subsequent cognitive decomposition.

There’s a reason that @MrLixm arranged his test the way he did. Pay attention to the region we read as “diffuse glare” in the pictorial depictions. There is a hell of a lot of insight to be found in his demonstration.

Perhaps @MrLixm will outline his observations and reasoning to make it more clear.

I’m guessing by the transduction you mean the cube root in the process? That’s why I’m cubing after. It brings OKL and Y in very close alignment for most colors outside the blue range.

Transduction is the literal transformation of the proximal stimuli under a human projected model of energy (wattage) to the electrostatic projected model of energy.

Remove the garbage. Just use plain wattage. Hell just take the thing into a manual picture editor and push the wattages around and see what cognitions emerge.

Burying any demonstration under garbage models and bogus conceptual frameworks will likely distance the work further from broader insights. OK* is absolute bullshit, so there’s no reason to go anywhere near it, and it will stack any conjecture / reasoning / conclusion under thick layers of said bullshit.

I cannot stress this enough; OK* is pure and utter nonsense, so stay as far away from it in these sorts of explorations. Focusing on wattage, and chrominance-luminance relations is going to be closer to the bare metal.

what do you mean by wattage in terms of the units to use? Just Y? The maximum value (aka the Value)?

We don’t need to conflate the surface of the problem too much because in most cases, our range of stimuli wattage can be sRGB / BT.709.

All RGB and CIE XYZ values are ultimately “energy”, in terms of wattage units. The CIE system smashes an input wattage into three dimensions, each uniform with respect to wattage. CIE XYZ is relative wattage as a stimuli specification.

EG: Punching in code value 0.0 0.5 0.0 is a normalized and relative encoded wattage, which is “decoded” by the display’s EOTF (commonly a pure 2.2 power function) per channel, to a relative wattage, and then gained by the underlying hardware. The quantity in watts comes from that value.
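The decoding step described above can be sketched in a couple of lines, assuming the display EOTF really is a plain per-channel 2.2 power function as stated:

```python
# Per-channel pure 2.2 power EOTF: normalized code value -> relative wattage.
def eotf_2_2(code_value):
    return code_value ** 2.2

# Decode the example code value 0.0 0.5 0.0 from above.
decoded = [eotf_2_2(c) for c in (0.0, 0.5, 0.0)]
print(decoded)  # middle channel decodes to roughly 0.2176
```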

CIE XYZ subdivides that wattage into useful units, where Y is relative luminance units. The other two are incredibly important, as they form a chrominance vector of sorts. Note this is not yet colour, as the colour is a cognitive computation. “Lighter” and “darker” is not and cannot rest in the encoded signal wattages. It is a cognitive piece of information, based on the decomposition that happens at our cognitive level. IE: How we decompose the forms presented in the pictorial depictions is what leads to our analysis of “What colour is what” in the relational sense.

OK, got it, that makes sense. For the purpose of my question you want me to use Y.
(I only need a specific way to determine the closest achromatic analog for my colors. That’s all I’m currently using OKL³ for; the ab parts don’t even come into play.)
It’s easy enough to switch to Y. The results are very nearly identical, though, other than for blue.
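The “closest achromatic analog via Y” mentioned above could look something like this, assuming linear BT.709 input and the standard BT.709 luminance weights (the function name is mine, not from the thread):

```python
# Standard BT.709 relative-luminance weights for linear RGB.
Y_WEIGHTS = (0.2126, 0.7152, 0.0722)

def achromatic_analog(rgb):
    """Return the gray (Y, Y, Y) with the same relative luminance as rgb."""
    y = sum(w * c for w, c in zip(Y_WEIGHTS, rgb))
    return (y, y, y)

print(achromatic_analog((0.25, 0.5, 0.75)))  # a gray near 0.465 per channel
```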

I was searching the net for what

means, and I found a very interesting paper

Speaking to the accuracy alone, I tested it here – yeah, it’s accurate to the hexes in the Wikipedia article. My intent was to have a traditional MacBeth Chart material that didn’t rely on an image texture (adds bulk when packed, easy to lose when not). I’m not gonna address the utility of a 1974 chart (with colors chosen for the long-term physical stability of the inks used) in digital rendering fifty years later.


The commenters on this thread don’t believe in hex values for color, just a heads-up, so you may be in for a barrage of (unwarranted) criticism


I have made two new EXRs for testing extreme but still valid colors:

https://www.dropbox.com/scl/fi/iwf37xjryn7lth402o16f/spectral_colors.exr?rlkey=evr822rpw4qdpbcqz5i20xiv0&dl=0
https://www.dropbox.com/scl/fi/hx53eg3usrkvcf0tqt4q1/spectral_colors_normalized.exr?rlkey=gozvzyl3owasm5kejzx9ickfl&dl=0

Two versions of the spectral locus of colors across a vast range of exposures (from -15 to 15, logarithmically)

The first takes the spectral locus directly, so the relative luminance of neighbouring wavelengths is as given by the color matching functions; the second is normalized to constant Y along any given row of pixels, so it’s much brighter, especially at the very extreme outer fringes.
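A sketch of how such an exposure sweep might be generated: each variant is the base color gained by 2**stop, with stops spaced evenly from −15 to +15. (The actual layout and stop spacing of the EXRs are assumptions here.)

```python
def exposure_sweep(rgb, lo=-15.0, hi=15.0, steps=31):
    """Return `steps` copies of rgb, each gained by 2**stop for
    stops spaced evenly between lo and hi (in photographic stops)."""
    stops = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return [[c * 2.0 ** s for c in rgb] for s in stops]

sweep = exposure_sweep([0.2, 0.5, 0.1])
print(sweep[0], sweep[-1])  # darkest and brightest variants
```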

Note that this follows the CIE 1931 2° observer data which, on one hand, is woefully outdated and has some really strange nonsense going on in the far-violet, but on the other is what sRGB is built upon.

and how AgX is doing:

AgX is just doing really well with this.
I can get significantly closer to AgX’s performance if I simply clip anything that’s negative according to BT.2020, though doing so means I also lose the hue. I need a better way to project onto a color space.

At the moment I can basically choose whether very extreme blues and violets break, or whether very extreme greens and teals do.
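The naive clip mentioned above (discarding anything negative after encoding into BT.2020) is trivial to write down, and the sketch makes the downside obvious: zeroing negative components moves the value into the gamut but throws away the out-of-gamut direction, which is where the hue information lived.

```python
def clip_negatives(rgb):
    """Naive gamut clip: zero out negative components of a linear triplet."""
    return [max(c, 0.0) for c in rgb]

# An out-of-gamut value loses its negative component entirely,
# shifting the resulting hue rather than preserving it.
print(clip_negatives([-0.1, 0.8, 0.3]))  # [0.0, 0.8, 0.3]
```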

They exist. :laughing:

The link from here to my topic popped up in my notifications, along with a question about its accuracy. When I realized what it was being considered for, I wanted to make clear that it was accurate to the data I used to make it (hex codes copied’n’pasted into a ColorRamp, over and over and over and . . . ). Sufficient to my needs; anybody else who finds it useful, feel free, if not then not.
