Dealing with ACES, AgX, sRGB

Here we go again.

The point Shevell and Knoblauch make is this: colour is not in the stimuli. It’s not in the signal. It’s a meatware computation.

I’m not sure why you are so hung up on this. If you are seeking wisdom, it’s wise to understand the fundamentals of how colour is constructed in the visual cognition system.

And chasing brand identity colours, designed by some dimwitted clown who doesn’t understand fuck all about the visual cognition system, is not the sort of thing that anyone thinks about when actually trying to solve the problems.

2 Likes

This entire thread is based upon the presumption that we’re dealing with the signal, and adjusting settings involved with that signal.

What happens in the brain and in the retina has very little practical application to what button someone needs to push in order to achieve a particular result. I’m not sure why you’re so hung up on that.

This Week, on Top Chef:

Does this food taste good?
No, needs more salt.
Are you sure?
Yes.
But how does the brain process what’s on the tongue? How are we sure it needs salt?
Please pack your knives and go.

3 Likes

So, would it streamline the conversations here if we only said ‘RGB stimulus’ and never said color again?

I’m assuming the vast majority of pixelpushers here take RGB stimulus arrays in the form of established ‘image’ formats, do stuff with them in Blender, and output them once again as RGB stimulus arrays. Many users are having difficulty getting output stimulus levels to match input stimulus levels.

I appreciate the intellectual rigor you’ve obviously devoted to this incredibly complex issue, but an ivory-tower intellectual hurling down missives to the workers waist-deep in the mud of actually getting shit done doesn’t help anyone who is trying to get shit done, nor does it help the intellectual (unless he’s just looking to boost his ego by being smarter than those dummies down there). In fact it’s almost exactly the type of thing that can drive people to aggressive anti-intellectualism.

If someone were having trouble tuning their banjo, would it be helpful for me to explain to them the specifics of Schrödinger’s equation and how really, nothing is ever ‘in tune’, as the harmonics of the environment inescapably alter the resonances of the entire system? Telling them that tuning is a lie invented by idiot plinkers doesn’t help anyone, even if it is technically true.

7 Likes

Unfortunately, the field of color study is crammed full of people who have no interest beyond proving their intellectual superiority to everyone else. It seems to be a field that lends itself well to smugness: it’s fairly esoteric, has complex jargon, deals with advanced math, and perhaps most importantly, lacks consensus on pretty much anything. It’s almost entirely theoretical.

Thus, you’re entirely right (assuming the point of this thread and the many others is to actually find the right way to do things), but that isn’t the point of this thread. The point is to show off. There’s no reason for this thread to be 700 replies long, or the other color threads to be 1,500+, until you realize the reason is to continually emphasize how smart some people are and how dumb everyone else is.

Meanwhile, the workers deep in the mud continue to press forward and do their work.

It doesn’t take much to see this: this thread is packed with more insults, profanity, condescension, and arrogance than any other thread on this site.

This is not the language of someone who gives a single darn about finding solutions; this is the language of someone with something to prove:

Etc.

And my personal favorite:

7 Likes

Is this roughly what you had in mind? I’m going with OKLab as a kind of stand-in for something opponency-ish, and I’m using the simplest possible finite difference method to get my gradient. Both of those things could surely be improved. For one, OKLab isn’t even made for open-domain data to begin with.

As far as I can tell it makes no sense to take the logarithm of the color directions (as they can explicitly go negative), so I’m only taking the logarithm in the L coordinate, while the a and b coordinates still get their gradient applied.
I suppose if I converted to cylindrical coordinates, I could also take the logarithm in the colorfulness and just leave the color direction as the sole linear part. But I suspect taking the logarithm in the L coordinate (or Y or whatever) is what you’re after anyway?
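Roughly this, expressed as numpy rather than compositor nodes (a minimal sketch only; it assumes Björn Ottosson’s published OKLab matrices for linear sRGB input, clamps L before the log, and uses wrap-around forward differences):

```python
import numpy as np

# Björn Ottosson's published OKLab fit for linear sRGB input.
M1 = np.array([[0.4122214708, 0.5363325363, 0.0514459929],
               [0.2119034982, 0.6806995451, 0.1073969566],
               [0.0883024619, 0.2817188376, 0.6299787005]])
M2 = np.array([[0.2104542553,  0.7936177850, -0.0040720468],
               [1.9779984951, -2.4285922050,  0.4505937099],
               [0.0259040371,  0.7827717662, -0.8086757660]])

def linear_srgb_to_oklab(rgb):
    lms = rgb @ M1.T
    return np.cbrt(lms) @ M2.T       # cbrt handles negative (open domain) values

def log_L_gradient(rgb, eps=1e-6):
    """rgb: (H, W, 3) scene-linear image. Returns forward-difference x/y
    gradients of (log L, a, b): log on L only, a and b stay linear."""
    lab = linear_srgb_to_oklab(rgb)
    field = np.concatenate(
        [np.log(np.maximum(lab[..., :1], eps)), lab[..., 1:]], axis=-1)
    gx = np.roll(field, -1, axis=1) - field     # simplest finite difference
    gy = np.roll(field, -1, axis=0) - field
    return gx, gy
```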

So in theory I’d now be at the stage of “modify gradient”, and after that, the screened Poisson integration is probably not reasonably doable within Blender’s compositor.
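Outside the compositor, the solve itself is short. A minimal FFT-based sketch of the screened Poisson step (assuming periodic boundaries, forward-difference gradients as above, applied per channel):

```python
import numpy as np

def screened_poisson(f, gx, gy, alpha=0.1):
    """Solve (alpha - laplacian) u = alpha*f - div(g) on a periodic grid.

    f      : (H, W) data term, e.g. the unmodified log-L channel
    gx, gy : (H, W) modified forward-difference gradients
    alpha  : screening weight; larger values pull the result back towards f
    """
    H, W = f.shape
    # Backward-difference divergence, consistent with forward-difference gradients.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))

    # Eigenvalues of the negative 5-point Laplacian under the DFT.
    ky = np.fft.fftfreq(H)[:, None]
    kx = np.fft.fftfreq(W)[None, :]
    lam = 4.0 * np.sin(np.pi * ky) ** 2 + 4.0 * np.sin(np.pi * kx) ** 2

    u_hat = np.fft.fft2(alpha * f - div) / (alpha + lam)
    return np.real(np.fft.ifft2(u_hat))
```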


If I just undo the transform without doing anything else, I end up with:


(both together are the gradient taken in logarithmic opponency space but reprojected into RGB)

and the original image (formed via AgX) looks like

Obviously the goal here would be to use those gradients to somehow aid image formation rather than relying on AgX as it currently stands. So this is just for visualization purposes and most certainly unfinished.

I miss how much simpler it was to talk about loop quantum gravity or string theory! It would certainly be easier to understand the 26 dimensions needed to make bosonic string theory work!
Ahhhhh those were good old times :grinning:

3 Likes

Color could be simple to talk about, but the experts in the field (at least on this forum) deliberately go out of their way to make it not so.

4 Likes

My rig isn’t working, the skin is deforming in a bad way.
That isn’t skin.
What?
Those are just pixels.
Erm…
Until you understand you aren’t working with skin, you won’t be able to animate the character.

2 Likes

Wouldn’t call myself an expert. I’m trying to translate whenever I can but I don’t always follow either.

I don’t yet know what the end result of Troy’s current plan is supposed to be. Normally, color management effectively only maps data values to output colors, with no use of positional or local information at all, so I’m really not sure how this would even be implemented. It seems less color-management-y and more compositor-y to me.

As for language such as

I can only speak for myself, and you are right: I could definitely have said that more diplomatically, and I apologize for that. I’ll try to be more careful. I did explain at length in that post what I meant by it, though. I don’t think that response was needlessly obtuse otherwise.

Ultimately the issue of complexity seems to come down to a mismatch between how color is encoded in a file, how it is produced on a screen, and how it ends up being perceived when it hits your retina (which comes with a basically infinite list of extra caveats, like the viewing conditions or how well you can perceive color in the first place, though not all of those have been considered here).

There’s no end plan. It’s just showing off. Always has been.

I used to read over these discussions with great interest- about 3000 posts later, I got tired of being derided and mocked, and seeing other genuinely interested artists suffer the same. Why do you think participation in these threads always slowly drops off to just the same five-ish people? I can give you a hint, if you need it :wink:

Anyway, I’ve made my point, so y’all can go back to your discussion about color. Or “stimulus”, if you prefer

5 Likes

You’ve proved my point far more efficiently and concisely than I ever could, thanks :slight_smile:

7 Likes

You can use basic RGB opponency in the linear sense. Just as a start, try R-G, and (R+G)-B for the other angle.

That way it stays in the energy domain.

Then log. Then try a power function on the logged “dimensionless” ratios, and construct back.
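As a rough numpy sketch of that, with R+G+B standing in as the achromatic axis (one possible reading only; exactly where the log and the power land is open to taste):

```python
import numpy as np

def rgb_opponency_roundtrip(rgb, power=1.0, eps=1e-6):
    """Illustrative only: linear RGB opponency, log on the achromatic axis,
    a power on the dimensionless opponent ratios, then construct back."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Linear opponency in the energy domain.
    opp1 = r - g
    opp2 = (r + g) - b
    ach = r + g + b                      # assumed achromatic axis

    # Log the achromatic axis; the opponent axes can go negative, so they are
    # kept as dimensionless ratios against the achromatic sum instead.
    log_ach = np.log(np.maximum(ach, eps))
    c1 = opp1 / np.maximum(ach, eps)
    c2 = opp2 / np.maximum(ach, eps)

    # One reading of "a power function on the logged dimensionless ratios":
    # a signed power applied to the ratios.
    c1 = np.sign(c1) * np.abs(c1) ** power
    c2 = np.sign(c2) * np.abs(c2) ** power

    # Construct back to RGB.
    ach = np.exp(log_ach)
    opp1, opp2 = c1 * ach, c2 * ach
    rg = (ach + opp2) / 2.0              # r + g
    b_out = ach - rg
    r_out = (rg + opp1) / 2.0
    g_out = rg - r_out
    return np.stack([r_out, g_out, b_out], axis=-1)
```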

For the sake of argument, I’ll agree that’s completely true. Also, building such tools and systems and algorithms is not what we’re doing here.

A room full of people at the Academy came up with ACES, which you think is completely idiotic, worthless, useless, and pure garbage. Which may be true, but I don’t think a random sampling of people on Blender Artists is going to accomplish something that the Academy has not.

When we are speaking of “depicting something in a picture”, I would suggest it is.

As much as folks like to say silly things like “It’s simple” and “We can talk about colour really simply but we aren’t in this thread”, they are missing the point.

When we form a picture from the energy data, there are general “rules” that are required. For example, when we think about “exposure” we multiply the values in the dataset. But does it not seem peculiar that in terms of the pictorial depiction, the values are in fact being offset? This literally never happens in generic ecological cognition, yet in a pictorial depiction, it is a mandatory mechanism.
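As a trivial numpy illustration of that multiply-versus-offset point (nothing more than the log identity at work):

```python
import numpy as np

# Exposure is a multiply on scene-linear energy, but in the log-like domain
# where the picture is formed it shows up as a constant offset:
#   log2(k * E) = log2(E) + log2(k)
scene_linear = np.array([0.05, 0.18, 1.0, 8.0])
stops = 1.0                              # "+1 stop of exposure"
gain = 2.0 ** stops

print(np.log2(gain * scene_linear))      # multiplied, then logged
print(np.log2(scene_linear) + stops)     # logged, then offset -- identical
```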

When we suggest handling Brand Identity Colours as stimuli, we actually break the pictorial depiction because of this. Coca Cola “red”, when expressed as stimuli disconnected from the embedded field, can be computed as a different colour, or form boundary conditions where none are predicted, or suddenly appear emissive or otherwise.

For example, watch these two sweeps closely. These were authored by someone who followed up on some of the discussion here.

What we can see is that in one of the demonstrations, there’s a cognitive fission “tear” in the depiction of the translucent “cup”. This is literally introduced by the picture formation chain and the chase for “higher purity”, again thinking along the erroneous lines of stimulus specifications.


I bet absolutely everyone can spot the cognitive tearing. Every single thing in this thread pertains to pictorial depictions¹, and the relationships of the spatiotemporal fields.

¹ Targeted Fuck Offs notwithstanding.

1 Like

I really tried to select a specific, relevant example to compare here and this is what you come back with?

I’m sorry that your emotional reactions get in the way of having an intelligent conversation.

I respect you a lot and I have diligently tried to learn the lessons that can be applied to the tools as they exist, but the systematic nihilism of upending every single model because it isn’t correct (even though it might be useful) really isn’t helping.

Pushing for improving tools is great. Insulting users for using existing tools that don’t meet your standard of ‘correctness’ doesn’t help anything.

5 Likes

To be fair, a lot of known geniuses throughout the ages have had their ‘madness’ moments in the quest for greater advances and understanding in science and technology. Thomas Edison and Nikola Tesla, for instance, were not always the kind of people you would want to hang around with in a casual way.

Troy was truly instrumental in pushing Blender to the forefront of color science and robust image formation, but I am concerned that we are now getting into the territory of invisible issues that can only be detected by precise color measurements and will not produce a visible improvement for artists. Even more so, fixing those things might bring too much code complexity, or too much of a performance hit, for the devs to accept. More likely, what we need now is improved handling of AgX in the compositor (as a lot of the color operations there are aging and out of date).

Perhaps if you really tortured the image through compositing, you might see improvements from some of the proposed next steps, but not a lot of people will grade that heavily.

2 Likes

BRUH

Pardon my language. It’s 3 AM and I’m kind of drunk. But to me the idea of increasing exposure with respect to the formed image always seemed idiotic if we consider that the amount of light is what it is and doesn’t change after the image is formed.

If our theoretical ideal camera should replicate how human vision works, then the amount of light could be decreased to mimic decreasing pupil size and cone sensitivity before the image is formed.
The only scenario for ‘increasing’ the amount of light should take place in low light, as rods increase their sensitivity in that condition.
All of this excludes the scenario where we get a burned-out or black image; with current models both are not only achievable but pretty common.

The spectrum is separated by receptors after all of the shit above, and the image formation model should reflect this and do it after the “exposure” correction, not before. That means adjusting individual “channels” (pssst, hey kid! they exist only in your head!) before the exposure correction could be considered a criminal offence, if we are talking about getting close to building an anatomically and cognitively correct model.

I still need an always-positive (for real situations) Y-like coordinate to take the log of. Do I just take Y as the third axis (which would take me out of RGB), or do I take G, or the sum R+G+B, or what?
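For reference, the two non-G candidates as I understand them, in numpy terms (both stay positive for ordinary non-negative linear RGB; which one is actually wanted is the question):

```python
import numpy as np

def luminance_bt709(rgb):
    # Y from the BT.709/sRGB luminance weights -- steps outside the RGB axes.
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def achromatic_sum(rgb):
    # Plain channel sum, which keeps everything expressed in RGB terms.
    return rgb.sum(axis=-1)
```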

I think you misread the point of my absurd example.

When folks say “colour management” and wave their hands around, it’s not useful. What is colour management? Is making a picture “colour management”, or is it a separate process? These are the sorts of questions that lead to what a good number of folks are using every day in Blender. It might seem like philosophy, but without addressing these ideas at the granular level, there’s no Filmic or any of the other nonsense.

When someone says that XXX does something that it in no uncertain terms does not, that seems like a problem.

I am 100% on the side of kludging. That is the craftsperson part of this that currently is trapped in between some often unfortunate conditions. That sucks. Most folks know I am always trying to fight for the craftsperson. Always.

If, however, we want to overcome this kludging, it’s likely going to start with trying to wrap our collective minds around abandoning the conceptual frameworks that are in no uncertain terms, completely not “working”. It is going to be challenging if folks insist on some mythical protocol that doesn’t work. That’s all.

Can an author overcome a crappy protocol and a crappy conceptual framework? Of course. Humans are pretty damn amazing. But let’s put the credit back in the people wrestling some of this garbage to the ground, and not in the mysticism of some protocol or standard or framework that did not reduce their labour or craft.

Not sure if it’s still relevant, but I think somebody here was looking for the AgX DCTL baked into a LUT. Tested in OCIOFileTransform nodes in Natron and Resolve (Fusion).
https://we.tl/t-ISlfQMX7bp