I'm struggling with the new AgX look

Feel free to follow the development process, test the concepts, and see if a single claim holds any degree of veracity from any and all parties. That’s the benchmark, nothing else. Happy to discuss details, to whatever length is desired.

There’s no preference here. The subject is broader with respect to forming pictures, and upon that ground, one can make very reasonable claims.

For example, suggesting that the default sRGB inverse EOTF encoding is “preferred” leaves every picture author to arrive at their own picture formation. That’s a massive cognitive burden, and it is incredibly challenging to get to even a remotely acceptable outcome that way. Compare that against the ability to form an acceptable picture first, and then manipulate the values either pre or post picture formation in a manner that suits the authorship.

One has a tremendous downside; the other, at worst, can be considered a minor inconvenience. Remember, this is about the density of a signal’s information, nothing more. That information is not in the render or camera colourimetry, but rather in the mechanic that forms the picture. This has a significant impact not only on generic “click and go” render beginners, but also on more experienced non-physically-based picture processing, stylized and manipulated paths, and photographic-like picture construction.

A pivoted exponential function will potentially work here, by way of leaning into the channel-by-channel mechanics. The pivoted exponential function is rather easy to achieve in nodes, either before or after the picture formation. I’ve outlined how to achieve pivoted exponential “contrast” adjustments elsewhere here, as have others.
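
For readers who want to try it, here is a minimal sketch of what I mean by a pivoted exponential (power-function) “contrast”, assuming open-domain RGB and a middle-grey pivot of 0.18; the function name and defaults are purely illustrative, not any specific node setup:

```python
import numpy as np

def pivoted_contrast(rgb, contrast=1.2, pivot=0.18):
    """Pivoted power "contrast": values at the pivot stay put, values above and
    below are pushed apart (contrast > 1) or together (contrast < 1), per channel."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return pivot * np.power(np.maximum(rgb, 0.0) / pivot, contrast)

# Middle grey is unchanged; darks get darker, brights get brighter.
print(pivoted_contrast([0.02, 0.18, 1.0]))
```

The same shape works on closed-domain values after picture formation, just with a different pivot choice.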

I apologize, I do not understand this question.

1 Like

Can you define this? I’m not sure what renders the images output by filmic fundamentally unacceptable.

The reason I ask is that I have seen several users finding the results of AgX unacceptable for their work. I don’t know if you are using a special technical definition of ‘acceptability’ that us layfolk don’t understand, but it kinda comes across as gaslighting.

The pro-AgX crowd really hasn’t been doing a great job of demonstrating how much better it is (particularly for users like the OP), other than just stating unequivocally that it simply is.

3 Likes

The example I typically give is something like an R=100.0 G=100.0 B=100.0 “albedo” or “reflectance” sphere that is “illuminated” with R=0.0 G=0.0 B=100.0 units of tristimulus. In terms of tristimulus values in the render, the entire shape is dominated with the pure third channel intensities. This is the demarcation point of what I would broadly describe as “render colourimetry” or “camera colourimetry” for a digital camera.

The question is, how should we take these relative magnitudes and form a picture from them?

Suppose we harness a channel-by-channel “curve” to map the arbitrary magnitudes to some closed-domain magnitudes. In this case, the 0.0 units in the first and second channels would perhaps end up “mapped” to the 0.0 display medium value, while the arbitrary 100.0 units of the third channel might get mapped to the 100% third channel of the display medium. That is, we end up with a maximal-purity “blue” sphere, with a graduation of third-channel tristimulus down into the “shadow”. All is well and good! Or is it?
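
To make that mechanic concrete, here is a toy numerical sketch of my own; the x / (x + 1) shape is only a stand-in curve, not the Filmic or AgX curve:

```python
import numpy as np

def per_channel_curve(rgb):
    # Stand-in channel-by-channel curve (x / (x + 1)), purely to illustrate the
    # mechanic of mapping open-domain magnitudes into a closed domain.
    rgb = np.asarray(rgb, dtype=np.float64)
    return rgb / (rgb + 1.0)

# The sphere's render tristimulus from above: pure third channel.
print(per_channel_curve([0.0, 0.0, 100.0]))    # ~[0.0, 0.0, 0.99]: maximal-purity "blue"
# Crank the illumination and the third channel merely saturates toward 1.0,
# while the first two channels stay at exactly 0.0: the purity never attenuates.
print(per_channel_curve([0.0, 0.0, 10000.0]))
```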

Imagine we either increase the intensity of the “illumination” or move our sphere closer to the “light source”. Now we end up with a magnitude that exceeds our channel-by-channel curve mapping. The famous “clip” is one way of thinking about this. But another way to think about the problem is to ask ourselves what should happen? This might seem like a ridiculous question, but I would assert there is a massive depth of nuance, complexity, and a jaw dropping cascade of ever more perplexing insights and questions.

If we go to some extreme, we might say perhaps 100000.0 units of intensity should be “white”. But has anyone asked why? This literally never happens in the case of an intensely bright and pure thing. Heck, laser pointers and motion picture lights can be incredibly pure, and all we will see is an uncomfortably intense and pure cognition of some colour. So our first huge question is “Why attenuate the purity to white?”

For a practical visual example, here’s the diver picture formed from the Alexa S35 camera colourimetry, in a localised test strip format:

A few points worth highlighting:

  • Exactly zero percent of the colourimetry of the camera encoding is present in the formed picture above.
  • As we progress from left to right, the camera colourimetry is shifted in axial colourimetric angle, shifted in purity (both amplified and attenuated), and shifted in overall intensity of the tristimulus.
  • As we progress from left to right, we should be paying attention to how we cognize what it is that the author has presented. This isn’t a “diver”, but simply an articulation of energy.

Now take a look at the strips toward the right. I promise that exactly zero people on the face of the planet have ever experienced this “diver” as it exists in the picture. I also promise that if we were present adjacent to the camera that we would not have seen anything even remotely close to the stimulus present in the picture, no matter how intense we made the light beacon.

The key point I’m trying to make is that this is a picture, and it exists as a unique human crafted artifact that is parsed by the folks reading this using an incredibly proficient visual literacy. In fact, our ability to read pictures is so proficient, that almost no one stops to think about the whole “Wait a minute… my generic ecological perception never attenuates ‘bright’ sources toward a medium ‘white’. What is going on here…”

I’d go a step further and suggest that not only are we completely unaware of this absolutely peculiar mechanic, we also will insist on it. That is, if we loop back to the rendered sphere, the sphere will “look weird” if we fail to attenuate the purity.


(sphere render examples omitted)

I’d like to draw attention to the question “Why?” That is, specifically, what is going wrong in these pictures? It’s easy to say “Well they are banding!” Or “Well it’s posterized!” Or “Oh it’s clipped!” Or some other nonsense that doesn’t actually inform us with any understanding as to why we cognize something “wrong”. This is the key question to start with when we think about Picture Formation as a complex process.

To give one viable answer, we might look to neurophysiology. That is, our visual system is broadly anchored in a differential model, where inhibition “subtracts” A from B, leaving us with a signal that exists purely as a difference, rather than a magnitude like a camera. Up and out of these differences, we somehow “threshold” into boundary conditions.

The sphere above gives us some notion of an incremental difference as we progress from the “shadow” toward the “light”. At some point, however, this staged difference “hits a wall”, and we now get no difference. That is, we are cognizing some sort of a boundary condition. This is all well and good, except that as we read and parse the picture articulation, we are given clues that there should likely be no such boundary. Something weird is going on!
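
To make the “hits a wall” idea concrete, here is a toy numerical sketch of my own, not a neurophysiological model: a graded intensity ramp that clips, and the step-to-step differences a purely differential reading would be left with:

```python
import numpy as np

# A graded "shadow to light" ramp that exceeds the channel maximum and clips.
ramp = np.clip(np.linspace(0.0, 1.5, 16), 0.0, 1.0)

# Only the differences survive a differential reading of the signal.
steps = np.diff(ramp)

print(ramp)
print(steps)  # constant non-zero steps, then a run of zeros where the clip begins
```

That abrupt run of zero differences against the graded region is the artificial boundary condition being described.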

I like to have folks ponder the following picture, because it sort of takes the idea of “boundary condition” to an extreme, and helps us to understand that we cognitively cluster boundary conditions based on other clues. Folks here probably have seen it before, but it is relevant to the subject of picture formation:

(image omitted)

The orientation and the spatiotemporal boundary conditions, depending on the picture reader, may be “scised”, or “decomposed” into cognitive assemblies. Groupings, clusters, part of the same versus other, etc. A sort of “layering” mechanic that assembles these cognitive blobs into higher order meaning.

All of this is to say that when we speak of “Durr let’s make it colourful”, we have to at least be aware that within a picture, things are not quite as straightforward. Ultimately, the authorship must govern all in terms of what the author seeks to voice, but for a default state, it is at least reasonable to suggest that the majority of image authors would desire an image that “isn’t broken”, where “broken” might mean artificial boundary conditions arising in the articulation where generally no such boundary should arise.

I would then suggest that this is merely a tip of a much larger iceberg.

Let’s look at that diver picture one more time:

I would submit that while “Durr brighter pure sources go white” is indeed not something that happens in generic ecological perception, there is a cognitive mechanic that happens literally for everyone with vision, perhaps thousands of times per day. I’ll let the readers ponder what that mechanic is. What is perplexing is that if this is the mechanic at work, there’s a sound neurophysiological signal relationship that seems to be potentially tightly entwined with the phenomenon.

It’s not gaslighting at all.

To understand what is “unacceptable” we would need to pursue the ideas above a bit further. I’ve heard many folks suggest that a reasonable entry point to picture formation is undesirable for NPR based work, for example. Nothing could be further from the truth, however.

If we take the sphere example above, and consider it as an NPR-based entry point, we have many issues that an interstitial picture formation can aid with. From the render side of things, the intensities would be what they are, and anyone writing a complex NPR algorithm would need to figure out what to do. Think about perhaps a basic “Let’s make the ball the same stimulus”. This would be easily achievable with a decomposition into HSV, for example, and picking one of the H positions to fill the animation cel-like render. But what if the NPR wanted shading? Now we’d need to decompose the intensities into thresholded regions. Also doable from the intensity data, somewhat.

But now imagine we want an NPR “highlight”, or several layers of animation cel-like layered outlines? If we take a basic “clip” and make it “white”, that’s one fussy method. But if we instead start from an interstitial picture that has been reasonably formed, we now have an entire palette of purities to threshold from, and can form our cel-like highlight elliptical shapes. Key point: how we decompose the sphere into those purity-attenuated cel shapes has everything to do with picture formation. An NPR author would need to not only write the algorithm that draws the cel shapes of the progression of the shading, but they would also be faced with attenuating the purity in some manner. Which is what we already have to address with a picture formation stage.
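
As a rough sketch of that thresholding idea (hypothetical on my part, not any actual NPR pipeline), one could quantize the value of an already-formed, display-referred pixel into a few flat cel bands while keeping the hue and purity that the formation produced:

```python
import colorsys
import math

def cel_quantize(rgb, levels=3):
    # Quantize the value of a formed (0..1) pixel into flat cel-like bands,
    # keeping the hue and saturation that the picture formation produced.
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    v_banded = math.floor(v * levels) / max(levels - 1, 1)
    return colorsys.hsv_to_rgb(h, s, min(v_banded, 1.0))

# Three shading bands from a formed blue-ish pixel.
print(cel_quantize((0.15, 0.25, 0.85)))
```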

All of this is to say that nothing is yet perfect in terms of how we form pictures. I would also stress that while there’s no clear solution, there’s also a very well trodden path of failure. If we fail to attenuate purity, our visual cognition struggles with parsing the picture articulation, which leads to “That looks bad”. Tasking an author with writing an entire picture formation chain from the ground up is a burden that I wouldn’t wish on anyone. If a reasonable entry point can be achieved, then like the NPR example, the results can be bent and twisted with lower labour to meet an authorial intention.

I’d say there have been hundreds of demonstrations at this point as to why the picture formation is important. I’d also say that compared to the nightmare-fuel furor of the original Filmic integration, the feedback has been nothing short of shockingly positive.

The issue above is a very nuanced thing about “fires” in pictures. And like the whole purity mechanic, it is fair to say that the subject of chromaticity angular shifts is one that is complex, and subject to a good deal of Goldilocks “balancing” for the various authorships that use Blender. As I’ve said many times before, I feel that if the AgX-inspired configuration were my own, I’d swing the “red” hue more toward yellow more rapidly, for reasons I won’t get into. That said, in the discussions on integration and the people involved with arranging and designing the configuration that is in the main branch, the decision was made to not swing as far as I would. This was to facilitate folks who do product design work and such, where some feel too much of a swing is problematic.

That’s about all I can speak to in terms of the configuration and the old Salmon Fire debate. That said, this is why grading exists. Blender desperately needs a grading workspace, with vastly better tools and a node setup.

TL;DR: I mostly agree with the general sentiment. I also believe that the general sentiment with respect to Salmon Fire can be swung more or less reasonably to meet an author’s desire via a basic grade pass. I believe @pixelgrip and others have demonstrated how to do so using different techniques.

To the larger discussion about picture formation, I believe that despite the rather minor detail about how much a given colourimetric angle should swing along intensity, the result that ended up in Blender is more than acceptable and a gargantuan leap forward in terms of picture quality. The density of colourimetric picture information that ends up in the formed picture is reasonably solid, and a great base for further manipulations before or after the picture formation.

Here’s to hoping this tremendously useless barf of text finds at least one mind that might become more interested in the incredible complexity of pictures and how we make them.

7 Likes

So, I can see how the way that AgX renders color avoids some of the pitfalls of Filmic, and has a much more reasonable output over a larger ‘gamut’. I can see how this makes the process of color grading possible in situations where it would have been impossible with Filmic (once you start clipping, or sinking into the N6 hole, there’s no recovery).

It seems like almost all concerns raised about AgX can be solved by color grading. The resistance I am seeing is that a lot of people just don’t want to color grade their images. They feel like they were getting reasonable output straight out of Cycles with Filmic, and now they are getting output that looks unreasonable to them with AgX.

It seems to me that AgX produces output with far fewer pitfalls in post processing, at the expense of more mandatory post processing, with the admittedly inferior post processing tools built into Blender.

Here’s my semi-related anecdote: I, like many new CG artists, was hesitant to get into UV mapped textures. I just wanted to do everything procedurally in order to avoid needing to unwrap my models and needing to have custom textures saved for each object. UV mapping is more complex than just dropping in a procedural texture. Once I got over that hump, it definitely improved the quality of my work, but it’s still an additional series of steps that I have to deal with each time I need to texture an object.

Particularly for CG generalists, adding more steps to the list of things you need to do to have a good output is a little daunting. UV mapping is a skill like anything else, and in many large studios, that can just be a person’s job, much like color grading.

I know that if I practice my color grading, I will get better at the process. I know that if I am color grading my images, I will get better results across the board. But it’s one more hat I have to wear now. It’s one more skillset I have to learn. It’s one more step in the process before I can hand off a finished deliverable to the client.

I can see how AgX resolves some of the problematic corner cases, but for myself and many other users, those corner cases haven’t been a problem. But mandatory color grading to make the render look like what I expect it to look like IS a problem.

But again, I know that color grading my images will improve the quality of my work, and using AgX will make color grading easier, so I am begrudgingly accepting the opportunity to grow my skillset.

TLDR: some people were happy with the out-of-the-box look of Filmic, and didn’t bump up against the limitations. So to have AgX solve limitations that weren’t an issue for them, while requiring more effort to reach the look they wanted, seems like a downgrade. But if it forces more people to learn about color grading, then it’s probably a good thing overall.

4 Likes

Are the color grading tools in Blender good enough, such that the people who care about it are using Blender for it?

I do not know; I’ve only done the tiniest amount of color grading, and only ever in Blender. It seemed hard, but I’m certain my lack of experience has more of an effect than the quality of the tools.

Most AgX proponents I talked to don’t seem to use Blender for color grading. And when asked how this could be done in Blender, there wasn’t much of a usable answer.

1 Like

I wonder if artists don’t understand what a tone mapper does. Tone mappers like Filmic or AgX are needed because you want to display a high dynamic range scene as a standard dynamic range sRGB image, so that you can display your HDR scene on your sRGB monitor.

Btw, the sRGB transfer function is roughly a gamma correction formula. This transfer function is needed to display your linear scene on your sRGB monitor.
But that “gamma correction” is not tone mapping like Filmic or AgX; those are useful for scenes with high dynamic range lighting, such as HDRI lighting or lighting with real sun strength values. These tone mappers are needed to compress the dynamic range of the scene. As Troy described, you cannot have a primary higher than 1. This works fine for sRGB if your lighting range never goes above 1. But what if you have lighting of 10 or 100 or 5000?
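
To put some numbers on that (a toy comparison of my own; the x / (1 + x) curve below is just a simple Reinhard-style stand-in, not how Filmic or AgX actually work):

```python
def srgb_encode(x):
    # The standard sRGB encoding ("gamma"): it only redistributes values that are
    # already within 0..1; it does not compress anything above 1.0.
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def toy_tonemap(x):
    # A minimal dynamic range compressor (Reinhard-style stand-in): intensities
    # of 10, 100, or 5000 all land somewhere below 1.0 instead of clipping.
    return x / (1.0 + x)

for intensity in (0.5, 1.0, 10.0, 100.0, 5000.0):
    print(intensity, srgb_encode(min(intensity, 1.0)), toy_tonemap(intensity))
```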

And they are incredibly intuitive to use when they preserve the hue. So intuitive that you may not even use color grading.

You are kidding, right? Nobody said that Filmic or AgX is perfect. They are not.

No, I am not kidding. After all, AgX is the default. So everyone gets the AgX look by default. And as we can see, there are quite a few posts of people who are confused by the AgX look because they can’t get the results they are looking for. And as far as I can tell, there isn’t a good solution within Blender to get better control over the AgX look. If you have a good solution which provides flexibility, I am very interested in it!

1 Like

I don’t know why AgX is the default and not Standard. But this is not a problem. You can switch to Standard CM at any time and save it in your startup file if you want.

You choose your CM for the case you need. If you render NPR sRGB all the time, then save Standard in your startup file.

If you render with high dynamic range lighting, then use AgX or Filmic, or export your render as EXR and do the grading in Resolve.

Defaults matter. Especially if you don’t have the tools to color grade in Blender, then something like AgX might not be a suitable default.

1 Like

If this is your problem, then you have to debate with the devs about what they think should be the standard. You would have to start a vote or whatever at rightclickselect. Good luck.

No, defaults don’t matter.
Haven’t we already established this?
Default color of every shader is white…
Everybody is changing it to the color they want without even thinking about it.
Eevee is the default renderer.
I don’t use Eevee. Many people don’t use it and use Cycles instead.
Have you ever heard somebody complaining about Eevee as a default renderer?
It’s a non-issue.
There has to be one default setting, but its usefulness is directly dependent on what you want to do.

5 Likes

Good point; you hear nobody complaining about having to activate Node Wrangler as an add-on, or about other preferences.

1 Like

This was essentially decided by the people who created AgX. If you look at their conversation with the developers, they didn’t mention the lack of color grading tools in Blender, how that makes it more difficult to deal with AgX in Blender, or anything like that. They just showed some pretty pictures and some technical ones showing the advantages over Filmic.

Sorry, I missed that memo.
I have been told by the AgX proponents, that it is important that AgX is the default, because it is better and leads to better Blender artists and some other arguments. They didn’t get the memo either.

1 Like

Discussing color grading and AgX-related topics is fine, but discussing the laziness of not setting up your own startup file?

I am just realizing that each and every time I create a new blend file, I manually turn down the viewport samples to 32 or 64 instead of 1024, and then I turn on the OptiX denoiser.
I should create a better startup file.

I would reject that notion too.
Like I said, SOMETHING has to be the default.
AgX is new and has some advantages so the BF made the new shiny thing the default.
I don’t have a problem with it and neither should you.

I am skeptical such an archetype exists as folks spend countless hours picking material colours in an endless evaluation of their work. Like it or not, that’s part of the “grading”. At any rate, if we pretend this archetype exists, and again I do not believe such a persona exists for the most part, we can’t worry about folks who aren’t keen on the craft of pictures. As far as I am concerned, that would be a precisely worthless gaggle of less than worthless voices were such an archetype to exist.

I’d rather focus on people who care about making the pictures in the first place, whom also happen to be the ones that should be valued in the community.

That’s the kicker; when capable folks like yourself begin to explore things in a community like Blender, those results cascade out. You share thoughts and understanding as well as techniques. This is a wonderful thing in the bigger picture, at what is a costly, albeit worthwhile, investment of your time and labour. Folks like you also drive the idea of a more robust nodal colourist workspace into reality as well. It’s just part of growing a culture in a healthy way.

Don’t forget… this was not always this way. In fact, I have close to a decade of frightening hate mail to provide some anecdotal evidence. Folks here would probably laugh at some of the mail. :rofl:

I would scream ”Hell no”. I’ve advocated for a colourist nodal space forever, and perhaps this could be the beginning of such a thing taking shape in Blender. There’s some incredible research worth taking advantage of, and the person who leads that charge in the not too distant future might be reading this thread! We just don’t know! Discussions are helluva healthy here.

Try it. I promise that over a random smattering of 1000 pictures, the authorial control and results will become clearer.

And as @SterlingRoth has suggested, with the right minds, this could lead to something larger. It’s hard to predict, but the next amazing tool around colour and picture authorship is going to manifest out of the nothingness. I’d add that the AgX result also makes Spectral Rendering tolerable, which in and of itself is a reality lurking on the closer horizon. But that too depends on the community and their labour.

6 Likes