Feedback / Development: Filmic, Baby Step to a V2?

Not sure what that means, but here is a comparison after I corrected the nodes manually:


I guess the woods look a bit dimmer?

Not sure what this means. It only falls back to the default role if the “input colorspace” field on the node is empty. I am not sure why, but the classroom scene does not have that behavior.

Also, the View Transform falls back to sRGB OETF by default across all scenes.

It would suggest that textures aren’t finding their appropriate descriptors for colour assignment. It would be good to know what those are from the command line perhaps.
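Something along these lines, run as `blender -b yourscene.blend -P dump_colorspaces.py` (the script name is just a placeholder), should print what each image is actually tagged as, plus the scene’s display and view settings:

```python
# dump_colorspaces.py -- rough diagnostic sketch, run from the command line:
#   blender -b yourscene.blend -P dump_colorspaces.py
# Prints the colour space assigned to every image datablock, plus each
# scene's display device, view transform, and look, to help spot textures
# that fell back to the default role.
import bpy

for img in bpy.data.images:
    # colorspace_settings.name is the "Input Color Space" shown on the node
    print(f"{img.name!r}: colorspace = {img.colorspace_settings.name!r}")

for scene in bpy.data.scenes:
    ds = scene.display_settings
    vs = scene.view_settings
    print(f"Scene {scene.name!r}: display = {ds.display_device!r}, "
          f"view = {vs.view_transform!r}, look = {vs.look!r}")
```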

That’s fine, as Blender’s configuration is a mess, and this is an easy change for the purposes of testing. It’s not so easy changing hundreds of textures, so that’s one thing I’d like to know whether it can be solved with aliases.

This is to be expected, as the earlier and later approaches distort the mixtures of tristimulus values differently. Some degree of the differences is tunable, but the tuning comes at a price.

The current attempt tries to work toward a smoothness of tonality, without abrupt sharp transitions. It is also an attempt to yield lower mixture distortions. They will still be present, but it is a step toward “better” handling in the short term, without solving the larger and more complex problems.

For example, it should be easy for anyone to visualize just how distorted the results are by placing a sphere with a plane in the scene, with an achromatic albedo of 0.18, 0.18, 0.18, and illuminating the sphere via an indirect bounce off of the plane. Changing the colour of the light, and keeping the indirectly illuminated sphere close to the plane, helps to exacerbate the problems.

What should be expected is that the emission colour of the lamp holds a degree of the “sense” of the mixture it is set to, without collapsing to the familiar digital gaudy cyan, magenta, and yellow. It should also be expected that pure primaries will gracefully attenuate.

That sort of a test, while spinning the “hue” of the colour picker, is a very rapid way for folks to visualize and test the problems.
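For anyone who would rather script that test than build it by hand, a rough bpy sketch along these lines should get close; the positions, light energy, and aiming below are arbitrary placeholder values:

```python
# Rough sketch of the sphere + plane indirect-bounce test described above.
# Positions, light energy, and rotation are arbitrary; tweak to taste.
import bpy

# Achromatic 0.18 albedo material shared by the sphere and the plane.
mat = bpy.data.materials.new("Grey18")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (0.18, 0.18, 0.18, 1.0)

bpy.ops.mesh.primitive_plane_add(size=4.0, location=(0.0, 0.0, 0.0))
plane = bpy.context.object
plane.data.materials.append(mat)

# Keep the sphere close to the plane so the bounce dominates its shading.
bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=(0.0, 0.0, 0.6))
sphere = bpy.context.object
sphere.data.materials.append(mat)

# A coloured light aimed at the plane so the sphere is lit mostly via the
# bounce; spin this colour's "hue" to visualize how the mixtures distort.
bpy.ops.object.light_add(type='SPOT', location=(0.0, 2.0, 2.0))
light = bpy.context.object
light.data.energy = 1000.0
light.data.color = (1.0, 0.0, 0.0)
light.rotation_euler = (1.2, 0.0, 3.14159)  # pointed roughly at the plane
```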


Here are the command line messages from my fabric cube scene:

Here is the classroom scene:

Not so surprising I would say, since the two most commonly used texture colorspaces in Blender have been sRGB and Non-Color.

Right, I understand. I only said “dimmer” in answer to your question “Is it impacting the output visually?”, and I was not sure whether it was the display transform or the texture space transform. Now I have confirmed it’s just a by-product of changing the display transform.


Excellent. Thank you so much.

I’m just working on a multi-distribution quick testing version that will incorporate aliases. I missed Non-Color, so thank you for investing time in this.

Yes. The qualia discussion is also open here. As I said, I’m sort of focusing on two “better than nothing” approaches that could theoretically work with spectral rendering as well, to avoid the initial “That looks like crap” response, while also being a “better” entry point for creative grading and such in folks’ work.

If there’s something that looks horrifically broken tonality-wise, or something crazy happens with exposures, those are the things that are very important to try and diagnose, similar to your first classroom scene which freaked me out.

Thanks so much again.


I’ve started a central place for testing so that it can work with a majority of DCCs including Nuke etc.

Do not use Looks.

There are two displays, and two views.

The displays are:

  1. sRGB
  2. Display P3

The only view to test is AgX in both.

Do not use the Look listed in Looks.

It can be found here.
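If you want to sanity check that you are pointed at the right config, and you happen to have the PyOpenColorIO bindings installed, a quick sketch like this will enumerate the displays and views (the config path is a placeholder):

```python
# Sketch: enumerate the displays and views of the testing config.
# "config.ocio" is a placeholder path to wherever the config was unpacked.
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("config.ocio")
for display in config.getDisplays():        # expect: sRGB, Display P3
    for view in config.getViews(display):   # the only view to test is AgX
        print(f"{display} / {view}")
```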


I see that this version has aliases:

But it still didn’t work in Blender:

I am guessing maybe Blender’s OCIO v2 update was not complete, and therefore aliases are not supported yet in Blender? That would be a bummer…

EDIT:


That’s very odd. It should work for those aliases… Is there a string difference or something I can’t see?

I don’t know… I tried adding that line of code to TBD myself yesterday after our conversation, but it didn’t work for me. And now your version didn’t work for me either… That’s why I started doubting whether it was Blender, but Brecht basically confirmed that Blender supports aliases… I don’t understand…
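One way I could try to narrow it down is to check whether OCIO itself resolves the aliases outside of Blender, assuming the PyOpenColorIO bindings are installed (the names and path below are just examples):

```python
# Sketch: check whether the config resolves the alias names outside Blender.
# The names are examples; substitute whatever the .blend file actually asks for.
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("config.ocio")  # placeholder path
for name in ("sRGB", "Non-Color"):
    cs = config.getColorSpace(name)  # resolves canonical names, roles, and aliases
    if cs is None:
        print(f"{name!r}: NOT resolved by the config")
    else:
        print(f"{name!r}: resolves to colour space {cs.getName()!r}")
```

If that resolves the names but Blender still does not, the problem would seem to sit on Blender’s side rather than in the config.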

It seems the Punchy Kraken look is mapping open domain 0.18 to 0.39 instead of 0.5? Why is that? Regular Filmic High Contrast can have sharper contrast while keeping 0.18 at 0.5, so why not for Punchy Kraken?

I remember you said this for the ACES transform:

And this was in Filmic Blender’s doc:

Five contrast base looks for use with the Filmic Log Encoding Base. All map middle grey 0.18 to 0.5 display referred

I thought it was a good thing to keep?

Because it’s just a random pass at a jacked saturation / crunched look. No technical considerations.

The default has a consistent mid. Still up in the air.


Oh, okay. My aesthetic taste is still kind of attached to the 0.5 look, but I guess it is more of an aesthetic choice for a demo look that doesn’t matter that much. I find myself adjusting the “Gamma” inverse power setting in the CM panel to 1.35 to bring it back to 0.5, though.
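(For what it is worth, the numbers line up, assuming the Gamma slider is applied as an inverse power, which is my reading of it:)

```python
# Quick check of the Gamma-slider trick, assuming it applies pow(x, 1/gamma).
print(0.39 ** (1.0 / 1.35))  # ~0.498, i.e. back to roughly 0.5
```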

I haven’t had a chance to read the full thread yet, so apologies if this has already been brought up.

For my Arnold for Blender add-on, we had to include a full copy of Blender’s OCIO profile because the current version created some issues with naming. The diff of what we did can be seen here: https://www.diffchecker.com/mNnPthya. This might be an Arnold-specific issue, but it doesn’t hurt to bring it up.

Absolutely.

CDL is simply a power function, so it would change exposure positions. If you are running the newer Blender, you can actually create your own CDLs in the compositor using the log, the CDL node, and a proper output. Then you can transcribe the three values and slap them in.
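For reference, a rough sketch of the standard per-channel ASC CDL math, in case it helps with the transcription; the saturation step is part of the full CDL but can be left at 1.0:

```python
# Sketch of the standard ASC CDL per-channel math:
#   out = (in * slope + offset) ** power
# followed by a saturation adjustment using Rec.709 luma weights.
# Negatives are clamped before the power to avoid NaNs.
import numpy as np

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    rgb = np.asarray(rgb, dtype=np.float64)
    out = np.clip(rgb * slope + offset, 0.0, None) ** power
    luma = np.dot(out, [0.2126, 0.7152, 0.0722])
    return luma + saturation * (out - luma)

print(apply_cdl([0.18, 0.18, 0.18], slope=1.0, offset=0.0, power=1.2))
```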

It’s a valid issue given that XYZ should have been CIE XYZ 1931 or something to remove the ambiguity with geometry etc.


A question about spectral image formation: how does it work with spectral images when the values fed to it are RGB triplets in the OCIO-config-specified Scene Linear space, with negative values among them?

I have been wondering about this: both TCAMv2 and the current AgX sort of work for Spectral even after I use a denoise node to clamp the negative values. I also read from one of the AgX commits that it also clamps negatives. How, then, does it know that a pixel contains color that is out of the working space or the display space’s gamut? I once thought it made use of the negative values and converted the pixels to something like CIE XYZ space first to do some sort of gamut mapping. But since it doesn’t make use of the negative values at all, how does it do it?

For example, let’s say we have an EXR saved from the spectral Cycles branch, and two of the pixel values in the EXR are BT.709 (0.7, 0.5, -0.3) and (0.7, 0.5, -1). After clamping the negative values they would both become (0.7, 0.5, 0), so how would it know the original XYZ value for each? Or am I completely mistaken about the process and the negatives just don’t matter? Or, if future development requires better handling of negative values, would the denoiser become a problem in that it kills the negatives before they can reach the view transform?
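A small numpy illustration of what I mean, using the standard BT.709 to XYZ (D65) matrix: both triplets collapse to the same clipped value, so whatever XYZ they originally encoded is no longer recoverable:

```python
# Illustration: after clipping negatives, two different BT.709 triplets
# (and hence two different original XYZ stimuli) become indistinguishable.
import numpy as np

# Standard BT.709 / sRGB (D65) RGB-to-XYZ matrix.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

for rgb in (np.array([0.7, 0.5, -0.3]), np.array([0.7, 0.5, -1.0])):
    clipped = np.clip(rgb, 0.0, None)
    print("original XYZ:", RGB_TO_XYZ @ rgb,
          "-> clipped RGB:", clipped,
          "-> XYZ after clip:", RGB_TO_XYZ @ clipped)
```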

This is a terrific question. The question involves mapping inputs to spectral, and then spectral to the working spaces. It is a tiny bit away though; there are quite a few little steps to see if this is feasible in the shorter term.

The current approach can be tested against wider-than-BT.709 values. Some of the images in my GitHub Testing_Imagery repository have wider-than-BT.709 input values. While it does indeed clip the RGB, the results still “work” in the shorter-term “Good Enough” sense of balancing the complexity of the approach against the results. I’m sure spectral renders would be equivalent.


This is actually a terrific question that deserves attention in its own right.

What I try to have folks ponder is meaning. Negative values in a tristimulus system are meaningless relative to the “observer” they are currently referring to.

That is, if we have a spectral render and then transform it to sRGB, we have two observers at work: one is the spectral observer, which lives as an array of tristimulus values out in the CIE standard observer projection, and the other is a projection of, say, BT.709. Once we get to BT.709, the negative values are meaningless nonsense. Just as no display can display less than black-hole zero, relative to the BT.709 projection the values are meaningless.

Back in the other observer land, the CIE Standard observer, we could say that those values hold meaning. So the question becomes… can we give those values meaning in the smaller working space of BT.709? If so, how would we do this?

This is where the notions of the observer “footprint” mapping come into play. The math we perform on those values, though, has to be carefully considered, as again, not all operations hold meaning relative to the BT.709 “observer”, if you will.

Likewise, and this idea builds on top of the above concept, we have to be careful on the journey the values take. For example, imagine we are forming an image from the tristimulus open domain BT.709 values to a black and white representation. Those values in the open domain are “inspiring” the image if you will; we don’t want a literal representation, nor can we even achieve that due to the display limitations.

When we think about denoising in this context, we quickly realize that we likely want to denoise the image, not the open domain tristimulus values. What matters in terms of “noise” are the formed image frequencies, not the frequencies that exist in the open domain. The same would apply for “reconstruction” approaches such as “upscaling”. Here, if we use “linearized” values, we will find that our resampling falls apart, because we are trying to reconstruct the image, not the literal tristimulus points in the buffer. In terms of the image here, again, we are interested in things like “Where is the middle range of values?” relative to an observer observing the image, not the “middle” of the tristimulus values, which is of course not perceptually oriented.
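As a toy illustration of the “middle” point, and these numbers are mine rather than anything rigorous: 0.18 middle grey sits near the bottom of the linear range numerically, yet lands near the middle of the display range once an observer-facing transfer such as the sRGB piecewise function is applied:

```python
# Toy illustration: 0.18 "middle grey" is nowhere near the numeric middle of
# the linear values, but lands near the middle once encoded for viewing (sRGB here).
def srgb_oetf(x):
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1.0 / 2.4) - 0.055

print(srgb_oetf(0.18))  # ~0.46, close to the middle of the 0..1 display range
```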

Hence this leads to a rather controversial (sadly) opinion that we need to consider our information states, and that perhaps the “conventional” wisdom of merely dividing things into the rather arbitrary “scene” vs “display” dichotomy is an insufficient model.

I would strongly encourage folks to consider the idea that the image is a discrete image state, that exists in an interstitial place between the open domain tristimulus data, and the closed domain representation medium.

These are some pretty big questions, but ones that I have had a fair share of arguments about. I’m firmly of the belief that it is prudent to divide our thinking into three categories:

  1. Open domain tristimulus.
  2. Closed domain, formed image tristimulus.
  3. Closed domain image replication / reformulation.

This leads to exactly two positions for operations that may be required:

  1. Open domain tristimulus.
    1.1 Open domain tristimulus manipulations. Think of this as manipulating the “virtual light-like” tristimulus values in front of the camera.
  2. Closed domain, formed image tristimulus.
    2.2 Closed domain, formed image tristimulus manipulations. Think of this as manipulating the image state, such as denoising the appearance of the image tristimulus, or reconstructing to a higher resolution from the image tristimulus.
  3. Closed domain image replication / reformulation.

Hope that helps to make my sadly controversial opinion clearer. Under this lens, the “negative” dilemma is one that is firmly located at or prior to 1.1, where we would consider those manipulations as “working space dependent”.


About denoising: when does the image get denoised in Blender, before or after tonemapping? And if you think outside of the box, in Resolve for example, you can place a denoise node anywhere in the node chain you want (before or after tonemapping).

Not sure about open domain, but the virtual light in front of the camera somehow makes sense.

I think, besides the selected gamut and white point, the most important value in this whole tonemapping topic is the neutral gamma grey of the display device and the EV0 grey in the tonemapping as an anchor point.

If the middle point doesn’t fit, everything goes wrong, because the midtones around this neutral grey, like skin tones, grass, plants etc., are the most important.

This is a good example for the retina tonemapping paper I posted at devtalk. I made some tests with some formulas from this paper in the compositor with the original HDRI, and I think it works very well. But instead of denoising, it does a cone-based light compression or lifting based on light strength, and a cone-based Gaussian sharpness.

As a side topic: monitor or TV calibration. I have read about this on the net, and all display brands have the same problem with HDR footage: how the footage is mastered versus the maximum nits the device can display.

Even if you can display, say, 1000 nits, you maybe only want 600 nits because the footage is mastered for that amount. Or if you have an OLED that can only display, say, 500 nits but the footage is mastered at 600, you get the idea.

And the 100 nit reference for SDR looks way too low to me.

And to come back to the topic: as said, the neutral gamma grey of the display device seems to me the key point here, as the grey anchor point for tonemapping.

What about Dolby Vision? The Perceptual Quantizer seems to do a good job?

It can’t. Even if it did emulate a cone compression, the output would be cone response. Which means… what exactly? Answer: nothing, because the idea of “brightness” is lost here, where the crux of the issue is how to attenuate the chroma. We’d have a cone emulation signal on the output axis, a potential mechanism on which Naka and Rushton have already provided a good degree of research.

Further, the idea that we want to emulate the endless adaptation of the light intensity via an HVS system is a tad on the nearsighted front, given over a hundred years of imagery that isn’t “ideally adapted” to a middle grey.

An image can be incredible because it is dark or very bright, hence these sorts of papers utterly miss the mark by failing to consider that HVS emulation isn’t likely the ultimate goal. Making an image is. Not everyone wants Ansel Adams with spatial facets baked in. Fine as a creative option, but it feels like the vast majority of these papers failed to read the prior research in the field regarding the formation of imagery, and how it is something different from an overly simplistic view of an image as nothing more than HVS emulation. Also, given that there are no existing sufficient HVS models, it would seem we are still stuck in a hole.

But feel free to implement it!

The ST.2084 curve was designed for display quantisation. It would and should have limited usage beyond the context of an EDR-like display.
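For completeness, a sketch of the ST.2084 (PQ) encoding itself, with the constants from the SMPTE specification; absolute luminance goes in and a 0 to 1 signal comes out, which is the display quantisation role described above:

```python
# Sketch of the ST.2084 (PQ) inverse EOTF: absolute luminance (cd/m^2, up to
# 10,000) in, nonlinear 0..1 signal out. Constants per the SMPTE spec.
M1 = 2610.0 / 16384.0
M2 = 2523.0 / 4096.0 * 128.0
C1 = 3424.0 / 4096.0
C2 = 2413.0 / 4096.0 * 32.0
C3 = 2392.0 / 4096.0 * 32.0

def pq_encode(nits):
    y = max(nits, 0.0) / 10000.0
    ym1 = y ** M1
    return ((C1 + C2 * ym1) / (1.0 + C3 * ym1)) ** M2

for nits in (0.1, 100.0, 600.0, 1000.0, 10000.0):
    print(nits, "nits ->", round(pq_encode(nits), 4))
```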

You should make your own tests with it. IIRC the main formula uses 0.8 as the white point, similar to your upper Filmic curve. Everything above this 0.8 gets compressed; the higher the luminance gets, the more it gets compressed.

E.g. your HDR has a very bright sky and a landscape, which would display the sky blown out to white. With the adaptation you get a blue sky, while the landscape is untouched.

The paper is using a kind of daylight adaptation, nothing fancy, and the paper says there is of course room for optimisation.

The gamma middle grey is, as you know, hardware device dependent. There are plenty of test patterns for checking whether the black and white line pattern and the middle grey patch match.

Most old film content was mastered with gamma 2.2, THX uses 2.4. The thing is, every monitor or TV has its own ideal gamma curve for linear display. But this is maybe too off-topic.

Sure, it seems to me it tries to match HDR to SDR, or whatever the destination display is capable of. Is Filmic tonemapping not similar to this? Of course, Dolby Vision maybe only reduces/fits the light range to the devices?

What is the goal for Filmic v2, Rec.709 sRGB with optimisation?

By the way, you use the HVS every time to a degree, beginning with the CIE D65 white point and the chromatic response of the observer.

Have you tested it with a color sweep? Like this one:
https://drive.google.com/file/d/1qahO3JxKMBWZjnpgouWDC-78Ij7GEgpm/view

Since you said it works, I am curious to see the sweep results it gives.
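For anyone who would rather generate their own sweep than download that file, a rough numpy sketch could look like the following; the resolution, exposure range, and axis layout are my own assumptions, and writing the EXR is left to whatever float-capable writer your pipeline provides:

```python
# Rough sketch: build a hue/exposure sweep as open-domain BT.709 values.
# Hue runs across X, exposure in stops runs down Y; both axes are my
# assumption of what such a sweep typically contains, not a spec of the
# linked file. Resolution and stop range are arbitrary placeholders.
import colorsys
import numpy as np

WIDTH, HEIGHT = 1024, 256
STOPS_MIN, STOPS_MAX = -6.0, 6.0

sweep = np.zeros((HEIGHT, WIDTH, 3), dtype=np.float32)
stops = np.linspace(STOPS_MIN, STOPS_MAX, HEIGHT)
for x in range(WIDTH):
    # Fully saturated hue sweep at value 1.0, expressed in BT.709 primaries.
    r, g, b = colorsys.hsv_to_rgb(x / WIDTH, 1.0, 1.0)
    sweep[:, x, :] = np.outer(2.0 ** stops, [r, g, b])

# Write "sweep" with your preferred float EXR writer (OpenImageIO, Blender's
# image API, etc.); omitted here to avoid assuming a specific one.
print(sweep.min(), sweep.max())
```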