Feedback / Development: Filmic, Baby Step to a V2?

About denoising: when does the image get denoised in Blender, before or after tonemapping? And if you think outside the box, in Resolve for example, you can place a denoise node anywhere in the node chain you want (before or after tonemapping).

Not sure about open domain, but the virtual light in front of the camera somehow makes sense.

I think beside the selected gamut and white point, the most important value in this whole tonemapping topic is the neutral gamma grey of the display device and the EV0 grey in the tonemapping as anchor point.

If the middle point doesn't fit, everything goes wrong, because the midtones around this neutral grey, like skin tones, grass, plants etc., are the most important.
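A minimal sketch of that anchoring idea (not Filmic itself, just a bare exposure-plus-gamma example with an assumed 0.18 scene grey and a plain 2.2 display gamma):

```python
import numpy as np

# Minimal sketch of the anchoring idea (not Filmic/AgX): pin scene-linear
# middle grey (0.18, assumed) at the chosen EV0 exposure, then encode with a
# plain display gamma. With gamma 2.2, scene 0.18 lands near code value 0.46,
# i.e. the display's neutral grey.
def display_encode(scene_linear, ev_offset=0.0, display_gamma=2.2):
    exposed = np.asarray(scene_linear, dtype=float) * (2.0 ** ev_offset)
    return np.clip(exposed, 0.0, 1.0) ** (1.0 / display_gamma)

print(display_encode(0.18))  # ~0.459
```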

This is a good example for the retina tonemapping paper I posted at devtalk. I made some tests with some formulas from this paper in the compositor with the original HDRI. I think this works very well.
But instead of denoising, it does a cone-based light compression or lifting based on light strength, and cone-based Gaussian sharpness.

As a side topic, monitor or TV calibration. I have read about this on the net, and all display brands have the same problem with HDR footage: how the footage is mastered vs. the maximum nits the device can display.

Even if you can display, say, 1000 nits, you maybe only want 600 nits because the footage is mastered for that amount. Or if you have an OLED that can only display, say, 500 nits but the footage is mastered at 600, you get the idea.

And the 100 nits reference for SDR looks way too low to me.

And to come back to the topic. As said, the neutral gamma grey of the display device seems to me the key point here, as the grey anchor point for tonemapping.

What about Dolby Vision? The Perceptual Quantizer seems to do a good job?

It can’t. Even if it did emulate a cone compression, the output would be cone response. Which means… what exactly? Answer: nothing, because the idea of “brightness” is lost here, where the crux of the issue is how to attenuate the chroma. We’d have a cone emulation signal on the output axis, a potential mechanism Naka and Rushton have already researched to a good degree.
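For reference, the Naka-Rushton style saturation function is the sort of cone-response model in question (illustrative constants only, not taken from any particular paper):

```python
import numpy as np

# Naka-Rushton style saturation: the classic cone-response compression referred
# to above. sigma (semi-saturation) and n are illustrative values, not from any
# specific paper. Output saturates toward 1.0 and equals 0.5 at intensity == sigma.
def naka_rushton(intensity, sigma=0.18, n=0.74):
    i_n = np.power(np.maximum(np.asarray(intensity, dtype=float), 0.0), n)
    return i_n / (i_n + sigma ** n)

print(naka_rushton([0.0, 0.18, 1.0, 10.0, 100.0]))
```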

Further, the idea that we want to emulate the endless adaptation of the light intensity via an HVS system is a tad on the nearsighted front, given over a hundred years of imagery that isn’t “ideally adapted” to a middle grey.

An image can be incredible because it is dark or very bright, hence these sorts of papers utterly miss the mark by failing to consider that HVS isn’t likely the ultimate goal. Making an image is. Not everyone wants Ansel Adams with spatial facets baked in. Fine as a creative option, but it feels the vast majority of these papers failed to read the prior research in the field regarding the formation of imagery and how it is something different to an overly simplistic view of an image as being nothing more than HVS emulation. Also, given that there are no existing sufficient HVS models, it would seem we are still stuck in a hole.

But feel free to implement it!

The ST.2084 curve was designed for display quantisation. It would and should have limited usage beyond the context of an EDR-like display.
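For reference, the ST.2084 / PQ encoding is only this quantisation curve (published ST 2084 constants; absolute nits in, 0..1 signal out):

```python
import numpy as np

# SMPTE ST 2084 / PQ inverse EOTF: absolute luminance in cd/m^2 (nits) to a
# 0..1 display signal. Constants are the published ST 2084 values; this is the
# quantisation curve itself, not a tone mapper.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    y = np.clip(np.asarray(nits, dtype=float) / 10000.0, 0.0, 1.0)
    ym = y ** M1
    return ((C1 + C2 * ym) / (1.0 + C3 * ym)) ** M2

print(pq_encode([0.1, 100.0, 1000.0, 10000.0]))  # ~0.06, ~0.51, ~0.75, 1.0
```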

You should make your own tests with it. IIRC the main formula uses 0.8 as the white point, similar to your upper Filmic curve. Everything above this 0.8 gets compressed; the higher the luminance gets, the more it gets compressed.

E.g. your HDR has a very bright sky and a landscape, which would display the sky as blown-out white. With the adaptation you get a blue sky while the landscape is untouched.
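A minimal sketch of that kind of roll-off (a generic Reinhard-style shoulder with an assumed 0.8 anchor, not the paper's actual formula):

```python
import numpy as np

# Hedged sketch of the shoulder described above: values up to a white anchor
# (0.8 here) pass through, values above it are progressively compressed the
# brighter they get. Generic roll-off, not the exact formula from the paper.
def shoulder_compress(x, anchor=0.8, ceiling=1.0):
    x = np.asarray(x, dtype=float)
    over = np.maximum(x - anchor, 0.0)
    headroom = ceiling - anchor
    compressed = anchor + headroom * over / (over + headroom)
    return np.where(x <= anchor, x, compressed)

print(shoulder_compress([0.5, 0.8, 1.0, 4.0, 100.0]))
# -> 0.5, 0.8, 0.9, ~0.988, ~0.9996 (everything above 0.8 rolls off toward 1.0)
```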

The paper is using a kind of daylight adaptation, nothing fancy, and the paper says there is of course room for optimisation.

The gamma middle grey is, as you know, hardware/device dependent. There are plenty of test patterns for checking whether a black-and-white line pattern and the middle grey patch match.

Most old film content was mastered with gamma 2.2, THX uses 2.4. The thing is, every monitor or TV has its own ideal gamma curve for linear display. But this is maybe too off-topic.
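The logic behind those test patterns is just this (assuming black at 0, white at full level, and the line pattern averaging to 0.5 linear luminance at viewing distance):

```python
# A 50/50 black-and-white line pattern integrates to 0.5 linear luminance, so
# the solid grey patch matches it only when its code value equals
# 0.5 ** (1 / gamma) for the display's actual gamma.
for gamma in (2.2, 2.4):
    print(gamma, round(0.5 ** (1.0 / gamma), 4))  # 2.2 -> 0.7297, 2.4 -> 0.7492
```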

Sure, it seems to me it tries to match HDR to SDR, or whatever the destination display is capable of. Is Filmic tonemapping not similar to this? Of course Dolby Vision maybe only reduces/fits the light range to the device?

What is the goal for Filmic v2, Rec.709 sRGB with optimisation?

Btw, you use the HVS to a degree every time, beginning with the CIE D65 white point and the chromatic response of the observer.

Have you tested it with a color sweep? Like this one:
https://drive.google.com/file/d/1qahO3JxKMBWZjnpgouWDC-78Ij7GEgpm/view

Since you said it works, I am curious to see the sweep results it gives.

Tbh no, I have tried to rebuild some formulas from the paper for testing the compression, but I don't know how they coded all these formulas into a whole TM function.
Would help to see the source code.

Here are some tests, the luminance through the main compression formula as output.


And your sweep through the compression.

Hmm, you noticed the notorious six, right? It doesn't look like "it works".

The upper images are the original images displayed without the function.

Then I think I need to wait for the colored results to judge.

Right, I understand that we want to denoise the image, not data. When you put the idea down to Blender though, there come some problems. All of Blender's existing denoising workflows denoise the open domain data, either before or during the compositor, which is definitely before the view transform. If we move it to after the view transform, there will be problems like "How do I save a denoised open domain EXR like before?" etc. Therefore, attempting to move the denoising step to after the view transform will be very hard, I think.

Instead I just had an idea: why not have the denoiser operate in CIE XYZ space, so there are no negative values? Just convert the image to XYZ first before feeding it to the denoiser, and convert it back to the working space after denoising, like this:

(I copied the Linear XYZ IE stanza from the Spectral config, not sure whether copy & pasting between configs is the right thing to do)
(Also my gosh this Convert Colorspace node is important)

And the result we get is denoised open domain data while keeping the negative values for view transform to operate on:

Not sure if this will work, also not sure how hard it is for Blender or Intel/Nvidia to add the XYZ conversion to their denoisers. Just an idea that maybe we can keep the negative values while still denoising the render in open domain.
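A minimal sketch of the round trip (plain NumPy with the standard linear sRGB/Rec.709 D65 matrices and a stand-in blur where the real OIDN/OptiX denoiser would sit, so not how Blender actually wires it up):

```python
import numpy as np

# Standard linear sRGB/Rec.709 <-> CIE XYZ (D65) matrices.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)

def fake_denoise(img):
    # Stand-in for the real denoiser: a crude 3x3 box blur.
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    return sum(padded[y:y + img.shape[0], x:x + img.shape[1]]
               for y in range(3) for x in range(3)) / 9.0

def denoise_via_xyz(rgb):
    xyz = rgb @ RGB_TO_XYZ.T      # working space -> XYZ (non-negative here)
    xyz = fake_denoise(xyz)       # denoise where the values are all positive
    return xyz @ XYZ_TO_RGB.T     # back to the working space

# A pixel outside the Rec.709 gamut (negative red) survives the round trip:
img = np.full((4, 4, 3), [-0.05, 0.30, 0.10])
print(denoise_via_xyz(img)[0, 0])  # still ~[-0.05, 0.30, 0.10]
```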

Again, think about input along X and output along Y. If one assumes an emission input along X, the output is cone response or RGB or whatever tristimulus model you like. It does not solve anything. Luminance is not a useful predictor for brightness, hence it comes down to the same problems with chromatic sources. It’s either the same problems, or some approach that deals with brightness appropriately.

I have yet to see a single model tackle brightness. Every single approach either fails to address it, leans on the accident of “crosstalk”, or assumes luminance as the underlying metric of brightness.

None of those provide larger solutions. Some can be quite acceptable, such as the K1S1 use of accidental crosstalk, which also happens to be the way AgX above handles it. However, very few identify any actual mechanism.

This is in fact wholeheartedly undesirable, and forms the basis of the problem.

Take the blue sphere example; the mapping of any input X blue to peak Y blue is simply going to be wrong.

Absolutely.

And this is part of the reason I view EXR tristimulus data as just that; data, not an image.

That is, alternatively, we can think of two potential places for a denoise operation. One potentially in the frequencies of the open domain data, and one in the frequencies of the closed domain image.

The basis vectors of the XYZ projection are non-orthogonal. As a result, different working spaces lead to different results. Further still, any given observer, such as a camera etc., will always run the risk of negative values relative to a standard observer. That is, the moment folks think about XYZ and projections into it, it is valuable to view them as two projections: the meaningful values and the meaningless values.

The larger issue is determining what is noise, and the expression of that noise will be dependent on the state of the data and the meaning along a pipeline.

XYZ is likely going to lead to less than optimal denoising as a result.

I suspect denoising on an open domain range versus a closed domain image could be considered different operations that could solve different problems. “Noise” in the broader aesthetic sense, should always likely be considered from the image formed, because that is after all what we end up looking at, and the amplitude of those noise regions are relative to the amplitude of the signal regions in the image.


If you don't make use of the relative values of brightness/range in an HDR image for a TM, what values are left to use?
In the end you still need a sort of curve or compression, right? How else would you get HDR displayed as SDR on an SDR device?

The only solution that I can think of would be the classic camera range of maybe 2 stops, or what the display can handle, within the selected relative EV0 exposure. But this is what we already have.

The important question is hidden in the assumption.

As values increase in emission, they create a sensation of “brighter”. However, as a tristimulus mixture changes hue angle and purity, the sensation of “brightness” also changes.

Even if someone had a reasonable calculation of “brightness” units, it would require more than the simplistic curves most approaches take to form it into an image.
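To make the gap concrete, here is just the BT.709 luminance weighting, nothing more:

```python
import numpy as np

# BT.709 luminance weights. Two stimuli can share the exact same luminance
# while producing very different brightness sensations (a saturated blue vs. a
# dim grey), which is why a curve driven by luminance alone cannot stand in
# for "brightness".
LUMA_709 = np.array([0.2126, 0.7152, 0.0722])

pure_blue = np.array([0.0, 0.0, 1.0])
dim_grey  = np.array([0.0722, 0.0722, 0.0722])
print(pure_blue @ LUMA_709, dim_grey @ LUMA_709)  # both 0.0722
```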

Hm, what about WCG? You could render with Rec.2020, then the HDR render can make use of more color information. And for TM to Rec.709 you maybe have the missing color data that you don't have without a wider gamut?
I know that Rec.2020 cannot be displayed with today's displays (except maybe some reference prototypes).

The idea is to have more color range to work with, which you don't have with Rec.709.

Yes, I think most of us know this.

But you can only work with what you have now. And the question is how to display HDR content as SDR as well as possible?
I have read about this, and the "best" or currently used TM is something with a hybrid log curve and PQ. This doesn't mean it is perfect, but it is the actual tech used today.
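For reference, the HLG half of that pairing is just the BT.2100 OETF (constants from BT.2100; this is only the encoding curve, not a full tone mapper):

```python
import numpy as np

# BT.2100 Hybrid Log-Gamma OETF: scene linear 0..1 -> signal 0..1.
A = 0.17883277
B = 1.0 - 4.0 * A
C = 0.5 - A * np.log(4.0 * A)

def hlg_oetf(e):
    e = np.asarray(e, dtype=float)
    log_arg = np.maximum(12.0 * e - B, 1e-12)  # guard the unused branch at e ~ 0
    return np.where(e <= 1.0 / 12.0,
                    np.sqrt(3.0 * e),
                    A * np.log(log_arg) + C)

print(hlg_oetf([0.0, 1.0 / 12.0, 0.5, 1.0]))  # 0, 0.5, ~0.87, ~1.0
```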

As seen in other failed attempts elsewhere, the values still have to be compressed into a smaller gamut and still make a kick-ass looking image.
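For example, with the commonly published BT.2020 to BT.709 primaries matrix (a hedged sketch, not any particular gamut mapper):

```python
import numpy as np

# Standard BT.2020 -> BT.709 matrix. A saturated Rec.2020 green lands well
# outside Rec.709, so rendering in a wider gamut does not avoid the compression
# step; the values still need to be mapped into the smaller gamut.
BT2020_TO_BT709 = np.array([[ 1.6605, -0.5876, -0.0728],
                            [-0.1246,  1.1329, -0.0083],
                            [-0.0182, -0.1006,  1.1187]])

rec2020_green = np.array([0.0, 1.0, 0.0])
print(BT2020_TO_BT709 @ rec2020_green)  # ~[-0.588, 1.133, -0.101]: out of gamut
```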

“Make use of” implies there’s something useful there. What is important? The total range of values or what ends up in the image?

Does some display curve matter, or does an image?

Is something useful there? The idea is that with a wider gamut the color purity and saturation are maybe more finely distributed at higher values. I don't know, tbh. If yes, then you could make use of this in your TM calculations to get more differentiated data for the highlights in question.
Do you say there is nothing extra that you can use with a wider gamut at higher brightness?

Both; in your case the image, of course.

After looking for more information about this color brightness topic, I found this site. I think Björn Ottosson was researching the same problem with gamut clipping, perceptual brightness etc.

Can we not make something useful from this?

No. Mostly useless.

I still don't understand why the aliases don't work for my older *.blend files. It obviously works within the config itself, e.g. roles can use the names listed in the aliases and still work. But in Blender, when I open old files, it just doesn't work. Node Wrangler addon's "Shift Ctrl T" auto-import selects Generic Data correctly, but I think it is using the data role instead of the aliases.

I am just so confused…
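(A hedged sketch of how one might sanity-check the alias resolution against the config outside of Blender, assuming the OCIO v2 Python bindings; "Non-Color" is just an example alias name, substitute whatever the config actually declares:)

```python
# Check whether names resolve through the config itself, independent of Blender.
import PyOpenColorIO as ocio

config = ocio.Config.CreateFromFile("config.ocio")
for name in ("Non-Color", "Generic Data", "data"):
    cs = config.getColorSpace(name)  # resolves color space names, roles and (in v2) aliases
    print(name, "->", cs.getName() if cs else "not found")
```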

I suspect you are correct.

Mike Pan noted that Blender appears to not be using the proper role for data either. Perhaps worth filing a bug.


Done

A rather awkward report, since the steps to reproduce require them to modify the default Blender config. It would have been even more awkward to ask them to download AgX for this bug, since AgX is not even at the code review stage yet…

Edit: The report has been confirmed.
