Normalize image node?

Hi,

in the compositor, is there a node type or node setup that normalizes an image? That is, if the brightest spot (R, G or B) within the image has a value of 5.0, all values should be divided by 5 so that no value is greater than 1. I do know there’s the Vector → Normalize node, but it operates on a single vector, not on the whole image.
Thanks.

Haven’t found one. But it would be cool as a deflicker tool, or to remap gamma to stabilise extreme corrections. I guess you’d need to find the top value and invert it for a multiply? I’m not sure how you’d define the largest value, though. And splitting the RGB would skew the colors. Perhaps use luma only?

Ooh I found the math

Calculate the normalized value of any number x in the original data set as a + (x − A) × (b − a) / (B − A), where [A, B] is the original range and [a, b] the target range.
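For what it’s worth, that formula is easy to try outside the compositor. A minimal pure-Python sketch (A and B are the original min/max, a and b the target range, defaulting to 0–1):

```python
def remap(x, A, B, a=0.0, b=1.0):
    """Map x from the original range [A, B] into the target range [a, b]."""
    return a + (x - A) * (b - a) / (B - A)

# An image with luma from 0.2 to 5.0, stretched to 0-1:
low = remap(0.2, 0.2, 5.0)   # darkest pixel lands on 0.0
high = remap(5.0, 0.2, 5.0)  # brightest pixel lands on 1.0
```

Note that this stretches both ends of the range, which is exactly the "darkens the blacks" behaviour discussed further down in this thread.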

how to apply?

Ok, don’t bother with the maths. It is just the Normalize node on luma. It’s quite good really. Here is an image I forced into low contrast with a curve. Then rectified with normalize. I lost some color info though, as the original wasn’t as contrasty.


Sorry I don’t know why the screen grab wasn’t available.

Interestingly, when I apply a lot of black crush, that is, create a very contrasty image, the restored version results in more saturated colors and only modest banding. Is this a form of compression? I don’t see any combing of the waveform though…

Hi 3point,
thx for your efforts. The attachment isn’t available (“Invalid Attachment specified”). I’m not quite sure which node setup you’ve chosen and which node returns the max luma for the whole picture. I think there is none. Is there an image file format that saves values as floating point? Then I could save the image and do this step afterwards.

Anyway, it’s strange how the compositor works. After rendering an image (F12) and looking at the “Render Result” in the image editor, sometimes the “Composite” entry is missing (yes, compositing is enabled):


I have to select a different image, then the render result again, to make the entry appear. And even when the entry is there and selected, the display doesn’t update until I change a value in the compositing node setup. It would also be nice if compositing were done during rendering; I don’t see the compositing result before rendering has finished. But these are different issues.
Thx anyway.

I am sorry. I didn’t realise that the image didn’t appear. The forum has never done that to me in the past.

Hmmm, I noticed that the render result was a bit patchy too. Perhaps it’s a bug? Could you make a new .blend file to test it and report a bug if it’s reproducible?

If I understand your question well and you’re still pursuing this, the solution is to use the map value node, not the normalize one. See screenshot for a possible setup.


It will clamp values lower than 0 and greater than 1 for all three channels.

Thx to both of you, but probably I didn’t express myself clearly. I want to normalize the whole image, i.e. scale the brightness of all pixels by the same factor so that the luma of every pixel is in the range 0 to 1 afterwards. Probably this requires a filter node. Have a look at this:


The part of the cube marked in red has luma values >5. I had to manually adjust the value within the HSV node to 0.17 so that the resulting image is in the range of 0 to 1. This should be done automatically. I do not want to click on each and every pixel and write down its luma value to find the brightest one, and only after clicking on 2 million pixels be ready to compute the required correction factor. This should be done by a filter.

Ummm, isn’t that what the Normalize node does? My example should be able to pull a floating-point image back into 0–1.0.

Now I feel…like an idiot…kind of.^^ As I wrote in my first post, I considered the normalize node also. Well, it seems to work…now.

The Normalize node belongs to the “Vector” nodes. As normalizing a vector means changing its length to 1, I expected all pixels to get a luma of 1. That’s not what I was looking for. I could swear I’d tried it anyway… (at least I remember that, after another one of a zillion web searches containing the word “blender”, I read an article about it…)

I’m really sorry for the fuss! I usually try thoroughly before asking. arghhh

Not to finish with an excuse…: :wink:
I wanted to do this because I have an indoor scene with light coming only from the outside. Due to the indirect lighting, this requires many samples to get a good image, and the background/world lighting has to be pretty bright. To get good results in much shorter time, I decided to make the walls transparent to incoming light. With direct light now hitting the indoor objects, the light settings could be reduced, and rendering became a lot faster. However, the objects were overexposed. So my goal was the automatic normalization that is the topic of this thread, no matter whether walls are rendered or not. Sufficient for everyday work.

So, sorry again!

Who cares about our ideas passing like ships in the night. At least I learned something new.

I’ve learned from you! :slight_smile:

Ummm… I don’t dare to say… I’ve tried it with the default scene as in the screenshot, and it worked. Now I notice it works differently from what I thought when used with a colored scene, i.e. (of course) I have to set it up like you did (splitting and recombining the color values). That’s complicated. Well, but still “[Solved]”.

Yes, I gather it’s the brightness or luma values that move, not the color values. But recombining extreme excursions doesn’t show this. The interactions are odd, but it seems to work. Sort of.

Unfortunately, the Normalize node also darkens the dark areas. That’s probably what you noticed before (and what I didn’t understand at that point in time).

In other words:

Image luma range: 0.2 to 5
Range after normalize node: 0 to 1
Wanted: 0.04 to 1 (i.e. division by 5)

I guess there’s no solution for “wanted”.
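For what it’s worth, outside the node editor the “wanted” behaviour is straightforward to script: find the peak luma and scale every pixel by its inverse. A minimal pure-Python sketch (assuming Rec. 709 luma weights; a real script would read the pixel data from a saved image rather than a hardcoded list):

```python
def luma(r, g, b):
    """Rec. 709 luma of an RGB triple."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def normalize_by_max_luma(pixels):
    """Scale all pixels by one common factor so the brightest luma becomes 1.0.
    Unlike a min/max stretch, this leaves the black level proportional:
    a 0.2-5.0 range becomes 0.04-1.0, not 0.0-1.0.
    pixels: list of (r, g, b) float tuples; returns a new list."""
    peak = max(luma(r, g, b) for r, g, b in pixels)
    if peak <= 0.0:
        return list(pixels)
    s = 1.0 / peak
    return [(r * s, g * s, b * s) for r, g, b in pixels]
```

Because every channel is multiplied by the same scalar, hue and saturation are preserved; only overall brightness changes.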

Considering that regardless of how a pixel value is actually computed, it is ultimately going to show as white if it is higher than 1.0, then I’d think that the map value node will give you a practical solution (see screenshot).


If you want more control, then I’d think it’s still possible to do what you want if you separate the channels of the image and then divide them by a number. In the screenshot I’ve muted the math nodes which divide the channel values by a constant, but I hope you get the idea.
Is this what you mean to do?

But is that mapping or clipping? It sort of seems to be clipping.

Another thing I was wondering was how to restore a mid value or gamma-correct an image, that is, auto-correct overexposed images or remove brightness fluctuations in flickering footage. I tried deriving an average to invert and feed to a Gamma node, but I’m not sure of the best way to average the image (blur seems computationally expensive).
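One possible approach (a sketch of the maths, not a node setup): the average doesn’t need a blur at all, just an arithmetic mean over the luma values; from that you can solve mean^g = target for the gamma exponent g. Assuming values already in 0–1:

```python
import math

def auto_gamma(lumas, target_mid=0.5):
    """Derive a gamma exponent that maps the average luma to target_mid.
    Solving mean ** g = target_mid for g gives
    g = log(target_mid) / log(mean). Assumes 0 < mean < 1.
    Returns the corrected values and the exponent used."""
    mean = sum(lumas) / len(lumas)
    g = math.log(target_mid) / math.log(mean)
    return [v ** g for v in lumas], g
```

For flickering footage, the idea would be to compute g per frame so each frame’s average lands on the same target, smoothing the brightness fluctuations.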

blendercomp,

I admitted I was wrong regarding the normalize node, but does the map node also take the whole image into account? I had no problem keeping the values in a certain range, but my point was how to scale the whole image by one factor.

I thought normalizing an image was everyday work. :slight_smile: Like normalizing an audio recording: all samples are multiplied by the same factor so that the final recording just fits into the full range. Or maybe you want the maximum at 80% or −n dB. (I know, with audio, too-high values are automatically clipped, but the process would be the same.)

Maybe I’ll write my own shader node type. I already did that a while ago, but it required enabling OSL, which slowed down rendering, so I’ll think about it.

Thanks anyway.

Clipping is involved in the case of the Map Value node, which cuts off blacks and whites.
But the other example with the math node allows one to divide the whole range of the channel by a single value, which is what Willi requested as far as I can tell.

Another thing I was wondering was how to restore a mid value or gamma-correct an image, that is, auto-correct overexposed images or remove brightness fluctuations in flickering footage. I tried deriving an average to invert and feed to a Gamma node, but I’m not sure of the best way to average the image (blur seems computationally expensive).

I’ve tried that in the past with the compositor and couldn’t figure it out. If I understand correctly how it works, it doesn’t allow one to store the RGB values per pixel so as to do computations with them. My guess is that scripting is involved.
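Scripting does work for this kind of thing: in Blender’s Python API, a loaded image exposes its pixels as a flat [r, g, b, a, r, g, b, a, …] float sequence (`image.pixels`), which you can read, process, and write back. (As far as I know the Render Result itself can’t be read that way, but a saved and reloaded image can.) A minimal sketch of scaling such a buffer, in pure Python just to illustrate the layout:

```python
def scale_flat_rgba(pixels, factor):
    """Scale the R, G, B components of a flat
    [r, g, b, a, r, g, b, a, ...] list by one common factor,
    leaving alpha untouched (the layout Blender's image.pixels uses)."""
    out = list(pixels)
    for i in range(0, len(out), 4):
        out[i] *= factor      # R
        out[i + 1] *= factor  # G
        out[i + 2] *= factor  # B
    return out
```

In Blender itself you would assign the result back with something like `img.pixels = scale_flat_rgba(list(img.pixels), 0.2)`.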

Why separate it into RGBA when you could just separate it into HSVA? Something like this:


nope, unfortunately it clips blacks and whites to the 0-1 range

I had no problem keeping the values in a certain range, but my point was how to scale the whole image by one factor.

I’m pretty sure that computationally it is wrong, but you could try to divide (or multiply) the whole range of values by a single number.
I’m just wondering what this number is going to be and how it is going to be derived. In your example, you mentioned dividing by the highest value (5), and of course that would bring pixels with this value down to 1. However, it wouldn’t work proportionally; that is, it would darken most of the other pixels (depending on their distribution, of course).

I thought normalizing an image was everyday work. :slight_smile: Like normalizing an audio recording: all samples are multiplied by the same factor so that the final recording just fits into the full range. Or maybe you want the maximum at 80% or −n dB. (I know, with audio, too-high values are automatically clipped, but the process would be the same.)

I see what you mean but I don’t think that image data is the same as audio data. Both value mapping and clipping are typical operations on images.