Any information about tone mapping

After all the recent gamma-related conversations, such as THIS ONE,
I wonder how to make the most of the Tonemap compositor node. Any clue is welcome…
For instance, how does it differ from applying simple gamma and/or color correction (one of the Tonemap node's options is gamma)?

Hi, interesting question. I can't figure out how to use the Tonemap node either; I think it's not a tonemapper for enhancing renders, but rather a tool useful for creating HDR from photography. I'll explain…

By the way, I think that node is not related to gamma correction / linear workflow; my guess is that the gamma parameter is there because you need to specify the gamma of your input image for the tonemapper to work.

What I learned is that tonemapping can mean different things:
(Warning! These are my guesses and opinions, please correct me if I'm wrong!!)

  • Tone mapping, generally speaking, covers various operations that involve 'mapping' (meaning adjusting/changing) the lightness values of an image to make it more pleasant or correct for the human eye.
    (Linear workflow is technically a tone mapping operation, but in practice it means doing a very different thing: making your render do the right math when calculating light.)

  • A tonemapping compressor is a filter applied to a photo or render; it compresses the range of light values of an HDR image (floating point) to an LDR range (8 bits per channel) so it can be displayed on monitors.

  • A tonemapping enhancer is also a filter; it does compression too, but also uses local contrast and lightness adaptation to produce a distinctive look (those cool photos that look like paintings).
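To make the 'compressor' idea concrete, here is a minimal Python sketch (my own toy example, not what Blender's node actually computes) of the well-known Reinhard operator L/(1+L), which squeezes any HDR luminance into [0, 1) so it can be quantized to 8 bits:

```python
# Toy sketch of a global "compressor" tone mapper using Reinhard's
# simple operator L / (1 + L): any HDR luminance in [0, inf) lands in
# [0, 1), ready to be quantized to the 8-bit LDR range.

def reinhard(l):
    """Compress an HDR luminance value (float >= 0) into [0, 1)."""
    return l / (1.0 + l)

def to_8bit(l):
    """Quantize a tone-mapped value to the usual 0-255 display range."""
    return round(reinhard(l) * 255)

# A dim pixel stays dark; an enormously bright one ends up at pure
# white after rounding, instead of blowing out everything around it:
print(to_8bit(0.1))       # -> 23
print(to_8bit(100000.0))  # -> 255
```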

So, if I'm right, when you render with Blender Internal, YafaRay or V-Ray, the renderer is already using an internal compressor tonemapper to display the output on your monitor. (Again, this is independent of linear workflow.)
If you don't have excessively bright or dark areas, then the compression is already good.
If you do have those problems, you can do the same job as a tonemapper with RGB curves, levels or brightness/contrast; I found those much easier to use than the Tonemap node.
Also remember that all biased renderers perform their tricks and adaptations, so if your picture has a very bright area, it is very likely that all detail in that area has been skipped, and you can't get much more information out by tonemapping it (even if the output is HDR!).
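As a sketch of that curves/levels alternative (hypothetical parameter names, nothing to do with the actual node options), a "levels" adjustment is just: remap a black/white point range to 0-1, clamp, and lift midtones with a gamma curve:

```python
# Hypothetical "levels" adjustment of the kind suggested above as an
# alternative to the Tonemap node: remap [black, white] to [0, 1],
# clamp, then lift the midtones with a gamma curve.

def levels(v, black=0.0, white=1.0, gamma=2.2):
    """Remap v from [black, white] to [0, 1] and apply a gamma curve."""
    v = (v - black) / (white - black)
    v = min(max(v, 0.0), 1.0)   # clamp values outside the new range
    return v ** (1.0 / gamma)   # exponent < 1 brightens the midtones

print(levels(0.5))              # a midtone, lifted toward white
print(levels(1.3, white=1.2))   # an over-bright highlight, clamped to 1.0
```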

I think compressor tonemappers are most useful for real photography, where you can have a huge range of values with hidden information.

Tonemappers with enhancing features, instead, are very cool for postproduction of renders! I think the node included in Blender is just a compressor; give these open source projects a try instead:

http://qtpfsgui.sourceforge.net/
(standalone tonemapping utility similar to the commercial Photomatix; it has lots of different algorithms for tonemapping)

http://zynaddsubfx.sourceforge.net/other/tonemapping/
(a very interesting little utility: an LDR tonemapper? Yes… because it's focused on local enhancement and unsharp masks, not on compression)

Okay, I hope I'm not totally wrong and just confusing people… and I also hope some coders are interested in implementing a similar enhancement node in Blender :eyebrowlift:


I'm not entirely sure what it does either, but I have always understood tone mapping to be mapping data from an HDR image to a more 'normal' file type such as .jpg or .png.

Blender renders to floating point, which is essentially an .exr, but then saves 'down' to the chosen file extension. So I would guess the Tonemap node would be a form of, or work like, gamma correction; but obviously it is not simply gamma correction, as there is a Gamma node for that.

Is it used to approximate the appearance of a High Dynamic Range Image?

From the wiki:-

http://wiki.blender.org/index.php/Manual/Compositing_Nodes_Color#Gamma

Some other clues:-


http://photoshoptutorials.ws/photoshop-tutorials/photo-manipulation/layered-hdr-tone-mapping.html
http://www.dpchallenge.com/tutorial.php?TUTORIAL_ID=60
http://blog.tasuki.org/tone-mapping-with-gimp/

Tone mapping is the compression of the dynamic range of an HDR (High Dynamic Range) image down to the low dynamic range of an LDR (Low Dynamic Range) image.

Simple linear compression will not work, as the eye responds non-linearly across large dynamic ranges. The several available tone mapping algorithms try to model this behaviour.
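A toy comparison, with assumed example values, of why linear compression fails: once a single pixel (the sun) is extremely bright, dividing by the maximum crushes everything else to near-black, while even a crude logarithmic operator keeps the dark-to-mid range distinguishable:

```python
# Linear vs. logarithmic compression of a scene containing the sun.
import math

pixels = [0.01, 0.5, 1.0, 100000.0]   # shadow, midtone, highlight, sun
peak = max(pixels)

linear = [p / peak for p in pixels]
logmap = [math.log(1 + p) / math.log(1 + peak) for p in pixels]

print(linear)   # the first three values are practically zero
print(logmap)   # shadows and midtones keep usable separation
```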

Tone mapping in its true meaning is not a filter which adds artistic expression, although some of the tone mapping done in photographic HDR manipulation software produces quite strange results.

Qtpfsgui and related pfstmo offer a quite comprehensive list of tone mapping operators.

Unfortunately I do not know which operator is used in the Blender node, and quite frankly I haven't gotten any convincing results from it so far. Usually I save the rendered HDR image as EXR and use qtpfsgui to do the actual tonemapping.

Hi,
thanks for the replies,
I agree with the things said by Loramel and Organic.
It's true that "tone mapping in its true meaning is not a filter which adds artistic expression", but I think its 'side effects' are often used to add a painterly effect to pictures.
I posted about that LDR tonemapper exactly for that reason; it helped me clear my mind about the difference between compression and local enhancement, which is not the original meaning of tonemapping but an interesting and popular side effect.
I use the terms compression and enhance because those are the two options available in Photomatix, and I think that works for explaining it to end users like me.
However, qtpfsgui uses the real names of the algorithms, and it's true that they all try to reproduce how the human eye perceives colors.

It is also true that this compression is not linear; maybe exponential exposure can be considered the simplest approximation, but qtpfsgui's methods are much more complex, and also take local adaptations into account, etc…
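One common formulation of that exponential exposure idea, sketched in Python (an assumption on my part; this is not necessarily the exact formula any particular tool uses): out = 1 − exp(−exposure · L), which never exceeds 1.0 and rolls highlights off smoothly instead of clipping them.

```python
# A common "exponential exposure" tone mapping operator:
# out = 1 - exp(-exposure * L). Output stays in [0, 1) and highlights
# saturate gradually rather than clipping hard at 1.0.
import math

def exposure_map(l, exposure=1.0):
    """Map luminance l >= 0 into [0, 1) with a soft highlight roll-off."""
    return 1.0 - math.exp(-exposure * l)

print(exposure_map(0.5))     # a midtone
print(exposure_map(1000.0))  # an extreme highlight saturates near 1.0
```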

I think this topic can be very confusing because of the flexibility of HDR (and EXR) formats.
I can think of two cases:

Case 1: HDR assembled from photos. This might look horrible or burnt out without tonemapping, because the image can have a huge range of values, from the 0.x of dimly lit objects to the millions of the sky and sun.
This kind of HDR requires tone mapping just to compress it to the monitor range of 0-1 and give it a nice distribution inside that range.

Case 2: HDR from a renderer. This generally has values from 0-1, or a little above 1 in highlights. It does not require compression to be displayed, but it is still very useful to have floating point data for more precise postprocessing (for example, to avoid banding)!
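These two cases can be sketched with assumed example values: naively clamping to the monitor's 0-1 range is nearly lossless for a typical render, but destroys almost everything in a photo-assembled HDR.

```python
# Clamping the two kinds of HDR to the displayable 0-1 range.

def clamp01(pixels):
    """Naively clip every value into the displayable 0-1 range."""
    return [min(max(p, 0.0), 1.0) for p in pixels]

render_hdr = [0.0, 0.4, 0.9, 1.05]            # case 2: barely exceeds 1
photo_hdr  = [0.2, 3.0, 1500.0, 2000000.0]    # case 1: sky and sun

print(clamp01(render_hdr))  # only the 1.05 highlight is touched
print(clamp01(photo_hdr))   # three of four pixels flatten to pure white
```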

As a note: some renderers also produce HDR images with ranges from 0 to millions; in V-Ray this is called "unclamped", in Indigo "untonemapped". I guess having this kind of output only makes sense for physically based or GI engines…

This is a complex subject because of the various uses and applications of HDRs… so it's good to know how other people interpret it…

Edit: I added this 'two cases' example in reply to Organic saying tone mapping is used to turn HDR into normal formats.
This is true for case 1 (HDR with a range of values well above the visible: 0 to millions).
But most of the time renderers use these formats only to save pictures with better quality inside a 0-1 range (equivalent to the 0-255 range of LDR pictures, but with many more steps).
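A quick sketch of that "many more steps" point, with assumed values: 8-bit storage has only 256 levels, so a smooth dark gradient collapses into a handful of repeated values (banding), while the float version keeps every value distinct.

```python
# Banding: ten very close dark values survive in float, but collapse
# into just a few distinct levels after a round trip through 8 bits.

def quantize_8bit(v):
    """Store a 0-1 value in 8 bits and read it back."""
    return round(v * 255) / 255

gradient = [i / 1000 for i in range(10)]     # a smooth dark gradient
banded = [quantize_8bit(v) for v in gradient]

print(len(set(gradient)))  # 10 distinct float values
print(len(set(banded)))    # only 3 distinct 8-bit levels: visible bands
```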