Cycles tonemapping

Can someone simply explain what tonemapping is in relation to rendering? When I research it on Google, the results are mainly about HDR imaging. I'm sure it's exactly the same thing, but I think it would be easier to understand if it were explained strictly within the realm of rendering images, and how tonemapping has an effect as a post-process procedure. If tonemapping is something that can make our renders look more photorealistic, I'm all for understanding it 100%…

:yes:

Imagine that the black on your monitor has a value of 0 and white has a value of 1. The greys are somewhere in between 0 and 1: the brighter the grey, the bigger the value. It can happen that you render something brighter than white (for example a strong lamp), with a brightness value bigger than 1 (for example 3). But your monitor can only show the 0 - 1 range, so it clamps the bigger values to 1.
This is where tonemapping comes in: you can smartly map the full dynamic range of your scene (in our case 0 - 3) to what your monitor can show (0 - 1).
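
If you like code, here is a tiny sketch of one such mapping (the simple Reinhard curve; this is just an illustration, not what Cycles does internally):

```python
# Minimal sketch of range compression with the simple Reinhard
# operator: any value in [0, inf) lands in [0, 1) instead of clipping.
def reinhard(value):
    return value / (1.0 + value)

print(reinhard(0.1))  # ~0.09 - dark values barely move
print(reinhard(3.0))  # 0.75  - the "brightness 3" lamp keeps its detail
```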
Hope it’s clear to you now :slight_smile:

@OP

http://www.spot3d.com/vray/help/200R1/render_params_colormapping.htm

Are you talking about something like this?

@rawalanche First off: nice work. I read your Escherick House article on ronenbekerman. Great stuff.

As you may have gathered from this thread, no, what you ask is not possible yet.
I love Blender and use it for almost everything, and while the Cycles workflow is great in my eyes, it could be spectacular with the addition that you mention:

A way to feed the in-progress render directly to the compositor and view your composite directly in the interactive viewport. This would cause severe lag, but it would be such an efficient workflow that it could offset the slowdown.

Does anybody with a deeper understanding of Blender know if this would technically be possible at some point?

Just like using “render” settings in the preview window, it will likely require the depsgraph update to be done before it’s possible. The render compositor simply isn’t linked to the viewport at the moment.

You can tonemap your renders and set the range as you wish to avoid burnt whites. I will post a little example and a how-to.

However, what would be highly desirable is the ability to see this in viewport rendering. Doing the scene setup with half the image burnt out and then adjusting in post is not the best/fastest way. A simple bool flag in the Render tab under Film would be pretty nice, where we have the exposure / filter width / transparent options.

This is what I meant, and it's not using the compositor; it's only available for F12 rendering at the moment, but I guess it won't be that hard to display this Color Management stuff while viewport rendering (I hope). As I said, we have "exposure" under the Film tab in the render panel, and it only affects display, so wouldn't it be possible to add a bool flag "display color management" to this tab too? The current workflow kills the benefits of interactive viewport rendering, because you must run an F12 render to see color management affecting the result.

Here is an example of tone mapping a Cycles render: keeping the overall brightness / contrast, setting up a value range, and bringing back the mid tones we have lost.

Straight render, look at the books
http://db.tt/2dTEIsBI

http://db.tt/nGwr5QGg

Tone mapped (no compositor, adjusting the Color Management tab in the Scene panel). Look at the books, much better; also the bed and wall are kept the same:
http://db.tt/ZWn3gwUH

http://db.tt/vAXz8VDA

Another clear example, same method. NON tone mapped: the sphere and book are gone.
http://db.tt/FPGtyHwc

Tone Mapped
http://db.tt/XAuMq2U5
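
If you prefer to script it, the same tab is reachable through Python; a rough sketch (the property names are from Blender's Python API, but the values here are placeholders, not the exact settings used for these renders):

```python
import bpy

# Sketch of the kind of Color Management tweak used above.
vs = bpy.context.scene.view_settings
vs.exposure = -0.5            # rein in the burnt-out highlights
vs.gamma = 1.1                # lift the midtones back up
vs.use_curve_mapping = True   # fine-tune the value range with curves
```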

I'm surprised that the Octane tonemapping was mentioned but the implementation in the Blender compositor wasn't. If you enable the 'Film Response Curves' add-on, create an RGB Curves node in the compositor, and then look in the properties bar (hotkey N), down the bottom there should be a 'Film Response' section. This has the same film response curves as Octane, as they are all public domain from here – http://www.cs.columbia.edu/CAVE/software/softlib/dorf.php
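
Under the hood, applying such a response curve is just a 1D lookup; a small sketch of the idea (the power curve here stands in for the measured data from that database):

```python
import numpy as np

# A film response curve is a 1D lookup table: sampled input exposures
# mapped to the film's recorded output values.
samples_in = np.linspace(0.0, 1.0, 256)   # normalized exposure
samples_out = samples_in ** (1.0 / 2.2)   # placeholder for measured data

def apply_film_response(linear_pixels):
    return np.interp(np.clip(linear_pixels, 0.0, 1.0),
                     samples_in, samples_out)

print(apply_film_response(np.array([0.1, 0.5, 1.0])))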

It has been almost three quarters of a year and I am very disappointed this is still not resolved :frowning:

While color management offers a partial solution to this, it does not affect interactive rendering, which is the main point of Cycles. Auto-application of post-processing is an even worse solution.

The Tonemap node seems to do "something" to the render, but you can't really control what, as there is no way to find settings in the node that exactly match the input going into it, so there is no baseline FROM which you can start to adjust the image. A partial workaround is to split the stream, feed the tonemapped image into one slot of a Mix node and the original into the other, and then control the amount of tone mapping with the factor; but that does not change the fact that you cannot control how much tonemapping is performed within the Tonemap node itself.
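
For reference, the split-stream workaround can be set up in Python like this (a sketch; the node type identifiers are from Blender's compositor API, and the 0.5 blend factor is arbitrary):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Render -> Tonemap, then blend tonemapped over the original.
render  = tree.nodes.new("CompositorNodeRLayers")
tonemap = tree.nodes.new("CompositorNodeTonemap")
mix     = tree.nodes.new("CompositorNodeMixRGB")
out     = tree.nodes.new("CompositorNodeComposite")

mix.inputs["Fac"].default_value = 0.5  # amount of tone mapping blended in
tree.links.new(render.outputs["Image"], tonemap.inputs["Image"])
tree.links.new(render.outputs["Image"], mix.inputs[1])   # original
tree.links.new(tonemap.outputs["Image"], mix.inputs[2])  # tonemapped
tree.links.new(mix.outputs["Image"], out.inputs["Image"])
```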

And that does not change anything about not being able to work with the tone mapped image.

Doesn't anyone have the power to pull some strings with the Cycles developers? It is just one value spinner that would sit in the Film rollout of the render settings, right under the exposure spinner. :confused:

Awesome pics in your portfolio man! Cycles is awesome but I agree with you that most realistic renders are made with other render engines.

I think there is some confusion happening with the terms being used. I think what is being talked about is a linear workflow. If you go to the Scene tab, you can change your color space to linear.

I understand that you cannot correctly preview what you are rendering if you are now in a linear color space. You can use some janky ways to correctly RENDER your color swatches and textures, but I have to agree it would be nice to have a "gamma correct" node that also has a color swatch tied into it, so that swatches and textures can be correctly configured in one swoop. Maya has this. It simply allows you to choose the color and then apply the 0.45454545 gamma value to the RGB (for a 2.2 gamma color space), so that you don't have to gamma correct your textures in Photoshop; it also corrects your color swatches if you are not using a texture.
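
For clarity, the swatch correction amounts to something like this (a sketch assuming a plain 2.2 power curve rather than the exact piecewise sRGB function):

```python
# Convert a display-referred swatch into linear space before rendering;
# this is what a gamma-correct node set to 0.4545 (1/2.2) effectively does.
def swatch_to_linear(rgb, gamma=2.2):
    return tuple(channel ** gamma for channel in rgb)

# A picked 50% grey becomes ~0.218 in linear space.
print(swatch_to_linear((0.5, 0.5, 0.5)))
```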

The idea of using a tonemap node on a camera is not simply to do tonemapping; that is done in post. It's for anyone using physical "real world" values when lighting and rendering. Maya can also, generally, tone map without a camera node, but it has that node as an option if you want to use real world camera settings like f-stops. It also lets you correctly see how your render will look when you tonemap in post, so there are no surprises. So, on one hand, tonemapping should be done in post. On the other, it's nice to artistically judge how your colors will look when you finish the tonemapping process in post, because everything is going to be darker without it.

For example, in Maya, using the sun & sky environment node will blow out your scene to almost completely white. That's because it uses real world properties of light, which are much, much brighter than your 0-1 values, and because of this you have to use a photographic exposure node in order to "see" your scene.
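
To give a feel for what such an exposure node computes, here is a hedged sketch using the standard EV100 formulation (my assumption of the math, not Maya's actual node code):

```python
import math

# Camera settings pick a scale that brings real-world luminance
# values down into the displayable 0-1 range.
def exposure_scale(f_stop, shutter_s, iso):
    ev100 = math.log2(f_stop ** 2 / shutter_s) - math.log2(iso / 100)
    max_luminance = 1.2 * 2 ** ev100   # saturation-based sensor model
    return 1.0 / max_luminance

# "Sunny 16": f/16, 1/100 s, ISO 100 for bright daylight.
print(exposure_scale(16, 1 / 100, 100))
```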

So there are several reasons why a node that plugs into a camera and does either general or more advanced tone mapping is needed, but it really comes down to being able to visualize, artistically, what you are rendering before you scrap the tonemap node and send the render off to post.

Also, this might be pretty confusing even to people coming from another program, so, even though there isn't a way to preview a generally tonemapped render, others may not even know how to enter a linear color space to begin with. Here is that information: http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Color_Management

It is in the Scene tab, which is next to the Render Layers tab and two over from the Render tab in your Properties panel.

I have more than good knowledge of linear workflow, and tone mapping is something very different. Tone mapping is compression of dynamic range to the range that a monitor can display, while linear workflow is essential to display linear data correctly on sRGB monitors. And yes, Blender (since 2.64) now offers color management in the viewport, and therefore also in the preview render, but it still does not offer basic tonemapping to reduce burning of highlights, for example a Reinhard implementation. I can now make my renders look shitty like Instagram, thanks to the huge library of camera response presets, but I still cannot meaningfully reduce burned out areas. RRT does not do the trick, as it shifts gamma and therefore washes out colors.

The workflow of doing tone mapping in post is flawed as well, because most of the time you need to work in a WYSIWYG fashion. You would understand that if you bothered to read at least one of my previous posts, where I mentioned at least one example. And only then can you choose to save the tone mapped output at the cost of losing some HDR data, or save linear output and try to reproduce the tone mapping function in post. But I cannot even remotely imagine working with a nuked-out picture just hoping I might be able to fix it in post.

This is exactly the kind of reason why Blender is so hostile to professional users. While it does 90% of the basic functionality greatly, or at least adequately, it misses the other 10% of absolutely essential functionality, and when that eventually gets implemented, it is often done very wrong. Like in this case.

rawalanche, very professional content. Have you talked directly to Brecht about your problems and insight into the situation, or have you submitted a bug report detailing why this is an incorrect implementation? The devs do not read these threads, and until my inbox showed me your response here, I had no sight of this again. If you could talk to Thomas Dinges (Dingto) or Brecht on IRC, or email them, then maybe they could take a look.

If you didn't know already, the light falloff node has a basic control that allows you to reduce or eliminate the burnout seen in highlights.

To use it, plug a Light Falloff node into the Strength input of an Emission node and set the Smooth value to something above 0. This can be set to some pretty high values, so it's able to eliminate burnout for really strong lamps as well.
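
For anyone who wants to script it, a sketch of the same setup in Python (the node identifiers are from Blender's API; the lamp name "Lamp" and the values are assumptions):

```python
import bpy

# bpy.data.lights is bpy.data.lamps in older Blender versions;
# assumes an existing lamp named "Lamp" using Cycles nodes.
lamp = bpy.data.lights["Lamp"]
lamp.use_nodes = True
nodes, links = lamp.node_tree.nodes, lamp.node_tree.links

falloff = nodes.new("ShaderNodeLightFalloff")
falloff.inputs["Strength"].default_value = 1000.0
falloff.inputs["Smooth"].default_value = 5.0  # above 0 softens burnout

links.new(falloff.outputs["Quadratic"],
          nodes["Emission"].inputs["Strength"])
```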

Guys, stop suggesting workarounds. The topic starter clearly describes what he wants and why, and from reading his posts he already knows your tricks (which are completely unrelated to his problem). His point is simply to ask for almost-realtime compositing in the Cycles preview window, or at least a tone mapping node setup. That would allow much faster and more intuitive interactive material/light tweaking.

I see two problems.

  • Reinhard tone mapping requires knowledge of the picture's brightness/contrast, so it cannot be done quickly on the fly; it needs at least 2 passes, started after all frame tiles are complete and saved in the final buffer (see the sketch after this list).
  • More accurate tone mapping needs even more passes, a big-radius Gaussian filter and other things. It is just too complex to be done in real time, and the Cycles preview window would become more sluggish, depending on the viewport size.
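
To illustrate the first point, here is a minimal sketch of the global operator from Reinhard et al. 2002 (assuming the frame is available as a NumPy luminance buffer); the log-average of the whole frame has to be computed before any pixel can be mapped, hence the two passes:

```python
import numpy as np

def reinhard_global(luminance, key=0.18):
    # Pass 1: log-average luminance of the entire frame.
    log_avg = np.exp(np.mean(np.log(luminance + 1e-6)))
    # Pass 2: scale each pixel by the key, then compress.
    scaled = key * luminance / log_avg
    return scaled / (1.0 + scaled)
```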

Maybe it would be nice to utilise OpenCL and implement a reduced Reinhard and bloom (a thresholded Gaussian layer that mimics human eye imperfection, applied just before the tone map), with parameters similar to the Compositor's? The ideal solution would be just a very fast Compositor, updating at a reasonable interval whenever the Cycles preview has changed.

@rawalanche
Rendering to OpenEXR (or Radiance HDR) means you don't lose any of the data, so even if it looks burnt out in the viewport, you can adjust it in post and not lose any detail.

However, I guess what you're looking for is somewhat similar to the way LuxRender can tonemap images during render using:

  • Linear (gives you ISO, F-Stop, Shutter and Gamma options)
  • Reinhard, which has 3 options (pre and post burn, and one other I think)
  • Autolinear (auto-selects linear values)
  • Maxwhite (useless IMO)
  • False colours, which maps colours to the brightness of the image (almost like a heatmap) with adjustable linear, log and log3 scales and min/max values

What's nice is that each of these can be altered and switched between while a render is progressing, and I use them a lot (the linear one mostly); I can then export to OpenEXR or .png. Sadly Lux doesn't feature an in-viewport render, but on-the-fly adjustments reduce the number of re-renders a lot.

You guys are trying really hard to convince a person that obviously made up their mind before posting that Cycles will not work.

I really do not understand why the average artist would need three (or four) different options for linear tonemapping in LuxRender, to be honest (especially since it can now emulate film through camera response curves).

Last I messed with them, they all seemed to do the same thing (brighten and darken the image); it's almost as mystifying as why LuxRender has three different power settings for lamps when two of them are just multipliers (the only one that's slightly different is gain, and that is because you can change it in realtime as part of lightgroups during the render).

Both your points really do go together, with the same answer:
The ISO, F-Stop and Shutter settings emulate those of a real camera. ISO and shutter have linear effects, while the selectable f-stop options do not: f/1 vs f/2 is not a factor of 2 in brightness but a factor of 4, because the f-number is based on the diameter of the aperture while the light gathered depends on its area.

The lamp options (I presume Gain, Power and Efficacy) are the 3 lamp parameters:
Gain: an arbitrary multiplier for the strength of the lamp; this is better adjusted using light group gain (the same thing, except lamps can be grouped).
Power: the power of the lamp in Watts (W). This value can be matched with a real lamp's IES file to get a correct simulation of the lamp, provided a correct efficacy value is given.
Efficacy: basically how efficient the lamp is, i.e. some incandescent globes may convert 17% of the electricity in W into actual light, while an LED globe might be 80%.
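
In other words, something like this back-of-the-envelope sketch (hypothetical math for illustration, not LuxRender's actual code):

```python
# Electrical watts in, times an efficiency fraction, times gain,
# gives the light actually emitted.
def lamp_light_watts(power_w, efficacy_fraction, gain=1.0):
    return power_w * efficacy_fraction * gain

print(lamp_light_watts(100, 0.17))  # 100 W incandescent -> 17 W of light
print(lamp_light_watts(20, 0.60))   # 20 W fluorescent   -> 12 W of light
```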

The reason these options exist is simple: LuxRender is physically based and (strives to be) physically correct.
This works rather realistically with the sun lamp, which has a default power matching our sun.

Basically, if I then have a 100W incandescent globe at 17% efficacy in my scene, a 20W fluorescent desk lamp at 60% efficacy, a 30W monitor (?? idk what efficacy), and light coming in through a window from a sun lamp, the result will be what a camera would capture, which you can tonemap with settings that match a DSLR in that situation in real life. IES files on all the lamps really help add to the realism as well.

Lowering the sun lamp in the sky will increase the relative strength of the interior lamps, as the sun lamp emulates a real sun, and if the tone mapping settings are left unadjusted you'll have the dusk equivalent of that shot.

For some people, these settings are excessive and confusing. For others, achieving realistic results (especially with lighting) is easier when you can just plonk in real life measurements for the lamps and get the lighting you see in your head (or elsewhere).

The reason I originally posted it here was not to convince the developers to add it, but because I just could not believe such an incredibly essential feature is not already implemented. And I am more and more starting to see why… because part of the Blender userbase is so uneducated that they cannot even tell linear workflow apart from tone mapping. And they often come up with absurd claims, such as that basic Reinhard tone mapping cannot be done in real time, or that it cannot be computed until the first passes are rendered.

Here is a tutorial I did for Corona renderer, which clearly shows a very basic Reinhard implementation being adjusted in realtime.

(around 3:40 mark)

In the video, you can see me using two simple controls. One is the intensity multiplier, which Blender already has; it's called exposure and is in the Film rollout of the render settings, I believe. The other one, called highlight burn, controls the tone mapping. You usually need these two together to get acceptably realistic output. Having only one is like having a material that can do refraction but can't do reflection.
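
For the curious, here is my guess at how two such controls might combine (this is not Corona's actual formula, just a sketch of exposure plus a clip-versus-compress blend):

```python
def tonemap(value, exposure=1.0, highlight_burn=1.0):
    v = value * exposure                # intensity multiplier / exposure
    compressed = v / (1.0 + v)          # Reinhard-style highlight rolloff
    clipped = min(v, 1.0)               # no compression, hard clip
    # highlight_burn = 1 lets highlights burn; 0 fully compresses them.
    return highlight_burn * clipped + (1.0 - highlight_burn) * compressed
```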

Oh, and of course any workarounds like messing with light falloff are completely unacceptable, as not only do they cripple physical accuracy, they also mean huge management overhead and workflow complication.