Color Management is confusing

Can anyone explain what’s going on behind the scenes to cause such a massive visual difference between having it on and having it off?

I’ve read the blender.org document on the internal pipeline, and on paper it looks seamless, but in practice, having color management on seems to result in an entirely different gamma in the final output. Everything looks much brighter and washed out.

Shouldn’t all the internal adjustments result in an image that looks comparably the same between the two modes? Perhaps I’m just not understanding how I should be using it.

Thanks!

Have you made sure your monitor & external tools (such as Photoshop) are set up to use the same/similar sRGB profile? :confused:

I’m not saying that’s necessarily the problem, but… couldn’t hurt to check.

also what two modes are you talking about?

edit:

In case you mean color management enabled vs. disabled:
of course it’s darker with it disabled, while with it enabled
(all) the output is corrected to the sRGB gamma 2.2 standard.

nothing weird there… :confused:
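(Here’s a little Python sketch of my own showing what that correction does to a single pixel. I’m using the plain 2.2 power-law approximation, not the exact piecewise sRGB curve, so the numbers are approximate.)

```python
# Rough sketch of the sRGB/gamma-2.2 output correction.
# Simple power-law approximation, not the exact sRGB curve.

def encode_gamma(linear, gamma=2.2):
    """Convert a linear light value (0..1) to a display value."""
    return linear ** (1.0 / gamma)

linear_value = 0.2                      # a mid-dark rendered intensity
display_value = encode_gamma(linear_value)
print(round(display_value, 3))          # ~0.481, noticeably brighter
```

Every value below 1.0 gets pushed upward by the curve, which is exactly the overall brightening you’re seeing.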

On and off?

I also get a bright resulting image, and don’t know what to do to make it more similar to the standard render (color management off).

Use lower light values for lamps/env lighting/GI. :wink:

or correct from gamma 2.2. :slight_smile:

edit:

here’s a whole thread on the subject:
http://blenderartists.org/forum/showthread.php?t=119205

No, it shouldn’t be the same, that’s the whole point :slight_smile:

Without linearisation and gamma correction (‘off’), the final render appears different. Lights tend to visually fall off too quickly, highlights burn out, all the kinds of things that we’ve been used to working around by doing things like cranking lights up brighter than they should really be, using unphysical light falloffs, etc. Using a linear workflow fixes this so it works correctly internally. I’d suggest reading a few sites on linear workflow to see what it does - there are a few down the bottom here:
http://www.blender.org/development/release-logs/blender-250/color-management/
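To put some numbers on the “lights fall off too quick” point, here’s a toy sketch of my own, assuming an inverse-square lamp and a plain 2.2 power law:

```python
# With color management off, raw linear values go straight to a
# 2.2-gamma monitor, which darkens them again -- so the falloff looks
# steeper than it physically is. With it on, the encoding cancels the
# monitor's curve and you see the true inverse-square falloff.

def falloff(distance):
    return 1.0 / distance ** 2          # physically correct falloff

def encode(linear, gamma=2.2):
    return linear ** (1.0 / gamma)      # what color management adds

for d in (1.0, 2.0, 3.0):
    linear = falloff(d)
    print(d, round(linear, 3), round(encode(linear), 3))
```

At distance 2 the lamp delivers a quarter of the light, but the display-ready value is around 0.53 - which is what a quarter of the light actually looks like to the eye on a calibrated monitor.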

Errrmmm… color management blues huh???..

Well… if you don’t know what color management is and don’t do any paid work with Blender, the best thing you can do is ignore it and disable it.

But if you are still curious (and have time for a long read, and some degree of patience to tolerate my English…):

Basic color management 101:

Q: What is color management anyways??
A: Well, color management is the conversion of color between various visual devices like monitors, printers, digital cameras, or scanners. That’s all.

Q: That’s all?? Then why do my images look like c**p when I render with color management enabled??
A: Ha, ha, ha… let’s say that gnomes are ruining your images and doing some hair-pulling for the joy of it… no… no… just a joke.

Well… the thing is that visual devices work in different ways between different brands, and a visual device used in the United States is not the same as a monitor used in Argentina, or Chile, or the United Kingdom, for example. That has to do with electronic variations between the devices themselves and the standards used by particular devices/sites/countries/whatever. That’s why any artist needs to calibrate their devices, to make sure the work will look the same on ALL CALIBRATED DEVICES. In the professional work field, every device has to be calibrated so the colors used by the artist remain the same across every calibrated monitor and printer, in sum: any electronic device capable of calibration.

Before I can continue, I have to explain how a device is calibrated… Let’s start with the GAMMA CORRECTION concept:

In an ideal world, any electronic device should display the same colors and forms in every place, under every condition… Using the concepts of color theory, forms and colors are determined by the INTENSITY OF LIGHT that reaches our eyes. In the case of an electronic visual device, in an ideal world, the intensity of light should be proportional to the INPUT SIGNAL of electricity. For the sake of simplicity, a graph can be better than a million words:

http://img156.imageshack.us/img156/234/explain1.png

Errmmm… ok ok… I know… the axes are inverted: light intensity is where input signal should be and vice versa… for now bear with me, I’m too lazy to correct this :stuck_out_tongue:. The problem is that in the real world, this never happens. The graph for real-world electronics is something like this:

http://img683.imageshack.us/img683/8633/explain2o.png

Every device will have some variations, but the curve tends to be the same. By default, every electronic visual device comes with its own correction values, defined by internal standards or by standards of the place where the device is finally sold, and not necessarily the FCC, ISO, or EU standards. So usually, when you buy a device, it is safe to assume that the device is “uncalibrated”.

This relation between “light intensity” and “input signal” is called “GAMMA”. Every device has a different curve, so before using color management, we need to calibrate the device to a determined GAMMA CORRECTION VALUE.

The value you should use for gamma correction depends on the device itself and the standard you will work under. Usually for the PC world (using CRT monitors), a gamma correction of 2.2 is good. Errr… but not everyone uses a PC: in the Mac world, the value changes to 1.8… But in television, neither of those values is used: NTSC devices use a gamma of 2.5, and PAL devices use 2.8 in most cases, and I’m just mentioning some examples… What an incredible bother, indeed!

So for this reason, visual software (2D and 3D) has to have a way to “linearize” the data so it can apply a correction for display devices… This is done using GAMMA CORRECTION… in a graph it looks like this:

http://img263.imageshack.us/img263/4614/explain3.png

The “gamma correction” is what is needed to “linearize” the data so we can process the image “the ideal way”, and so the result can be usable by any calibrated device on the market.
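A toy version of that linearize-then-correct round trip, in Python (my own sketch; I’m using a plain 2.2 power law, the real sRGB curve is close but piecewise):

```python
GAMMA = 2.2

def to_linear(encoded):
    return encoded ** GAMMA               # undo the encoding ("linearize")

def to_display(linear):
    return linear ** (1.0 / GAMMA)        # re-encode for the monitor

texture_pixel = 0.5                       # gamma-encoded texture value
linear_pixel = to_linear(texture_pixel)   # the renderer works in this space
lit_pixel = min(linear_pixel * 2.0, 1.0)  # doubling light is physically
                                          # meaningful in linear space
final_pixel = to_display(lit_pixel)       # corrected for calibrated displays
print(round(final_pixel, 3))
```

Note that decoding and re-encoding alone is a perfect round trip; the point is that all the rendering math happens in between, on the linear values.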

This process of linearization is the so-called “LINEAR WORKFLOW”. In Blender, according to Matt Ebb, you can enable it and you should not worry about it. The images, textures and materials will be corrected and rendered (except for “textures used for scalar values like bump, specular, etc.”), so the results are gamma corrected. (Of course, there’s also the possibility of getting a double gamma-corrected result, but for that you need a badly encoded image texture (i.e. with no EXIF data), and at least I haven’t seen an image like that recently…)

The resulting render will vary, of course (and this is why you are thinking “WTF happened to my image”): it will be brighter. THIS IS PERFECTLY NORMAL. The resulting image is what you will see on every calibrated device out there (as long as it is saved in a format that allows AT LEAST 16-bit color per channel; 8 bits per channel throws away too much data to be useful in professional environments, and usually comes out darker than the real, linearized render).
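A quick sketch of why 8 bits per channel hurts (toy code of mine; real formats also dither, clamp, etc.):

```python
def quantize(value, bits):
    """Snap a 0..1 value to the nearest representable level."""
    levels = 2 ** bits - 1
    return round(value * levels) / levels

dark_a, dark_b = 0.001, 0.0015          # two distinct dark linear values

# In 8 bits both collapse to the same level (shadow detail gone)...
print(quantize(dark_a, 8), quantize(dark_b, 8))

# ...while 16 bits still keeps them apart.
print(quantize(dark_a, 16), quantize(dark_b, 16))
```

Linear data packs a lot of its perceptually distinct shadow values into a tiny numeric range, which is exactly where 8 bits runs out of levels.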

So if you are used to using certain lights to get an effect, with linear workflow you need to get used to brighter images and different lighting settings, since a linearized workflow affects the image as a whole.

Of course there is much more to the concept, but this should be enough to give an easy introduction to the subject, and to why things look different using a linear workflow.

Best Regards

J.

Color management is critical if you ever want to make realistic images in Blender, and it can even work well for non-realistic styles, depending on what you’re going for. Images just look better in the end, and any washout may be from improper usage, or from using the gamma node in the compositor on top of an already gamma-corrected render.

Thanks for all of this wonderful info! It’s really helping me finally understand what all of this “linear workflow” nonsense is about.

So Blender’s workflow is set up so that you don’t need to pre-process your image textures and the final output is correct for use in a compositor like Blender or Nuke. Just make sure you created those image textures using proper gamma in the first place?

So if I’m using photoshop and creating an 8-bit image using a standard (default) gamma profile I’m safe? Or do I need to set up something custom?

Additionally, will what I see in the real-time viewport also have the correct gamma? It’s hard to tell right now, since it seems that GLSL barely works at all at the moment, but if I’m painting a texture using Blender texture painting, am I seeing the true colors of the texture, matching how it will look when rendered linearly?

Sorry for so many questions, but this is a very arcane subject and it seems every app handles it differently.

Thanks again!

I studied color design and I can tell you that it is a
very difficult topic.

That you feel confused is not a surprise - I know many
professionals in the print area who do not know how to
use CM and how to set it up.

This is not like math, right vs. wrong. There are many
concepts - I’ll spare you the tech stuff - which make it
difficult to have just one way of doing it.

Think also about CM as how to make sure color and contrast
are equal among different mediums, TV, LCD, LED, print, newspaper.

For example you can print with full color on glossy paper but only reduced
color / ink amount on newsprint to prevent bleeding.

Actually, in this case (linear workflow), it’s entirely about math. Without accounting for gamma correction, any calculations the renderer or compositing engine does are incorrect - like 2+2=10.
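The “2+2=10” bit can be shown in a few lines of Python (my own sketch, using a plain 2.2 power law as a stand-in for sRGB):

```python
GAMMA = 2.2

def to_linear(v):
    return v ** GAMMA

def to_display(v):
    return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# Correct: blend actual light intensities, then encode for display.
correct = to_display((to_linear(black) + to_linear(white)) / 2)

# Wrong: blend the already gamma-encoded values directly.
naive = (black + white) / 2

print(round(correct, 3), naive)     # ~0.73 vs 0.5 -- a visible error
```

A 50/50 mix of black and white light really is a fairly bright grey; averaging the encoded values gives you something much too dark.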

The aspect of colour management involving proofing for different output devices is on the roadmap for Blender, but not yet implemented. It’s currently just linear workflow/gamma correction.

To answer fahr’s question: If you’re using images as textures in the texture stack, or in a comp, no you don’t need to do anything special to them, it works automatically.

If you happen to speak German, I suggest you take a look at www.cleverprinting.de.
It’s kind of the German bible for understanding the basics of color management + workflow with Adobe applications.

For some reason, I really hate image comparisons like those posted by stargeizer, because I think they confuse the issue, and in a way I find them a tad useless. My reasons are the following: if you take a scene lit without color management, you make adjustments to it to the point that it looks ‘correct’ or is pleasing to the eye; if you then render this scene with color management, it will look washed out and wrong.

If you start with color management on, you will adjust your lights etc. until things look correct and pleasing to the eye; if you then render this scene with color management off, it will again look wrong, probably dark and with too much contrast.

This makes it somewhat useless to compare the two renders. A render set up to be pleasing to the user in one mode won’t work in the other, as the lights have to be changed and readjusted.

Going with the linear workflow is the correct way to go. But it has to be a workflow, as the lighting has to be done with it on. Using an old scene that was lit in 2.49, for example, and then rendering it in 2.50 with color management on will give misleading results as to what color management does, since that scene was originally lit without it. You would have to scrap your lighting settings and readjust them for a linear workflow.

tyrant monkey: very well said. Hence the ‘workflow’ in linear workflow.

You can watch this video about the CIE XYZ colorspace.

I believe colorspace, or range, plays a part in color management, since comparisons are always made within a colorspace.

if you were able to calculate and display the XYZ colorspace, there would be no need for color management or gamut correction, because it can hold the entire spectral locus (what the human eye can see), thus solving the problem once and for all.

*little girl raises her hand and wants to ask a question.

THUS SOLVES THE PROBLEM FOR GOOD! :smiley:

nah, but good links. It’s a quagmire to be in, color management. I guess a good way is to have one of those expensive but old Sony tube monitors, to always check how it will look on the telly, if that’s the scope of the production you’re doing.

otherwise I’m really happy Apple is back on gamma 2.2 and not 1.8; now I don’t design too dark for PCs anymore…

and from another perspective, it’s good for a workflow where your CGI is composited by another person sitting in e.g. Nuke, which also has a linear workflow.

it makes Blender adapt more easily to a workflow already used in post houses.

That was done exclusively to explain the concept, and to make sure everyone understands that this is not a “Blender Internal Render” issue (typical comment: “Why does Blender do it this way when <insert-your-favorite-external-render-engine-here> does another thing??”), just a general one.

But I agree, seeing it from tyrant monkey’s point of view, so I’m removing them. (Well… this will teach me to just leave things alone :slight_smile: )

Regards

J.

Not specific to this discussion but some good information,
Color theory - all videos available in 720p:

www.youtube.com/watch?v=0AYNOF7gSFg - Color & Gamut
www.youtube.com/watch?v=ByywwKtEc2o - Paint/Pigment primary.
www.youtube.com/watch?v=LoL1Mn5v6GY - Additive Color (light)
www.youtube.com/watch?v=84aULKDH7Ps - Color Mixing via HSV
www.youtube.com/watch?v=Dk1q-SNz1lo - Tricolor Theory
www.youtube.com/watch?v=26JCletWQBQ - Opponent Process

Just happened to find these after following the link by aermartin,
interesting stuff. :slight_smile:

hey broken

I understand what you mean I was trying to bring some more points
onto the table so Fahr is able to understand the depth of CM.

Color correcting the internal render is one step out of many.

For accurate results one would also need to calibrate the output
devices like display or printer.

The final rendering would then need to be prepared
for either sRGB, CMYK offset, or inkjet printing.

None of the students here ever thought about ink saturation
when printing with the plotter, and were surprised when
gray turned into black - black holes.

OK, so it seems that setting up textures and lighting etc. for rendering with color management ON is a good new habit to get used to.

BUT:
If I understand this correctly, rendering with linear workflow (so color-management-on renders look “good”) must somehow take into account the 2.2 gamma on my PC, since that’s not a linear display. Will the linear image generated, which looks good on my monitor with a gamma of 2.2, look too dark on a TV with a gamma of 2.5? What about images to be displayed on the net? Printed? Are all file formats linear?
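(To check my own understanding of the 2.2-vs-2.5 part, here’s a toy Python sketch of what I think the conversion would be, treating both displays as plain power laws, which I gather real broadcast standards only approximate. Please correct me if this is wrong:)

```python
def reencode(value, source_gamma=2.2, target_gamma=2.5):
    linear = value ** source_gamma          # back to linear light
    return linear ** (1.0 / target_gamma)   # encode for the new display

# A mid-grey tuned for a 2.2 monitor has to be stored a bit brighter
# to look the same on a 2.5 TV:
print(round(reencode(0.5), 3))              # ~0.543
```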

Last, how does one calibrate one’s devices, then? Would even a crude method, like using color test patterns, be of help?