Errrmmm… color management blues, huh?
Well… if you don’t know what color management is and don’t do any paid work with Blender, the best thing you can do is disable it and ignore it.
But if you are still curious (and have time for a long read, and some degree of patience for my not-so-great English…):
Basic color management 101:
Q: What is color management anyway?
A: Well, color management is the controlled conversion of color between various visual devices such as monitors, printers, digital cameras, and scanners. That’s all.
Q: That’s all?? Then WTF, why do my images look like c**p when I render with color management enabled??
A: Ha, ha, ha… let’s say that gnomes are ruining your images and causing some hair pulling just for the joy of it… no… no… just a joke.
Well… the thing is that visual devices work differently between brands, and a visual device used in the United States is not the same as a monitor used in Argentina, Chile, or the United Kingdom, for example. That has to do with electronic variations between the devices themselves and with the standards applied to particular devices/markets/countries/whatever. That’s why an artist needs to calibrate their devices, to make sure the work will look the same on ALL CALIBRATED DEVICES. In the professional field, every device has to be calibrated so the color the artists use remains the same across every calibrated monitor, printer, in sum: any electronic device capable of calibration.
Before I can continue I have to explain how a device is calibrated… Let’s start with the GAMMA CORRECTION concept:
In an ideal world, any electronic device would display the same colors and forms everywhere, under every condition… In terms of color theory, forms and colors are determined by the INTENSITY OF LIGHT that reaches our eyes. For an electronic visual device, in an ideal world, the intensity of light would be proportional to the INPUT SIGNAL of electricity. For the sake of simplicity, a graphic can be better than a million words:
Errmmm… ok, ok… I know… the axes are inverted: light intensity is where the input signal should be and vice versa… for now bear with me, I’m too lazy to correct this. The problem is that in the real world this never happens. The curve for real-world electronics looks something like this:
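The curve above can be sketched in a few lines of plain Python. This is a hypothetical illustration, not Blender’s actual code: a display’s emitted light is roughly the input signal raised to a power (the “gamma”), so a half-strength signal comes out much darker than half-strength light.

```python
# Minimal sketch of a non-linear display response (hypothetical values).
def display_intensity(signal, gamma=2.2):
    """Light intensity emitted for a normalized input signal (0.0-1.0)."""
    return signal ** gamma

# A half-strength signal does NOT produce half-strength light:
print(round(display_intensity(0.5), 3))  # 0.218 -> much darker than 0.5
```

That gap between 0.5 in and ~0.22 out is exactly the bend you see in the real-world curve.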
Every device will have some variation, but the curve tends to be the same shape. By default, every electronic visual device comes with its own correction values, defined by internal standards or by the standards of the market where the device is finally sold, and these are not necessarily FCC, ISO, or EU standards. So when you buy a device, it is safe to assume that it is “uncalibrated”.
This relation between “light intensity” and “input signal” is called “GAMMA”. Every device has a different curve, so before using color management we need to calibrate the device to a determined GAMMA CORRECTION VALUE.
The value you should use for gamma correction depends on the device itself and on the standard you will work under. For the PC world (using CRT monitors), a gamma correction of 2.2 is usually good. Err… but not everyone uses a PC: in the Mac world the value changes to 1.8… And in television neither of those values is used: NTSC devices use a gamma of 2.5, and PAL devices use 2.8 in most cases, and those are just a few examples… What an incredible bother, indeed!
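To see why this is such a bother, here is a quick sketch (using the example gamma values from the text above) of how the very same signal comes out at a different brightness on each kind of device:

```python
# Same input signal, different device gammas (example values from the text).
signal = 0.5
for name, gamma in [("Mac", 1.8), ("PC", 2.2), ("NTSC", 2.5), ("PAL", 2.8)]:
    print(f"{name}: {signal ** gamma:.3f}")
# Mac: 0.287, PC: 0.218, NTSC: 0.177, PAL: 0.144
```

One image, four different results: without calibration and color management, nobody sees the same picture.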
So for this reason, visual software (2D and 3D) needs a way to “linearize” the data, so it can then apply a correction for display devices… This is done using GAMMA CORRECTION… in a graphic it looks like this:
The “gamma correction” is the correction needed to “linearize” the data, so we can process the image “the ideal way” and the result can be used by any calibrated device on the market.
This process of linearization is the so-called “LINEAR WORKFLOW”. In Blender, according to Matt Ebb, you can enable it and then not worry about it. Images, textures, and materials will be corrected and rendered (except for “textures used for scalar values like bump, specular, etc.”), so the results are gamma corrected. (Of course there is also the possibility of a double gamma-corrected result, but for that you need a badly encoded image texture (i.e. with no exif data), and at least I haven’t seen any image like that recently…)
The resulting render will of course vary (and this is why you are thinking “WTF happened to my image”): it will be brighter. THIS IS PERFECTLY NORMAL. The resulting image is what you will see on every calibrated device out there (as long as it is saved in a format that allows AT LEAST 16 bits of color per channel; 8 bits per channel throws away too much data to be useful in professional environments, and usually comes out darker than the real, linearized render).
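The 8-bit problem can be made concrete with a little counting sketch (hypothetical threshold, assuming gamma 2.2): in a linear 8-bit encoding, almost no code values are left for the deep shadows, while a gamma-encoded image spends an order of magnitude more codes on them.

```python
# Sketch: how many 8-bit codes land in the deep shadows, linear vs gamma.
GAMMA = 2.2
SHADOW = 0.01  # an arbitrary deep-shadow linear intensity threshold

# Codes whose represented linear value falls at or below the threshold:
linear_codes = sum(1 for c in range(256) if c / 255 <= SHADOW)
gamma_codes = sum(1 for c in range(256) if (c / 255) ** GAMMA <= SHADOW)

print(linear_codes)  # 3  -> barely any shadow detail survives
print(gamma_codes)   # 32 -> ten times more codes for the same shadows
```

This is why linear data needs 16 bits (or float) per channel to survive a round trip through a file.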
So if you are used to certain light setups to get an effect, with a linear workflow you will need to get used to brighter images and different lighting settings, since a linearized workflow affects the image as a whole.
Of course there is much more to the concept, but this should be enough as an easy introduction to the subject, and to why things look different using a linear workflow.