Understanding color spaces?

I would like to get a better understanding of what a color space is for a final image.

I have a couple of questions -

  1. How can I look at an image file (such as a PNG or TIF) and determine what color space it was saved in?

  2. Once you have a final image, how can you change the color space of the image file?

  3. I use Windows 7, and in the Control Panel under Color Management there are color profiles. What exactly am I looking at, and how do I make use of them?

The last question, as well as the concept of color spaces in general, is a little confusing to me. My best example is that sometimes I render an image and it looks great on my monitor, but if I save the image and display it on my TV, it can look extremely bad, with far too many bright or dark areas. I thought that this had to do with the color space being used, but I don’t know how I would make adjustments for this. I can use GIMP or Blender (compositor), but how?

In the end, my preference would be to do all my renders for the computer and monitor, and then have a simple way to save the file for the TV.

You should look for info about color space, gamut and gamma. They all affect the final result because of the interchange between color spaces and display types.

Well, little did you know it, but you’ve actually uncovered a rather big can-of-worms here. But perhaps I can help you, in some small way, to make some sense of it.

Point #1: Any (and therefore every) “image file” is just “a pile of numbers.”

“Make of it what you will …” and this is the key point. The file contains “just numbers.” Maybe in the range (0.0 … 1.0), maybe (0 … 255), maybe something else. Maybe the numbers have no pre-set boundaries at all.

Most commonly, these numbers come “in threes,” and these “three” correspond to (R, G, B). But sometimes they come “in fours,” and these “four” could either correspond to (R, G, B, A) or (C, M, Y, K).
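To see that “pile of numbers” for yourself, here is a minimal sketch (using Pillow; “render.png” is just a placeholder file name):

```python
from PIL import Image

img = Image.open("render.png")    # any PNG/TIF you have lying around
print(img.mode, img.size)         # e.g. "RGB" or "RGBA", plus width x height
print(img.getpixel((0, 0)))       # e.g. (143, 97, 210) -- just numbers;
                                  # nothing here says how "red" 143 really is
```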


Point #2: Nothing in the data specifies exactly what (say …) R = 0.123456 means …

How “red” is that number? Nothing in the image-file data answers that question. Therefore, every piece of hardware or software that wants to “display the image” must somehow provide an answer. A “mapping.” A color space.

Point #3: The mapping of numeric values to colors probably isn’t linear. (Or, it might be …)

Is “R = 0.2” exactly twice-as-red as “R = 0.1?” Probably not. In the bad-old-days of cathode-ray tubes, physical equipment had all sorts of peculiar response characteristics … which were captured by the industry as gamma so that you could send a particular signal out over the air and at-least hope that the color you intended to show actually showed-up on “the TV in somebody’s double-wide trailer.” Even though today we live in a pure-digital world, lots of image-files contain data that assumes that the target display still is subject to gamma.

All three of these points apply to “color spaces.”

… And there’s still one more.

  • Display devices (video …) use (RGBA = Red Green Blue Alpha) additive colors … but …
  • Print devices (ink …) use (CMYK = Cyan Magenta Yellow Black) subtractive colors.
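Just to illustrate that additive/subtractive difference, here is a deliberately naive RGB-to-CMYK conversion. Real print workflows go through ICC profiles instead, so treat this purely as a sketch of the idea:

```python
def rgb_to_cmyk_naive(r, g, b):
    """Naive RGB (0..1) -> CMYK (0..1); ignores real ink/paper behaviour."""
    k = 1.0 - max(r, g, b)           # black = what no colored ink can provide
    if k >= 1.0:                     # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk_naive(1.0, 0.5, 0.0))   # an orange -> roughly (0, 0.5, 1, 0)
```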

Just to be clear: You mean color space as in linear vs. sRGB, right?

  1. You can’t.
    However, you can make an educated guess: if the image is 8-bit, it is most likely in the sRGB color space. If it is a 16- or 32-bit (floating point) image, it is more likely to be in a linear color space. That’s how most render engines (that support a linear workflow) decide whether or not they have to remove any gamma encoding from texture images etc. (see the first sketch after this list).

  2. Gamma is the key.
    sRGB means that a gamma value of 2.2 has been baked into the image, while an image in linear color space has a gamma value of 1.0.
    So, to turn an sRGB image into a linear one, you have to gamma correct it by applying a gamma value of 0.4545 to it (1/2.2). A linear image, on the other hand, needs a gamma value of 2.2 applied to it. Just remember: the higher the color depth of the linear image, the better the results of that final gamma correction will be (to avoid color banding etc.). I myself render to 32-bit EXR images only - in a linear workflow - for compositing (if possible). (See the second sketch after this list.)

  3. Those are presets to quickly define and apply color settings and profiles to certain applications.
    You can use them - in your case - to prevent e.g. Photoshop from applying its default gamma correction to any 8-bit output, by using “sRGB…” for the RGB part of Photoshop’s working space. You can also define CMYK color profiles for various printing media and what have you.
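For question 1, here is a rough sketch of that “educated guess” (using Pillow; the file name is just an example). Some files also carry an embedded ICC profile, and when one is present it is the most reliable answer:

```python
from PIL import Image

img = Image.open("render.png")                 # placeholder file name

# If an ICC profile is embedded, that tells you the intended color space.
icc = img.info.get("icc_profile")
print("embedded ICC profile:", "yes" if icc else "none")

# Otherwise, fall back on the bit-depth heuristic described above:
# 8 bits per channel -> almost certainly sRGB, float data -> usually linear.
print("mode:", img.mode)                       # "RGB"/"RGBA" = 8 bits per channel
```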
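And for question 2, the same gamma correction written out with NumPy (float pixel values in 0..1 assumed; this is the 2.2 approximation, not the exact sRGB curve):

```python
import numpy as np

GAMMA = 2.2

def srgb_to_linear_approx(img):
    """Remove the baked-in gamma: sRGB-ish floats (0..1) -> linear."""
    return np.clip(img, 0.0, 1.0) ** GAMMA

def linear_to_srgb_approx(img):
    """Bake the gamma back in: linear floats (0..1) -> sRGB-ish."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / GAMMA)    # 1/2.2 ~ 0.4545
```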

I hope you know that that’s not exactly true?
The sRGB curve is a bit more complex.
It’s often assumed to be gamma 2.2, but simply passing the image through a gamma node won’t produce an exact conversion.
It’s not that noticeable, but still…
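For reference, this is the actual piecewise sRGB curve being talked about (a linear toe below a small threshold, then a 2.4 exponent with an offset), which is why a plain gamma node is only an approximation:

```python
import numpy as np

def linear_to_srgb(x):
    """Exact sRGB encoding (IEC 61966-2-1); x = linear values in 0..1."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

def srgb_to_linear(x):
    """Exact sRGB decoding; x = encoded values in 0..1."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.04045,
                    x / 12.92,
                    np.power((x + 0.055) / 1.055, 2.4))

# Mid-grey shows the (small) difference against a plain 1/2.2 gamma:
print(linear_to_srgb(np.float64(0.5)), 0.5 ** (1.0 / 2.2))   # ~0.735 vs ~0.730
```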

sundialsvc4 is right: “you’ve actually uncovered a rather big can-of-worms here” :slight_smile:

Yes, I know that perfectly well, but it was not my intention to write a scientific standard work about that matter…:wink:
However, a gamma of 2.2 is imho a close enough approximation to be usable in daily work.

On another reading of the OP I tend to think that this issue is not about color spaces anyway, but about “calibrating” the output device. If the overall look of the render on a TV screen deviates that much from a PC monitor, the contrast / brightness settings of the TV are most likely to blame.

The Levels Histogram in Photoshop helps visualize how effectively the current image uses the color space it is in. If you see a lot of flatlining on either the left or the right of the graph, your image is not using the full range of the color space and should be adjusted.
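If you want to do a quick version of that check outside Photoshop, here is a small sketch (Pillow + NumPy; the file name is just an example) that reports how much of the 0..255 range each channel actually occupies:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("render.png").convert("RGB"))

# Large gaps near 0 or 255 correspond to the histogram "flatlining" at one
# end, i.e. the image is not using the full range available to it.
for name, channel in zip("RGB", np.moveaxis(img, -1, 0)):
    print(name, "min:", channel.min(), "max:", channel.max())
```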