"Linear Workflow" & Gamma in Blender 2.49b

Let’s talk gamma-correction, shall we? (Or, as the case may be, point to existing conversations…)

But first: I’m using Mac OS/X (Leopard), so my monitor is gamma-correct, and I certainly “know what gamma is.” But I also observe that there are many world-setting controls now, such as “exposure” and “range” …

Yeah, I’m a large-format photographer, so I know what these controls are. My “Blender and CG bookshelf” is comfortably full. The focus (ahem) of my question concerns workflow … today, with Blender 2.49b.

There are obviously two kinds of errors that you can make here. (1) “Not correcting” something that needs to be corrected; and (2) “correcting” what does not. Also, given the extremely rapid technical rate-of-change in Blender itself, what was said just a couple years ago might no longer apply.

So… let’s talk (or point to) gamma. With, and without, photographic image-texture material taken with a digital SLR. 'Cuz if you don’t get the lighting right, you just haven’t got the picture.

There’s a thread at the tests forum with lots of info on the subject, hope that helps:

heh. I just used a photo for a texture, but lit it with a CG lamp. It didn’t look right, which stumped me. All I was doing was brightening an already correct image. Then I realized that the lamp lit the gamma-encoded image evenly across the spectrum, thus just raising the curve upward. In order to get a net image that looked right, I had to gamma correct.

So yes, your query is valid. Any time you mix CG and photo, you have to gamma-correct.
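To see concretely what goes wrong, here is a minimal sketch in plain Python (not Blender code, and assuming a simple 2.2 power curve rather than the exact sRGB transfer function) of lighting a gamma-encoded texel directly versus lighting it in linear space:

```python
# Why lighting a gamma-encoded texture directly looks wrong.
# Assumes a plain 2.2 power curve (an approximation of sRGB).
GAMMA = 2.2

def decode(v):   # encoded (photo) value -> linear light
    return v ** GAMMA

def encode(v):   # linear light -> encoded value for display
    return v ** (1.0 / GAMMA)

texel = 0.5      # gamma-encoded pixel from a photograph
light = 0.25     # linear CG lamp intensity

# Wrong: the lamp multiplies the encoded value as-is.
wrong = texel * light

# Right: decode first, do the lighting math in linear space, re-encode.
right = encode(decode(texel) * light)

print(round(wrong, 3))   # 0.125 -- much too dark on screen
print(round(right, 3))   # 0.266
```

The two results differ by more than a factor of two, which is exactly the “it didn’t look right” effect described above.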

As I understand it, it is necessary to gamma correct the final render because of the way Blender’s lights work, and that if I do that I must then inverse gamma correct the photo textures to compensate for the gamma correction I am going to apply to the final render.

Just to clarify the topic for those who are less familiar with it … (and not to repeat other posts and books) …

“Gamma correction” refers to the fact that, on a real-world display device (esp. a true CRT), the relationship of “input values” to “perceived brightness levels” is not linear. A pixel with “R=128” is not “twice as bright-red as” one with “R=64.” Gamma correction refers to the selection of color brightness-levels so that they do render the colors that you intended, on the devices that you intended. (Fortunately, modern devices have been fairly well standardized, although if you look at the big-screen TV displays in a sports bar or an electronics store you could never tell …)

CG lights are pure-linear. Everything in that world is “simple mathematics, thankye.” But the data (other than “raw”) coming from a digital camera has been “gamma corrected,” so that if you just plopped the data right out onto an (also gamma-corrected) video monitor, the picture would look right.

Trouble is, if you simply use the image as-is, you’re “blending apples and oranges.” You’re using data that includes gamma-correction with data that does not. So, you must reverse the correction that was applied by the camera. When you do this, all of the data you’re manipulating is consistently “linearly encoded.” “R=128” now is “exactly twice as bright-red as” a pixel with “R=64.” This is the world of “simple math,” where operations like addition, subtraction, multiplication and division work exactly like you want them to.
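As a quick numeric check of the “R=128 is not twice R=64” point, here is a sketch assuming a plain 2.2 power curve:

```python
# How much actual light do the encoded 8-bit values 128 and 64 represent?
GAMMA = 2.2

def to_linear(v8):
    return (v8 / 255.0) ** GAMMA   # decode an 8-bit encoded value

ratio = to_linear(128) / to_linear(64)
print(round(ratio, 2))   # 4.59 -- encoded "128" emits ~4.6x the light of "64"
```

So on the encoded scale, doubling the stored number much more than doubles the light; only after decoding does the arithmetic behave linearly.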

Then, for any final render, you need to re-encode (“gamma correct”) the finished render so that the pixels as-displayed will actually have the color values that you intended when the photons hit your retina.

And when you are setting up the lights, you also have to be sure that you are looking at gamma-corrected pixel values so that “what you see” equals “what you did.”

All the time that you’re doing this, you are using a gamma-compensated monitor! You are looking “through a glass, darkly.” Your monitor is designed to make gamma-corrected images (like the ones your digital camera makes) look good.

Among the things I don’t know for sure is: does the Render output-window apply Gamma correction to what it shows you? What about the fast-preview (OpenGL) windows? How about those color-swatches when I set a material color or what-have-you? Gamma? Or not?

(Assume “recent-vintage Macintosh” in my case. Let’s assume that our user is using properly gamma-corrected hardware, regardless of brand. Otherwise we have too many degrees of freedom in our discussion.)

You have to know what “what you are seeing” actually consists of.

I believe for 2.49 the color swatches are not corrected. That is one of the things that was addressed for 2.5.

EDIT - Some info here, and broken’s post at the bottom of the page:-


Yurgh… let me see if I’ve got my head wrapped around this thing correctly…

  • I’m working on a nice, up-to-date, gamma correct Macintosh. So, I know that my monitor is telling me the truth. (Yay!)
  • My render is using nodes, and the last two steps are “ToneMap” and “Gamma,” for which I am using a gamma value of 2.2. (Correct? Why do some tutorials correct with 1 / 2.2? Which is correct, and why? ooh, my poor head…)
  • Now… what about all my procedural textures, and/or materials which are based on them? Do they get a gamma adjustment, and if so, how? (I presume that one must use the RGB node … but, in the texture or in the material?) Does the curve go “up and to the left,” or, “down and to the right?”
  • I see reference to a “Gamma PyNode.” Strangely, gamma is not available as a choice in the Texture or in the Material node systems although it is available in compositing. Was that intentional? Trying to tell me something? (I have located the source-code inline in a thread; I understand it and know how to install it… but… should I? Do I actually need to?)

The gamma-correcting node in the rendering network is obviously beneficial because now the lighting works like it’s supposed to. I’m having a little more difficulty understanding what is correct with regard to the procedural materials and textures.

I see some references that say, “be sure to correct only the diffuse color; not the specular component.” Oh… okay…

Remember: these textures are all procedural; they are not images. (I’m looking to create tasty, yummy bread that’s so convincing that you want to eat your monitor.) :eyebrowlift:

Not to repeat a 12-pages-and-counting thread (cited below), nor the Wiki pages that seem to be more-or-less copied from the same: is there a “short, sweet, rule-of-thumb answer” here?

1/2.2 (or .45) is the inverse gamma; use it to convert a gamma-encoded image (like a photo) into a linear image that you can then work with (mix in your CG). As a final step then, gamma the net image with 2.2 to re-encode it to look like a photo.
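The two numbers are simply inverse exponents of each other, which a short sketch (plain Python, assuming a plain 2.2 power curve in place of exact sRGB) makes explicit. Part of the 2.2-versus-1/2.2 confusion in tutorials is that some tools label a gamma control with the exponent itself, while others label it with the display gamma it compensates for:

```python
# The two directions of the 2.2 curve, as plain exponents.
GAMMA = 2.2

def to_linear(encoded):    # photo/sRGB-ish -> linear: exponent 2.2
    return encoded ** GAMMA

def to_display(linear):    # linear render -> display: exponent 1/2.2
    return linear ** (1.0 / GAMMA)

v = 0.5
# The two operations are exact inverses: a round trip is lossless.
assert abs(to_display(to_linear(v)) - v) < 1e-9

print(round(to_linear(0.5), 3))    # 0.218 -- an encoded mid-gray is dim light
print(round(to_display(0.5), 3))   # 0.73  -- linear mid-light displays bright
```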


Your node-network example is unfortunately quite confusing, Papa…

Here’s exactly what I’ve got right now. Please let me describe it carefully here.

  • There are no image inputs. All of the materials (built using material and texture noodles) specify a diffuse and specular color using the color-picker, and are modified using strictly procedural textures.
  • I assume that, since I have chosen the RGB values using the color picker, I need to apply correction in the material noodle. There does not appear to be a “gamma” node available here. So, I can use RGB. Which way the curve… up and to the left, or down and to the right? Do I apply this to both the diffuse and the specular colors?
  • Do I need to do anything similar within the texture noodles that contribute to these materials?
  • I have one interesting material-noodle that uses a Vertex Map to blend three different materials (all of this same design). Is it valid for me to apply correction once, as the final step before the output-node itself?
  • In the final compositing noodle, I am currently applying “Tonemap” correction followed by “Gamma (2.2).” Is this correct, or should gamma be 1/2.2?

With regard to point #2, I am acting upon ypoissant’s comment in posting #125 of the Tone and Gamma Corrections thread, to wit:

Whenever a color is watched by a human viewer on a computer screen, then it should be considered as being non-linear. Period. It is non linear by the simple non-linearity virtue of the monitor that displays it. If the viewer brain interprets a color as displayed on a screen as being “correct” given the context it is displayed in, then that color, to be perceived as “correct” on the computer screen must necessarily be understood as being non-linear.
And yet, I ponder, “if it looks good on the final gamma-corrected output of my render (nevermind how it looks in the material color-swatch), on my known-to-be correctly calibrated monitor, why exactly is it ‘wrong’ to use this color value in this way as-is?” In other words, if I am consciously gauging the result based on the Gamma(2.2) corrected final render output, not what I see in the color-swatch, then where exactly is my error?

I find myself “caught between a ratio and a reciprocal.” :smiley:

Now, I am, also, a large-format photographer, so I actually do grok something else from that same post:

The linearity or non-linearity of a color has nothing to do with the fact that it was lit by a light or not. The linearity or non-linearity is a relation of value between different colors. On a non-linear scale, the value that is perceived as 50% gray is actually much darker than that in a linear scale. This explains why a mid-tone neutral gray in photography is a 0.18 gray, not a 0.5 gray. So even if the image is shadeless (meaning it has no value differences due to lighting and shadows), if there are intrinsic value variations in the texture, and those variations are deemed correct when watched on a computer screen, then those values are to be understood as being non-linear.
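That 0.18 figure checks out numerically; a one-line sketch (assuming a plain 1/2.2 encoding curve):

```python
# Photographic 18% gray: a linear reflectance of 0.18 encodes to
# roughly mid-scale, which is why it is *perceived* as 50% gray.
encoded = 0.18 ** (1.0 / 2.2)
print(round(encoded, 3))   # 0.459 -- close to the middle of the 0..1 scale
```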

I’d suggest you spare yourself all this trouble and just use Blender 2.5 with the “Color Management” checked.

Good advice.

I just stumbled upon this page:


… which finally and succinctly explains the issue (using, I quite realize, either Papa’s words or yours or both) …

(bold-face mine)

The solution to this is to:
  • linearize (inverse gamma correct) all user inputs used for colouring that have been designed to look correct on users’ (sRGB) monitors, such as image textures, material colours, image comp nodes before they are used for rendering calculations
  • perform all rendering/compositing in a linear colour space
  • at the end, gamma correct the result back to sRGB for viewing or saving out 8 bit RGB images.
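Those three steps, for a single pixel, can be sketched in plain Python (assuming a simple 2.2 power curve in place of the exact sRGB transfer function; the input values are arbitrary examples):

```python
GAMMA = 2.2

def srgbish_to_linear(v):    # step 1: linearize monitor-judged inputs
    return v ** GAMMA

def linear_to_srgbish(v):    # step 3: re-encode for viewing / 8-bit output
    return v ** (1.0 / GAMMA)

# 1. Linearize the inputs that were designed by eye on an sRGB monitor.
texture_texel  = srgbish_to_linear(0.5)   # sample from a photo texture
material_color = srgbish_to_linear(0.8)   # color picked in the UI swatch

# 2. Do all shading/compositing math in linear space.
light_intensity = 1.5                     # lamp energy is already linear
shaded = texture_texel * material_color * light_intensity

# 3. Gamma-correct the finished result back for display.
pixel = linear_to_srgbish(min(shaded, 1.0))
print(round(pixel, 3))   # 0.481
```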

I am keenly watching 2.5. I’d very much like to know if it is now considered “stable enough” to switch to on the Macintosh OS/X Snow Leopard, and if so, exactly which one to use.

I am given to understand that 2.5 files are not backward-compatible to 2.49, with the result that “there would be no turning back.” This would be acceptable for this throw-away project, but it would not for work that I have already delivered to clients … not if I “had to go back” and found that I couldn’t.

2.5 is alpha software. In my opinion it is very stable, at least for the things I do. Some things should be backward-compatible if I am not mistaken, but for example animation data is not (and color correction shouldn’t be). Blender has a very cool forward-and-backward-compatibility system but some changes in 2.5 are just too big for it. I’d say try it a lot before making any big decisions, especially when using it professionally, but 2.5 has come a long way and will soon be beta.

If you have to stay with v2.49, then I suggest you just forget about the linear workflow and keep on using all the traditional hacks* that you should already be familiar with, because unless you know the ins and outs of the linear workflow and know exactly what you are doing to simulate it, you will mostly get into more trouble than solutions. Even if you take the trouble of de-gammaing your textures in Photoshop or Gimp and de-gammaing your color values in Excel, and then gamma-correcting your renders, the UI will not be able to provide you with proper color feedback, and you will have to resort to guesswork and numerous test renders to get your textures and lighting right. I know, I did that, and I do not recommend it.

*Traditional Hacks: Using linear attenuation on your lights. Using the Oren-Nayar shader instead of the normal diffuse shader. Adding bounce and negative lights here and there. And compositing everything to get good color balances.

Except for the bit about “stick with sRGB color space,” I do agree with you. This gave me a good excuse to finally take the plunge :eek: into 2.5-A2.

Quite obviously, the “color management” option is the best all-around solution to this problem. It steps in where you need it to, which is not easy (and not entirely possible) to do in 2.49b.

I am glad to have gone through the exercise of setting up linear color in 2.49b, and by comparing the results to what 2.5 gives me with that option enabled, I can now confirm that I had indeed succeeded. It does give you a much better understanding of exactly what the term means, as most exercises which involve the forceful ripping-out of many precious hair-follicles usually do. :wink: But “working in linear color space in Blender” is definitely a thing that we ought to be able to take for granted … It Just Works™ … and now, it looks like we can.

Thanks for putting this feature together.