Atom, I would think that these are the scenarios (correct me if I am wrong)
If you are using a shadeless texture you painted in GIMP (and did not gamma correct) for an object you want to composite over an image that is gamma corrected, then you should gamma-correct the texture (since the GIMP image is linear) prior to overlaying it.
If you are using a shadeless picture (taken with a camera) for an object you want to composite over an image that is gamma corrected, and the material is 0 ambient and 0 emit, then both are already gamma corrected and the renderer does not mess with shadeless, so no additional correction should be needed.
If you are using a shaded texture (lit with lights) that is an image taken with a camera, then you should linearize the image before rendering, and then gamma correct the result in the compositor, prior to overlaying on the gamma-corrected image.
If you are using shaded textures that are not image-based, then you should render them, gamma correct them in the compositor, and then overlay that on top of the photograph.
For a true linear workflow, anytime you are using any photographs, you would first linearize them by inverse gamma (.454), do the render since the renderer is linear, and then in the compositor or sequencer gamma correct the result.
In short, alphaover gamma-corrected scenes on gamma-corrected photos, or alphaover linear scenes on linearized photos and then gamma correct the composite. Just don’t overlay a linear scene render on a photo, or it won’t look right.
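To make that last point concrete, here is a tiny numeric sketch (plain Python, nothing Blender-specific, assuming a simple 2.2 power curve rather than the exact sRGB formula) of why a linear render dropped straight onto a gamma-corrected photo doesn't match:

```python
# Why a linear render pasted over a gamma-corrected photo looks wrong.
# Values are illustrative only.
GAMMA = 2.2

def to_display(linear):
    """Gamma-correct a linear value for a 2.2 display."""
    return linear ** (1.0 / GAMMA)

# A surface that reflects 50% of the light, straight out of the renderer:
linear_render = 0.5

# The same mid-grey as it appears in a gamma-corrected photo:
photo_pixel = to_display(0.5)                   # ~0.73

print(linear_render, photo_pixel)               # 0.5 vs ~0.73 -> visible mismatch
print(to_display(linear_render), photo_pixel)   # ~0.73 vs ~0.73 -> matches
```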
Here is the situation:
a) A renderer is a program that works in a linear data universe. It assumes the texture and material colors are linear and it produces linear images.
b) However, a renderer lives in a computer and the computer display is not linear. It has a gamma curve which distorts the relative appearances of midtones.
Your work, when using a linear workflow methodology, is to ensure that the renderer gets its linear data in the textures and materials you supply it. And to ensure that the linear image that the renderer produces is suitable for viewing on a computer monitor which is not linear.
So…
1) The renderer needs linear colors for all its textures and materials.
1.1) The colors you see on your monitor are not linear. No matter if those colors are in a photo or in a color selector widget or from a color picker, if you see them on your screen, then what you see on your screen is not linear.
1.1.1) It is important to understand that a given RGB color does not have an inherent property of linearity or non-linearity. A color is just a set of Red, Green and Blue values. It is linear or non-linear depending on the context in which it is used. If it is displayed on a screen, then it is not linear, simply because the display makes it intrinsically non-linear.
1.2) So you must take steps to linearize the colors in your textures and materials before the renderer uses them. You do that by inverse-gamma correcting those colors. Usually, that means applying a gamma of 0.45.
2) The renderer will produce linear images.
2.1) Because the computer display is not a linear device, the images produced by the renderer need to be un-linearized so they can be viewed on the monitor.
2.2) So you must take steps to un-linearize the rendered image so it looks good on the non-linear display. That means applying a gamma of 2.2 to your image.
So the bottom line is that you must linearize every color that you supply to the renderer, whether it is a texture that comes from a photo, a color or gradient embedded in a material, or simply an RGB color set within the color selector widget.
But you must linearize only the colors. Not the other channels such as the alpha channel, or the specularities or whatever else.
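As a minimal sketch of those two conversions (plain Python, not the Blender API; the exponents are written out explicitly since different tools label the same operation with different "gamma" numbers), and note that only the RGB channels are touched:

```python
# Minimal sketch of the two conversions, with alpha deliberately left alone.
GAMMA = 2.2

def linearize(r, g, b, a=1.0):
    """Screen-space color -> linear color for the renderer (the inverse-gamma step)."""
    return (r ** GAMMA, g ** GAMMA, b ** GAMMA, a)

def delinearize(r, g, b, a=1.0):
    """Linear render output -> screen-space color for the monitor (the gamma 2.2 step)."""
    inv = 1.0 / GAMMA
    return (r ** inv, g ** inv, b ** inv, a)

# Round trip: a color picked on screen, fed to the renderer, then displayed again.
picked = (0.5, 0.25, 0.8, 1.0)
print(delinearize(*linearize(*picked)))  # back to roughly (0.5, 0.25, 0.8, 1.0)
```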
I’m not sure I’m following here but I think it useful to clear up a possible misunderstanding. Here is something I’ve been trying to articulate for a while but I find it difficult to get through: Like I mentioned in the previous post, an RGB color is not intrinsically linear or non-linear. A color is just a color. A given color may look right in one situation and wrong in another situation. The linearity or non-linearity of a color is not encoded in the color the way the Red, Green, Blue or Alpha values are. The linearity is not encoded anywhere. Instead, a color should be labeled linear or not depending on the context where it is used.
Whenever a color is viewed by a human on a computer screen, it should be considered as being non-linear. Period. It is non-linear simply by virtue of the non-linearity of the monitor that displays it. If the viewer’s brain interprets a color displayed on a screen as being “correct” given the context it is displayed in, then that color, to be perceived as “correct” on the computer screen, must necessarily be understood as being non-linear.
Imagine this simple scenario: You are painting a texture in Gimp. To make sure the texture is looking good, you use a few photographs as references. So you are basically reproducing the non-linearity of the photos in your texture, because both the photo and the texture are displayed on the same non-linear monitor. That is why, if a painted texture looks good on your computer screen, then it must intrinsically be non-linear. Otherwise, it would look too contrasty and generally too dark. In fact, you would actually have extreme difficulty painting linear textures in Gimp on a non-linear display.
If you are using a shadeless picture (taken with a camera) for an object you want to composite over an image that is gamma corrected, and the material is 0 ambient and 0 emit, then both are already gamma corrected and the renderer does not mess with shadeless, so no additional correction should be needed.
If you are using a shaded texture (lit with lights) that is an image taken with a camera, then you should linearize the image before rendering, and then gamma correct the result in the compositor, prior to overlaying on the gamma-corrected image.
Another misunderstanding. The linearity or non-linearity of a color has nothing to do with whether it was lit by a light or not. The linearity or non-linearity is a relationship between the values of different colors. On a non-linear scale, the value that is perceived as 50% gray is actually much darker than that on a linear scale. This explains why a mid-tone neutral gray in photography is a 0.18 gray, not a 0.5 gray. So even if the image is shadeless (meaning it has no value differences due to lighting and shadows), if there are intrinsic value variations in the texture, and those variations are deemed correct when viewed on a computer screen, then those values are to be understood as being non-linear.
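For a quick check of those numbers (assuming a plain 2.2 power curve rather than the exact sRGB formula):

```python
# A value that reads as 50% gray on a 2.2-gamma display is much darker in linear terms,
# and the photographic 18% gray card sits near mid-screen once displayed.
print(0.5 ** 2.2)           # ~0.22 linear
print(0.18 ** (1.0 / 2.2))  # ~0.46 on screen
```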
If you are using shaded textures that are not image-based, then you should render them, gamma correct them in the compositor, and then overlay that on top of the photograph.
That touches on another subject. When compositing, it is better to do it with linear color data. If compositing is done with gamma-corrected (non-linear) images, then the bright parts of the composited images will overpower, even eat, the darker parts of the image. So it is better to do all compositing in linear color space and do the gamma correction only after all the compositing is done.
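Here is a small numeric illustration (plain Python, simple 2.2 power curve assumed) of how much the same 50/50 blend can differ depending on the space it is done in:

```python
# A 50/50 blend of a bright and a dark pixel, done directly on the display values
# versus done in linear light and converted back for display.
GAMMA = 2.2
bright, dark = 0.9, 0.1  # display-referred values

gamma_space_mix = 0.5 * bright + 0.5 * dark                # mixed as-is
linear_mix = 0.5 * bright ** GAMMA + 0.5 * dark ** GAMMA   # mixed in linear light
linear_mix_displayed = linear_mix ** (1.0 / GAMMA)         # back to display space

print(gamma_space_mix, linear_mix_displayed)  # ~0.50 vs ~0.66 -- clearly not the same blend
```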
For a true linear workflow, anytime you are using any photographs, you would first linearize them by inverse gamma (.454), do the render since the renderer is linear, and then in the compositor or sequencer gamma correct the result.
Linearize not only photographs but every color that is input to the renderer: photos, materials, or simple flat RGB colors.
In Blender, a shadeless texture is transferred to the output, shade for shade, with no change. So, if the texture is a gamma-corrected image already (and that is a big IF), you do not want to gamma correct it a second time in post. Max calls them self-illuminated textures, if that helps. You could linearize the image first, before it is put into the renderer, and then gamma correct in post, to keep the rule that all textures put into the renderer are linear, but right now in 2.49 there is no sure-fire way to linearize a texture…but with the nodes just submitted, yay!
For shaded images, the renderer does its linear math in computing the colors. So, lighting or, say, applying a 50% color of a gamma-corrected image to a plane that is 50% gray involves the renderer changing colors unevenly. So, to me, I would linearize the image first, then have it applied to the plane, let the renderer mix the linear image with the linear gray, and then gamma correct back to get an “expected” result.
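As a rough numeric version of that scenario (assuming the plane's 50% gray is already a linear value and a simple 2.2 power curve):

```python
# A 50% pixel from a gamma-corrected photo applied to a 50% gray (linear) plane.
GAMMA = 2.2
tex_display = 0.5   # texture value as stored in the gamma-corrected photo
plane = 0.5         # linear 50% gray surface

naive = tex_display * plane                  # renderer multiplies the photo value as-is
linearized = (tex_display ** GAMMA) * plane  # linearize the texture first

print(naive ** (1.0 / GAMMA))       # ~0.53 on screen
print(linearized ** (1.0 / GAMMA))  # ~0.36 on screen -- the darker, "expected" result
```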
You bring up an interesting point regarding the Gimp textures. I guess the artist would naturally see a gamma corrected image, but the actual colors would be linear. Heh.
VERY good point about gamma being dead last, as even say, the mix node operates linearly.
PS-our 3D Group meeting was all about this very topic…albeit slanted toward Maya, but the issues are the same.
Hey Matt, nice to hear that you are working on this issue. Very, very glad, in fact.
Before you put too much effort into this aspect, I think it would be worth discussing the different issues with this approach and alternative approaches. In my renderer at work, I have abandoned this approach entirely. After dealing with users’ difficulties and thinking about it some more, there are just way too many situations where the application does not have control over the inputs.
For instance, I see exactly where your idea of gamma correcting the color picker comes from. But what if the user picks the color using the color picker from Photoshop and enters the RGB numerical values directly? There are just too many situations that are not under the application’s control, and it can get messy quite quickly.
Right now, I believe it is better not to try to gamma correct the color input interface. Rather, just pick the colors as they are seen on the display. That is what the user sees and selected, so that is what is input. But apply the inverse-gamma correction just before the renderer uses it for shading.
Also, input inverse-gamma correction should be specifiable on a per-material basis. The user should be able to select which colors are to be inverse-gamma corrected or not. Here is another example to illustrate why this is desirable: In a scene, you have both a wood material and a metal material. The wood material comes from a texture that comes from a photograph. This one should be inverse-gamma corrected. But for the metal texture, let’s say copper, the user has entered RGB values that were picked from a metal reflectance database. Those RGB values should not be inverse-gamma corrected.
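A hypothetical sketch of that per-material switch (plain Python, nothing to do with Blender's actual data structures; the `Material` class and the `linearize_inputs` flag are made up for illustration):

```python
# Each material says whether its color inputs should be linearized before shading.
GAMMA = 2.2

class Material:
    def __init__(self, color, linearize_inputs):
        self.color = color                        # as picked/entered by the user
        self.linearize_inputs = linearize_inputs  # per-material switch

    def shading_color(self):
        """Color actually handed to the renderer."""
        if self.linearize_inputs:
            return tuple(c ** GAMMA for c in self.color)
        return self.color

wood = Material((0.55, 0.35, 0.20), linearize_inputs=True)     # from a photo texture
copper = Material((0.95, 0.64, 0.54), linearize_inputs=False)  # measured reflectance data
print(wood.shading_color(), copper.shading_color())
```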
and the second was, rather than gamma correcting in the tone map, or adding a gamma node to the comp - applying the gamma correction in the render result image editor display only. Then if you save an 8bit format, it would apply the correction at save time, or if you save in a format like EXR (which is defined to be saved in linear space, in the exr spec) it would do nothing.
That looks fine.
One issue that should be addressed, IMO, is the material preview and the real-time shaded and textured (GLSL or not) display. In order to get a good preview of the work being done, those displays should also be gamma corrected. Another benefit of gamma correcting the 3D shaded real-time window is that the shading would feel much more natural that way. The surface curvatures would be perceived more intuitively, à la Z-Brush.
In the meantime, as a practical implementation of what I think Yves is saying, to get linear data into the renderer, we can use the Texture RGB curves node as shown to linearize an image, so that, when it is gamma corrected in post (the compositor), it looks the same as the original.
And just a quick test using a linear workflow. Here in post I show you two things: applying the brightness/contrast adjustment (a linear operation) to an image taken with a camera, shown on the left blowup image as a viewer node, and then, in the compositing window below it, applying that same brightness/contrast operation to a linearized texture (I used the RGB curve shown in my previous post to remove the gamma and make the image texture linear). It is then rendered linear through the renderlayer node. I can then perform linear operations on it (e.g. the brightness/contrast node) in the compositor and apply gamma 2.2 last, with the result shown in the Render window (right). I can see a much more pleasing and expected result using the linear workflow, reflected in the image on the right, as opposed to the non-linear workflow reflected by the image on the left.
Hah, well I actually just got it working a few minutes ago
I’m not actually correcting the RGB values that are getting stored in the material/texture/etc though, I’m correcting for display only. So if you enter specific RGB colour values that’s what you’re going to get.
But you’ve got some good points, I guess there are both arguments for and against. One tricky thing about inverse correcting colour values as well as textures, is that it’s sometimes tricky to know what is actually a ‘colour’. Like if you’re making custom render passes using the RGB channels to encode information, or sub-materials within a node tree to do certain effects. A per-material switch to disable gamma correction might help here, I guess, it just ends up a bit tricky. The other thing is that Blender at the moment doesn’t convert to a different material representation at render time, which would mean gamma correcting the colour each sample, which is a bit inefficient. That’s minor though, there are hacks to get around that, I suppose.
I guess the thing that’s poorly defined is: “is a given colour swatch in the UI considered to be in linear space or non-linear space”. You could say, since it’s displayed on-screen it should be considered non-linear, but there are plenty of other occasions where that colour is not actually representing a colour in the sense of a material’s diffuse colour, but some other triplet of floats that you’d consider linear.
Anyway, that’s rambling a bit. I suppose the more important thing to do at this stage is getting the render result gamma corrected for display, and corrected for saving 8 bit images. That’s probably a trickier job, and once that’s working (with the inverse corrected textures too, but that’s pretty easy), it’ll probably be easier to test how to tackle the issue of colour swatches.
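As a rough sketch of that display/save behaviour (made-up function names, plain Python, not the actual implementation): the float render stays linear internally, gets gamma corrected only for the screen and for 8-bit file formats, and is written untouched to EXR, since that format is defined as linear.

```python
GAMMA = 2.2

def for_display(linear_pixel):
    """Gamma correct a linear float value for on-screen display only."""
    return linear_pixel ** (1.0 / GAMMA)

def save(linear_pixel, fmt):
    """Store linear data as-is in EXR; correct and quantize for 8-bit formats."""
    if fmt == "EXR":
        return linear_pixel                      # stored linear, per the spec
    corrected = linear_pixel ** (1.0 / GAMMA)    # 8-bit formats get corrected
    return round(min(max(corrected, 0.0), 1.0) * 255)

print(for_display(0.218))                 # ~0.5 on screen
print(save(0.218, "EXR"), save(0.218, "PNG"))
```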
Here’s an on-topic question:
Can linearizing a photographed texture or anything else be destructive? And gamma correcting it?
And sorry, a little more off topic:
What will happen in 2.5 to image plugins: will the present ones still work and be well included in the system?
At the moment one can already use texture nodes with, say a vectex SVG texture to linearize or do whatever else you want with it… Pretty cool.
Cool, except you are doing it the reverse way. After correcting the texture, it should appear darker, and after correcting the render, it should appear brighter. So you should apply a gamma of 0.45 on the render layers, not 2.2.
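Assuming the gamma operation in question simply raises the color to the power of the value you type in (that is the convention this advice implies), the directions work out like this:

```python
# out = in ** value convention assumed
print(0.5 ** 0.45)  # ~0.73 -> brightens, what you want on the render layers
print(0.5 ** 2.2)   # ~0.22 -> darkens, what you want on the texture
```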
Exactly. Like in my previous example, even if a user enters RGB values, those values may represent measured reflectances. In that case, you would not want to inverse-gamma correct them. Same thing for any other type of non-color map. That is why it is better to leave the linear vs non-linear specification to the user.
it’ll probably be easier to test how to tackle the issue of colour swatches.
But you don’t have to do anything to the color swatch. If the user is to choose a color from the color swatches, then the color he sees on the screen is the one he wants. You should treat the color swatches just like any other input. They use the screen color space, just like a photo does, or any other color that lives in the screen color space. All you have to do is linearize that color before sending it to the renderer.
What I learned is that the user works in the screen color space. Everything he manipulates, colorwise, should be and stay in the screen color space. This way, there is never any confusion for the user nor for the application. An RGB value in Photoshop or Gimp must look the same in Photoshop or Gimp swatches as the corresponding RGB value in Blender’s swatches. And if the user wants to set a color in Blender by eyeballing it against a displayed photo, he must be able to do that too.
Ultimately, the fact that the renderer works in a linear color space shouldn’t be something that the user has to worry about. But unfortunately, we have to cope with years of legacy hardware, and the user does have to worry about those issues.
I suppose if it goes outside of the color space it would get clamped to 0–255 (0–1) if you’re not using an EXR (or whatever) image format.
And sorry, a little more off topic:
What will happen in 2.5 to image plugins: will the present ones still work and be well included in the system?
At the moment one can already use texture nodes with, say a vectex SVG texture to linearize or do whatever else you want with it… Pretty cool.
I’m kind of (half-assed) working on a new plugin system for Blender; it shouldn’t, in theory, be too hard to make a ‘conversion’ plugin to load the current plugin types. One of these days I’ll get around to downloading the 2.5 source and getting past the ‘design’ phase.
RE the gamma nodes, I’ve been also thinking of making a ‘pyexpression’ node (kind of like pydrivers) that acts as a simple color shader – you type your python code in the box (or perhaps call a function in a text file) and it does it instead of having to go to all the trouble of coding up a pynode.
Not too sure if it would be better if it had a simple one input/one output or to add a python mode to the mix node. Either way still need to learn a bit more about wrapping functions for python and figure out where the pydriver code is hiding to learn by example.
Anyhoo, instead of having a dedicated gamma node you just type in the gamma correction function and call it a day.
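For example, the body of such a node could be as short as this (hypothetical, since the node doesn’t exist yet; `col` and `out` are just illustrative names):

```python
col = 0.218               # stand-in for the node's input value
out = col ** (1.0 / 2.2)  # gamma correct for display
print(out)                # ~0.5
```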
Yes. It is definitely destructive. The best step you can take to lose as little as possible is to convert the texture to a 16-bit file format before linearizing it. Whatever method you use, you should always keep your original photo intact.
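A quick way to see how destructive it is on an 8-bit image (plain Python; only the darkest quarter of the range is checked):

```python
# Linearizing pushes dark values down, so after re-quantizing to 8 bits many
# distinct dark levels collapse onto the same value.
GAMMA = 2.2
dark_inputs = range(64)  # the darkest quarter of the 8-bit range
survivors = {round(255 * (v / 255.0) ** GAMMA) for v in dark_inputs}
print(len(dark_inputs), "->", len(survivors))  # 64 input levels collapse to ~13
```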
I took a look at your file and rendered it. It does not look like you are doing anything wrong there. You correctly applied the gamma correction to the render layers. So all is well on this side.
However, that you have to change a lot of the things you do to get a result you like is in great part due to the fact that Blender does not automatically (or at the user’s request) inverse-gamma correct the textures and materials. So what you see in the color swatches, in the material preview window and in the real-time 3D shaded viewport does not give you a good preview of what you are doing. All you can rely on is the final render. That means a lot of cycling through the “tweak - render - evaluate” loop.
I’m kind of confused… erm, so to clarify, we inverse gamma correct textures in the material nodes, and then gamma correct them with a value of 0.45, yes? And is the gamma correction done in the material nodes or the compositor?
From what I’ve gathered, the reason we inverse gamma correct is to get them into linear color space, so the renderer will see them correctly, Then, afterwards, we gamma correct them to get the proper brightness, or something like that…
This stuff is complex, but good. Thanks for your documentation guys!
And is the gamma correction done in the material nodes or the compositor?
The final gamma correction is done in the compositor.
From what I’ve gathered, the reason we inverse gamma correct is to get them into linear color space, so the renderer will see them correctly, Then, afterwards, we gamma correct them to get the proper brightness, or something like that…