Blender Cycles color swatch gamma


Does anyone know how to set up correct gamma for color swatches? When I try to change, for example, the diffuse color of a Cycles shader, the color swatch displays in non-linear space. Therefore, when I input a gray of 0.5 value, I get a color a LOT darker than what 50% gray in linear colorspace should be… :no:

The swatch is corrected for the display, whereas the RGB value in the picker is linear. (The hex picker is non-linear. The HSV picker is also non-linear as of 2.71; it was linear before that.) So the actual color displayed by the swatch is the corrected version of the color you entered. I don’t think there’s a way to force linear display of swatches short of disabling color management entirely. Summary:

RGB picker: linear
HSV picker: non-linear
Hex picker: non-linear
Swatch: non-linear

It’s a single color, just different views of it. I’m not sure from your post if you are confused about this, or if you are trying to disable it somehow. Where are you having trouble? If you want the swatch to be WYSIWYG for the final render, you shouldn’t need to do anything (aside from adjusting the scene color profile if simple sRGB isn’t what you need).

Yes, the swatches should always be WYSIWYG. That’s the correct workflow. But the problem here is that they are corrected from the wrong side. Instead of the swatch showing the color that should be there when using a linear workflow, you get a non-linear color in the swatch, and then Cycles is hacked to compensate for it, to display the same color as the one in the swatch. So you are getting WYSIWYG, but you are getting the wrong value gradient.

In other words, if you set a gray value of 0.5 in any mainstream software (Max, Maya, Softimage, Modo, C4D), you will get the nice 50% gray you would perceptually expect. If you set the value to 0.5 in Blender, however, you get a very dark gray, and 50% gray lies somewhere around 0.7 in Blender. This is very problematic for professional users, who have established rules of thumb for setting up the correct albedo of materials.
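The “around 0.7” figure falls out of the standard sRGB transfer function: encoding a linear 0.5 with the sRGB OETF gives roughly 0.735. A quick sketch in plain Python (standard sRGB constants, not Blender’s actual code path):

```python
def srgb_encode(linear):
    """Standard sRGB OETF: linear light -> display-encoded value."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# A linear 50% gray encodes to roughly 0.735 on an sRGB display.
print(srgb_encode(0.5))  # ~0.7354
```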

So to put it simply: What should happen is rendering linearly, and displaying the swatch color linearly. What happens in Blender is that swatches are displayed non-linearly, and somewhere in Cycles, inverse gamma is applied to the color input, so that the color in Cycles looks the same as the one on the swatch.

Ah, ok, I see what you’re getting at now.

In the strictest sense, the way Blender does it is less hackish. In order to get the behavior you describe, you need to reverse-correct the entered color prior to feeding it to the renderer. Since your monitor is non-linear, 50% gray on your monitor corresponds to an albedo of about 0.2, which is useless and is the whole reason we gamma correct in the first place. (Blender uses this behavior when you enter colors in the HSV or hex-code views.)
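The “about 0.2” figure can be checked with the inverse direction (the sRGB EOTF), which decodes a display value back to linear light. A minimal sketch in plain Python, using the standard sRGB constants:

```python
def srgb_decode(encoded):
    """Standard sRGB EOTF: display-encoded value -> linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# 50% gray on an sRGB display is only about 21% linear albedo.
print(srgb_decode(0.5))  # ~0.214
```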

While all that is fantastic, it’s not a very helpful answer if you are used to working with non-linear RGB values in the picker. You might want to talk to the devs about an option to show non-linear values in the RGB tab as well. Like I said, up until a few months ago the HSV picker showed linear values too, but it now shows non-linear ones. The discussions on that one may provide a bit more context:

(and you might find the HSV picker somewhat useful in this case)

Not quite. The colors are rendered linearly, and are stored linearly. What is not linear is the display of the swatch itself. It’s being corrected on the fly as it’s shown to you. Meaning if you use an app that shows on-screen colors and point it at the swatch, you’ll see the displayed RGB value doesn’t match what’s actually entered as the RGB value.

Sure, we can dig into technical details about why something is done a certain way, but you need to step back and look at the whole picture in terms of usability. Small details Blender gets wrong, like this one, are the reason it’s still not very appealing to most users, even though it’s powerful, feature-complete software that is free.

If you open the color picker and set the value to 0.5, you expect to get 50% gray. In Blender, you get dark gray. And it doesn’t matter what happens internally; what matters is usability for the end user. It’s as if creating a box primitive in any other software resulted in a box, but in Blender it resulted in a cylinder. You could try to explain that there was some technical decision behind it, but it’s still wrong from the perspective of usability.

I’m not entirely sure what you are hinting at here, but as is typically the case with these sorts of decisions, in my experience there is much more complexity beneath the hood[1].

In this particular instance, there are a number of issues at play that aren’t entirely clear to the end artist. As someone with a little bit of insight into the mechanisms behind the color management system in Blender, and someone who has participated in these discussions in the past, I’ll try to give you an overview of the complexities.

There are two axes with regard to any color wheel transformation:

  1. Transfer / tone response curve. This is often erroneously called “gamma”. This transformation effectively takes the radiometric ratios of light and converts them to the low dynamic range of a display.
  2. Chromaticities of the primary lights. This is the actual “color” of the lights you are looking at when a tri-colored system is employed. These are fixed primaries for each of red, green, and blue and are unchanging values in terms of absolute colorimetry.
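To illustrate how these two axes compose, here is a rough sketch of a display-out transform: a 3×3 chromaticity matrix followed by a per-channel transfer curve. The matrix below is an identity placeholder (a real pipeline would derive it from the display’s primaries), and the transfer curve is the standard sRGB OETF; this is illustrative only, not Blender’s actual internals:

```python
def srgb_oetf(x):
    """Axis 1: transfer / tone response curve (standard sRGB)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

# Axis 2: chromaticity remap. Identity placeholder here; a real pipeline
# would use a matrix built from the display's measured/standard primaries.
CHROMA_MATRIX = [[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]]

def display_transform(rgb_linear):
    """Full display-out transform: chromaticity matrix, then transfer curve."""
    remapped = [sum(m * c for m, c in zip(row, rgb_linear))
                for row in CHROMA_MATRIX]
    return [srgb_oetf(c) for c in remapped]

# With an identity matrix, only the transfer curve acts on the values.
print(display_transform([0.5, 0.5, 0.5]))  # each channel ~0.735
```

Wrapping both steps into one opaque transform, as described below, means neither half can be swapped out independently.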

The issue we have here is that when the first implementation of color management landed in Blender, there was only a single display output transform. That is, with respect to the above two facets of color management, both were wrapped up into a single transform.

That leaves us with a bit of a conundrum; how to manage color with an eye towards correctness?

Visualize, if you will, working on a broadcast display with a wider gamut. Alternatively, consider a small studio that has profiled its sRGB displays and included corrections for them in its color pipeline.

In both instances, we are speaking of changing the chromaticities of the resultant colors for an output transform; specifically, item (2) of the outline above.

How, then, do we properly correct the color (chromaticity) of the display when we have only a single display-out transform that wraps both chromaticity and transfer curve?

The answer is that we need an additional transform that deals only with the transfer / tone response curve, so that the two halves can be isolated. OpenColorIO provides mechanisms for this, but again, the color management system in Blender isn’t fully fleshed out yet, and as such this mechanism isn’t in place.

So again, our options are:

  1. Skip the entire color transform and apply a brute-force (and often incorrect) assumed transfer / tone response curve. (Something that still exists in several parts of Blender and is, in no uncertain terms, absolutely incorrect color handling.)
  2. Apply the entire display-out transform.

The recent change adopted option (2) above. While it can be somewhat sub-optimal in a number of cases, it is the only method to correctly display colors on a display, which is of fundamentally critical importance to any smaller studio doing color-critical work.

There is a plan to implement the transfer / tone response transform and make it artist/studio selectable, but as with all things Blender, there are many plans and not enough developers or time. Doubly so considering the Gooseberry project, where priorities have shifted to other areas.

I hope this clears some of the issues up. It probably won’t offer solace for those that want the older wheel behaviour back immediately, but it should at least clarify some of the internal complexities.

With respect,
***[1] [SUB]Even “…expect to get 50% gray” is somewhat of a misnomer. Do you expect 50% grey to be display-referred perceptual or display linear? Where is 50% grey in a scene-referred image? Etc. There’s much more complexity here than many artists see immediately, and as such, the contexts for what appears to be a ridiculously simplistic example are varied. Consider, for example, selecting a 0.5 data value for a height map versus color grading in the compositor: the former expects raw data, and the latter more often than not requires a perceptually mapped value.[/SUB]