As I think this is an interface problem, I am asking it here…
When I want to create a material with a given RGB value, say an orange of RGB = (230, 122, 50), I put into the RGB node the values (230/255, 122/255, 50/255) to get three values between 0 and 1.
The first oddity is that the node displays the colour wrong: too light.
The second: when I render my colour via an emission shader to get the flat colour, it is way too light, the same as in the node.
Now I correct the RGB input via a gamma node of 2.2, and the render is fine; I pick the colour in Photoshop, and it is close enough (rounding errors from the division, I suppose).
Now I do the same with a mid grey of RGB = (160, 160, 160). Now (??) the node displays the grey correctly, but when I render, the grey comes out way too light. When I add the gamma correction of 2.2, the render gets way too dark! I found that when I split the RGB values into HSV values, as in the image shown (the material renders the circled area), and gamma-correct hue and saturation, I get the correct grey in the render.
And when I use this setup for the orange, I get pink in the render. So the colours seem to need a gamma correction directly after the RGB input, while the greys need the complex setup (I tested with a darker shade of grey too). This is most confusing. What if my colour is not a grey, but has only very little saturation?
So, my big questions are:
How can I set up Blender so that the RGB node really shows the RGB colour? (The colour-management settings don't seem to change this, just how the colours in the viewport render appear.)
What is the correct way to enter an RGB input so that I get exactly that colour in the render?
eppo, thanks for the reply… your link is not working, sadly…
Well, hex values… even RGB is not something I can understand intuitively. I understand a colour in HSV: if you give me a colour, I can say pretty well what its HSV values are, but for RGB values I have no idea, even less so for hex values. But the real problem is this: I have the colour I need to create, said orange, given in RGB values. I made this colour in ZBrush and transferred the vertex paint to Blender. That works fine; the orange renders as I wanted. So the colour is not picked in Blender, but given from outside Blender. But now I wanted to colour one vertex group, which had been painted black via the vertex paint, that same orange. So I thought I would simply type in the RGB values, and voilà, but as described above, this is not the case. As I want to integrate Blender and ZBrush seamlessly, I need to know how to deal with this sort of thing, not just this particular case of orange…
So, in short: the imported vertex paint renders the colours correctly, but when I enter the exact same RGB values by hand in an RGB node, it renders wrong. I wish to know how to deal with such an issue…
Although nearly indistinguishable, the two circled greyscale values in your screenshot are not identical. The render has been gamma shifted.
I think what eppo is saying is that hex codes, while not intuitive, are the easiest way to transfer colour values between programs. A hex code is just a string of text which can be copied and pasted, unlike RGB values, which are split into separate channels and may by default use different units: Photoshop uses integers 0–255, Blender uses floats 0–1.0. But if you copy a Photoshop hex code (#B4C5F4) and paste it into Blender, you're guaranteed a match. Hex is just a convenient way to copy a colour value.
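To illustrate the unit conversion: a hex code is just the three 8-bit channels packed into text, and turning it into Blender-style 0.0–1.0 floats is a straight division by 255. A small Python sketch (my own helper, not a Blender API; note it yields the nonlinear display values — Blender's hex field additionally applies its own colour-management transform internally):

```python
def hex_to_float_rgb(hex_code: str) -> tuple:
    """Unpack '#B4C5F4' into three 0.0-1.0 floats."""
    h = hex_code.lstrip('#')
    # Each pair of hex digits is one 8-bit channel; divide by 255 for floats.
    return tuple(int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

print(hex_to_float_rgb('#B4C5F4'))  # roughly (0.7059, 0.7725, 0.9569)
```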
If you want the RGB colour you see in the Blender colour picker (on your screen) to match the RGB colour you see in your render, using an emission shader at strength one, I don't think you need to gamma correct. My understanding* is that you choose the colour in RGB space, the colour noodle coming out of the node is converted to linear space (internally, silently) by Blender, the rendering and path-tracing calculations use that linear space, and then at the very end Blender internally converts it back into RGB space to be displayed on your monitor.
Like I said, that's my understanding. This stuff makes my head hurt.
photox, and eppo too, thank you both so much. With your joint help I was able to solve the problem completely, so here is how it works:
Either go the way you suggested, with hex numbers.
Or go the easy way, and type into the R, G, B slots of the RGB colour node not the numbers orange = (230, 122, 50), and not what I did, but
((230/255)^2.2, (122/255)^2.2, (50/255)^2.2) = (0.791, 0.195, 0.031). That is, the RGB values first converted to floats between 0 and 1, and then gamma-corrected with the gamma of 2.2 that my monitor has. This works with every colour.
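A footnote for anyone scripting this: gamma 2.2 is a close approximation of the official sRGB transfer curve, which is actually piecewise (a short linear toe, then a 2.4-exponent power segment). A small Python sketch (my own, not Blender code) decoding the 8-bit orange with the exact curve lands on essentially the numbers above, differing only in the last rounded digit:

```python
def srgb_to_linear(v8: int) -> float:
    """Decode one 8-bit sRGB channel to a display-linear float."""
    c = v8 / 255.0
    if c <= 0.04045:
        return c / 12.92                      # linear toe segment
    return ((c + 0.055) / 1.055) ** 2.4       # power segment

orange = tuple(round(srgb_to_linear(v), 3) for v in (230, 122, 50))
print(orange)  # (0.791, 0.195, 0.032)
```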
From what I can understand of your question, you are correct.
There is a long bit of complexity behind this observation, but the display of the colour will either be a colour-managed, display-oriented value or a display linear value. If you don’t understand this statement, rest assured that it only concerns the GUI end.
Do not correct colors in Blender via a nonlinear intensity tool or multiplication etc. This will break the scene referred rendering. You are actually compensating for something that is effectively nothing more than a UI issue, and a complex one at that. If you do so, you will be effectively breaking the internal value system.
A middle grey card value in eight bit sRGB will be 118, 118, 118. In float, it should amount to 0.461 if memory serves. This value is based on the reference specification nonlinear sRGB transfer curve function. In display linear sRGB terms, the resultant value should be around 0.184.
So while I can’t be certain where you are inputting numbers and how they are being created, you can be sure that the values I have given you above should be darn close. The HSV editor panel in the compositor’s RGB node uses a nonlinear / perceptual set of calculations, while the RGB panel should be using display linear values. The UI itself draws the vertical value gradient using a linear dump to sRGB, so it is likely inaccurate, depending on your expectations.
Again, all of this is dreadfully complex to deal with in code, so all I can offer is roughly where your cognitive dissonance is happening and hope you can wrap your head around it.
Never, ever, ever, manually “correct” values in Blender. The reason is that most of the radiometric calculations rely on the internal colour management system, or some hard coded transforms, to achieve proper rendering for engines such as Cycles.
If the interface says “Gamma Corrected” such as the hex value input in the RGB node, you can be sure that the value is being transformed into display linear. If it displays incorrectly, it is a UI issue tied to the complexity I hinted at above.
This question appears superficially simple to express a solution for, but it in fact takes a massive cognitive leap to understand what is happening under the hood.
The simplest method to understand what is happening is to divide the process into two unique parts:
The reference space.
The display / output / device transform.
What you are typically looking at is the display transform, often called a display referred image. The values go from zero to the display output maximum, which is typically expressed in numerical form as 1.0 in float, or 255 if eight bit.
Internally however, the reference space is scene referred. This means that the internal model extends from zero to a theoretical infinity. A value of 1.0 is absolutely meaningless in this model. No really, it is. If you don’t believe me, you will have to stare at the sentence until you either accept that, or move on and accept my statement. Try stacking up three point lights near the default cube and sampling the resultant values. You will see they extend well beyond 1.0. The display transform is what defines notions of white and black, while the reference scene referred system has no idea what those terms mean.
In Blender, the default display referred transform is a rather blind and ignorant clip and bend. That is, it takes the scene referred zero to infinity data and chops off everything above 1.0 via the sRGB output transform. The data between 0.0 and 1.0 is then bent according to the correct sRGB transfer curve, which is a two part curve. If you learn a bit about OpenColorIO, you could craft your own custom transforms that are more sophisticated.
Back to values and setting them via the UI. When you set a value using display referred value ranges, that is, from 0.0 to 1.0, you are generally used to providing display referred nonlinear values. Those values are corrected to reflect a more radiometric / physical concept of light. So as per your example, a value in Photoshop of a middle grey 118, will be transformed into a display linear value of 0.184.
Why? Because in order to properly model and render light, we can’t use display referred nonlinear values as it breaks our notion of “correctness” of light blending from our experience of the physical world. If you don’t believe me, try taking a soft fuzzy brush and painting a solid sRGB red value of 1.0 over top of a background of sRGB cyan of 1.0 green and 1.0 blue. See that nasty dark fringing? That is because you are witnessing a nonlinear blend.
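The fringe can also be reproduced numerically. A Python sketch (my own, using the standard sRGB curves) comparing a naive 50/50 average of red over cyan in display values, standing in for the soft brush edge, against the same average done in linear light:

```python
def srgb_to_linear(c: float) -> float:
    """Decode one display-referred sRGB channel to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    """Encode one linear-light channel back to display sRGB."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

red, cyan = (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)

# Naive blend straight in display values: the dark fringe.
naive = tuple((a + b) / 2 for a, b in zip(red, cyan))

# Correct blend: decode, average in linear light, re-encode.
linear_mix = tuple(
    round(linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2), 3)
    for a, b in zip(red, cyan)
)
print(naive)       # (0.5, 0.5, 0.5)
print(linear_mix)  # (0.735, 0.735, 0.735) -- noticeably brighter
```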
So getting back to your original question, how can we get precisely the same colour out of a render as we input into a node… well that depends! If you think about the reference space as being a modelled version of reality, and we painted a wall using a precise colour of paint, the sampled view value of that wall would shift based on our context! Is the sun lighting the wall? Is there a dark tungsten light illuminating it? Etc.
The exact same issues are present within the reference space and rendering using Cycles. How much light is illuminating our source object? Is there radiometric energy from nearby parts changing the illumination level? Etc.
In theory, you could have a texture that has scene referred values from zero to infinity. Also in theory, you could set the emission strength of the texture to 1.0, which would be “at their level”. If you look directly at such a texture as an emission object, you will be looking at precisely the values in the texture as encoded, rolled through the display output transform. So as per your question, setting a precise sRGB value, setting the emission to 1.0, and pointing right at it should reveal the exact same values via the default view transform. To test this, simply load up a default cube and delete the light. Set the material to an emission shader and set the RGB value to 0.184. Render. Your cube, when sampled, will show 0.184 in the pre-transformed area, and a CM (colour managed, post transform) value of 0.461 using the default view transform.
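A quick numeric check of that cube test (my sketch, using the simple pure-2.2 gamma approximation this thread has been working with, rather than Blender's exact colour management):

```python
GAMMA = 2.2

def decode(v8: int) -> float:
    """8-bit display value -> approximate linear-light float."""
    return (v8 / 255.0) ** GAMMA

def encode(lin: float) -> float:
    """Approximate linear-light float -> display-referred float."""
    return lin ** (1.0 / GAMMA)

linear_grey = decode(118)             # 8-bit middle grey card value
print(round(linear_grey, 3))          # 0.184 -- the pre-transform sample
print(round(encode(linear_grey), 3))  # 0.463 -- roughly the quoted 0.461 CM value
```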
Needless to say, there is a heck of a lot of complexity in what appears a simple question. If you mix all of this complexity in with the fact that there are many areas of Blender that require tidying and colour management options etc., it can become an entirely confusing experience for those folks familiar only with sRGB display referred imaging software such as Photoshop.
Needless to say, I hope this overly long response has helped to clarify at least how potentially complex your seemingly simple questions are. There’s no way around this complexity sadly, as it is basically an introduction into imaging using different models and techniques.
I know this is old, but why does the picker in the UV editor also show integer numbers when left-clicking on colors? I don’t understand why it doesn’t just show the real RGB values; this is totally useless when you want to check that the RGB values are correct.