To add a little to what OBI_Ron pointed out, Blender uses normalized RGB values. These make it possible to do blending operations like multiply. Other programs will convert these internally at some point for color arithmetic.
The normalized values also make it intuitive to support higher color ranges, as a channel value of 0.627 could easily be representative of 8, 16, or 32 bit channels.
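To illustrate the point about bit depths, here is a minimal sketch (illustrative only, not Blender's internal code) showing how one normalized float channel maps cleanly to any integer bit depth:

```python
# Sketch: a normalized [0, 1] channel value quantized to different bit depths.

def to_int(value, bits):
    """Quantize a normalized [0, 1] float to an unsigned integer of the given bit depth."""
    max_code = (1 << bits) - 1
    return round(value * max_code)

v = 0.627
print(to_int(v, 8))   # 8-bit:  160
print(to_int(v, 16))  # 16-bit: 41090
```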
Blender does give byte representations in the color picker, but they’re given in hex notation.
I don't think I will ever understand this.
The way it is now, we have to type a “.” (period) EVERY single time we enter a value.
All bow to the mighty “.”!!!
I know it is only one wasted keystroke that you must enter almost every time, but day after day it really starts getting to me. Plus I have arthritis, so sometimes it gets pretty painful when I am too lazy to go eat an aspirin.
Here is an idea! Why not just add a static “.” in the GUI? That way we won't have to type it every single time… and if we need a 1, then we just type a “.”!!
In what component? Because when I add that, I adjust the existing Blender value.
I mean I want to see RGB values, so that means val*255 and then I have the RGB value. But that is exactly what I mean: why do we need to do calculations? There should be an option to just show RGB values as-is, no calculation needed.
Because it makes more sense in a linear workflow. In other situations like digital painting, 0-255 makes sense as it is a hard limit. In CG rendering, however, you're often working with values beyond screen color space, in which case 0 - 1 (or 0% to 100%) becomes a soft limit, with values sometimes reaching into the hundreds or even thousands.
A pixel from a light in your scene, for example, might have an RGB value of 250, 250, 250. You immediately know that this pixel is 250 times brighter than a white pixel. That same value, scaled onto a 0-255 range, would read as RGB 63750, 63750, 63750.
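The arithmetic above can be sketched in a few lines (hypothetical values, not Blender code), including what happens when the same pixel is forced through a clamped 8-bit display path:

```python
# Sketch: scene-referred floats have no upper bound; integer display output does.

white = 1.0
light_pixel = 250.0           # 250x brighter than display white

print(light_pixel / white)    # 250.0 -- the ratio is immediately readable

# Scaling the same value onto a 0-255 basis just produces a bigger number...
print(light_pixel * 255)      # 63750.0 -- harder to interpret at a glance

# ...while an actual clamped 8-bit representation destroys the information:
display_8bit = min(round(light_pixel * 255), 255)
print(display_8bit)           # 255 -- now indistinguishable from plain white
```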
The problem is that integer based colour encodings are absolute garbage because they don’t represent anything.
So, what you are seeing with a float, is the actual colour ratio. Bear in mind that for things like emission, there is no such thing as “normalized”, hence the floats extend up to infinity.
TL;DR: The internal reference is float. The sooner everyone understands that colour as integer, or worse, hex, is absolute tripe, the sooner everyone has a more solid understanding of the core concepts.
False! False! False!
The internal representation is float, hence converting to integer is actually completely backwards.
And totally bunko! That is, there are a number of things going on here that are actually greatly improved when trying to communicate concepts simply by enforcing floats everywhere.
Folks go out a huntin’ lookin’ for dem hex codes, or some arbitrary value. The muddling commences.
As you rightly pointed out, sometimes the value is 0.0 to 1.0, and other times it could be 0.0 to infinity, or -5.0 to +5.0 or who knows what. The net sum is that numbers are contextual.
If we start muddying the waters with integer, now folks don’t have a clue if 2000 is an integer normalized value, or 2000.0 units of emission colour intensity, or 2000.0 units of depth. By simply presenting and offering the pixel pusher the internal representation of the float, we level the playing field, and force people to think about the media itself. Is it a colour? Does it represent a percentage of reflection as with an albedo? Is it an emission? What the hell does an emission mean anyways?
Which nicely loops back into the other Devtalk thread that I was trying to highlight: Hex codes are garbage. They create the illusion of some sort of meaning, but all they are are ratios of something.
In the case of a slider in Blender, if you set an albedo value to 0.5, you are declaring that 50% of the incoming light will reflect back. What colour is the light? What the hell colour is this “red”? Are there other “reds”? Is it linearly or nonlinearly encoded? If this is an emission, how is 30,000.981 a legitimate value? How can I input 2172.721 if I need to? How do the pieces snap together?
Slowly, we can help each other learn and understand these things. Falling back on horrible and meaningless hex codes doesn’t help anyone past these slippery questions.
It’s the measure of Ton-ness a given pixel holds influence over its surrounding tangent space.
This all reminds me of a decade ago or so when “linear workflow” was the hot buzzword in town (the “PBR” of its day). I remember reading a Siggraph paper about it and going, “…Wait… there are colors beyond 255?!” (HEAD EXPLODES)
I can still tangibly recall the various steps of utter confusion regarding colour so well that I do my darndest to try and keep an empathetic grip on that feeling when trying to explain various bits to others.
Language and concept muddling is at the core of so much of the rot. The legacy “comfort” of broken mental models helps none of us. It is remarkable how a carefully chosen unknown term such as “scene referred” can stir up just enough discomfort for someone to rethink how their mental models are constructed and arrive at a much better comprehension.
As silly as it sounds, the very same thing applies when I say hex codes are garbage; it is a helpful push down the rabbit hole of beginning to unravel firmer understanding.
Very true! And it’s not just limited to our tiny universe of digital content creation. Look at all of the display manufacturers with HDR, for example. Not only is that industry divided on a standard, but each manufacturer has a different implementation (Not to mention marketing buzzwords that are incorrect/obscure).
Then we have the content meant to take advantage of this technology. Aside from a select few videos on Youtube and Amazon Prime (Top Gear: Grand Tour looked amazing!), most of the “HDR” stuff I’ve seen seems to show a fundamental lack of understanding about having an extended color space.
So if you find a random texture you want to use, and want to make sure it adheres to a PBR albedo cheat sheet, what would be the correct procedure? Assume the cheat sheet is listed in either sRGB or linear, rather than both as here.
If what you’re asking is along the lines of: you used a color picker on your sRGB texture and got value S, and you want to make sure that matches your linear cheat sheet’s value L, then look up the formula for converting from sRGB to linear. Convert your S to its corresponding L and see if that matches your table.
( (S + 0.055) / 1.055)^2.4 = L (for normalized S above the dark linear segment, i.e. S > 0.04045)
So given the first example in your link, let’s assume the sRGB value wasn’t listed, but you used a color picker on your texture and got the value 148.
148 / 255 = 0.5804
( (0.5804 + 0.055) / 1.055)^2.4 ≈ 0.3
Yep, looks like it adheres to the table.
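For anyone who wants to script this check, here is a minimal sketch of the conversion described above. Note that the full sRGB transfer function is piecewise; the formula in the post is the upper branch, which is the one that applies to the 148 example:

```python
# Sketch of the sRGB -> linear conversion used in the worked example above.

def srgb_to_linear(s):
    """Decode a normalized sRGB value (0-1) to linear light."""
    if s <= 0.04045:
        return s / 12.92          # linear toe for very dark values
    return ((s + 0.055) / 1.055) ** 2.4

s = 148 / 255                       # picked 8-bit value, normalized
print(round(s, 4))                  # 0.5804
print(round(srgb_to_linear(s), 2))  # 0.3 -- matches the cheat sheet
```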
I do understand that for editing in certain ways this makes sense. But if I’m looking at textures, say in the UV editor or a render, and I want to check color values and compare them with other software, I’m screwed. I need to do calculations all the time or copy/paste the hex code. I think it would be useful if we could read “regular” RGB values when picking them in, say, the UV editor.
I don’t think there are a lot of people who can read and understand color values as float numbers. I’m a graphic designer; I look at colors in CMYK, RGB, or hex, not in float.
I come from a graphics world where the limits don’t go past 255. How far do they go with higher bit depth images, then? Because in 2D software they are still read against a 255 limit; white won’t get past that 255-255-255 limit.
PS: What is that calculation you put at the end, @cgCody?
In CG rendering, there is really no limit, and values are only relative to the context in which they are being used. That is why having an integer range from 0 - 255 would make the already confusing subject of a scene referred system even MORE difficult to grasp. Troy put it best in post #12. Research the subject of “scene referred space” if you really want to dive into the deep end.
About that equation. That’s really only helpful in rare situations like CarlG’s, where you want to make sure an sRGB value matches an expected linear value. In day to day usage, Blender is automatically converting your textures to linear color space. This is what the dropdown on the Image Texture node with the Color and Non-Color options is for. Non-Color is used in cases where you don’t want the color space converted (e.g. normal maps, roughness maps).
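A small sketch of why the Non-Color setting matters (illustrative only, not Blender code): a normal map stores vector components remapped from [-1, 1] into [0, 1], and running those channels through the sRGB decode silently bends every normal.

```python
# Sketch: applying a color-space decode to non-color data corrupts it.

def srgb_to_linear(s):
    """Decode a normalized sRGB value (0-1) to linear light."""
    if s <= 0.04045:
        return s / 12.92
    return ((s + 0.055) / 1.055) ** 2.4

def decode_normal(channel):
    """Map a [0, 1] normal-map channel back to a [-1, 1] vector component."""
    return channel * 2.0 - 1.0

flat = 0.5                                   # encodes component 0.0 (a flat surface)
print(decode_normal(flat))                   # 0.0 -- correct
print(decode_normal(srgb_to_linear(flat)))   # about -0.57 -- wrongly tilted
```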
Hex codes literally mean nothing, nor does “CMYK”, nor “RGB”, without coupling them to a colour space.
In the case of RGB, for example, are the lights sRGB / REC.709, or are you reading the values on an Apple MacBook Pro display, as is very common in graphic design? The values in each case are completely different lights, and as such, the ratios between them mix entirely different colours depending on the display.
Even within Blender in the default state, are the RGB values sRGB nonlinear? Are they scene referred linear? Are they Filmic code values from the Base Log? Are they aesthetic values after the contrast? Do they represent a reflective albedo, an emission, non colour alpha, non colour depth, non colour normal?
CMYK? More meaningless. Is it Fogra36 code values, US Web Coated V2? Any one of the other many CMYK ink / paper combinations in the world?
As you can see, integer based encodings don’t tell you anything, despite folks thinking they do, and hex is worse. It really is high time to let them die where they belong, so that fewer people get confused. Sadly there are already too many confused people out there who think hex codes mean something, and doubly so mean something in a compositing / rendering pipeline.