Texture Color Space issue

Hi everyone!
For the past few days I’ve been reading about color spaces and how they relate to textures, but I can’t wrap my head around it.
From what I understand, the color space assigned in a texture node serves to correctly “decode” (for lack of a better word) the image values. The image displays correctly to human vision, but after decoding it carries the original values for editing purposes, unchanged by the gamma/color space settings.
I read that this is why color space correction is reversible. For example, if I have an image in Photoshop with an embedded sRGB profile, and Photoshop correctly assigns the sRGB color space to it, it will have the proper values in the editor.

I know that I should flag every texture containing data (normal, roughness, etc.) as Non-Color, because sRGB will result in incorrect values. But as a test I exported an 8-bit roughness map from Substance Designer and made 3 variations of it in Photoshop:

  1. original color space (Dot Gain 20% according to Photoshop)
  2. converted to sRGB (using sRGB IEC61966-2.1)
  3. converted from the second map back to Dot Gain 20%

And what confuses me is that the sRGB-converted map, with the sRGB color space set in the texture node, looks different from the original map set to Non-Color. But the third map, with its color space set to Non-Color, looks exactly like the original map in the viewport.

Why is that? Shouldn’t Blender’s color space setting “cancel” the sRGB adjustments of the map and “restore” the original values?
Is it because Photoshop’s sRGB profile (sRGB IEC61966-2.1) is a bit different from Blender’s sRGB color space, and Blender decodes it incorrectly? Or did I just misunderstand something? But then why does the third map work correctly? I can’t see any difference in the viewport when I switch the roughness input directly.
Any help is much appreciated! Color space and all related topics still confuse me.

I don’t know what Dot Gain 20% refers to… are there any visual differences in PS when you switch to sRGB? I suspect that you are just switching between different color spaces that are in fact quite similar.

Non-color data assumes there isn’t any color space applied; it’s also called linear.

You can make a quick and dirty sRGB-to-linear conversion by applying a gamma filter in Photoshop to your image.
I’m not quite sure of the value; try 0.45, you may find the correct information on the web. To check that it’s working correctly, you can then apply a gamma of 2.2, and that should give you back your original image.

Loading the gamma-filtered image (with 0.45) and setting it to sRGB in Blender should then work.

Anyway, this is a dirty trick and not a perfect sRGB-to-linear conversion, but at least it makes it easier to see what’s going on…
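
For what it’s worth, the true sRGB curve is piecewise rather than a pure power function, which is why the 0.45/2.2 trick is only approximate. Here’s a minimal Python sketch (my own, following the IEC 61966-2-1 formulas) comparing the two:

```python
def srgb_to_linear(c):
    # exact piecewise sRGB decode (IEC 61966-2-1)
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # exact inverse of the above
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

for v in (0.02, 0.18, 0.5, 0.9):
    exact = srgb_to_linear(v)
    approx = v ** 2.2                 # the "dirty trick" (gamma ~ 1/0.45)
    print(f"{v:.2f}  exact={exact:.5f}  gamma2.2={approx:.5f}")
```

The two agree closely in the midtones but drift apart in the shadows, which is where the dirty trick shows.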

Finally, I’m not sure that what you are doing is the best way to get at ease with color spaces and color management. But who knows…

Good luck!

Thank you so much for your answer! When I convert from Dot Gain 20% to sRGB, it looks like it removes banding in the gradient.

I used the default Metal/Roughness template in Substance Designer, so it outputs roughness as raw data. Photoshop opens the texture as 8-bit grayscale with Dot Gain 20% as the color profile. My color settings are set to preserve embedded profiles, so it shouldn’t change anything. So does that mean that, despite my settings, Photoshop can’t display raw data and assigns a color profile to the texture upon opening it?

What confuses me is the purpose of setting the color space on texture nodes in Blender. For textures containing data, the color space should be set to Non-Color so Blender does not perform any color correction on them. But why must Blender know that a diffuse map texture is in sRGB? I thought it’s because the assigned color space can be reverted, and Blender needs this information to “subtract” the gamma correction for accurate rendering/color display. Why can’t it use the texture “as is”, with the gamma applied to it?

I can’t find any tutorial/article that thoroughly explains how color spaces work. Mostly just older articles about monitor gamuts, photography, and RAW photo processing, or simple advice like “work in linear, use Non-Color for data and sRGB for albedo”, but without explaining why exactly. So I try to connect all that information to understand how color spaces work, but I feel more and more lost.

So do I understand correctly that the workflow (for example, image editing in Photoshop) goes like this:
let’s say I edit an AdobeRGB image on an sRGB monitor:
An image has an embedded color space - it has a defined range of possible colors and a gamma correction applied to it -> then in Photoshop, for editing purposes, the gamma correction is reversed (with the proper color profile assigned), so all the computation can be done on the “original” values -> in the meantime my monitor converts the AdobeRGB image to the sRGB color range/gamma correction so it can display it “correctly”?
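
If it helps, here’s a rough numeric sketch of just the transfer-function legs of that pipeline in Python. It’s an idealized model: the AdobeRGB -> sRGB primaries/matrix conversion is left out, so treat the numbers as illustrative only:

```python
ADOBE_RGB_GAMMA = 563 / 256         # Adobe RGB (1998) gamma, ~2.19921875

def decode_adobergb(stored):
    # embedded-profile decode: stored value -> linear "original" values
    return stored ** ADOBE_RGB_GAMMA

def encode_srgb_approx(linear):
    # re-encode for an sRGB display (gamma-2.2 approximation)
    return linear ** (1 / 2.2)

stored = 0.5                        # roughly 8-bit value 128 in the file
linear = decode_adobergb(stored)    # ~0.218: what editing math operates on
shown = encode_srgb_approx(linear)  # ~0.500: what heads toward the monitor
print(linear, shown)
```

Note how the two curves nearly cancel here; the visible difference between AdobeRGB and sRGB comes mostly from the primaries (the matrix step omitted above), not from the gamma.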

Is this similar to what Blender does with Texture nodes?

I’m sorry for this long post and all the questions, but the more information I find in separate sources, the harder it is for me to connect it all together and understand how it works. Even without including LUTs, device-dependent color profiles, and such.

Kinda. You should flag it how it was meant to be used. You can make an sRGB roughness map if you want, an sRGB normal map. It’d be a little silly, and the precision ends up being in slightly different parts of the image, but you can do it.

That’d be my first guess, but I’m not sure that “incorrect” is the right word here. There are a million standards. I had an sRGB<->Linear node group that I swear was working right in one version of Blender, but now gives slight differences from Blender’s own transformation.

Because you’re getting PS to do the conversion to linear color instead of Blender, and we can at least expect any given application to do a conversion consistent with itself, if not with other software.

Most file types don’t say anything about color profile. A .png can be intended as linear or sRGB data-- frequently representing sRGB diffuse textures and linear normal maps. I don’t think there’s even a place in the format for saying what it’s supposed to do. Applications try to be smart about it, but they’re not. How smart is Blender? It’s smart enough to assume that .pngs are sRGB, which isn’t a very safe assumption. If you thought there was some secret knowledge going on there, that you should trust Blender, it’d just be Blender bamboozling you. And I’m sure PS does some of the same stuff. It doesn’t know what color profile some image has, not outside of certain specialized formats.
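
You can inspect and override that guess per image from Python, by the way. A small sketch, with a hypothetical file path:

```python
import bpy

# Hypothetical path -- Blender guesses a color space on load
# (sRGB for typical 8-bit formats such as .png).
img = bpy.data.images.load("/textures/roughness.png")
print(img.colorspace_settings.name)   # usually 'sRGB' by default

# For data maps (roughness, normals, ...) override the guess explicitly:
img.colorspace_settings.name = 'Non-Color'
```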

And editing can be done in sRGB or non-color or any other space. Ever notice how all those color mix modes work completely differently in Blender nodes from PS? Brightness/contrast, HSV? Because traditionally, PS has operated on sRGB-ish values-- not for any good reason, just because that’s what was around when it started-- and those algorithms were designed for sRGB values. There’s nothing scientific about “screen”, it’s just a function some artist found useful. But Blender’s operating in linear space with those nodes, because rather than giving any options for controlling color space in nodes, it’s doing it all at the time of texture sampling, or not at all.
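
To make that concrete, here’s a tiny sketch (my own numbers, using the gamma-2.2 approximation) of the same “screen” blend run on encoded values versus linearized ones:

```python
def screen(a, b):
    # classic "screen" blend: 1 - (1 - a) * (1 - b)
    return 1 - (1 - a) * (1 - b)

a_enc = b_enc = 0.5                 # sRGB-encoded inputs

# PS-style: blend directly on the encoded values
ps_result = screen(a_enc, b_enc)                     # 0.75

# Blender-style: linearize first, blend, re-encode for display
a_lin, b_lin = a_enc ** 2.2, b_enc ** 2.2
blender_result = screen(a_lin, b_lin) ** (1 / 2.2)   # ~0.65

print(ps_result, blender_result)
```

Same inputs, same formula, noticeably different result-- which is exactly why PS-tuned blend modes look different when the math runs in linear space.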

What actually happens when PS opens a typical file? A bmp or jpg or png? It reads a bunch of numbers indicating colors. It takes a wild stab at what color space those numbers are supposed to be in, because it doesn’t know; it’s probably going to assume sRGB for those formats, which means no conversion. When you convert, or if PS thinks it shouldn’t be sRGB, it runs math to change the numbers (like, rgb^2.2 or whatever.) When it outputs to your video card, it tells the video card to convert back to sRGB, or it lies to the video card and sends it different numbers (like, rgb^(1/2.2)). When it’s time to save the file, it does the same thing-- rgb^(1/2.2).
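
As a toy model of that open -> display -> save round trip (same gamma-2.2 stand-in as above, so an idealization rather than Photoshop’s actual profile math):

```python
GAMMA = 2.2  # stand-in for the real, piecewise profile math

def open_file(stored):
    # decode on open: stored numbers -> working values (rgb ** 2.2)
    return stored ** GAMMA

def to_display(working):
    # re-encode so an sRGB monitor shows the intended color
    return working ** (1 / GAMMA)

def save_file(working):
    # same re-encoding on save, so open -> save returns the input
    return working ** (1 / GAMMA)

stored = 0.5
working = open_file(stored)
print(to_display(working))                       # ~0.5, what gets displayed
assert abs(save_file(working) - stored) < 1e-9   # round trip is (nearly) exact
```

That cancellation is also why your third map matched the original: converting to sRGB and then back through the same profiles is, numerically, a round trip to where you started.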