For the past few days I've been reading about color spaces and how they relate to textures, but I can't quite wrap my head around it.
From what I understand, the color space assigned in a texture node serves to correctly "decode" (for lack of a better word) the image values. The image then displays correctly to human vision, but after decoding you get the original values back for editing purposes, unchanged by the gamma/color space settings.
I read that this is why color space correction is reversible. For example, if I have an image in Photoshop with an embedded sRGB profile, and Photoshop correctly assigns the sRGB color space to it, it will have the proper values in the editor.
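Just so it's clear what I mean by "decode": as far as I know the sRGB transfer function is the IEC 61966-2-1 piecewise curve, which in floating point is exactly invertible. Here's my own little Python sketch of it (my understanding, not Blender's or Photoshop's actual code):

```python
def srgb_encode(c):
    """Linear -> sRGB, IEC 61966-2-1 piecewise curve."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

def srgb_decode(s):
    """sRGB -> linear, the exact inverse of the curve above."""
    if s <= 0.04045:
        return s / 12.92
    return ((s + 0.055) / 1.055) ** 2.4

# In float, encode followed by decode gives the original value back
# (up to tiny rounding error):
x = 0.5
assert abs(srgb_decode(srgb_encode(x)) - x) < 1e-9
```

So in float the round trip is lossless, which is what I understood "reversible" to mean.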
I know that I should flag every texture containing data (normal, roughness, etc.) as Non-Color, because sRGB decoding would result in incorrect values. But as a test, I exported an 8-bit roughness map from Substance Designer and made three variations of it in Photoshop:
- the original color space (Dot Grain 20%, according to Photoshop)
- converted to sRGB (using sRGB IEC61966-2.1)
- converted from the second map back to Dot Grain 20%
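While testing, I also tried to check where precision could get lost: Photoshop was working in 8-bit, so I assume each convert step snaps values to the nearest of 256 levels. This is a toy sketch of an 8-bit convert-to-sRGB-and-back round trip, assuming the plain IEC 61966-2-1 curve and simple rounding at each step (my guess at what happens, not Photoshop's actual internals):

```python
def srgb_encode(c):
    """Linear -> sRGB, IEC 61966-2-1 piecewise curve."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def srgb_decode(s):
    """sRGB -> linear, inverse of the curve above."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def quantize8(c):
    """Snap to the nearest 8-bit level, as an 8-bit convert step would."""
    return round(c * 255) / 255

def roundtrip(v):
    """8-bit value -> sRGB (quantized to 8-bit) -> back to 8-bit."""
    return round(srgb_decode(quantize8(srgb_encode(v / 255))) * 255)

# Count how many of the 256 levels do not survive the round trip:
lost = sum(1 for v in range(256) if roundtrip(v) != v)
print(f"{lost} of 256 levels changed after the 8-bit round trip")
```

If I got this right, the bright range is where the sRGB curve compresses neighboring levels together, so some 8-bit values can't round-trip exactly. I'm not sure whether that's related to what I'm seeing, though.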
What confuses me is that the sRGB-converted map, with the sRGB color space set in the texture node, looks different from the original map set to Non-Color. But the third map, with its color space set to Non-Color, looks exactly like the original map in the viewport.
Why is that? Shouldn't Blender's color space setting "cancel" the sRGB adjustments of the map and restore the original values?
Is it because Photoshop's sRGB profile (sRGB IEC61966-2.1) is slightly different from Blender's sRGB color space, so Blender decodes it incorrectly? Or have I just misunderstood something? But then why does the third map work correctly? I can't see any difference in the viewport when I switch the roughness input directly between them.
Any help is much appreciated! Color spaces and all the related topics still confuse me.