To give you some insight before you start looking at all this, here are a few things to consider:
Historically, all the calculations made on images (2D and 3D) in every CG software that we used (Photoshop, AE, Blender, Mental Ray, V-Ray…) were done wrong until ~2010, when the notion of “linear workflow” started to appear on the CG scene.
That didn’t prevent us from making good images, because we could still make them look good even if we weren’t using the tools properly.
If you are old enough, you may remember that at some point in time, the same image didn’t look the same on a PC or a Mac, and that was a side effect of all this mess.
The main issue here is that colors in images like JPEG or PNG are saved to look good on screen, but that’s different from how light behaves and how calculations should happen on these images. Software simply didn’t care about that at the time.
To make the right calculation (anything from a blur filter to a 3D render: nearly everything that involves color and light), most of the time the image needs to be converted from sRGB to “linear”, then the calculation happens, and the result is converted back to match your display, which is generally sRGB.
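To make that conversion concrete, here is a minimal sketch of the standard sRGB transfer functions (the exact piecewise formulas from the sRGB spec), working on a single channel value in the 0–1 range:

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded channel value (0..1) to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel value (0..1) back to sRGB for display."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055
```

Note how the encoding is far from a straight line: a middle-gray sRGB value of 0.5 decodes to roughly 0.21 in linear light. That gap is exactly why doing math directly on sRGB pixels gives wrong results.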
Now, every software has more or less integrated the linear workflow, so you don’t need to worry too much about it. But it’s important to understand the history behind all this to get a better idea of how it works, and why it is the way it is.
The main issue the OP had is exactly that. Color images like textures need to be converted from sRGB to linear to get a proper 3D render. But normal maps, render passes like Z-depth, or other images that store “data” rather than color should be treated differently. We use colors to store that data, but in the end they don’t describe colors. A Z-depth pass, for instance, stores for each pixel the distance from the camera; it has nothing to do with color or light.
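A tiny example of why this matters, using the standard sRGB formulas: blending a black pixel and a white pixel 50/50, as a blur filter would. Doing the average directly on the encoded sRGB values (what old software did) gives a different, darker result than converting to linear first:

```python
def srgb_to_linear(c):
    # Standard sRGB decode (piecewise, per the sRGB spec)
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Standard sRGB encode (inverse of the above)
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0

# Naive: average the encoded sRGB values directly -> 0.5, looks too dark.
naive = (black + white) / 2

# Correct: decode to linear, average the light, re-encode for display.
correct = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)
# -> about 0.735: the physically plausible mix of black and white light

# A Z-depth or normal pass stores data, not light: it must skip this
# conversion entirely, or the values get silently distorted.
```

The same average, two visibly different grays: that is the whole linear-workflow problem in two lines, and it is also why data passes must be flagged as “non-color” so the software leaves them alone.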
Nowadays the way we handle color in images has gotten a bit more complex and refined than it was around 2010; the challenge we have now is to manage colors correctly between different sources and displays.
If you want to display the image you have on your computer on a cinema projector, other problems appear: because the cinema projector can display brighter images, with more color depth than our regular 8-bit computer screens, a conversion needs to be done between the two.
It gets even more complicated when you film something with a camera, add CG on top of that, and display everything on a cinema screen. Now that the linear workflow is integrated into software, it’s time to integrate “color management”, which deals with converting images from different sources to different displays. So everything speaks the same language, so to speak.
Even if we don’t work for cinema, how do we make sure that the 8-bit images we make on our regular monitors will display correctly on recent UHD monitors that can show brighter images with a broader range of colors?
Another issue is that nature and render engines can handle light intensities way higher than our monitors can display, even UHD ones. So again some conversion is needed so that the 3D calculations are accurate and the result still looks good on screen. And if we need to do more work on these images, we need to reverse the process, similarly to what the linear workflow does. That’s what Filmic does, and it’s called tone mapping.
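Blender’s Filmic transform is a fairly sophisticated curve, but the idea behind tone mapping can be shown with the much simpler classic Reinhard operator: squash an unbounded HDR value into the 0–1 range a display can show, while keeping dark values almost untouched. This is only an illustration of the concept, not what Filmic actually computes:

```python
def reinhard(x):
    """Map a linear HDR intensity [0, inf) into displayable [0, 1)."""
    return x / (1.0 + x)

# Dark values pass through nearly unchanged...
shadow = reinhard(0.18)    # ~0.15, close to the input

# ...while very bright values are compressed instead of clipping to white.
sun = reinhard(100.0)      # ~0.99, still below 1.0
```

Without something like this, any pixel brighter than 1.0 (a sun, a lamp, a specular highlight) would just clip to flat white on screen.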
What I want to show is that color management is complicated to understand because a lot of subjects are interconnected, so there is a lot of information to unwrap here; take your time and digest everything bit by bit. In the end it’s valuable knowledge for a CG artist.
I’m not the best person to talk about all this; there are probably oversimplifications here.
Generally, you don’t have to worry too much, because the software will try to do the right thing for you. But of course, having a better understanding of how color works, in real life and on a computer, will give you a better idea of what the software is doing and what those buttons are for, and you’ll end up making fewer mistakes and better images.
Here are some links on the subject:
This article is quite old; it talks about how to set up a linear workflow in your render engine, and it also explains why it’s important. Now, with Blender in 2022, you probably don’t need to do anything, but it can help you start understanding part of the issue:
This one is a bit longer but explains things a bit better:
This one is a really good read that takes time to digest, but the point is to get the big picture, which we tend to miss when we look for a short answer; it’s more about color science and color management: