Ahh, I see; you’re linearizing the depth texture values, which were, as I now know, non-linear to begin with. Also, I suspected that there was something like the “mix” function around, but I was too lazy to look it up. Thanks for the example - it helped me clear things up.
That said, I would still prefer this:
varying vec4 world_coord; // from vertex shader
uniform float totalmono_distance;
uniform sampler2D regular_texture;
vec4 texcol = texture2D(regular_texture, gl_TexCoord[0].xy);
float mono = dot(texcol.rgb, vec3(0.2125, 0.7154, 0.0721));
// He wanted things to desaturate as the distance increased (hence the "1.0 -"):
float mix_restraint = clamp(1.0 - (-world_coord.z / totalmono_distance), 0.0, 1.0);
gl_FragColor = mix(vec4(mono), texcol, mix_restraint);
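For completeness, the matching vertex shader can be as minimal as this (a sketch; note that despite its name, world_coord has to hold the eye-space position here, since the fragment shader uses -world_coord.z as the distance from the camera):

```glsl
varying vec4 world_coord;

void main()
{
    // Eye-space position: -world_coord.z is the distance in front of the camera.
    world_coord = gl_ModelViewMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
```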
Even though it requires a minimal vertex shader, this fragment shader is far more flexible. Note the use of totalmono_distance to set the effect radius. Since I’m using python to “inject” my shader programs into the material, I can access (and therefore change) that shader variable directly from python with shader.setUniform1f("totalmono_distance", value), without having to recompile the shader itself. Afaik, 2D filters don’t provide that level of run-time control.
Then there’s the question of readability: the whole linearization procedure is far from intuitive, partly because people don’t immediately realize that depth texture values are non-linear (and that this is why the procedure is necessary in the first place), but also because the result (z) doesn’t represent world-space depth. Also, the names “near” and “far” leave one with the impression that these are convenience variables one could freely tweak to shift the effect range, but in reality they have to match the camera clipping limits, because the clipping limits affect the depth texture values. Setting near and far to something other than the camera clipping values could still produce a roughly similar effect, but it isn’t consistent, and the discrepancies become obvious soon enough.
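For reference, here’s the usual linearization written out (a sketch in Python; d is the raw [0, 1] depth-buffer sample, and near/far must equal the camera’s actual clipping planes, which is exactly the constraint I mean above):

```python
def linearize_depth(d, near, far):
    """Convert a [0, 1] depth-buffer sample to an eye-space distance.

    Only valid when near/far match the camera's clipping planes,
    since those are what shaped the depth values in the first place.
    """
    z_ndc = 2.0 * d - 1.0  # back to normalized device coordinates [-1, 1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

# The buffer extremes map back onto the clipping planes:
print(linearize_depth(0.0, 0.1, 100.0))  # ~0.1 (the near plane)
print(linearize_depth(1.0, 0.1, 100.0))  # ~100.0 (the far plane)
```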
Also, shaders injected into the material seem to pick up additional compilation warnings (implicit conversions and so on), so that’s yet another benefit.
You should use whatever provides the desired effect for the least amount of effort.
But also, be aware that objects using the same material can all be drawn with a single shader. You just have to assign it to the material in question, in the exact same way I did in my demo.
Note: ALT-D replicated objects won’t work (they have to be distinct copies, using the same material).