Hmm, interesting. Why are you trying to do this with only one image? Is it an issue of trying to save memory? It seems like a multichannel image like that would be really non-intuitive to control.
I think you could do up to three with RGB, each channel acting as the mask for one input. However, I don’t think you could use a very specific color like #203b7b very effectively. You could set it up so that any areas that are EXACTLY that color would be masked, but you couldn’t do any blending.
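To make the channel-per-input idea concrete, here’s a minimal sketch in plain Python of how an RGB mask pixel can weight up to three inputs. The function name and signature are hypothetical, just illustrating the math a Mix/Separate RGB node setup would perform, not any actual Blender API:

```python
def blend_with_rgb_mask(mask_px, inputs):
    """Blend up to three input colors using the R, G, B channels of a
    mask pixel as per-input weights (hypothetical helper for illustration).

    mask_px: (r, g, b) mask values in 0..1
    inputs:  list of up to three (r, g, b) colors, one per channel
    """
    total = sum(mask_px)
    if total == 0:
        # No channel active: nothing to show for this pixel
        return (0.0, 0.0, 0.0)
    # Normalize so overlapping channels still sum to a sensible blend
    weights = tuple(c / total for c in mask_px)
    out = [0.0, 0.0, 0.0]
    for w, color in zip(weights, inputs):
        for i in range(3):
            out[i] += w * color[i]
    return tuple(out)
```

A pure red mask pixel `(1, 0, 0)` returns the first input unchanged, while `(0.5, 0.5, 0)` gives an even blend of the first two, which is exactly the kind of soft transition you can’t get from exact-color matching.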
Rather than having separate shaders mixed by a factor, you can have one shader whose variables are driven by image maps. For instance, say you have an object with shiny metal, shiny plastic, rough plastic, rough metal, wood, fabric, and rust. That could be done with one node group that has color, roughness, bump, and something like “metalness” inputs. Each input is driven by an image map. So the fabric has a different color, roughness, bump, and metalness than the metal area, which in turn differs from the plastic, and so on.
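Here’s a rough sketch of what that single “uber shader” amounts to, written as a toy Python function rather than actual Blender nodes. The constants and the metalness blend are simplified illustrations (the 0.04 dielectric specular is a common PBR convention, not something pulled from Blender’s internals):

```python
def lerp(a, b, t):
    """Linear interpolation between a and b by factor t."""
    return a + (b - a) * t

def uber_shader(base_color, roughness, metalness):
    """Toy single-shader model: one function covers every opaque
    material, with its behavior driven entirely by per-pixel map values.

    metalness blends between a dielectric response (faint, untinted
    specular plus diffuse) and a metallic one (tinted specular, no
    diffuse). Names are illustrative, not Blender's node internals.
    """
    dielectric_spec = (0.04, 0.04, 0.04)  # typical dielectric reflectance
    metal_spec = base_color               # metals tint their specular
    specular = tuple(lerp(d, m, metalness)
                     for d, m in zip(dielectric_spec, metal_spec))
    diffuse = tuple(c * (1.0 - metalness) for c in base_color)
    return {"diffuse": diffuse, "specular": specular,
            "roughness": roughness}
```

Feeding this function different map values per pixel is the whole trick: fabric pixels get high roughness and zero metalness, the metal area gets the opposite, but it’s all one shader.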
From a rendering perspective that’s only 4 maps, which should cover basically any non-transparent material, and it’s far fewer shaders than giving each material its own. Of course, if you want to bring in transparency or subsurface scattering you’ll need a map for that distinction, but even then you’d only need about two more maps, bringing the total to 5. You’d reuse the same set of color/roughness/etc. maps to distinguish between various types of glass.