It will be easier to help you out if you do a small bit of research into both “scene referred” and “albedo” first.
The main thing to understand is that reflectance values are linear ratios from 0.0 to 1.0 (0% to 100%). If you are designing physically plausible surfaces, those values will never actually reach 0.0 (a black hole) nor 1.0 (a physically impossible perfect mirror).
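To make that concrete, here is a minimal sketch of clamping texture values into a plausible reflectance band. The exact bounds are my assumption here; something in the neighbourhood of 0.02 to 0.9 is a common rule of thumb, not a hard standard:

```python
import numpy as np

def clamp_albedo(albedo, lo=0.02, hi=0.9):
    """Clip linear reflectance ratios away from the impossible extremes.

    0.0 would be a perfect absorber and 1.0 a perfect reflector; neither
    exists in real materials, so we pull values into a plausible band.
    """
    return np.clip(np.asarray(albedo, dtype=np.float64), lo, hi)

print(clamp_albedo([0.0, 0.5, 1.0]))  # -> [0.02 0.5  0.9 ]
```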
Given that they are encoded linearly, and given that most people try to use photographs to make their textures, it takes a bit of mental gymnastics to appreciate the complexity involved.
If you have a single light source, somehow manage to block out all non-diffuse reflections in the photo via polarizing techniques, expose properly, and use the unprocessed camera data, you have a hope of getting decent albedo ratios out of a texture image. Sadly, online, people aren’t aiming for “data” in the majority of cases, and instead apply aesthetic twists to the photo.
The easiest way to think of these “aesthetic tweaks” is as adjustments that bend the data away from the linear scene reflectance values toward something nonlinear. This forms a sort of knot / encoding that, without instructions, cannot be undone. If I tie a simple square knot in a piece of rope, and then tie more knots on top of it, I can’t say “just undo it as you would a square knot” and expect you to untie it. The same goes for the data; it will typically end up bent.
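The one knot that usually can be undone is the documented sRGB transfer function, because its instructions are published. A sketch of that inverse, assuming the image carries a plain sRGB encoding and nothing else on top:

```python
import numpy as np

def srgb_to_linear(v):
    """Undo the standard sRGB transfer function (IEC 61966-2-1).

    This only works if sRGB encoding is the *only* knot in the data;
    any extra aesthetic grading stacked on top cannot be untied this way.
    """
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

# A mid-grey sRGB code value of 0.5 decodes to roughly 0.214 linear.
print(srgb_to_linear(0.5))
```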
Even if the data isn’t mangled too badly, you can also see that a photo used as a texture frequently covers a wide chunk of the code values in the image, including 0.0 and 1.0, exactly those implausible values. On top of this, the “proper exposure” a texture is shot at is frequently the wrong albedo level for the actual surface material. A good example is an object that reflects very little light, like asphalt. How would we photograph it? Its albedo is very low, but if we expose for that, the image would contain very little data. Conversely, if we overexpose to capture the data, we then need to carefully scale the image back down to the proper albedo level for asphalt.
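That scale-back step can be sketched as follows. The target of 0.05 for asphalt is an assumed ballpark figure, and the texture is assumed to already be linearized:

```python
import numpy as np

def rescale_to_albedo(linear_texture, target_mean=0.05):
    """Scale a well-exposed linear texture so its mean hits a target albedo.

    Shoot bright to capture detail, then divide the exposure back out so
    the average reflectance matches the real material (e.g. dark asphalt).
    """
    tex = np.asarray(linear_texture, dtype=np.float64)
    return tex * (target_mean / tex.mean())

# An overexposed capture averaging 0.5 gets pulled down to a 0.05 mean,
# keeping the relative variation between texels intact.
tex = np.array([0.4, 0.5, 0.6])
print(rescale_to_albedo(tex).mean())  # -> 0.05
```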
Phew. Quite a rabbit hole, and that doesn’t even begin to cover the larger mental hurdle of understanding the difference between scene referred and display referred data ranges.
If you can, try to research your particular issue more closely, and report back with a bit more information and context, and I’m sure the folks around here can help you out.