Lol, yeah, but what’s it matter…
My own remap looks pretty different from yours. Yes, when I need to know the range of something, I'm generally doing a greater-than -> emission check or something like that. But I don't think musgrave actually has any hard ceiling. Maybe I'm wrong, but I think higher values are just increasingly uncommon. So literally grabbing the maximum of the musgrave function over its entire domain would, I think, just give you infinity. Well, in math world, not necessarily in 32-bit-float computer world.
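Just to illustrate the "increasingly uncommon" part, here's a toy Python stand-in-- not Blender's actual musgrave implementation, the value-noise and all the numbers are made up purely for illustration-- that samples an fBm-style octave sum at a million random points. The observed max should land well below the theoretical sup of the octave sum, and samples get rapidly rarer the closer the threshold climbs toward it.

```python
# Toy illustration only: NOT Blender's musgrave. A cheap value-noise fBm,
# sampled at lots of random points, to see how rare the high values are.
import numpy as np

def value_noise(x, y, seed=0):
    """Hash-based value noise on a 2D lattice, bilinearly interpolated, in [0, 1)."""
    xi, yi = np.floor(x).astype(int), np.floor(y).astype(int)
    xf, yf = x - xi, y - yi

    def hash01(ix, iy):
        h = np.sin(ix * 127.1 + iy * 311.7 + seed * 74.7) * 43758.5453
        return h - np.floor(h)                      # pseudo-random in [0, 1)

    u, v = xf * xf * (3 - 2 * xf), yf * yf * (3 - 2 * yf)   # smoothstep weights
    top = hash01(xi, yi) * (1 - u) + hash01(xi + 1, yi) * u
    bot = hash01(xi, yi + 1) * (1 - u) + hash01(xi + 1, yi + 1) * u
    return top * (1 - v) + bot * v

def fbm(x, y, octaves=8, lacunarity=2.0, gain=0.5):
    """Octave sum: each octave raises frequency and shrinks amplitude."""
    total, amp, freq = 0.0, 1.0, 1.0
    for i in range(octaves):
        total = total + value_noise(x * freq, y * freq, seed=i) * amp
        amp *= gain
        freq *= lacunarity
    return total

pts = np.random.default_rng(0).uniform(0, 1000, size=(2, 1_000_000))
vals = fbm(pts[0], pts[1])
print("theoretical sup of the octave sum:", sum(0.5 ** i for i in range(8)))
print("observed max over 1e6 samples:   ", vals.max())
for t in (1.2, 1.5, 1.8):
    print(f"fraction of samples > {t}: {np.mean(vals > t):.4%}")
```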
My remap has inputs for value, sourceMin, sourceMax, destMin, and destMax. Yeah, 90% of the time I'm remapping from 0,1 or to 0,1. But also -1,1 for dot/sin/cos, 0 to 2*pi for radians, stuff like that. Sometimes weird numbers like 3^0.5, for mapping UVs into a space appropriate for creating hexagons procedurally. And sometimes even variable source/destination min/maxes, which is occasionally pretty cool, although I can't think of a good example offhand. Of course, all of this stuff is doable with math nodes-- that's how I made the group-- but thinking about things in terms of remaps is a lot more intuitive for me than thinking in terms of, okay, add this, now multiply by this…
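For anyone who'd rather read it as code, the remap is just this-- a plain Python restatement, not the group itself, with argument names mirroring the group's inputs. The math-node version is the same expression built out of subtract/divide/multiply/add nodes.

```python
import math

def remap(value, source_min, source_max, dest_min, dest_max):
    """Linearly map value from [source_min, source_max] to [dest_min, dest_max]."""
    t = (value - source_min) / (source_max - source_min)   # normalize to 0..1
    return dest_min + t * (dest_max - dest_min)             # scale/offset into dest

# A few of the cases mentioned above:
remap(0.3, -1.0, 1.0, 0.0, 1.0)            # dot/sin/cos output into 0..1
remap(0.25, 0.0, 1.0, 0.0, 2 * math.pi)    # 0..1 into radians
remap(0.5, 0.0, 3 ** 0.5, 0.0, 1.0)        # sqrt(3)-scaled UV space back into 0..1
```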
I don’t really know the best way to get inputs onto things that don’t have them-- the UI of it all. That’s an issue for curve handles, but also for mapping nodes (where I also tend to use my own separate rotation node groups, and do my scales with wrong-feeling mixRGB multiplies and my translates with vector adds, just so I can use variable inputs). But if somebody has a good UI solution, a remap is just an RGB Curves node: set the curve's control points so that f(sourceMin)=destMin and f(sourceMax)=destMax, with the added advantage that it can be a linear remap if you want, but it doesn't have to be. Yeah, a clamp option, including a clamp range, would be nice too, but that’s a pretty easy node group to make. I don’t think a preview range is essential-- for most of my remaps, I know exactly what range of inputs to expect, so I know exactly what range of outputs to expect.
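And here's roughly how a clamp option with a clamp range could look, again just a sketch-- the clamp_min/clamp_max names are mine, not from any existing node group. By default it clamps to the destination range; pass a separate clamp range if you want one.

```python
def remap_clamped(value, source_min, source_max, dest_min, dest_max,
                  clamp_min=None, clamp_max=None):
    """Linear remap with an optional clamp; clamps to the dest range by default."""
    t = (value - source_min) / (source_max - source_min)
    out = dest_min + t * (dest_max - dest_min)
    lo = dest_min if clamp_min is None else clamp_min
    hi = dest_max if clamp_max is None else clamp_max
    return min(max(out, min(lo, hi)), max(lo, hi))   # order-safe clamp
```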
The HDRs that I’ve looked at (checking emission for value > 1.0, or R > 1.0, or whatever) are pretty much black.
It looks like what happens is that people rely on the increased color depth to encode more detail into the lower end of the values, rather than expanding into the >1.0 range. There doesn't seem to be any kind of standardized unit in use, like 1.0 = 1000 nits. Although, yes, sometimes a little bit of >1.0 range is used. And I haven't done an exhaustive study or anything-- just what I've noticed when I've bothered looking.
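One way to check this outside Blender, just as a sketch: load the HDR as linear floats and see how much of it actually sits above 1.0. This assumes imageio (with a plugin that can read .hdr/.exr as float32); the file name is only a placeholder.

```python
import numpy as np
import imageio.v3 as iio

img = iio.imread("environment.hdr").astype(np.float32)   # H x W x C, linear floats
rgb = img[..., :3]

print("max channel value:", rgb.max())
print("fraction of channel values > 1.0:", np.mean(rgb > 1.0))
print("fraction of pixels with any channel > 1.0:",
      np.mean(np.any(rgb > 1.0, axis=-1)))
```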
I guess I can see how knowing a literal min/max might be useful. It is useful for debugging and learning, after all.