Embarrassing questions about HDRIs

Hello folks, I’ve been “using” HDRIs for quite a while, but only “blindly.” I’ve always had the following questions but have never asked until today. Hopefully these aren’t too embarrassing.

  1. Most HDRIs claim to have “18 EVs” of exposure range, or similar. Is there a way to view each EV? I just want to load the HDRI up in Krita (or Gimp) and see what each of those individual exposure levels would look like. Is this possible?

  2. Relatedly, are there operations I should “never” perform on an HDRI once I open it? For instance, even for preview’s sake, is adjusting the “Levels” of the image OK? Will the software effectively pull from the other EV values to provide that Levels functionality?

  3. Krita and Gimp can open .exr files but not .hdr, while Blender can open both. This means the only way for me to preview .hdr files locally is in Blender. Am I missing something?

  4. When I hook up an Environment Texture in Blender, I can control some aspects of brightness by adjusting the “Strength” input on the Background node. Is Strength mapped to the various EVs in any way? Is there a better way to more directly say “I’d like to use 3 EVs lower than the base image” through some other option?

  1. I don’t know Krita or Gimp, but with Photoshop or other HDR-capable software it’s just a matter of adjusting the exposure. You can’t view the individual shots, because they’ve all been scaled and added together – it’s only the final result you see. So if you have a guy in two positions in the image because the shots were taken at different times (or leaves moving about in the wind), you can’t really do much about it (like clone-fixing away those extra guys/leaves). Note that 18 EVs isn’t always needed (in darkly lit shots) and sometimes isn’t enough (not sure what an exposed sun disk on a clear day yields) – it’s the range between the darkest shadow detail and the brightest pixel detail that matters.

  2. I’m not sure if OpenEXR or Radiance HDR stores absolute light data. I don’t think so. It doesn’t really matter, because we don’t have an absolute lighting reference at hand either (like a real sun-and-sky setup where you’d have to expose down – wtf is sun strength 1 anyway?), and searching online yields different results (a sun disk at around 1000 or 300 depending on which source you trust). I haven’t yet, but about the only thing I would consider doing to an HDR is painting correct values into the center of the sun disk if it’s in the clear. I’d probably do whatever else I need (except blurring) using Cycles nodes instead, so as not to mess with the original file.

  3. Although it isn’t HDR/OpenEXR-capable, I use fps viewer for 360° panoramic photos. I’m sure something similar exists for HDRs. Have you looked at HDR Shop? I haven’t used it in a long time, so I don’t know what it’s capable of nowadays.

  4. I don’t know what the correlation is. I’m eyeballing it the few times I use HDRs (too noisy, so I fake it with area lights instead when production time matters – for my own experiments I typically use HDRs as-is and just don’t bother). The eyeballing involves setting up a basic scene with physical lights and exposing for that, then enabling the HDR and adjusting its brightness until it looks like what I’d expect at that exposure level. I’m not sure how reliable this is, though.
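The exposure-adjustment idea in answer 1 can be made concrete: in a linear HDR image, stepping by N EVs is just multiplying every pixel by 2^N, then applying some display transform. A minimal sketch with NumPy – the function name, the toy data, and the simple clip-plus-gamma display transform are my own assumptions for illustration, not what any particular tool does internally:

```python
import numpy as np

def view_at_ev(hdr_linear, ev):
    """Simulate viewing a linear HDR image at an EV offset.

    ev = -3 means "3 stops darker": every linear value is multiplied
    by 2**ev, then clipped and gamma-encoded so it fits a standard
    8-bit display.
    """
    scaled = hdr_linear * (2.0 ** ev)       # one EV = a factor of two
    clipped = np.clip(scaled, 0.0, 1.0)     # the display's tiny "shoebox"
    return (clipped ** (1.0 / 2.2) * 255).astype(np.uint8)  # simple gamma

# A fake 2x2 "HDR" patch: a deep shadow, mid grey, white, and a blown-out
# sky pixel that is 16x brighter than display white.
patch = np.array([[0.01, 0.18],
                  [1.00, 16.0]])

# At EV 0 the sky pixel (16.0) clips to pure white; at EV -4 it lands
# exactly at 1.0, the top of the displayable range, so any detail around
# it would become visible.
print(view_at_ev(patch, 0))
print(view_at_ev(patch, -4))
```

This is also why a “+1 / −1 EV stepper” in a viewer is cheap to implement: it never touches the underlying data, only the multiplier used for display.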

Also, it might be worth considering what (IMHO, anyway … :spin:) “HDRI is all about.”

In the bad old days, light was captured by a voodoo chemical process involving silver salts and very dangerous chemicals. Most importantly, film captured the entire image, all at once. And it did so within what turned out to be an extremely narrow “dynamic range.”

Our eyes, on the other hand, constantly scan a scene, and the pupils of our eyes constantly adjust the “exposure.” The “final perception” is built-up in the über-voodoo world of our brain’s visual cortex.

Photographers learned to work within the severe limitations of film (if only because they possessed nothing else) in order to create images that our eyes would willingly accept. (Even though the popular term, “photo-realistic,” is actually positively comical in its implications.)

Video – both CRT-television and subsequent flat-screen technologies – have limitations that are very similar to film, although they employ additive rather than subtractive color.

The first “image file” formats grew out of video, and they were only concerned with displaying a final image that was most likely scanned from a film source. They were also profoundly concerned with file size, and engineered around the requirements of a display device that was intended to be “as cheap as possible.”

“HDRI” generically refers to the notion of capturing light values (especially, in CGI work) as they actually are, without regard to the display-hardware that will eventually present the finished image to the audience. Whites can be “whiter than white,” blacks “blacker than black.” There is no(!) “absolute lighting reference,” and that, in fact, is precisely the point. We are, in fact, “merely dealing with very large files of numbers, to be processed as ‘mere data’ by a digital computer.”

Trouble is – you can’t “eyeball” this with your computer monitor, because doing so necessarily implies an arbitrary mapping of the underlying digital data(!) to colors on your monitor. Which is actually premature.

I suggest that you simply regard “an HDRI image,” or set of images, as what they truly are: “a digital data-set.” When you seek to combine these data in order to produce “a visible image on your hardware,” realize that you are actually stuffing the data into a very tiny and very limited shoebox. (However, since this is a non-destructive process, it’s okay.)

HDRI is, shall we say, “a logical world.” The world of numbers in a digital computer. Concerns like “levels” and “contrast” are, on the other hand, physical concerns … firmly tied to the physical constraints of one particular chosen “output media,” such as film or (separately!) video, or (very, very separately) the printed page. You very-necessarily must deal with these issues on your way to (say) a movie-file deliverable, but these are (from the viewpoint of the computer) merely mathematical re-mappings of the original data stream(s), which you will now use to create a new digital output … suitable for presentation upon the aforesaid type of device … in a completely non-destructive way.

As a simple HDR/EXR viewer, ImageMagick can do it with the bundled “display” command on Linux.

Thanks for all the details guys. I spent more time on this and found out the following through experimentation:

Gimp does have an Exposure option, and it more or less does what I’d expect. I can load a base HDRI that appears to have a blown-out sky and drop the Exposure to reveal the proper blue sky, etc. (and vice versa) :slight_smile: Unfortunately, the development version of Gimp on Windows that has OpenEXR support is quite buggy in general, and struggles to load most of the HDRIs that I have.

Krita has a LUT tool with OpenColorIO profiles, but its Exposure slider doesn’t give as good a result as Gimp’s for whatever reason… maybe I’m still looking in the wrong place. I can get the same blue sky, but the rest of the image is far too dark and clipped to be natural or correct.

Blender’s Strength field does seem to do some form of exposure manipulation, so that’s good! I can get a blue sky there too with natural light elsewhere. It doesn’t quite match what Gimp (or Picturenaut) shows but it does seem to be doing the right thing in general.
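If Blender’s Strength input is simply a linear multiplier on the environment’s radiance (which would match how a linear color pipeline usually works), then the relation to EVs is direct: N EVs lower means a Strength of 2^−N. A tiny sketch under that assumption – the function name is mine, not Blender’s:

```python
def strength_for_ev(ev):
    """Background-node Strength equivalent to an EV offset,
    assuming Strength is a plain linear multiplier on radiance."""
    return 2.0 ** ev

# "3 EVs lower than the base image" -> Strength 0.125
print(strength_for_ev(-3))   # 0.125
print(strength_for_ev(0))    # 1.0  (the image as-is)
print(strength_for_ev(2))    # 4.0  (2 stops brighter)
```

If the assumption holds, any mismatch you see against Gimp or Picturenaut would come from the display transform (e.g. Filmic view transform) applied after the multiplier, not from the multiplier itself.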

HDR Shop doesn’t seem to be freely available or actively maintained right now, but the v1 that I did find does almost exactly what I expected – it has a +1/−1 EV stepper to see what each stop might look like. Unfortunately you need version 3.1 to open .exr files, and that’s rather expensive.

Picturenaut 3.2 does work, and is free! I’ll probably be using this for my preview needs at least.

Just be aware that you are visualizing the data on a video monitor – which I presume is calibrated, but which is “a physical display device” nonetheless. As such, it is “a visualization” of the actual data set.

“Data set” is an old-school computer-programming term from the days of mainframe computers, but I still like to use it, because that’s really what we are dealing with: a set of data, intended to be subsequently processed by the computer.

It’s “a Polaroid®.” A “proof.”

You want to be sure that the shot is broadly expansive and has good contrast – good use of the overall gamut of data-values that are available to you, regardless of what will one day be the “black” cut-off or the “white” cut-off, both of which are yet to be arbitrarily(!) set by you … “in post.”

You will determine what the captured HDRI data “looks like.”

You will make very-heavy use of mapping nodes, such as Curves, to transform the various data-sets into a final data-set that is compatible with the display requirements of a video monitor, or the very-different requirements of digital printing onto this-or-that substance.
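One concrete example of such a remapping curve is Reinhard’s simple global tone operator, x / (1 + x) – used here purely to illustrate the idea of a curve that squeezes unbounded HDR data into the display range, not as what any particular Curves node actually computes:

```python
import numpy as np

def reinhard(x):
    """Reinhard's simple global tone curve: maps [0, inf) into [0, 1).

    'Whiter than white' values are compressed smoothly instead of being
    clipped - which is the whole point of keeping the data as HDR until
    this final, display-bound remapping step.
    """
    x = np.asarray(x, dtype=float)
    return x / (1.0 + x)

# A mid grey stays roughly mid; a value 100x over "white" still fits.
print(reinhard([0.18, 1.0, 100.0]))
```

Because the curve is monotonic and invertible, it is exactly the kind of non-destructive “mathematical re-mapping” described above: the original data stream is untouched, and a different curve can be chosen later for a different output medium.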

Think of the final image-assembly step as “darkroom work.” :slight_smile: Using a perfect digital darkroom. Always remember that every commercial image that you see is the product of both image-capture and post-production work. The latter of which, if decently done, is seamless. “Isn’t that what the photographer actually saw?” Answer: “No.”