HDRI for dummies

OK, it just came to my attention that HDRs are not just JPGs with an emit value. Well, the only ones I’ve ever used are, but in another thread it was mentioned that a JPG isn’t able to convey enough light info. So this thread is my attempt to start over in my understanding of HDRIs. I understand how to use them in Blender, and I even know how to make them with my own camera. What I need to know is how a 32-bit image will behave differently in Blender as an environment map, as opposed to an 8-bit JPG. I want to know if it’s worth all the trouble to make actual HDRs or not. I can shoot a normal spherical image by hand, but making a true HDR that way would be impossible; I’d at least need a tripod (mine broke the other day). I’m strapped for cash, so I was devising a wooden tripod I’d build just for making HDRs, but if the difference in Blender isn’t noticeable, I won’t bother. I’ll just keep shooting them by hand and using the JPG format.

Hope my question is clear.

Not sure if I get your question. You said you’ve worked with HDRIs in Blender - and did not see the difference between the lighting information provided by an HDRI compared to a JPG?

The attached image was rendered in Cycles - with only an HDRI as the light source. The intensity of the light, its definition, the colours - no JPG could provide the dynamic range to achieve a similar result.

Yeah, I’m fuzzy on what questions I should be asking. I want to understand these better, but I guess I’ve never used a real one. The main one I use is a JPG, and I thought that was all there was to it. What format are the real kind in? Where do you get them? Is there any special node setup, or do you just plug them right in as environment textures?

Most common are HDRIs in the .hdr or .exr file formats. And there is no magic to it: just plug them into the World as an environment texture, select the correct mapping type (equirectangular or mirror ball) for the image, and Bob’s your uncle.

Where to get them? Well, you can find many free ones for download on the internet, buy commercial ones, create them with specialized software like HDR Light Studio - or render them yourself in Blender (Panoramic Camera set to Equirectangular, save render result as OpenEXR or Radiance HDR)…:eyebrowlift:
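To give a feel for what the equirectangular mapping actually does, here’s a small Python sketch (the function name and the axis convention are my own choices; Blender’s internal convention may differ) that converts a world-space direction into (u, v) coordinates on the panorama:

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit direction vector to (u, v) in [0, 1] on an
    equirectangular panorama. Convention assumed here: +Z is up,
    -Y is "forward" (image center)."""
    u = 0.5 + math.atan2(x, -y) / (2.0 * math.pi)  # longitude around the horizon
    v = 0.5 + math.asin(z) / math.pi               # latitude from nadir to zenith
    return u, v
```

Looking straight "forward" lands in the center of the image (0.5, 0.5), and looking straight up lands on the top edge (v = 1.0); that is why the full sphere of directions fits into one 2:1 rectangle.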

Where it all came from.

Where it’s all going: from Greg Zaal, a tip to rock your world (lighting).

The differences can be very noticeable; try converting a few good HDRIs to LDRIs (just save the pic as JPG or something of the sort to discard the extra information) and compare the results from the LDRIs with what you get from the HDRIs (pay special attention to things like shadow crispness and color, specular highlights, etc.).

Thank you for the help. I think I will build a crude HDR tripod. Probably use a rotating fan base and some wood bolted on. No doubt it’ll be ugly as hell, but it should get the job done. I think it’s worth the trouble.

It shouldn’t be hard to build a panoramic adapter with a few pieces of wood and some screws and nuts (use butterfly nuts for the stuff you’re gonna be tightening and loosening all the time), to put on top of a regular tripod. Some rubber o-rings might be useful to keep things from sliding without having to tighten the nuts too much. And depending on where you live, it is probably not hard to find screws that match the threads used on tripods.

Though, depending on how much your camera weighs, you might need to be more careful with the design and choice of materials.

Yeah, the bolts are 1/4"-20, for anyone who might care. They are about the most common bolt on earth. Good ideas, Tiango. I have a spare piano hinge to use as a pivot to get the angled shots. Man, this thing will be ugly. =) I’m thinking of buying a used tripod for a telescope or for a laser level system. They look very stable.

Here’s something for inspiration: http://www.peterloud.co.uk/nodalsamurai/ :slight_smile:

Great link! I may break down and buy a new tripod and just worry about building the head.

I’m interested in this topic, thanks for the links that you have shared.
I found these two open-source programs in the Ubuntu repository. Maybe they can be useful for something (Hugin Panorama and Qtpfsgui):
http://hugin.sourceforge.net
http://qtpfsgui.sourceforge.net

Start here: http://www.hdrlabs.com/tutorials/index.html

It’s worth stepping back to consider just what “HDRI” means, and specifically, what it means in CG vs. what it means in physical photography.

HDRI means “High Dynamic Range Imaging,” which essentially means being able to go beyond the numeric range [0.0 … 1.0] in expressing brightness levels … and doing so, generally, without any gamma factor or (lossy) compression.

In pure CG, this means using file formats like EXR, which simply store floating-point numbers with a virtually unlimited numeric range.

In physical photography, this typically involves either shooting RAW files, or shooting multiple precisely-aligned images at different exposure settings, then doing mathematical processing on the set of images.
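The “mathematical processing” in the bracketed-exposure case can be sketched in a few lines. This is a simplified sketch of the idea only, assuming linear (RAW-like) pixel values and known exposure times; real merging tools also handle camera response curves, alignment, and noise weighting:

```python
import numpy as np

def merge_exposures(images, exposure_times, lo=0.05, hi=0.95):
    """Merge bracketed LDR exposures (linear values in [0, 1]) into one
    HDR radiance map. In each exposure, pixels near black or white are
    ignored as under-/over-exposed; the rest estimate radiance as
    pixel_value / exposure_time."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros(images[0].shape, dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = ((img > lo) & (img < hi)).astype(np.float64)  # trust only mid-tones
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-9)

# Two bright pixels: both clip to 1.0 in the long exposure,
# but are recoverable from the short one.
short = np.array([0.5, 0.9])   # t = 0.01 s
long_ = np.array([1.0, 1.0])   # t = 0.10 s, fully clipped
hdr = merge_exposures([short, long_], [0.01, 0.10])
```

Note that the merged radiances (around 50 and 90) are far above 1.0, which is exactly the information an 8-bit file could never hold.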

“Image-file” formats, such as JPG, are designed to hold compact, easily-displayed images that don’t take up too much room and don’t require computational pre-processing. They are “lossy,” gamma-corrected, and simply “clip off” values outside the displayable dynamic range, whereas the other formats are specifically designed to be numerically expressive and loss-free. (Size is not a concern.)
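The clipping is easy to demonstrate numerically. A toy sketch (ignoring gamma and compression, with made-up radiance values): a float format can store a sun pixel at 50.0, while an 8-bit format squeezes it down to the same value as a white wall.

```python
import numpy as np

# Made-up linear scene radiances: a wall (0.8), a lamp (5.0), the sun (50.0)
scene = np.array([0.8, 5.0, 50.0])

# Float formats like EXR keep the values as-is:
hdr = scene.astype(np.float32)

# An 8-bit format must clip to the displayable range [0, 1] first:
ldr8 = np.round(np.clip(scene, 0.0, 1.0) * 255).astype(np.uint8)

print(hdr)   # lamp and sun are still distinct light sources
print(ldr8)  # [204 255 255] -- lamp and sun are both just "pure white"
```

Used as a light source in a render, the LDR version would light the scene as if the sun were exactly as bright as the lamp, which is why the shadows and highlights come out wrong.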

The confusion with panoramas arises from the fact that, to light 3D models placed in a real scene realistically, it helps to have the light information from all directions in the scene: an HDRI panorama. To be clear, you need direction, color, and intensity information.

If you’ve got an HDR image that isn’t a full spherical (360°x180°, or 4π steradians) panorama, then you don’t have the whole direction information. And if you’ve got a full spherical panorama but in LDR format, the intensity information is significantly degraded:

- If the photo has always been LDR (or was converted from HDR by discarding intensity values outside the intended range), the extremes are capped: darker areas are flat black and brighter areas are flat white.
- If it was converted from HDR by compressing the range, it will have low contrast (bigger changes in intensity are made smaller, and smaller ones are rounded away), and colors might look grayer.
- If a “tone mapping” technique was used, the color information is better preserved, but the intensity information is severely scrambled, with some darker areas looking as bright as areas that were originally much brighter.