Does it make sense to use HDR images for diffuse maps?

Hello,
I'm a beginner and still very inexperienced at creating 3D artwork.
I want to create my own texture library and am thinking about buying a camera with an HDR feature.
I have never used this before and know nothing about photography.

Does it make sense to use HDR images for diffuse maps?

I expect better contrast and a finer result - or am I wrong?
Do you use HDR images for material definitions?
I guess that memory usage and render time will increase exorbitantly.

No, it doesn't make sense. Digital color is a ratio of 3 lights, rather than a range. A larger dynamic range for intensity (as in HDR lighting) makes sense because light intensity exists beyond the 0-1 range. If you think of color in terms of "range", then you're talking about ultraviolet and infrared.

Bit-depth, on the other hand, can factor in for color. This deals with how many data points you have to work with, within the 0-1 range. However, anything over 16bit would likely be overkill in most cases (for color).

It is certainly a valid question, but the answer is that for texture maps, LDR is enough.

An HDR image (in the CG world) is an image where the maximum brightness can go above 1. In photography, an HDR image is actually something different, and definitely not something you want to use when creating texture maps.

HDR features on cameras actually give you an LDR image, but with more detail squished into it. You should not change your decision about which camera to buy based on whether or not it has an HDR mode; this isn't what you want. This processing makes the output unreliable for texture maps, because a bright area of the image will be made darker, giving you incorrect results. Stick to the default settings of the camera, or, if you know what to choose, settings which reduce all processing to a minimum: lowest contrast, no sharpening, a fixed white balance, and manual shutter speed, aperture and ISO.

HDR images for computer graphics are actually created on the computer, from multiple regular images. These images are very helpful for lighting an environment, but there is very little need for HDR texture maps. Diffuse maps, for example, should always be between 0 and 1, because an object with a diffuse colour greater than 1 is actually generating light. If you want to stay in the realm of reality, all diffuse values should be between 0 and 1, and usually much less than 1.

Good luck with your image capture! :slight_smile:

I would like to politely disagree and say that we're already in the age where 24-bit colour isn't enough. Depending on the texture, anything that has subtle gradients of colour will show banding, and banding is a horrible artifact to behold.

These days, as an example, I have door textures that are 1024x2048 (roughly 7-8ft tall, which is 128 units in the engine I'm using), which equals 16 texels per unit, and that is considered a higher-res (more tightly texelated) texture, since doors are something we can guarantee the player will have their face right up against. If it weren't for the complex material, I would consider that very outdated already. If we combine the ubiquity of 4K displays with the insane amount of texture memory modern GPUs pack, and with how engines can intelligently handle dispersal of texture memory cost, then a 2Kx2K texture, even as part of a complex PBR material, is the "64x64 crate" standard by now. By that logic, our doors really should be more like 4096x8192, to stay consistent in texture detail. Large "terrain tiles" should be 16K-32K, and so in production, resampled down from 65K+ originals. That's really overwhelming to an old dog like me, but that's where it's all going. Texture sizes have actually been stunted a few times over the last few generations, so I predict it's time for them to increase significantly.

So, if you consider that 24-bit is just 3x 8-bit channels, it's not surprising you'll get banding on gradients - that's only 256 unique values for any one of those 3 colours.
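
Just to make that arithmetic concrete, here is a trivial Python sketch of the texel-density and per-channel-level numbers used above (the 128-unit door height is the engine convention described above, not any standard):

```python
# Texel density: texture pixels divided by object size in engine units.
def texels_per_unit(texture_px: int, object_units: int) -> float:
    return texture_px / object_units

print(texels_per_unit(2048, 128))  # 16.0 -> the 1024x2048 door
print(texels_per_unit(8192, 128))  # 64.0 -> the proposed 4096x8192 door

# 24-bit colour is just 3 channels x 8 bits, so each channel has only
# 2**8 = 256 distinct levels -- the root of banding on subtle gradients.
print(2 ** 8)  # 256
```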

So, my advice would be the same for anyone who wants to get serious in any field, especially texturing - you definitely want to future-proof.

Bit-depth is not the same thing as dynamic range. Bit-depth represents the number of steps. Range relates to how high those steps can go.

Yes, higher bit-depth is sometimes needed/wanted for a texture, but that texture should still have a 0-1 (LDR) range.
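
To make the distinction concrete, here is a minimal NumPy sketch (nothing camera- or engine-specific assumed): both quantised arrays below span the same 0-1 range and differ only in the number of steps, while "range" is a separate property entirely:

```python
import numpy as np

gradient = np.linspace(0.0, 1.0, 1000)  # smooth ramp, 0-1 (LDR range)

# Same range, different bit depths (number of steps):
steps_8bit = np.round(gradient * 255) / 255       # 256 possible levels
steps_16bit = np.round(gradient * 65535) / 65535  # 65,536 possible levels

print(len(np.unique(steps_8bit)))   # 256  -> banding risk on gradients
print(len(np.unique(steps_16bit)))  # 1000 -> every sample stays distinct

# Range is independent of depth: a float pixel can exceed 1.0 (HDR)
hdr_pixel = np.float32(4.7)
```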


I was under the impression that only HDR cameras will give you 12-14bit-per-channel, and almost everything else is still 8bit-per-channel. At least as far as I can see. Since the question of using higher light/exposure ranges for actual deployment doesn't make a lot of sense, I figured it was about capturing/processing the content rather than purely deploying it - unless the OP expects the software to automagically adjust the ranges before rendering. It's the software/engine that will determine an HDR render, not the image, so what really matters is the bit-depth of the colour. As far as I know, with digital cameras, HDR is all about exposure, and higher colour bit-depths can only be achieved that way? I'd be interested to know otherwise.
The main point I was making is about using HDR capture to avoid banding: by getting 14bit-per-channel output and sampling down from that higher bit-depth to a 24-bit image for deployment, you get better dithering on gradients. There's also the case where a 16bit-per-channel image is needed for something like a special effect - a light flare or other environmental particle - but I figure those are usually generated rather than photographed.
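
For illustration, a minimal NumPy sketch of that idea: quantising a subtle high-bit-depth gradient straight to 8 bits produces flat bands, while adding about one LSB of noise first (dithering) trades the bands for fine grain (the gradient values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
high_bit = np.linspace(0.2, 0.3, 2048)  # subtle gradient, high precision

banded = np.round(high_bit * 255) / 255           # plain 8-bit quantise
noise = (rng.random(high_bit.shape) - 0.5) / 255  # +/- half an 8-bit LSB
dithered = np.round((high_bit + noise) * 255) / 255

# Roughly 26 flat bands either way, but dithering breaks up their edges
# spatially so the eye no longer sees hard steps.
print(len(np.unique(banded)), len(np.unique(dithered)))
```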

Shooting and archiving images/textures at a higher bit-depth is definitely a good point and a smart idea. You are absolutely correct there. :slight_smile:

My only concern was that you started off by disagreeing with factually accurate information given by Smilebags and me, regarding bit-depth vs range and high vs low dynamic range. In the already extremely broad and often confusing subject of color, let's aim for precision of understanding. :slight_smile:

So the facts as we’ve discussed:

  • Bit depth ≠ dynamic range.
  • An HDR image can have a low bit-depth.
  • An LDR image can have a high bit-depth.
  • HDR images should not be used for color data.
  • Higher bit-depth images are useful for step resolution, which can eliminate color banding.

I definitely misunderstood the OP, assuming it was about colour-depth and "contrast" in relation to the fine-tuning of a final image (at 24-bit) to deploy - in which case HDR capture is what I would recommend; that's what I was disagreeing with.

When you say “An HDR image can have a low bit-depth/An LDR image can have a high bit-depth” in terms of digital cameras, do these actually exist? HDR at 8bit-per-channel sounds pretty awful to me, and LDR with anything higher seems very rare/specialised.

On a side note - HDR will definitely be a benefit for stuff like photogrammetry and other techniques for processing photographs into heightmaps, etc.

I'm no expert on camera technology beyond the few DSLRs that I've owned. I was speaking more theoretically to illustrate the difference between range and depth. In other words, you could have a 1-bit image with 2 possible values above 1, and you could have a 24-bit image with 16,777,216 possible values between 0 and 1.

If you're inexperienced with both 3D and photography, don't go out and buy an expensive camera right away. Don't invest money in it until you know you like doing it, or chances are the camera is just gonna sit on a shelf collecting dust. Get a cheap DSLR, and more importantly, get a stable and sturdy tripod. I would also get a prime lens (like a 50mm f/1.8), as these tend to be very cheap and provide excellent sharpness when stopped down a few notches. I would also get a polarizer filter (check that it fits the lens), as it lets you get rid of some pesky reflections. Stay in shade, away from the sun.

HDR for texture work doesn’t make sense, as you’re trying to capture the potential of light reflectance rather than the actual reflectance, and this will always be less than one.

I'm not sold on the bit depth discussion. 8 bits per channel should be enough; the real world doesn't have "banding issues". If the diffuse texture is prepared for PBR use, the typical albedo (see cheat sheet here) for dielectrics will be well below any maximum value anyway.
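
As a rough sanity check along those lines, here is a hedged Python sketch: convert an sRGB texture to linear and flag values outside a commonly quoted dielectric albedo window. The 0.02-0.9 bounds and the file name are illustrative assumptions taken from typical PBR charts, not a hard standard:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("albedo.png").convert("RGB")) / 255.0

# Standard sRGB -> linear decode
linear = np.where(img <= 0.04045, img / 12.92, ((img + 0.055) / 1.055) ** 2.4)

print("min:", linear.min(), "max:", linear.max(), "mean:", linear.mean())
if linear.max() > 0.9 or linear.min() < 0.02:
    print("Warning: values outside the typical dielectric albedo window")
```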

So what is an "HDR camera" anyway? I have one of these, and it can only deliver a tonemapped 8bpc output - which for me, coming from the 3D world, has very little to do with my conception of HDR :smiley: Most cameras these days that can output raw format have more than 8bpc, and afaik all work across a broader EV range than 8bpc can capture. High bit depth while staying within the typical 0-1 range only makes sense when manipulating this data, or when creating it from an arbitrary source (like a height map).

Here is a good comparison between “old” 12-bit DSLRs (like my D80) and newer 14-bit DSLRs (focusing on the left side of the histogram to recover detail).

Thank you all very much for your answers.:grinning:
There is a lot I have to think about.:thinking:

Yep, if you get a camera capable of taking RAW images, shoot both RAW and JPEG even if you don't use the RAW files now, so you have the option later. JPEG will be easier to use right away, but RAW will give you the high bit depth which was being discussed - more brightness levels between black and white mean smoother textures, but this really isn't a concern at this stage.

Stick to the defaults of your camera while learning what to adjust, and use whatever you do have until you know how it is limiting you and what problem you would solve by buying a new camera.

I suppose you are all correct about whether HDR images are useful, but it led me to some other ideas.

Premises:

  • Using the default view might not really be an option, as we know, so I presume we are using Filmic. Everything gets compressed (the textures too) and the albedo values are not linear anymore (is linear the correct word?) but follow a curve (the higher the value, the more compressed the values are).

  • When I calibrate the HDRI with false color and color-pick the render (Filmic view), I see that everything that is not giving off light has a value between 0 and 1. Everything that gives off light has a value above 1.

What if the person who made the HDRI also took pictures of (for example) grass (for a texture) in the same way as when shooting the HDRI:

  • We will have values for the texture between 0 and 1.
  • The textures would have the same compression as the HDRI, and should have the correct albedo values. (I know there is a difference between diffuse and emission, but I have to double-check what the difference is between emission values under 1 and albedo values for diffuse.)

If you did that, then you’d effectively have lighting baked into those textures. You’d then have a mess of a time dealing with exclusions/inclusions for additional lights and non “HDR textures”.
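
To sketch the view-transform half of that problem: if the texture was photographed "through" a filmic-style curve, the renderer applies its own view transform on top, compressing everything twice. A tiny Python illustration, using a simple Reinhard curve as a stand-in for Filmic (the real Filmic transform is more involved, but the double-compression effect is the same):

```python
import numpy as np

def tonemap(x):
    return x / (1.0 + x)  # Reinhard: compresses values toward 1

albedo = np.array([0.18, 0.5, 0.8])  # honest scene-linear albedo values

baked = tonemap(albedo)     # texture captured "through" the view
on_screen = tonemap(baked)  # renderer applies the view again

print(tonemap(albedo))  # what you should see: [0.153 0.333 0.444]
print(on_screen)        # double-compressed:   [0.132 0.25  0.308]
```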

Don't nearly all textures we can find on the internet have lighting baked into them?
I mean, if I take a picture with my simple phone, there are shadows, reflections, etc. in it.
Or maybe I'd better ask what the best way is to make textures from pictures. Like in a lightbox, where diffuse light comes from all directions? Is that what you mean?

Yes, in that sense, pictures do have lighting baked in.

I was referring to this:

You wouldn't want to just slap an HDR grass texture into your scene, as the secondary bounce (emission) of light would be baked into the texture (as opposed to just the potential of light). You would want to extract the base color (or albedo, depending on your pipeline) from the snapshots you take or from images found online. I use Substance B2M for this. There's also a free tool out there - it's called Materialize, if I recall.
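
For anyone curious what such tools do at their simplest, here is a very crude "delighting" sketch in Python: divide the photo by a heavily blurred copy of itself to flatten low-frequency shading. B2M and Materialize are far more sophisticated than this; the file names are placeholders:

```python
import numpy as np
from PIL import Image, ImageFilter

photo = Image.open("grass_photo.jpg").convert("RGB")
blurred = photo.filter(ImageFilter.GaussianBlur(radius=64))

src = np.asarray(photo).astype(np.float32) / 255.0
low = np.asarray(blurred).astype(np.float32) / 255.0

flat = src / np.maximum(low, 1e-4)  # remove low-frequency shading
flat *= low.mean()                  # restore overall brightness
flat = np.clip(flat, 0.0, 1.0)

Image.fromarray((flat * 255).astype(np.uint8)).save("grass_albedo_rough.png")
```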

Thanks for the explanation, I understand now what you mean by light baked into a texture.
Found the Materialize software as well: http://boundingboxsoftware.com/materialize/
I will check that out in the coming week.

Even if I shot "light fixture textures", I would rely on multiple exposures to create the HDR image from, rather than the camera's "HDR" mode, which is likely to capture more than LDR for sure, but maybe not the full range you want.
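
For reference, merging bracketed exposures into a true scene-referred HDR can be done with OpenCV's Debevec method; a minimal sketch, where the file names and shutter times are placeholders:

```python
import cv2
import numpy as np

files = ["exp_fast.jpg", "exp_mid.jpg", "exp_slow.jpg"]
times = np.array([1/500, 1/60, 1/8], dtype=np.float32)  # shutter seconds

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge to a float32 radiance map
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

cv2.imwrite("merged.hdr", hdr)  # Radiance .hdr keeps values above 1.0
```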

Short answer:
No, don’t use the camera’s HDR feature. Use RAW instead and convert to 16bit/channel TIFF.

Long answer:
What camera vendors often call “HDR” is what we call “tone mapping”. It is in fact the exact opposite of what we call HDR - they’re compressing a high dynamic range into a low dynamic range file.

Now, there are some cameras that are actually capable of capturing a higher dynamic range than others - in fact, almost every camera is capturing more than can fit into a JPG file. To get to that data, RAW is the feature you want.

For diffuse textures, there’s one rule that still applies: surface albedo must never exceed 1.0 (= pure white), otherwise you’re throwing physics out the window. If you want to stay in reality, most surface albedos are much lower than 1.0 anyway.
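
A minimal sketch of that RAW-to-16-bit-TIFF step, using the rawpy and imageio Python libraries (the file name and parameter choices here are just one reasonable starting point, not the only way):

```python
import rawpy
import imageio

with rawpy.imread("photo.nef") as raw:
    # 16 bits per channel; disable auto-brightening so values stay
    # comparable between shots, and keep the camera's white balance.
    rgb = raw.postprocess(output_bps=16,
                          no_auto_bright=True,
                          use_camera_wb=True)

imageio.imwrite("photo_16bit.tiff", rgb)  # uint16 TIFF
```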

I'm a little bit confused and I don't know if I'm understanding things right.

If I use the camera's HDR feature, the camera takes a set of pictures with different exposure settings and stores the average of the colours' RGB values as float values in the raw image file. Or it stores, for each pixel, 24 bits for the RGB value and 8 bits for an exponent.

Or doesn't the camera store the result in this way, but instead uses tone mapping and stores the data, with losses, in a JPEG file?

I'm not sure if I understand the meaning of bit depth and bit range.

8 bits have a range of 0-255 (256 values). 16 bits have a range of 0-65535 (65,536 values).

Bit depth means bits per channel.

If I understand you right, then if I use HDR images I get values above the range, which therefore cannot be displayed without tone mapping (with losses), and I have a lower range for color information. If I don't use HDR, I have a greater range for color information and possibly better quality.

If I consider the diffuse map to be the model's color information, skipping HDR seems to be better.

On the other hand, the diffuse map is also the source for normal maps, bump maps, etc.

Is an HDR image a better source for normal and bump maps?

And is an HDR image a better source for normal and bump maps than a RAW image which has been converted into a 16-bit TIFF image?