# Depth pass mathematician?

Hey everyone,

I have a quick question regarding the depth pass in general.

So, I know what a depth pass is: basically a black-and-white image showing you how far an object is from your camera.

My question is: how many “steps” can you actually have between pure black and pure white, and what does that depend on?

This is pure curiosity, but I like to understand things. I believe it depends on the color space you’re in? Maybe also on the bit depth of the image you’re rendering? How could we “calculate” this? Can we have an impact on it, like specifically asking the software to use only, say, a grayscale from 1 to 10, etc.?

I believe it’s also now used by the new iPhone 15 cameras thanks to LiDAR technology (I guess)?

Anyway, if anyone has good resources, or the patience to explain this to me, I would really appreciate it!

Thanks


No, the depth pass has no color transform; it is stored raw.

Yes, to get the most out of it, it is recommended to render the depth pass as a 16- or even 32-bit EXR so that you get the most data precision out of it.
You can then color correct this pass without anything clipping to pure white or pure black.


The mist pass is black to white. The depth pass just stores the actual depth in every pixel—geometry 0.516 units away? Pixel value of 0.516. Geometry 27.2 units away? Pixel value of 27.2. It’s just the values themselves, directly loaded into the file. That’s why Mist has a start and end, while Z doesn’t: Mist needs to know what depth to map to white, past which everything else is still just white, while Z is unbounded: just keeps going as long as there’s depth to store.
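The start/end remapping described above can be sketched like this (a minimal illustration; `start` and `end` stand in for whatever a given renderer calls those mist settings):

```python
def mist_value(z, start, end):
    """Map a raw depth z to a 0-1 mist value.

    Depth at or before `start` maps to black (0.0), depth at or
    past `end` clamps to white (1.0); in between it's linear.
    """
    t = (z - start) / (end - start)
    return max(0.0, min(1.0, t))

# Raw Z just stores the distance itself; mist remaps and clamps it:
print(mist_value(0.516, start=0.0, end=25.0))  # a small gray value
print(mist_value(27.2, start=0.0, end=25.0))   # clamps to 1.0 (white)
```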

Now, I say that, but that last part isn’t quite accurate: there is a bound. Two, actually. 3D apps store geometry, transformations, point locations, etc., as floating-point numbers, which you can think of as basically scientific notation, e.g. 6.022 × 10²³. Each number in memory is still allocated only a fixed number of bits, but since the decimal point can move around, you get higher precision at small values (more places allowed after the decimal point) but can also store extremely large values (more places before the decimal), in a way that you couldn’t if you had a hard cap of, say, three places before the decimal and three after.

Note also, though, that this means numbers get less precise the higher you go. Imagine, say, 5 digits: numbers between 0 and 1 would get five decimal places (.00345), but numbers between 1000 and 9999 would only get one (1000.1). There’d be no way to store the difference between 1000.12 and 1000.18. And there’d be no way to store numbers past 99,999 at all. So you do get quantization there. In real life, of course, we generally use a lot more than 5 places for our floating-point numbers (and they store bits, not digits), so the upper limit of a 32-bit float is more like 3.4 × 10³⁸. So, there are limits, but ones you’re unlikely to encounter, and there is quantization, but generally only enough to become a problem as numbers get very large. This does mean 3D artists working on very large scenes, or animation that covers a huge distance, do occasionally need to be aware of it, since objects very far from the world origin will start to develop glitches as their point locations average together: you can’t store the difference between one million point 0003 and one million point 0004. But for general purposes, you don’t really need to worry about it.
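You can see that quantization directly by round-tripping values through a 32-bit float, e.g. with Python’s `struct` module (`'f'` is the 32-bit float format):

```python
import struct

def as_float32(x):
    """Round-trip a Python float through a 32-bit float representation."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Near zero, 32-bit floats have plenty of precision:
print(as_float32(0.0003) == as_float32(0.0004))  # False: still distinct

# A million units out, the spacing between representable values is
# larger than 0.0001, so the two collapse to the same stored number:
print(as_float32(1_000_000.0003) == as_float32(1_000_000.0004))  # True
```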

So that’s quantization source number one. There’s stepping in the depth values themselves, but it’s very, very small for anything anywhere near the camera, and there’s a maximum value, but a ridiculously high one. But that’s just the process of the 3D program calculating the values; the next step is saving them into the rendered file. And that’s quantization source number two: image formats. An 8-bit JPEG can only store 256 values, none of them higher than white (because an 8-bit integer, which is how each value is saved, can represent only numbers between 0 [00000000] and 255 [11111111]), so if you save your depth pass directly to a JPEG you’d get 256 steps across the range between the camera and 1 unit away, and then nothing past that. Let’s assume you’re using a proper format like EXR, though.
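That 8-bit clipping amounts to something like this (a hypothetical sketch of the quantization step, not any particular encoder):

```python
def to_8bit(depth):
    """Quantize a depth value into an 8-bit integer pixel.

    Everything is clipped to the 0-1 range first, so any depth past
    1 unit becomes pure white (255), and only 256 distinct steps
    survive in between.
    """
    clipped = max(0.0, min(1.0, depth))
    return round(clipped * 255)

print(to_8bit(27.2))                      # 255: past 1.0 clips to white
print(to_8bit(0.516) == to_8bit(0.5162))  # True: finer detail is lost
```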

EXRs also store their values in floating point, so the same things apply—values are technically quantized by the decimal-point limitations of the format, and there is a cap, when the float values run out. EXRs are generally saved as half-float (16-bit floating point numbers) or full-float (32-bit floating point numbers). 16-bit is more than enough for colors (RGB data), but for precision and accuracy, data passes like depth or position are usually saved in 32-bit. So again, the limitations of 32-bit float numbers are the ones that apply: banding exists, but only enough to be significant at absolutely massive values, and the upper limit is very very high.
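The half-float vs. full-float precision difference is easy to demonstrate with Python’s `struct` module (`'e'` is the 16-bit half-precision format, `'f'` is 32-bit):

```python
import struct

def round_trip(fmt, x):
    """Round-trip x through a float format ('e' = 16-bit, 'f' = 32-bit)."""
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

z = 1000.06  # a depth value 1000 units from camera
print(round_trip('f', z))  # 32-bit keeps the fractional detail (~1000.06)
print(round_trip('e', z))  # 16-bit spacing near 1000 is 0.5: this is 1000.0
```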


This is exactly the mathematical/scientific answer I was hoping for!

So if I understand correctly, the mist pass is actually just a visual interpretation of the depth pass? One in which you can manually set the distance you want to map to full white, and the software just “logically sets” the rest of the grayscale to the right place?

And the existence of both passes is due to the fact that one is really heavy (depth, EXR, 32-bit) while the other could just be an 8-bit JPEG with 256 shades of gray? Or is the mist pass also only effective as a 32-bit EXR?

It’s mind-blowing how little you know about a subject until you go into the full details… Ahaha

The main difference between depth and mist is that mist is anti-aliased and rendered with multiple samples: things like defocused edges or motion blur will appear in the mist pass, while depth has crunchy, single-sample edges. Depth is meant to represent a mathematical quality of the scene, ready to be used later for things like point clouds or composited DOF, while mist, with its softer edges, lines up better with the rendered image, and is meant to be used for things like atmosphere effects.


Oh ok, I get it, thanks again for your time !