The *mist* pass is black to white. The depth pass just stores the actual depth in every pixel—geometry 0.516 units away? Pixel value of 0.516. Geometry 27.2 units away? Pixel value of 27.2. It’s just the values themselves, directly loaded into the file. That’s why Mist has a start and end, while Z doesn’t: Mist needs to know what depth to map to white, past which everything else is still just white, while Z is unbounded: just keeps going as long as there’s depth to store.
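To make that concrete, here's a rough Python sketch of the math behind each pass. The start/end names and the linear falloff follow Blender's mist settings; other apps work similarly, but treat the specifics as assumptions:

```python
def mist_value(depth, start, end):
    # Mist: remap depth into a 0-1 (black-to-white) range.
    # Anything past `end` clamps to pure white.
    # (Assumes linear falloff; Blender also offers quadratic modes.)
    t = (depth - start) / (end - start)
    return min(max(t, 0.0), 1.0)

def z_value(depth):
    # Z: no remapping at all; the pixel just holds the raw distance.
    return depth

print(mist_value(0.516, 0.0, 25.0))  # some gray between 0 and 1
print(mist_value(27.2, 0.0, 25.0))   # 1.0: past the end, still just white
print(z_value(27.2))                 # 27.2: the value itself
```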

Now, I say that, but that last part isn’t quite accurate: there *is* a bound. Two, actually. 3D apps store geometry, transformations, point locations, etc., as floating-point numbers, which you can think of as basically scientific notation, e.g. 6.022 × 10²³. Each number in memory is still allocated only a fixed number of bits, but since the decimal point can move around, you get higher precision at small values (more spaces allowed after the decimal point) but can also store extremely large values (more spaces before the decimal), in a way that you couldn’t if you had a hard cap of, say, three spaces before the decimal and three after.
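Python will even show you the scientific-notation guts of a number: `math.frexp` splits a float into its two stored parts, a mantissa (the digits) and an exponent (where the point sits):

```python
import math

# A float is stored as mantissa x 2**exponent: the binary cousin
# of scientific notation like 6.022 x 10**23
m, e = math.frexp(27.2)
print(m, e)  # 0.85 5, i.e. 0.85 x 2**5 == 27.2
```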

Note also, though, that this means numbers get less precise the higher you go. Imagine, say, 5 digits: numbers between 0 and 1 would get five decimal places (0.00345), but numbers between 1000 and 9999 would only get one (1000.1). There’d be no way to store the difference between 1000.12 and 1000.18. And there’d be no way to store numbers past 99,999 at all. So you do get quantization there.

In real life, of course, we use a lot more than 5 places for our floating-point numbers (and they store bits, not digits), so the upper limit of a 32-bit float is more like 3.4 × 10³⁸. So, there are limits, but ones you’re unlikely to encounter, and there is quantization, but generally only enough to become a problem as numbers get very large. This does mean 3D artists working on very large scenes or animation that covers a *huge* distance do occasionally need to be aware of it, since objects very far from the world origin will start to develop glitches as their point locations average together: can’t store the difference between one million point 0003 and one million point 0004. But for general purposes, you don’t really need to worry about it.
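You can watch the precision run out by round-tripping values through a 32-bit float (Python’s own floats are 64-bit, so `struct` is a handy way to squeeze a value down to single precision):

```python
import struct

def to_f32(x):
    # Round-trip through a 32-bit float to see what single precision keeps
    return struct.unpack('f', struct.pack('f', x))[0]

# Near the camera, the steps between representable values are microscopic:
print(to_f32(0.516) == to_f32(0.517))                    # False: easily told apart
# A million units out, the steps have grown far past 0.0001:
print(to_f32(1_000_000.0003) == to_f32(1_000_000.0004))  # True: the difference is gone
```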

So that’s quantization source number one. There’s stepping in the depth values themselves, but it’s very, very small for anything anywhere near the camera, and there’s a maximum value, but a ridiculously high one. That’s just the 3D program calculating the values, though; the next step is saving them into the rendered file. And that’s quantization source number two: image formats. An 8-bit JPEG can only store 256 values, none of them higher than white (because an 8-bit integer, which is how each value is saved, can only represent the numbers 0 [00000000] through 255 [11111111]), so if you save your depth pass directly to a JPEG you’d get 256 steps across the range between the camera and 1 unit away, and then nothing past that. Let’s assume you’re using a proper format like EXR, though.
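Here’s a quick sketch of what that 8-bit squeeze does to depth values, assuming the usual convention of mapping 0.0–1.0 onto the integers 0–255:

```python
def depth_to_8bit(depth):
    # Clamp at 1.0 (white), then snap to one of only 256 integer levels
    clamped = min(depth, 1.0)
    return round(clamped * 255)

print(depth_to_8bit(0.516))  # 132: one of the 256 available steps
print(depth_to_8bit(27.2))   # 255: anything past 1 unit is just white
print(depth_to_8bit(94.0))   # 255: same value, all depth info lost
```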

EXRs also store their values in floating point, so the same things apply: values are technically quantized by the decimal-point limitations of the format, and there is a cap where the float values run out. EXRs are generally saved as half float (16-bit floating-point numbers) or full float (32-bit floating-point numbers). 16-bit is more than enough for colors (RGB data), but for precision and accuracy, data passes like depth or position are usually saved in 32-bit. So again, the limitations of 32-bit float numbers are the ones that apply: banding exists, but only enough to be significant at absolutely massive values, and the upper limit is very, very high.
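The half-versus-full difference is easy to demonstrate, again with `struct`, which can pack both IEEE half floats (`'e'`) and full floats (`'f'`):

```python
import struct

def to_f16(x):
    # Round-trip through a 16-bit (half) float
    return struct.unpack('e', struct.pack('e', x))[0]

def to_f32(x):
    # Round-trip through a 32-bit (full) float
    return struct.unpack('f', struct.pack('f', x))[0]

depth = 2049.3
print(to_f16(depth))  # 2050.0: at this distance, half floats step in increments of 2
print(to_f32(depth))  # ~2049.3: full float still has precision to spare
```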