Z-depth all white?

Hiya. I’m helping out a friend with some shots that he’s going to finish in After Effects, so he wants a few passes, including z-depth for DOF. I’ve set up a camera with an empty controlling the DOF distance.

The problem is this: when we rendered a single image with the subject close up to the camera, the z-depth pass did have some info in it. However, if the subject isn’t right up against the camera, the pass comes out all white (which means all blurred)! The DOF empty is positioned correctly (at the depth of the subject). I’m using an empty to control the focal length too, if that has anything to do with it.

We’re using Blender 2.57.

What are we doing wrong?

I’ve looked through the other posts suggested, and there is some helpful information. However, we’ve found that moving the camera doesn’t seem to make any difference to the z-depth pass it produces. It’s as if none of the camera settings are being considered in the Z calculations.

The DOF is set to be controlled by an empty, while the focal length is driven by the position of another empty.

The “Z” value that comes out of the render has nothing to do with the distance between the DOF empty and the camera (or the “Distance” value, if you go that route). Those values are only used by the “Defocus” node in the compositor (that’s how I understand it, at any rate; you could probably write a script that maps that distance into a Map Value node, but I wouldn’t know how to do that). The Z pass is a 32-bit greyscale image whose values span your camera’s near- and far-clip distances. When you can actually see detail in something up close, it’s because those depths fall within the first 256 steps of a MUCH longer range of numbers; everything further away just saturates to white on an 8-bit display.
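To make that concrete, here is a small sketch in plain Python (not Blender’s API; the depth values and the 5–900 mapping range are made up for illustration) of why raw 32-bit depth values look “all white” once quantized to 256 grey levels, and how remapping them first recovers the detail:

```python
# Sketch: raw 32-bit Z values viewed as an 8-bit image vs. remapped first.
def to_8bit(z, white_point=1.0):
    """Clamp a depth value to 0..white_point and quantize it to 256 grey levels."""
    clamped = min(max(z / white_point, 0.0), 1.0)
    return round(clamped * 255)

# Raw Z values are camera-space distances in scene units (hypothetical numbers).
depths = [5.0, 7.5, 50.0, 900.0]

# Viewed directly, every depth past 1 unit saturates to pure white (255):
print([to_8bit(z) for z in depths])  # → [255, 255, 255, 255]

# Remapped through an offset and scale first (the kind of transform the
# Map Value node applies), the same depths land on distinct grey levels:
offset, size = -5.0, 1.0 / 895.0     # maps the 5..900 range onto 0..1
print([to_8bit((z + offset) * size) for z in depths])  # → [0, 1, 13, 255]
```

The first print is the “all white” symptom; the second shows the subject separating from the background once the range is compressed into something an 8-bit viewer can show.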

Look again at the links above: you can run your Z through a “Normalize” node to squash it into an 8-bit-style image (where everything is squeezed into 256 greyscale steps), or use the “Map Value” node. The differences between the two are explained in the link Richard provided.
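A rough sketch of the difference between the two approaches, again in plain Python rather than actual compositor nodes (the depth list is invented for illustration):

```python
# Contrast: automatic per-frame stretching vs. a fixed, user-chosen mapping.
def normalize(zs):
    """Roughly what the Normalize node does: stretch this frame's own
    min..max range onto 0..1. Automatic, but the mapping shifts whenever
    the frame's min or max changes, so greys can flicker in an animation."""
    lo, hi = min(zs), max(zs)
    return [(z - lo) / (hi - lo) for z in zs]

def map_value(zs, offset, size):
    """Roughly what the Map Value node does: a fixed offset and scale that
    you set yourself, so a given distance always gets the same grey level."""
    return [min(max((z + offset) * size, 0.0), 1.0) for z in zs]

depths = [2.0, 4.0, 10.0]
print(normalize(depths))               # → [0.0, 0.25, 1.0]
print(map_value(depths, -2.0, 1/8.0))  # → [0.0, 0.25, 1.0]
```

On this one frame they happen to agree, but only because the offset and size were hand-picked to match the frame’s min and max; Normalize recomputes that mapping every frame, while Map Value keeps it constant, which is usually what you want for a DOF pass handed off to After Effects.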