Depth Pass (Z-Buffer) from Blender, is that possible?

I am looking for a way to save the Z-buffer information from a render into an image, in order to use it for compositing later. I know Blender can save alpha information into the image if you choose the right format. Regarding the Z-buffer, there is the option to save to Iris+Zbuffer… but I have yet to find a program that can read that output. Does anyone know of a usable way to get the Z information from a Blender render?

*** If you don’t understand what I’m talking about, here is a quote from a Maya tutorial speaking about the same issue.

Depth Pass - Z-depth can be rendered as a separate pass to allow for a variety of results in post. One can use Z-depth to add depth-of-field and fog to an element. It can also be used to allow a compositing package to know where one object on a layer is in relation to another object on another layer. Therefore if you had a sphere in the middle of a torus where some of the sphere is behind and some in front of the torus, you could render the objects separately and still composite them successfully.

Z-depth is a single channel image, being limited to 256 shades of gray in a standard 8-bit/channel image. It is common to render a depth pass as a 16-bit image to increase the value range and accuracy within a z-depth image.

The gray values within a depth pass represent distance from the camera where white is near and black is far. The range in Maya units that this represents is based on the rendered camera's clipping planes. Therefore you can increase accuracy by using near/far clipping planes which are as close to the objects in your scene as possible while not clipping them off. It is very important, however, to use the same clipping plane values for each rendered layer.  
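To make the mapping in the quote concrete, here is a small sketch (my own illustration, not Blender's or Maya's internal code) of how a camera-space depth value could be turned into a grayscale level between the near and far clipping planes, with white near and black far. The function name and parameters are hypothetical:

```python
def depth_to_gray(depth, clip_near, clip_far, bits=8):
    """Map a camera-space depth value to a grayscale level.

    White (maximum value) is near, black (0) is far, matching the
    convention described in the quote above. Depths outside the clip
    range are clamped. Illustrative sketch only.
    """
    max_value = (1 << bits) - 1  # 255 for 8-bit, 65535 for 16-bit
    # Normalize: 0.0 at the near plane, 1.0 at the far plane.
    t = (depth - clip_near) / (clip_far - clip_near)
    t = min(max(t, 0.0), 1.0)
    # Invert so near objects are white.
    return round((1.0 - t) * max_value)

# A tighter near/far range spreads the same 256 (or 65536) levels
# over less distance, which is why keeping the clipping planes close
# to the scene improves accuracy.
print(depth_to_gray(1.0, clip_near=1.0, clip_far=10.0))   # near plane -> 255
print(depth_to_gray(10.0, clip_near=1.0, clip_far=10.0))  # far plane -> 0
```

This also shows why a 16-bit depth pass helps: the same distance range gets 65536 steps instead of 256, so banding in post effects like depth-of-field is much less visible.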

and the full tutorial is here:

Yes, it is possible: there is a sequence plug-in for Blender that allows you to render the z-buffer.

Hope this is what you are looking for. It doesn’t save the z-buffer as an alpha channel but as a b/w image, but that shouldn’t be a problem for most compositing software.

Good luck,

:expressionless: Almost forgot:

You have to experiment with the clip start and clip end of your camera to get the best results. A clip start of 0.01 works well for me; any higher value like 0.1 gives very odd results in the z-buffer. But I think that depends on the scene.

Happy Blendering,

Thanks a lot… greyscales are exactly what I need. They are to be used with the Gimp depth merge filter, which only requires greyscale maps.

You can also do that by rendering to black and white with a black mist, using the Quad option, starting at the camera’s clip start and ending at the clip end.
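The mist trick works because the amount of (black) mist blended into each pixel grows with distance, so a white scene fades toward black the farther a surface is from the camera. A rough sketch of a quadratic falloff like the Quad option, assuming the factor grows with the square of the normalized distance (the exact curve is my assumption, not taken from Blender's source):

```python
def quad_mist_factor(dist, mist_start, mist_depth):
    """Fraction of the (black) mist colour blended in at distance `dist`.

    Quadratic falloff: 0 at mist_start, 1 at mist_start + mist_depth.
    Setting mist start/depth to match the camera's clip range, as
    suggested above, makes the fade span exactly the visible scene.
    Illustrative sketch; the real Quad curve may differ.
    """
    t = (dist - mist_start) / mist_depth
    t = min(max(t, 0.0), 1.0)
    return t * t  # factor grows with the square of distance

# With a white object and black mist, pixel brightness = 1 - factor,
# so near surfaces render bright and distant ones fade to black,
# giving a usable greyscale depth map.
print(1.0 - quad_mist_factor(0.0, 0.0, 100.0))    # at the camera -> 1.0
print(1.0 - quad_mist_factor(100.0, 0.0, 100.0))  # at clip end -> 0.0
```

Note the quadratic curve concentrates precision near the camera, which may or may not be what your compositing step wants compared to a linear Z-buffer.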