how to render zdepth without zblur plugin ;)


the plugin does not work. i know there is a way to render an image without any color (just grayscale). is there a recommended way to do this in blender, since i have never done it? i want to use the zdepth image as a mask to apply depth of field in photoshop.

i see that blender can save into the iris-plus-zbuffer file format, but in no app i open do i see this zbuffer channel. photoshop cannot even open the file. i am on os x. can anybody tell me the trick to get the zbuffer channel into photoshop?


There's this technique… Let me see if I can remember.

  1. Create a new shadeless material with a Blend texture (black to white).
  2. Create an Empty as big as the whole scene.
  3. Use the material on all objects, with mapping type set to Object (the Empty).
  4. When you render you'll get an image where objects further away from the camera are black and closer ones white. Save this image and use it as your zbuffer.

Hope it helps.
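For reference, the gray value that blend-texture trick fakes is just a normalized camera distance. A minimal sketch of that mapping (a hypothetical helper, not part of Blender):

```python
def depth_to_gray(d, near, far):
    """Map a camera distance d to the 0..1 gray value the
    blend-texture trick produces: objects at `near` come out
    white (1.0), objects at or beyond `far` come out black (0.0)."""
    t = (far - d) / (far - near)
    return max(0.0, min(1.0, t))  # clamp to the 0..1 range

# e.g. halfway between near and far gives mid-gray:
# depth_to_gray(5.0, 0.0, 10.0) -> 0.5
```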

ah cool thanks!

this sounds pretty logical, applying a gradient shader to fake distance to the camera!


You can also use the camera parented to a circle method that is listed in the tutes section here as well.


really? darn, then i must have overlooked it!


that's the tut you mention, i think. but shame on me, i don't really understand it completely! grrr.

anybody has a link to the original website that still works?


A render zbuffer to file or channel button in the Render settings would be wonderful and quite useful.

The blend texture trick works, but if you have complex scenes, it’s not really an option.

well, iris does exactly that - rendering depth information into an additional channel, like the alpha channel. the point is: which software can recognise and read that channel?

there is a photoshop plugin that can open iris images… unfortunately i lost the link. try googling or searching these forums, as i found it somewhere here.

hope this helps


darn, i can't find any tool on os x to read iris. graphicconverter can open it, but i have no clue how to access the zbuffer channel.

it's time blender supported saving in psd format so everything could be put into channels!


claas - here we go. you didn’t search the forums, did ya? :wink:

hey cool, i found it! but uff, i get this

kinda strange zbuffer. i know that in one image i had one that was somehow usable, but the depth was mixed up. in this image here i just get gray where i had the mixed zbuffer in the rendering before.

i give up


hm, right, now i remember i had “funny” results, too… maybe blender doesn't write the right format? i'd direct a question to the blender.org forums and ask the author of the plugin whether he's aware of any limitations/problems


hey, does anyone know a way to use the z-buffer image to create a good post processing Depth of Field effect?

I used both the z-buffer plugin and arangel's method; however, I can't create a convincing DoF effect using the z-buffer rendered image.

blender lacks tools like real dof or motion blur.
for 100% realism it's hard to do here!

I think Blender doesn't need an incredible DoF tool. If there were a good sequence plugin that took a sequence of z-buffer renders (from the show z-buffer plugin) and used them to apply a DoF effect to a second sequence of images, that would be just great for me.

There is a good description of how to fake depth of field in the new manual:

Hope this helps,

The old plugin does quite a good job:


you can make a fake zbuffer in the alpha channel:

  1. make a World with fog
  2. set the fog start/end
  3. render image and save it with alpha
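If you save with alpha as described, the mist fades alpha toward 0 with distance, so an approximate depth mask can be recovered by inverting it. A rough sketch, assuming a row of alpha values in 0..1 (a hypothetical helper, not a Blender call):

```python
def alpha_to_depth_mask(alpha_row):
    """Turn mist-faded alpha values into a depth mask.

    With world fog/mist enabled, alpha drops toward 0.0 as objects
    get farther away, so (1 - alpha) approximates normalized depth
    (far = 1.0, near = 0.0)."""
    return [1.0 - a for a in alpha_row]
```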

I found a zblur (simulated DoF) plugin to Photoshop here:

Not a good plug-in; Blender's zblur plug-in is better.

GreyBeard: I already know this technique; however, it is very time consuming. I always thought a good post-production tool/effect would be much better, because you can render an image/sequence very quickly with no motion blur, then render only a z-buffer image/sequence even more quickly, and finally add motion blur and depth of field very quickly. Any post effect is faster than a render with all those features turned on, in every 3d package I know.
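The post-process DoF idea described here can be approximated very cheaply: render once sharp, blur a copy uniformly, then mix the two per pixel by how far each pixel's depth is from the focus distance. A grayscale sketch under those assumptions (all names are made up, not any plugin's API):

```python
def dof_blend(sharp, blurred, depth, focus, spread):
    """Fake depth of field by mixing a sharp render with a
    pre-blurred copy of it.

    sharp, blurred, depth: flat lists of floats in 0..1
    focus:  normalized depth that should stay in focus
    spread: depth range over which blur ramps up to full
    """
    out = []
    for s, b, d in zip(sharp, blurred, depth):
        w = min(1.0, abs(d - focus) / spread)  # 0 = sharp, 1 = fully blurred
        out.append(s * (1.0 - w) + b * w)
    return out
```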

That way, we could also get rid of the fact that blender only supports 16 oversamples, which is sometimes limiting.


motion blur is kinda tricky, and as far as i understood you, does motion blur from zdepth even work? somehow you need to render more pictures to calculate motion blur, and there are many high-end tools that still can't really do motion blur with all features!
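For what it's worth, the "render more pictures" idea is exactly how accumulation motion blur works: render several sub-frame images spread across the shutter interval and average them. A toy sketch of the averaging step (hypothetical helper, grayscale pixels as flat lists):

```python
def motion_blur(frames):
    """Average several sub-frame renders into one motion-blurred
    image. Each frame is a flat list of floats; all frames must
    have the same length."""
    n = len(frames)
    return [sum(samples) / n for samples in zip(*frames)]
```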