zDepth in Blender?

Hello, Blender artists. So, I’ve been playing with Maya lately, specifically trying out its render layers. Recently, I discovered a really powerful way to create depth of field: a zdepth layer.

For those who don’t know, a zdepth pass is a completely grayscale render of the image in which each shade of gray corresponds to an object’s distance from the rendering camera. There are 256 shades of gray (it’s 8-bit, of course). So, for example, a really distant object would be dark, maybe with a tone of 23, whereas a close one would be something light, like 235.
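To make that concrete, here’s a rough sketch (in plain Python, with made-up near/far distances) of how a camera distance could be mapped to an 8-bit gray value under the convention described above:

```python
def depth_to_gray(distance, near, far):
    """Map a camera-space distance to an 8-bit gray value.

    Closer objects map to lighter grays, farther objects to darker
    ones, matching the convention described above.
    """
    # Normalize distance into [0, 1]: 0 at the near plane, 1 at far.
    t = (distance - near) / (far - near)
    t = min(max(t, 0.0), 1.0)  # clamp anything outside the near/far range
    # Invert so near = light (255) and far = dark (0).
    return round((1.0 - t) * 255)

# Hypothetical scene: near plane at 1 unit, far plane at 100 units.
print(depth_to_gray(5, 1, 100))   # close object -> 245 (light)
print(depth_to_gray(90, 1, 100))  # distant object -> 26 (dark)
```

The actual gray values a renderer writes out depend on how the depth range is normalized, but the idea is the same.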

Using this layer, you just pop it into your beauty pass layer’s channels in Photoshop (or After Effects, as it were), use Filter > Blur > Lens Blur, choose the shade of the object you want to focus on, and suddenly you have depth of field!
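If you wanted to simulate that lens-blur step yourself, a very crude version of the idea (a hand-rolled NumPy sketch, not what Photoshop actually does internally) is to blend each pixel between the sharp image and a blurred copy, weighted by how far its depth value sits from the chosen focal shade:

```python
import numpy as np

def fake_lens_blur(image, depth, focal_gray, max_blur=1.0):
    """Crude stand-in for a zdepth-driven lens blur.

    Pixels whose depth value matches `focal_gray` stay sharp; pixels
    far from it in depth are pushed toward a blurred copy of the image.
    """
    # A very rough "blur": average each pixel with its four neighbors.
    padded = np.pad(image, 1, mode="edge")
    blurred = (
        padded[:-2, 1:-1] + padded[2:, 1:-1] +
        padded[1:-1, :-2] + padded[1:-1, 2:] +
        image
    ) / 5.0
    # Blend weight: 0 in focus, up to max_blur at the depth extremes.
    weight = np.clip(np.abs(depth - focal_gray) / 255.0, 0, 1) * max_blur
    return image * (1 - weight) + blurred * weight
```

With the depth map equal to the focal shade everywhere, the image passes through untouched; as depth values drift away from the focal shade, the blurred copy takes over.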

So what I’m wondering is whether there’s any way to do this in Blender. I haven’t stumbled across one yet. Does anyone know a way to cheat it, maybe using nodes? Any ideas?



Read up on render layers and the node compositor


What you actually have is an imitation of depth of field, which in many cases is enough, but it is not a true depth-of-field focal effect based on optical principles. That is a much more complicated effect to accomplish, requiring substantial CPU time to simulate, which is why many “depth of field” effects in rendering are post-render imitations like the method you describe.

I’m all for faking it whenever possible to make renderings practical, but it’s good to know that the fake is just that – a fake. That way you don’t have expectations beyond the limits of the fakery – for example, rack-focus effects require a great deal more than a Z-buffer-modulated blur to work properly, and simple z-buffer blurs don’t take lens aperture into consideration.

Blender’s defocus node does take aperture as well as iris geometry into consideration.

And you can use it to do good rack focus effects, too, but it has a few issues with artifacting that have to be taken into account.
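For anyone curious why aperture matters: the usual thin-lens approximation for the blur circle looks something like this (the function name and numbers are mine for illustration, not Blender’s internals):

```python
def coc_diameter(obj_dist, focus_dist, focal_len, f_stop):
    """Circle-of-confusion diameter under the thin-lens approximation.

    Larger apertures (smaller f-stops) give a larger blur circle,
    which is exactly what a plain z-buffer blur ignores.
    """
    aperture = focal_len / f_stop  # aperture diameter
    return (aperture * (focal_len / (focus_dist - focal_len))
            * abs(obj_dist - focus_dist) / obj_dist)

# Same hypothetical scene, two apertures (all distances in mm).
wide = coc_diameter(3000, 2000, 50, 1.8)   # f/1.8: shallow focus
narrow = coc_diameter(3000, 2000, 50, 16)  # f/16: deep focus
```

The blur diameter scales directly with aperture diameter, so the same object at the same distance blurs far more at f/1.8 than at f/16 – something a single z-buffer gradient can’t express on its own.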

Hey, so… what am I doing wrong, then? The node isn’t registering Z, but I certainly have Z checked in my render options. .blend attached, of course.


depth.blend (242 KB)

Except that zdepth doesn’t recognise alpha-mapped objects correctly; you may have to use a fog pass instead.

Okay, ignore my last post. It was a stupid question; I should have figured that the Map value node was what that was for.

So, then, I got the depth to start showing up. How do I know when it’s correct? By that, I mean: how do I know when all my objects are the correct color?

I’m guessing that the Size value in the Map Value node can be calculated from the scale of your scene… but since I wouldn’t know how, I just decide based on the outcome. That’s what you do as an artist: you make the work, you decide the look.

Anyway, you’re describing 2 things.

  1. Depth of field – a certain part of the scene is in focus and the rest is blurred. You don’t need a Map Value node for that, only a Defocus node.

  2. Atmospheric perspective – objects further away are low in contrast, desaturated, and ‘filtered’ by the sky… no correct technical explanation from me (but it’s definitely not just darkening). For that you use Map Value, Color Ramp, and Mix nodes to mix it with the beauty render.
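For what it’s worth, the Map Value node’s math is (as far as I understand it) just (value + Offset) × Size, with optional Min/Max clamping. A quick sketch, with a made-up scene scale:

```python
def map_value(z, offset=0.0, size=1.0,
              use_min=False, min_v=0.0,
              use_max=False, max_v=1.0):
    """What the Map Value node does to each Z sample, as I understand
    it: add Offset, multiply by Size, then optionally clamp."""
    v = (z + offset) * size
    if use_min:
        v = max(v, min_v)
    if use_max:
        v = min(v, max_v)
    return v

# If the farthest thing in your scene is ~50 units away, Size = 1/50
# maps the useful depth range into 0..1 for the Color Ramp.
print(map_value(25.0, size=1/50, use_min=True, use_max=True))  # prints 0.5
```

This is also why the scene scale matters: Size is effectively 1 divided by the depth range you care about.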

Some artifacts coming from those nodes can be solved by rendering with Full Sample enabled.

I know what atmospheric perspective is, and this topic doesn’t actually have anything to do with it. Although I guess I can see how you’d be confused by just the text.

Anyway, what I’m describing is achieving depth of field through a zdepth render layer, which represents each object’s distance from the camera as a grayscale image and is used in Photoshop with the Lens Blur filter to create the illusion of depth of field.

For a visual example, look at this tutorial (it’s for Maya, but you should be able to figure it out from the pictures): http://cg.tutsplus.com/tutorials/autodesk-maya/achieving-realism-and-depth-using-render-layers-in-maya-day-2/

Step 5 shows a zdepth map, and starting at step 56 you can see how it would be used in Adobe After Effects.

Atmospheric perspective isn’t relevant.

That said, you still helped me out some: I never thought to use the scale of the scene for my Map Value settings. I’ll try that out.

This has been well thought out in Blender for a couple of years now. There are lots of tutorials. You can even map a plane with a photo and then distort it to get a tilt-shift effect.