Why Defocus?!

Hi,

I was wondering: what advantage does the Defocus node have over the old-skool way of making DoF using a ZDepth pass, a Blur node and a Color Ramp node? I mean, I do understand that it is sometimes necessary if you have real footage and you’re trying to match the settings of your virtual camera to the real camera. But I have seen it being used in some pure CG shots, like the one in Andy’s Creature Factory, and for the life of me I just can’t think of one reason that justifies the use of such a slow method to produce DoF in this case :confused:.

Perhaps the “old-skool” method needs some more exposure? I’d like to have other tools for DoF emulation, particularly since the Defocus node has some problematic limitations in how it treats areas where objects overlap in view space.

I’m not familiar with the method you mention – is there a tutorial available?

Well, it’s quite simple. You just need to blur your image and use the ZDepth pass to control the strength of the blur effect. Of course, you first need to run the ZDepth pass through an RGB Curves node to control which areas of the image are in and out of focus (a Color Ramp will do, though, if you only need the transition between in- and out-of-focus areas to be linear). You can also pass it through a Time node if you want to animate the effect. This is presented in more detail in this tutorial.
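If it helps to see the wiring spelled out, here’s a minimal Python (bpy) sketch of that tree. A couple of caveats: the socket names are from recent Blender versions (the depth output was called “Z” in older releases), and the Normalize node is my own addition to squash the raw depth values into 0–1 before the ramp:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True                      # enable the compositor node tree
bpy.context.view_layer.use_pass_z = True    # make sure the Z/Depth pass is rendered
tree = scene.node_tree
tree.nodes.clear()

rl   = tree.nodes.new('CompositorNodeRLayers')
norm = tree.nodes.new('CompositorNodeNormalize')  # squash raw depth into 0..1
ramp = tree.nodes.new('CompositorNodeValToRGB')   # the Color Ramp: depth -> blur amount
blur = tree.nodes.new('CompositorNodeBlur')
comp = tree.nodes.new('CompositorNodeComposite')

blur.use_variable_size = True        # let the ramp drive the blur size per pixel
blur.size_x = blur.size_y = 20       # maximum blur radius in pixels

tree.links.new(rl.outputs['Depth'], norm.inputs['Value'])   # 'Z' in older Blenders
tree.links.new(norm.outputs['Value'], ramp.inputs['Fac'])
tree.links.new(rl.outputs['Image'], blur.inputs['Image'])
tree.links.new(ramp.outputs['Image'], blur.inputs['Size'])  # white = more blur
tree.links.new(blur.outputs['Image'], comp.inputs['Image'])
```

Swap the Color Ramp for an RGB Curves node if you want a non-linear falloff, exactly as described above.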

I don’t know whether this method can solve the problem with overlapping objects. There might be problems with the way Blender produces the ZDepth pass when overlapping objects are involved, in which case the problem will persist even with this method. I haven’t checked, really :spin:

Anyone there??

Hi, Husam. Thanks for the tutorial; for me this method is clearer and easier to understand than the Defocus node :).

Yeah, it is still a 2D effect over a 3D image, so the blur component doesn’t have any depth separation unless you construct the scene in layers.

Can you please elaborate more?

I think what is meant is that the blur effect does not discriminate well enough to produce good results on widely depth-separated objects that overlap unless the effect is done on separate Render Layers.

Example: two spheres, one in the near FG and one in the far BG, their shapes overlapping, focus on the nearer sphere, and a large DOF blur is desired. Reading the ZDepth as essentially a 2D grayscale image (the number of grayscale steps isn’t determined, but that’s the basic principle), a blur is applied to the BG portion of the image based on the grayscale value at that pixel. The blur affects adjacent pixels as well, in imitation of a circle of confusion (the basis of true optical blur and DOF effects).

Unless some corrective factor is applied, at the points where the two objects overlap, the blur on the BG pixels will create fuzziness that extends into the adjacent pixels of the FG image, where there should be no blur. The sharp edge of the FG object where it overlaps the BG object will be affected. Applying the effect to the objects separately using Render Layers should prevent this, but you then have the added step of compositing the Render Layers back together.
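A toy example (plain NumPy, nothing Blender-specific, and entirely my own illustration) makes the boundary problem easy to see on a single scanline — the blur crosses the FG/BG boundary in a way real optics wouldn’t:

```python
import numpy as np

# 1-D "scanline": a bright in-focus FG strip (depth 1) over dark distant BG (depth 10)
image = np.array([0.2]*6 + [1.0]*4 + [0.2]*6)
depth = np.array([10.0]*6 + [1.0]*4 + [10.0]*6)

def box_blur(img, r):
    # plain box blur standing in for a circle-of-confusion kernel
    return np.array([img[max(0, i - r):i + r + 1].mean()
                     for i in range(len(img))])

blurred = box_blur(image, 2)               # the Blur node blurs *everything*
mask = (depth > 5.0).astype(float)         # the ramped ZDepth: 1 = out of focus
result = mask * blurred + (1.0 - mask) * image   # the mix: sharp FG, blurred BG

print(np.round(result, 2))
# The BG pixels adjacent to the strip take their values from `blurred`,
# where the bright FG has already been smeared outward -- a halo of FG
# colour hangs around the "sharp" object instead of a clean edge.
```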

Another theoretical corrective approach would be to use the ZDepth data to determine which pixels in the FG object to “reinstate” to un-blurred status, but that’s far beyond the nodes’ capabilities; layering would be easier.
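The nodes can’t do that kind of per-pixel bookkeeping, but just to sketch the underlying idea in code, continuing the toy example above — this is a depth-aware variant of the blur, a related corrective rather than the exact “reinstate” step described: the kernel simply refuses to gather samples across a large depth gap.

```python
def depth_aware_blur(img, dep, r=2, tol=2.0):
    # only average neighbours whose depth is close to the centre pixel's,
    # so BG blur never drags FG pixels with it (and vice versa)
    out = np.zeros_like(img)
    for i in range(len(img)):
        idx = [j for j in range(max(0, i - r), min(len(img), i + r + 1))
               if abs(dep[j] - dep[i]) < tol]
        out[i] = img[idx].mean()
    return out

fixed = mask * depth_aware_blur(image, depth) + (1.0 - mask) * image
print(np.round(fixed, 2))   # the halo around the FG strip is gone
```

In practice, that per-pixel depth test is exactly the separation you get for free by splitting the objects onto different Render Layers.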

I ran into this problem years ago while trying to write a synthetic DOF post-render filter for another 3D app, but never could find a good solution.

Thanks for the help, chipmasque and 3pointEdit. That question bugged me like hell, though apparently its answer was right under my nose. Now I can die a happy man :D. I still think, though, that I should use fake DoF if I can get away with it, as it’s faster and leaves more room for artistic control.

I think the only advantage the Defocus node has is that the focal point can be dynamic.

That’s not quite true. You can still animate the focal point in a fake DoF effect by using a Time node.
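For what it’s worth, here’s one hedged way to script that with bpy, bolting a Time node onto the tree from the earlier sketch (the node lookup and the 20-unit scale are my assumptions): the Time factor moves a focal distance, and a couple of Math nodes turn each pixel’s depth into its distance from that moving focal plane.

```python
import bpy

tree = bpy.context.scene.node_tree            # assumes the tree from the earlier sketch
rl = next(n for n in tree.nodes if n.type == 'R_LAYERS')

time = tree.nodes.new('CompositorNodeTime')   # ramps 0 -> 1 over the frame range
time.frame_start, time.frame_end = 1, 100

scale = tree.nodes.new('CompositorNodeMath')  # 0..1 -> 0..20 scene units
scale.operation = 'MULTIPLY'
scale.inputs[1].default_value = 20.0

sub = tree.nodes.new('CompositorNodeMath')    # depth - focal_distance
sub.operation = 'SUBTRACT'
dist = tree.nodes.new('CompositorNodeMath')   # |depth - focal_distance|
dist.operation = 'ABSOLUTE'

tree.links.new(time.outputs['Fac'], scale.inputs[0])
tree.links.new(rl.outputs['Depth'], sub.inputs[0])
tree.links.new(scale.outputs['Value'], sub.inputs[1])
tree.links.new(sub.outputs['Value'], dist.inputs[0])
# dist.outputs['Value'] now replaces the raw depth feeding the ramp/blur chain
```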

But it’s automatic with the Defocus node.

I think it’s mostly down to simplifying things for the poor end user. Your method already uses two more nodes than the defocus method. Add an extra slider to the ColourRamp (to make an area of “close up” blur as well: black => white => white => black) and you have basically reinvented the Defocus node. However, adjusting those ramps and levels is not as user-friendly as having the node work with the focal limits of the camera, where you can visually pinpoint where the focus falls in 3D space, which feels easier to control.
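To make those four ramp stops concrete, here’s a bpy sketch (my own illustration; note that with the convention from the first example, where white means more blur, the stops come out inverted relative to the black => white => white => black reading — which way round you want them depends on what the ramp output drives):

```python
import bpy

tree = bpy.context.scene.node_tree   # assumes compositor nodes are enabled
ramp = tree.nodes.new('CompositorNodeValToRGB').color_ramp

# Near blur, sharp focal band, far blur. With white = more blur (as in the
# first sketch) the stops read white -> black -> black -> white; flip the
# colours if your ramp drives a sharp/blurred mix the other way around.
ramp.elements[0].position = 0.0
ramp.elements[0].color = (1, 1, 1, 1)   # near: out of focus
ramp.elements[1].position = 0.3
ramp.elements[1].color = (0, 0, 0, 1)   # focal band starts: sharp
e = ramp.elements.new(0.6)
e.color = (0, 0, 0, 1)                  # focal band ends: still sharp
e = ramp.elements.new(1.0)
e.color = (1, 1, 1, 1)                  # far: out of focus
```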