Is it possible to Mask something using DOF w/Nodes?

I’m trying something with DOF, but don’t like the results I’m getting.
(It’s OKAY I guess, but I wanna tweak it more)

Is it possible to Select/mask the area w/DOF?

I’m using the Defocus filter and want to only select/mask that part of the image.

(I really hope this is not a dumb question)

You can use the Z-depth pass for masking. DOF (or Defocus) uses Z-depth as an input; I assume this is what you are referring to.
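To make the idea concrete, here's a minimal sketch in plain Python (not Blender's node editor or bpy) of what remapping a z-depth pass into a 0–1 mask does, roughly what you'd get from a Map Value or Normalize node. The depth values and near/far range are invented example numbers.

```python
def depth_to_mask(depths, near, far):
    """Linearly remap raw z values to a 0-1 mask, clamped at both ends."""
    mask = []
    for z in depths:
        t = (z - near) / (far - near)
        mask.append(min(1.0, max(0.0, t)))
    return mask

# A row of pixels, from 2 units away to 20 units away.
row = [2.0, 5.0, 10.0, 20.0]
print(depth_to_mask(row, near=5.0, far=15.0))  # [0.0, 0.0, 0.5, 1.0]
```

Anything nearer than `near` ends up fully black (in focus), anything past `far` fully white, and you can feed the result into a Mix or Blur node as a factor.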

If you’re talking about creating a mask to give an image or video a ‘fake’ DOF effect, then yes you can. Roto the areas of the image you want to throw into the distance using splines, creating individual solids (the masks) with the image in the background of a camera. From there, there are two approaches. One is to add materials to the masks in various shades of grey, simulating the appearance of a z-depth pass, and use that; you can tweak the greys and see the results in the comp.

Or position the solids in 3D space over the image on a plane, by trial and error, then render the z-depth pass and use that in the comp.

Make sure you blur the edges of each mask for a feathered edge.

Or if you really feel up to it, paint a ‘fake’ z-depth pass in various shades of grey and use that in the comp. :slight_smile:

Should be able to find an AE tutorial on this.

You don’t need to make a fake Z-depth pass, Blender can render a real one :stuck_out_tongue:
I assumed you were talking about Blender’s compositor, not After Effects. I’m not familiar with your solids-and-masking approach, but it sounds labour-intensive.

Blender’s z-depth pass can simply be used as a mask; no need for roto, solids or any other elaborate workaround.

You can also use the 3D camera to target a specific point for focus, assuming you are trying to blur a Blender-originated image.

That’s true, Blender can create a z-depth pass, and that’s what I described, along with two methods to fake it. But my post specifically addressed compositing over an image or video where no 3D elements, no 3D scene, exist.

How do you get a z-depth pass from an image brought in through an Image node, or a video in the VSE? You can’t. And how do you determine how to split the foreground, midground and background in the image or video, or where to set focus (what’s it called, the hyperfocal distance?)

You have to create some geometry in 3D space to suit the image contents, i.e. roto, or fake it another way. :slight_smile:

The OP wasn’t clear whether he/she was talking about a 3D scene or masking a 2D image / video.

You describe a method to create fake z-depth. I was talking about a rendered 3D scene, not an image or video from a camera. Even if you use the z-depth generated by Blender from your solids approach, it’s still a workaround and not the method I was talking about.

Where does it say that in your post? You’re talking about “an image or video”; where does it say that excludes 3D elements?

You can’t, and I never claimed it was possible. You’re jumping to conclusions.

In your first post you assumed the OP was talking about a 3D scene and you answered the question for a 3D scene.

So to help the OP, I gave a reply based on the other scenario the OP may have been talking about, as it was unclear which one the question related to: a 2D scene with an image or video as the source.

Both scenarios are possibilities through the comp nodes. Had you not considered more than one approach? Had you not considered that the OP may have been talking about 2D? Did you not consider, after reading my first post, that the OP may have been talking about 2D, and that my answer was valid although based on an assumption, like yours?

Between us we addressed some of the OP’s query. Had you not replied like a smart-ass, saying there was no need to fake it, we’d not be having this pointless argument!

No need for this; I made a fair point, and I was polite, unlike you.

fight fight fight…

Hey, here’s another idea I’ve used for fake “tilt shift” effects: I used a ramp/gradient as the fake Z-depth.
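The ramp idea above can be sketched in plain Python (not the node editor): a vertical gradient used as a fake z-depth, black (in focus) at a chosen scanline and brightening toward the top and bottom edges, which is the classic tilt-shift look. The image height and focus row here are arbitrary example values.

```python
def tilt_shift_ramp(height, focus_row):
    """One grey value per scanline: 0.0 at focus_row, 1.0 at the far edge."""
    max_dist = max(focus_row, height - 1 - focus_row)
    return [abs(row - focus_row) / max_dist for row in range(height)]

# A tiny 5-scanline "image" with the focus band on the middle row.
ramp = tilt_shift_ramp(height=5, focus_row=2)
print(ramp)  # [1.0, 0.5, 0.0, 0.5, 1.0]
```

Fed into Defocus (with “Use Z-Buffer”) or used as a Blur factor, that gradient blurs the top and bottom of the frame while keeping the chosen band sharp.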

@3point, yeah, right on :_), re fake tilt shift, I’ll remember that one. But you DON’T need to fake z-depth, Blender does it for you, without exception. :wink:

Yeah, my bad. Razzing me’s polite though, is it? Geez, what a…

I meant when using an external source (video or photo).

Assuming that you are trying to get DOF on a Blender 3D image, but the default Defocus filter is not doing the job the way you want, you can either pass the parts of the image through different render layers and then through separate DOF passes, and mix the results; or you could use other objects on another pass to generate a fake z pass, and merge that with the original z pass. Not as easy as it sounds, as many of the processes you can do on an image don’t work on a z pass. (You can’t blur it, for example, as this makes no sense logically.)
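The “mix the results” step above is essentially what a Mix node does when a z-based mask drives its factor. A minimal sketch in plain Python (not the node editor), with invented single-channel pixel values: per pixel, factor 0 keeps the sharp pass and factor 1 keeps the blurred one.

```python
def mix(sharp, blurred, factor):
    """Per-pixel linear interpolation, like a Mix node set to Mix."""
    return [s * (1.0 - f) + b * f for s, b, f in zip(sharp, blurred, factor)]

sharp   = [1.0, 1.0, 1.0]   # the sharp DOF pass
blurred = [0.2, 0.2, 0.2]   # the heavily defocused pass
factor  = [0.0, 0.5, 1.0]   # the depth-derived mask
print(mix(sharp, blurred, factor))  # [1.0, 0.6, 0.2]
```

Feathering the mask, as suggested earlier in the thread, just means the factor ramps smoothly between 0 and 1 instead of jumping, which hides the seam between the two passes.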

If you have a problem where, for example, a reflection shows sharp when the object reflected is at a distance where it should be blurred, then you can use a separate pass to separate the reflective object, and then put its reflection pass only through a separate DOF pass where you use an Add node to tweak the z-buffer data. This allows you to make the reflected objects seem further away than they are. You then need to add the reflection pass back into the reflecting object, and Z-Combine the whole thing back into the scene.
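The Add-node trick described above is just a constant shift on the z values. A hedged sketch in plain Python (not bpy), with made-up depths and offset, showing how the reflection's z-buffer gets pushed back so Defocus treats the reflected objects as more distant:

```python
def push_back(z_values, offset):
    """What an Add node does to a z pass: shift every depth by a constant."""
    return [z + offset for z in z_values]

# Depths of the reflection as rendered (mirror surface ~3-5 units away),
# pushed back by the extra distance the reflected objects "travel".
reflection_z = [3.0, 4.0, 5.0]
print(push_back(reflection_z, offset=6.0))  # [9.0, 10.0, 11.0]
```

The Defocus pass then blurs the reflection as if it sat 6 units deeper in the scene, even though the mirror surface itself stays in focus.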

If you need to tweak the reflected/refracted objects independently, then there is no option but to build a set of scenes, each containing the reflecting/refracting object and some of the objects reflected/refracted, and then merging all the parts back together.

Unfortunately, Blender does not provide a reflected/refracted z-pass, alpha pass, or index pass, all of which would be very useful, as would the ability to make an object completely invisible to a layer, by stopping it from affecting the shadow, reflection, and refraction passes on layers where it does not appear. At the moment, an object will still be seen in reflection, even when it is on a hidden layer… Weird!

Another option would be if Blender could do a ‘reflection to mesh’ conversion, to create a pseudo-mesh representing the objects seen. But I digress.


hmm, some really good points there Travellingmatt!