Hair looks crappy with depth of field effect

I’m throwing together some very quick, crappy still image renders for something silly (definitely not my best work, don’t laugh). I’m really bothered by the fact that the hair on my in-focus subject looks terrible with a depth of field effect applied. I’ve tried adjusting the f-stop in the defocus node, the samples, etc., but by the time the hair starts to look OK again, the other subject isn’t blurry enough. I’ve also tried adjusting the focal length of the camera (currently 25mm for the image attached). Is there any way to exempt the hair from this effect, and if so, how would I go about doing it? Or would it be better to play around with focal lengths and f-stops some more?

See sample image. The render node setup is a standard, basic depth of field setup (i.e. one defocus node). Since this is a still render, I don’t need to animate, so I am open to hacks, but I’d like to preserve a setup that could be used for animation later on.
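For reference, here’s roughly what that node tree looks like rebuilt in Python - just a minimal sketch assuming Blender’s bpy compositor API, since socket and property names vary between versions (e.g. the render layer’s depth output is ‘Z’ in older releases, ‘Depth’ in newer ones):

```python
import bpy

# Minimal single-defocus setup: render layer -> defocus -> composite,
# with the depth pass driving the blur.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render = tree.nodes.new('CompositorNodeRLayers')
defocus = tree.nodes.new('CompositorNodeDefocus')
composite = tree.nodes.new('CompositorNodeComposite')

defocus.use_zbuffer = True   # treat the Z input as a real depth buffer
defocus.f_stop = 2.8         # lower f-stop = stronger blur
defocus.blur_max = 16.0      # clamp the maximum blur radius

tree.links.new(render.outputs['Image'], defocus.inputs['Image'])
tree.links.new(render.outputs['Depth'], defocus.inputs['Z'])
tree.links.new(defocus.outputs['Image'], composite.inputs['Image'])
```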


I don’t see how it looks wrong - I’m not sure what you’re trying to achieve…

But for more control, you could separate the out-of-focus character onto another render layer, defocus him, and add him back in again.
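Something like this, as a rough sketch (the layer names ‘sharp’ and ‘blurry’ are made up - substitute whatever your render layers are called - and I’m using a plain blur node for simplicity; names are from recent Blender versions and may differ in yours):

```python
import bpy

# Two-layer approach: render each character on his own render layer,
# blur the out-of-focus one, then composite the sharp one back on top.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

fg = tree.nodes.new('CompositorNodeRLayers')   # in-focus character
fg.layer = 'sharp'
bg = tree.nodes.new('CompositorNodeRLayers')   # out-of-focus character
bg.layer = 'blurry'

blur = tree.nodes.new('CompositorNodeBlur')
blur.filter_type = 'GAUSS'
blur.size_x = blur.size_y = 20

over = tree.nodes.new('CompositorNodeAlphaOver')
comp = tree.nodes.new('CompositorNodeComposite')

tree.links.new(bg.outputs['Image'], blur.inputs['Image'])
tree.links.new(blur.outputs['Image'], over.inputs[1])  # background
tree.links.new(fg.outputs['Image'], over.inputs[2])    # foreground on top
tree.links.new(over.outputs['Image'], comp.inputs['Image'])
```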

I think there was an article on BlenderNation a while back with some nice tips on DoF - try searching it :smiley:

Thanks. I tried using faux-depth of field (separating the two and blurring one layer). I used “Alpha Over” to add him back in, but now there’s a terrible outline around the guy. See photos. I’ve never been able to figure out how to get that method to look good either, so I’m curious how to get rid of that.

Attachments



Don’t use alpha over - use the mix node :smiley:

I tried that too, but it ends up looking like there is mist. So I tried rendering the sky separate from the characters (which are also rendered separately) but overall it always ends up looking like mist or way too bright overall.

Sorry - bad advice :stuck_out_tongue:
That’s me getting it confused with the z-combine node :wink:

Try enabling ‘convert premultiply’, or if that doesn’t work, erode it with the dilate/erode node.
Oh - and try using the defocus node for blurring him, just with a constant z-scale. The blur node looks weird.
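Roughly, in Python terms - just a sketch; ‘convert premultiply’ is the `use_premultiply` flag on the Alpha Over node in recent versions, and names may differ in older releases:

```python
import bpy

nodes = bpy.context.scene.node_tree.nodes

# 'Convert Premultiply' lives on the Alpha Over node and usually kills
# the dark fringe around composited edges:
over = nodes.new('CompositorNodeAlphaOver')
over.use_premultiply = True

# Fallback: eat the fringe by eroding the foreground's alpha a pixel
# or two before compositing (negative distance = erode):
erode = nodes.new('CompositorNodeDilateErode')
erode.distance = -1
```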

sorry about that :stuck_out_tongue:

Cool, the ‘convert premultiply’ worked. If I decide to use the defocus node instead of AlphaOver with a blur, how can I figure out roughly what constant z-scale to use? I can make a wild guess but it feels like a total shot in the dark.

The z-scale option only applies when you’re using an image as the z-buffer. You’re using render layers with real depth buffers, so it’s a moot point.
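In node terms (a sketch with recent property names): z-scale only kicks in when ‘Use Z-Buffer’ is off, i.e. when the Z input is a plain grayscale image standing in for depth:

```python
import bpy

defocus = bpy.context.scene.node_tree.nodes.new('CompositorNodeDefocus')

# Real depth pass linked into the Z input: leave this on, and z_scale
# is ignored entirely.
defocus.use_zbuffer = True

# The other mode, where z_scale actually does something - the Z input
# is treated as a plain image and its values get scaled:
#   defocus.use_zbuffer = False
#   defocus.z_scale = 5.0
```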

while there are definitely work-arounds, this does highlight a pretty longstanding limitation of blender’s defocus node. i’ve always had issues with artifacts at the boundaries between out-of-focus and in-focus elements. the “threshold” option is supposed to help mitigate this issue, but in practice i’ve never gotten good results.
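for reference, the knob i mean (just a sketch; the property name is from recent versions of the python api):

```python
import bpy

defocus = bpy.context.scene.node_tree.nodes.new('CompositorNodeDefocus')

# CoC radius threshold, meant to prevent the background bleeding onto
# in-focus edges; 0 disables it. in my experience it rarely helps.
defocus.threshold = 1.0
```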

i appreciate how hard it is to handle these boundaries, as i made my own stand-alone defocusing software some time back and ran into a number of challenging issues, but there are definitely smarter algorithms than what we have now.

makes me want to dig up my old code and see if there’s anything worth presenting to any blender devs…