my problem has a funny name…
I positioned a few planes in space and set the focus point on one of them; the others should be blurry. I think I did everything as the tutorials say: a Defocus node in the compositor, and Dof Dist set in the camera edit panel.
But when I render, the edge of the front plane doesn’t get blurred properly. The overlapping part stays sharp. (see picture)
Is it just like this? Or can I do something to make this look more realistic?
AFAIK it’s just like that when using the Defocus node; artifacts like that can crop up when objects in the scene are widely separated in depth.
A possible workaround is to use Render Layers to split the scene into “areas of equal focal blur,” so that each Render Layer is defocused separately (which should relieve the overlap artifact), and then recombine them with Alpha Over. I say “possible” because I haven’t done this to fix that particular artifact, only for other reasons, so it’s a little theoretical.
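The recombination step is just the standard alpha-over compositing operation. A minimal sketch of the math in Python (my own illustration of what the Alpha Over node computes, not Blender's API; pixels are premultiplied-alpha RGBA tuples with components in 0..1):

```python
# Sketch of the Alpha Over operation used to recombine separately
# defocused render layers (plain compositing math, not Blender's API).
# Colors are premultiplied-alpha RGBA tuples with components in 0..1.

def alpha_over(fg, bg):
    """Composite a foreground pixel over a background pixel."""
    return tuple(f + (1.0 - fg[3]) * b for f, b in zip(fg, bg))

# Example: a half-transparent red foreground over an opaque blue background.
front = (0.5, 0.0, 0.0, 0.5)   # premultiplied: red at 50% alpha
back  = (0.0, 0.0, 1.0, 1.0)
print(alpha_over(front, back))  # -> (0.5, 0.0, 0.5, 1.0)
```

The point of the split is that each layer's Defocus only ever sees depths from its own band, so the blur can't bleed a sharp edge across the overlap.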
I too have problems with Blender’s implementation of DOF. I feel like it needs another parameter, something like “remain in focus for this distance.” It certainly does not work like my SLR camera.
With that in mind I created a little stage setup with pre-blurred layer sections. Maybe you can make use of it. You just put an object on the layer you want blurred, and any object on that layer stays at that layer’s blur value. The setup assumes most objects will remain at their various z-depths, so you can still get artifacts when objects move from the background to the foreground, but it may be of use - at least as an example of another way to do it. http://blenderartists.org/forum/showthread.php?t=168390
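The idea boils down to bucketing objects into a few depth bands, each with one fixed blur value, instead of a continuous per-pixel defocus. A rough sketch (the band edges and blur values here are made up for illustration, not the ones in the linked setup):

```python
# Sketch of the "pre-blurred layers" idea: bucket objects into a few
# depth bands and give each band one fixed blur value, instead of a
# continuous per-pixel defocus. (Band edges and values are invented.)

BANDS = [            # (far edge of band, blur radius in pixels)
    (5.0,  0.0),     # foreground band stays sharp
    (15.0, 4.0),     # midground gets a mild blur
    (1e9,  12.0),    # everything beyond gets the heavy background blur
]

def band_blur(depth):
    """Fixed blur value for the band an object's depth falls in."""
    for far_edge, blur in BANDS:
        if depth <= far_edge:
            return blur
    return BANDS[-1][1]

print(band_blur(3.0), band_blur(10.0), band_blur(50.0))  # -> 0.0 4.0 12.0
```

This also shows where the caveat comes from: an object that crosses a band edge jumps between blur values, which is exactly the artifact you'd see when something moves from background to foreground.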
That’s because it isn’t true Depth of Field. It’s a post-rendering process (like most nodes) that’s intended to imitate depth of field.
Actual optical depth of field simulation is very complex to implement and would add significant time to renderings that use it. It would be good to have the option but I think Blender’s default perspective Camera implementation would also have to be significantly revised to get it to work right, since optical DoF is affected by specific lens system parameters that the Blender camera doesn’t yet simulate.
It’d be a great plug-in or script addition, though, if someone knew the math behind it and could integrate it into the existing camera capabilities. I’ve studied it a bit and got a little lost (they don’t call 'em “circles of confusion” for nothing) in trying to figure out how it could be implemented (in another 3D app).
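For the curious, the core of that math isn't too bad in the thin-lens approximation; it's the full lens-system simulation that gets hairy. A sketch of the standard circle-of-confusion formula (my own illustration, nothing to do with Blender internals):

```python
# Thin-lens circle-of-confusion sketch (illustrative only, not Blender
# internals). For a lens of focal length f (mm) at aperture f/N,
# focused at distance s_focus, a point at distance s images as a blur
# circle of diameter c on the sensor:
#
#   c = (f / N) * (f / (s_focus - f)) * |s - s_focus| / s

def coc_diameter_mm(f_mm, n_stop, s_focus_mm, s_mm):
    aperture = f_mm / n_stop                     # entrance pupil diameter
    magnification = f_mm / (s_focus_mm - f_mm)   # image-side scale factor
    return aperture * magnification * abs(s_mm - s_focus_mm) / s_mm

# A 50 mm f/2 lens focused at 2 m: a point at 4 m blurs to a circle
# about 0.32 mm across on the sensor; a point at the focus distance
# stays perfectly sharp.
print(round(coc_diameter_mm(50, 2.0, 2000, 4000), 3))  # -> 0.321
print(coc_diameter_mm(50, 2.0, 2000, 2000))            # -> 0.0
```

A real optical simulation would have to go beyond this: multiple elements, aperture shape (bokeh), and so on, which is presumably why it would slow renders down so much.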
Thanks for your replies!
The thing is - I am doing an animation where foreground and background constantly mix. I guess all I can do is reduce the max. blur factor and hope that the artifacts don’t grab attention.
Yes, chipmasque, the camera definitely needs improvement - hopefully in 2.5!
Arghh, that stretches the envelope a bit. Is the motion within the scene depth continuous, or are there cuts in between? The reason I ask is that I’ve faced a similar problem that required splitting the scene into foreground, midground and background Render Layers, each Render Layer having its objects treated differently in the Compositor (Node Editor). But I didn’t move objects from one “scene space”/Render Layer to another during a shot; I just adjusted them “between takes,” so to speak.
I made a little rack-focus demo for my BLenses documentation (see page two at the bottom) that uses a moving focal point, somewhat akin to your problem, but also had small artifacts that I ended up correcting by using some masking and Blur nodes to selectively fuzz out the affected areas. Nothing as severe as your example shows, though.
OK for stills and some animations, I’m sure, but if you’re trying to animate the camera DoF, the settings in the Defocus node are helpful and its link to the camera’s DoFDist value is essential. Manually creating an accurately changing blur as the focal point changes in Z seems to me very difficult using only a Z-channel-modulated Blur node. How do you modify the Z Factor to account for varying focal points?
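To see why that's hard to key by hand: the blur a pixel needs depends on both its depth and the current focal distance, so the Z-to-blur mapping changes on every frame of a rack focus. A toy sketch of the idea (an illustrative formula of mine, not Blender's actual Z-factor math; `max_blur` and `falloff` are invented parameters):

```python
# Sketch of why a fixed Z-factor Blur can't track a moving focal point:
# the per-pixel blur depends on both the pixel depth z and the current
# focal distance, so the mapping must be re-derived every frame.
# (Illustrative formula, not Blender's internal one.)

def blur_radius(z, focal_z, max_blur, falloff):
    """Blur radius in pixels for depth z when focus sits at focal_z."""
    return min(max_blur, falloff * abs(z - focal_z))

# Rack focus from 2.0 to 6.0 over three frames: the very same depth
# z = 6.0 needs a different blur on every frame.
for frame, focal in enumerate([2.0, 4.0, 6.0]):
    print(frame, blur_radius(6.0, focal, max_blur=16.0, falloff=3.0))
# -> 0 12.0
#    1 6.0
#    2 0.0
```

Linking the Defocus node to an animated DoFDist does this bookkeeping for you, which is why losing that link hurts so much for animated focus.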