I think what is meant is that the blur effect does not discriminate well enough to produce good results on widely depth-separated objects that overlap unless the effect is done on separate Render Layers.
Example: Two spheres, one in the near FG and one in the far BG, their shapes overlapping, with focus on the nearer sphere and a large DOF blur desired. The ZDepth is read as essentially a 2D grayscale image (the number of grayscale steps isn't fixed, but that's the basic principle), and a blur is applied to the BG portion of the image based on the grayscale value at each pixel. The blur affects adjacent pixels as well, in imitation of a circle of confusion (the basis of true optical blur and DOF effects). A minimal sketch of this mechanism follows.
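To make the mechanics concrete, here is a hedged sketch in Python/NumPy of a scatter-style ZDepth blur, not any particular compositor's actual algorithm. The linear depth-to-radius mapping and the square "disc" are simplifying assumptions standing in for real circle-of-confusion math:

```python
import numpy as np

def scatter_zdepth_blur(image, zdepth, focus_z, max_radius=8):
    """Naive scatter-style DOF blur (an illustrative sketch).

    image   : (H, W, 3) float RGB
    zdepth  : (H, W) float, treated as a grayscale depth map
    focus_z : depth value that should stay sharp
    """
    h, w, _ = image.shape
    accum = np.zeros_like(image)
    weight = np.zeros((h, w, 1))
    # Assumed linear mapping from depth distance to blur radius in pixels.
    radii = np.clip(np.abs(zdepth - focus_z) * max_radius,
                    0, max_radius).astype(int)
    for y in range(h):
        for x in range(w):
            r = radii[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            # The pixel's color lands on its neighbors regardless of THEIR
            # depth -- a blurry BG pixel therefore smears over a sharp FG
            # pixel next to it. This is the overlap artifact.
            accum[y0:y1, x0:x1] += image[y, x]
            weight[y0:y1, x0:x1] += 1.0
    return accum / weight
```

The comment marks exactly the failure mode: the scatter step has no idea that a neighboring pixel belongs to an in-focus object, so it smears BG color across the sharp FG edge.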
Unless some corrective factor is applied, at the points where the two objects overlap, the blur on the BG pixels will create fuzziness that extends into the adjacent pixels of the FG image, where there should be no blur; the sharp edge of the FG object is degraded wherever it overlaps the BG object. Applying the effect to the objects separately, using Render Layers, should prevent this, but you have the added step of compositing the Render Layers back together (see the sketch below).
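For contrast, here is a sketch of the layered fix, assuming the FG and BG have been rendered to separate RGBA layers (the constant per-layer blur amount and straight alpha are illustrative assumptions): blur only the BG layer, then composite the untouched FG over it, so the BG fuzz spreads behind the FG silhouette rather than across it.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dof_via_layers(fg_rgba, bg_rgba, bg_sigma=6.0):
    """Blur only the out-of-focus BG layer, then 'over'-composite
    the sharp FG on top (straight, non-premultiplied alpha assumed)."""
    # Blur color and alpha together so the BG layer's own edges soften;
    # sigma of 0 on the last axis leaves the channels independent.
    bg = gaussian_filter(bg_rgba, sigma=(bg_sigma, bg_sigma, 0))
    a = fg_rgba[..., 3:4]
    rgb = fg_rgba[..., :3] * a + bg[..., :3] * (1.0 - a)
    alpha = a + bg[..., 3:4] * (1.0 - a)
    return np.concatenate([rgb, alpha], axis=-1)
```

Because the FG's sharp alpha masks the blurred BG during the "over" step, no fuzz crosses the overlap edge; the price is the extra render passes and the compositing step.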
Another theoretical corrective approach would be to use the ZDepth data to determine which pixels in the FG object to “reinstate” to un-blurred status, but that’s far beyond the nodes’ capabilities; layering would be easier.
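For what that “reinstate” idea would look like outside the node system, here is a tiny sketch (a hypothetical helper, not an existing node): after a whole-frame blur, pixels whose depth lies within a tolerance of the focal plane get their original sharp values copied back.

```python
import numpy as np

def reinstate_in_focus(blurred, original, zdepth, focus_z, tol=0.05):
    """Copy sharp pixels back wherever the depth map says 'in focus'.

    tol is an assumed depth tolerance around the focal plane.
    """
    in_focus = np.abs(zdepth - focus_z) < tol
    out = blurred.copy()
    out[in_focus] = original[in_focus]
    return out
```

In the FG-in-focus case this lands close to the layered composite, but as noted above, expressing this kind of per-pixel conditional is what the nodes can't do, which is why layering is the more practical route.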
I ran into this problem years ago while trying to write a synthetic DOF post-render filter for another 3D app, but never could find a good solution.