Why is the depth map so "low res"?

I’ve recently started using the depth map for compositing in third-party programs like Nuke. With it, you can do things like depth of field in compositing rather than baking it into your render. But I’ve run into some problems.

For some reason, the depth map loses a lot of detail.
This is the normal image (zoomed in quite a bit):

This is the normalized depth map:

And this is the mist pass:

Both the mist pass and depth pass have their pros and cons. The mist pass is a lot better with smaller details, but some areas seem to be quite noisy.

I’ve heard somewhere in a video that the depth pass should not be anti-aliased, but then how does depth of field work if it doesn’t catch all the fine details?

Why are you normalizing it? Both of those passes use 32-bit data; by normalizing, you’re compressing it halfway to hell and back.
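Here’s a quick numpy sketch of what that compression looks like (the numbers are made up, not from your scene): as soon as the Z pass contains a single distant background value, normalizing squashes all the foreground detail into a tiny range near zero.

```python
# Minimal sketch (not actual Blender/Nuke code): why normalizing a 32-bit
# depth pass throws away its usefulness. All values below are hypothetical.
import numpy as np

# Pretend the foreground sits between 2 m and 5 m, and one background pixel
# has a huge depth value (the Z pass stores real distances, not 0-1 colors).
depth = np.array([2.0, 2.1, 2.5, 4.9, 5.0, 1e6], dtype=np.float32)

# "Normalize" remaps min..max to 0..1, like a Normalize node does.
normalized = (depth - depth.min()) / (depth.max() - depth.min())
print(normalized)
# -> roughly [0.0, 1e-07, 5e-07, 2.9e-06, 3.0e-06, 1.0]
# The whole foreground is crushed into a sliver near 0, while the raw
# 32-bit pass still distinguishes 2.0 m from 2.1 m exactly.
```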

Otherwise I can’t see what information is in it. It looks the same when I use an exposure node. Of course, when exporting it I keep it as it is.

I wish I had a better answer to the title of this thread, but I can at least provide the guidance the manual gives about this topic:

The Z pass only uses one sample. When depth values need to be blended in case of motion blur or Depth of Field, use the mist pass.

Manual page for your reference:
https://docs.blender.org/manual/en/4.1/render/layers/passes.html#cycles
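If it helps, here’s roughly how the Mist pass can be enabled and configured from Python instead of clicking through the UI (property names are from the Blender Python API as I understand it; the range values are just placeholders for your scene):

```python
# Minimal sketch: enable the Mist pass and set its range via Python.
# Same thing lives in the UI under View Layer Properties > Passes and
# World Properties > Mist Pass.
import bpy

view_layer = bpy.context.view_layer
view_layer.use_pass_mist = True   # render the Mist pass
view_layer.use_pass_z = True      # keep the Z pass too, for comparison

mist = bpy.context.scene.world.mist_settings
mist.start = 0.1         # distance where mist starts (Blender units) - placeholder
mist.depth = 50.0        # distance over which it fades to 1.0 - placeholder
mist.falloff = 'LINEAR'  # linear falloff is easiest to map back to real depth
```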

If I come across a better answer, I’ll let you know!

I agree with @joseph, you shouldn’t normalize depth. It’s data, not color. In Nuke you can reduce the exposure and/or gamma in the viewer if you need to see it, whereas a node would actually change its values. When you normalize, you create huge contrast, and that’s what creates those ‘broken’ edges.


same thing in Nuke:

Post-processed DOF can work well in some cases, but it can also go wrong in many ways. Basically, depending on the DOF filter, it tries to compensate for aliasing artifacts. That’s something that can be fixed manually too, if you really care.

Now, why should the Z pass stay aliased?
It’s easy to understand: the Z pass stores, for each pixel, the distance to the camera. If a pixel at 20 meters sits next to a pixel at 10 m, anti-aliasing would average them to 15 m, which in some cases could work, but when you need to precisely isolate parts according to depth, it gets in the way.
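A tiny sketch of that situation with made-up values, keying everything closer than 12 m:

```python
# Minimal sketch of the edge problem described above (hypothetical values).
import numpy as np

# A crisp (aliased) Z pass edge: foreground at 10 m, background at 20 m.
z_aliased = np.array([10.0, 10.0, 20.0, 20.0], dtype=np.float32)

# The same edge if the pass were anti-aliased: the edge pixel averages to
# 15 m, a distance that belongs to neither the foreground nor the background.
z_blended = np.array([10.0, 15.0, 20.0, 20.0], dtype=np.float32)

mask_aliased = z_aliased < 12.0  # [ True,  True, False, False] - clean cut
mask_blended = z_blended < 12.0  # [ True, False, False, False] - the edge
                                 # pixel falls on the wrong side of the key
print(mask_aliased, mask_blended)
```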

But in any case, you always end up tweaking stuff to make things work. Compositing in general is a lot about doing dirty hacks to polish the pixels, compared to 3D, which tries to play by the rules for the most part. At least that’s how I see it :smiley:
