Getting the depth of every pixel to the center of projection of the camera in Blender

I am trying to get the depth of every pixel to the center of projection of the camera in Blender. I want the actual depth, not a normalized value, so I have tried the following in the compositor window:

I have placed the camera at a distance of 6 meters from the cube, as shown in the images below.


Now when I render my image, the stored .exr file shows the value 5.0 for every pixel. (I have used Python/OpenCV to cross-check this, and I have also verified it in the rendered image by right-clicking and checking the Z value.) In my understanding, the depth of every pixel should be different. For example, the value at a pixel corresponding to a vertex of the cube should differ from the value at the center of the cube's face.

The requirement is somewhat similar to what the Camera Data node provides with its View Z Depth output (https://docs.blender.org/manual/en/dev/render/shader_nodes/input/camera_data.html?highlight=camera%20data%20node).

However, I don’t think that approach would generalize well to a whole scene containing many different objects. Is there a way to achieve this? Can someone guide me?

Z-depth data measures the distance to a plane perpendicular to the camera's viewing direction, not the distance from the camera location itself, and should really only be used for defocus tools in compositing software. What you're looking for is deep data, and Blender doesn't support it.
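That said, if a per-pixel distance to the center of projection is all that's needed, the plane-parallel Z pass can be converted to it with the camera intrinsics. A sketch, assuming a simple pinhole model where `f_px`, `cx`, `cy` (focal length and principal point in pixels) are known; these parameters are not from the post and would have to be derived from the Blender camera settings:

```python
import numpy as np

def z_to_radial(z, f_px, cx, cy):
    """Convert a plane-parallel Z pass to per-pixel Euclidean distance
    from the camera's center of projection (pinhole model assumed)."""
    h, w = z.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Normalized camera coordinates of each pixel.
    x = (u - cx) / f_px
    y = (v - cy) / f_px
    # Scale Z along the per-pixel viewing ray.
    return z * np.sqrt(1.0 + x**2 + y**2)

# Toy check: a constant-Z plane yields larger distances off-axis,
# which is why a flat Z pass does NOT equal distance-to-camera.
z = np.full((3, 3), 5.0)
d = z_to_radial(z, f_px=1.0, cx=1.0, cy=1.0)
print(d[1, 1])  # 5.0 at the principal point; corners are larger
```

The design point is that the Z pass stores depth along the optical axis, so multiplying by the length of the normalized ray direction recovers the true distance per pixel.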

@lucas.coutin Ohh, I thought this would be possible (obviously I don't know how), since something similar is possible for an object with a material using the Camera Data node.

RenderMan 24 has support for Blender and can render deep data. You could experiment with its non-commercial version for free, though I'd consider this advanced compositing.

Okay, I will try that! Thank you!!