I'm using Blender as a CG generation tool for Computer Vision research. I'm trying to render the depth (z) value of each surface instead of its color, so I made a simple shader like this:
shader depth_shader(
    vector Position = vector(1, 0, 0),
    output closure color Color = emission())
{
    vector lp = transform("world", "camera", Position);
    Color = color(lp[2], lp[2], lp[2]) * emission();
}
“Position” is the input from the Geometry node. But when I try this, it returns odd results, and some pixels have negative values. How do I get the camera-space position with only rotation, scale, and translation applied (i.e. before the projection matrix is multiplied in)?
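One detail that may matter: per the OSL spec, transform() applied to a vector uses only the rotation/scale part of the matrix and discards translation, while applied to a point it includes translation as well. Below is a minimal sketch of a point-typed variant; the = P default and the sign handling are my assumptions (Blender cameras look down their local -Z axis, so camera-space z can come out negative in front of the lens):

shader depth_shader_point(
    // assumed default: the OSL global P (world-space position in Cycles)
    point Position = P,
    output closure color Color = emission())
{
    // transforming a point applies rotation, scale AND translation
    point lp = transform("world", "camera", Position);
    // camera-space z is typically negative in front of a Blender camera;
    // drop the sign to get a positive depth (adjust if your convention differs)
    float depth = fabs(lp[2]);
    Color = color(depth, depth, depth) * emission();
}

Note that raw depth in scene units will usually exceed 1.0, so the render may look blown out even when the values themselves are correct.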