How do I transform a world-space vertex position to camera space in Cycles OSL?

I use Blender as a CG generation tool for computer vision research. I am trying to render the depth (z) value of each surface point instead of its color, so I wrote a simple shader like this:

shader depth_shader(
    vector Position = vector(1, 0, 0),
    output closure color Color = emission())
{
    vector lp = transform("world", "camera", Position);
    Color = color(lp[2], lp[2], lp[2]) * emission();
}

"Position" is the input from the Geometry node. But when I try this, it returns odd results, and some pixels have negative values. How can I get the camera-space position with only rotation, scale, and translation applied, i.e., before the projection matrix is applied?

I would try something like this. The m parameter is the expected maximum depth; I plug the output into a color ramp and from there into the emission shader. P is the global shading position, the same as the Geometry node's Position output.
I'm not really sure this is the proper way to get the camera position, but it seems to work well. There should be a way to get it directly, I think; see the sketch after the code.

shader z_depth(
    float m = 10,
    output float Fac = 0)
{
    /* Transform the camera-space origin into world space;
       the result is the camera location in world coordinates. */
    point Camera = transform("camera", "world", point(0, 0, 0));
    /* Euclidean distance from the camera to the shading point,
       normalized by the expected maximum depth. */
    Fac = distance(P, Camera) / m;
}
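
As for getting it directly: two details of the original shader seem relevant. First, OSL's transform() ignores translation when its argument is a vector (vectors are directions), so declaring Position as vector drops the camera's location from the transform; declaring it as point, or wrapping it with point(), applies the full rotation/scale/translation. Second, Blender cameras look down their local -Z axis, so if Cycles keeps that convention for its "camera" space, z values in front of the camera come out negative, which would explain the negative pixels. A minimal sketch under those assumptions (the shader name and the sign flip are mine, not verified against the Cycles source):

shader camera_z_depth(
    float m = 10,
    output float Fac = 0)
{
    /* P is a point, so the translation part of the world-to-camera
       transform is applied, unlike with a vector argument. */
    point lp = transform("world", "camera", P);
    /* Assuming the camera looks down -Z in camera space, depth in front
       of the camera is negative; negate it to get a positive value.
       If your values come out inverted, drop the negation or use fabs(). */
    Fac = -lp[2] / m;
}

Note the difference between the two shaders: distance() gives the radial distance from the camera, so a flat wall facing the camera reads larger toward the image edges, while the camera-space z stays constant across it. Which one you want depends on your computer vision pipeline.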