Does the volumetric light write to the z-buffer?

No, it won’t work, because the Z-buffer expects something solid. It’s logical once you think about it: the Z-buffer stores the distance between the camera and the surface visible at each pixel. If you have a wall at 10 m, the pixel value will be 10.
Now add a semi-transparent plane at 5 m in front of the camera, with the wall behind it. What value should the pixel get?
5? → the plane appears fully opaque and we’ve lost the 10 of the wall.
10? → same problem, but now we’ve lost the distance of the plane.
We could average the values :frowning: (5 + 10) / 2 = 7.5, which matches neither the wall nor the plane.
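To make the dead end concrete, here is a tiny Python sketch of the three options above. The values are just the illustrative ones from the example (wall at 10 m, plane at 5 m); none of this is Blender API code.

```python
# Why one Z value per pixel can't represent a semi-transparent
# surface in front of an opaque one (toy numbers from the example).

wall_z = 10.0   # opaque wall behind
plane_z = 5.0   # semi-transparent plane in front

# Option 1: keep the nearest surface -> the plane acts opaque,
# the wall's depth is gone.
z_nearest = min(plane_z, wall_z)

# Option 2: keep the farthest -> the plane's depth is gone.
z_farthest = max(plane_z, wall_z)

# Option 3: average -> a depth where nothing actually exists.
z_average = (plane_z + wall_z) / 2

print(z_nearest, z_farthest, z_average)  # 5.0 10.0 7.5
```

Whichever single value you pick, any depth-based effect downstream (fog, defocus, relighting) will be wrong for at least one of the two surfaces.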

The solution to this is deep images and deep compositing, which store the distance of every transparent sample per pixel, at the cost of more processing and larger image files. The Blender compositor can’t handle deep compositing (you need Nuke or Fusion), and deep images are probably not supported in Blender even for rendering; you’d have to check…
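The core idea of a deep pixel is simple enough to sketch: instead of one Z value, each pixel holds a list of (depth, color, alpha) samples, which the compositor sorts by depth and flattens with the "over" operator. This is only a conceptual illustration in plain Python (grayscale, one pixel); it is not the real deep-EXR format or any Nuke/Fusion API.

```python
def composite_deep_pixel(samples):
    """Flatten one deep pixel front to back with the 'over' operator.

    samples: list of (depth, color, alpha) tuples, in any order.
    Returns the final (color, alpha) seen by the camera.
    """
    color, alpha = 0.0, 0.0  # accumulated result
    for depth, s_color, s_alpha in sorted(samples):  # nearest first
        # Front-to-back 'over': each sample only shows through the
        # transparency (1 - alpha) left by what is in front of it.
        color += (1.0 - alpha) * s_color * s_alpha
        alpha += (1.0 - alpha) * s_alpha
    return color, alpha

# The wall/plane example: a 50%-transparent gray plane at 5 m
# in front of an opaque white wall at 10 m.
pixel = [(10.0, 1.0, 1.0),   # opaque white wall
         (5.0, 0.5, 0.5)]    # semi-transparent gray plane
print(composite_deep_pixel(pixel))  # (0.75, 1.0)
```

Because every sample keeps its own depth, you can later insert new elements between the plane and the wall without re-rendering, which is exactly what flat Z-buffers can’t do.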

The other way to solve it would be to output a volumetric pass: either just the volume rendered with all the meshes in the scene set to black, or the volume isolated with a holdout mask (if that works…).
You can then use a MixRGB node set to Add for the first case, or an Alpha Over node for the second, and that should work.
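For clarity, this is the per-pixel math those two node setups perform, written as plain Python (grayscale values); these are sketches of what MixRGB (Add) and Alpha Over compute, not calls to Blender’s API, and the clamp mirrors the node’s optional Clamp toggle.

```python
def mix_add(beauty, volume):
    """Case 1: volume rendered with all meshes black, added on top."""
    return min(beauty + volume, 1.0)  # clamp, like MixRGB's Clamp option

def alpha_over(background, fg_color, fg_alpha):
    """Case 2: holdout-masked volume laid over the render by its alpha."""
    return background * (1.0 - fg_alpha) + fg_color * fg_alpha

beauty = 0.4                 # the regular render
volume = 0.2                 # additive volume pass (black where meshes are)
print(mix_add(beauty, volume))             # 0.6

fog, fog_alpha = 0.8, 0.25   # holdout-masked volume with its own alpha
print(alpha_over(beauty, fog, fog_alpha))  # 0.5
```

The additive case works because scattered light from a volume is emissive-like: black contributes nothing, so adding the pass only brightens where the volume actually is.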

That’s the idea, but there is probably a better way to do it now; maybe you can output the volumetric pass right from the render layer without anything else to do…