How to convert a normal pass to displacement with nodes?

What I’m trying to achieve is a displacement pass that isn’t too intense for objects in the foreground. A normal pass does this because it uses tangent space, but it’s useless unless you work with something that can interpret normals, and all I can use is grayscale maps.

Is there a way to convert a normal pass to a shallow displacement using nodes? I’m sure you can do something by separating RGB, then math nodes, then recombining, but I can’t seem to make it work in a coherent way.

Or is there a special node or pass that I am overlooking?

You mean like linking the strength of a displacement node to the distance of the camera?

Show what you have, share some screenshots so that the issues you are describing are tangible.

So, this is the Z-pass; you all know what it looks like:

Next, your regular garden variety Normal pass:

And the node setup with my dismal attempt at trying to solve this:

Edit: The goal is to make each object its own Z-pass object. Like I said, the normal pass does this, but it’s so… colorful…

Alright, due to the lack of responses I assume it’s either impossible or no one will ever use this. But if you are one of the very few weirdos who might need a solution for this problem, I’ve found that a tweaked Fresnel pass can do the trick. It’s not perfect: it won’t work with certain geometries, such as cubes, which is what I needed it for, but it will work for round-ish or spherical topologies. I tried every single node setup I could on this and nothing worked; as far as I know, you cannot convert normals back to height. It’s a one-way street.
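To see why the Fresnel trick works on round shapes but not cubes, here is a small sketch of the math behind a Fresnel-style facing term. The function name, the Schlick approximation, and the default IOR of 1.45 are my own illustration choices (Blender’s Fresnel node uses the exact Fresnel equations, not Schlick), but the behaviour is the same in spirit:

```python
import math

def schlick_fresnel(cos_theta, ior=1.45):
    """Schlick's approximation of the Fresnel term, driven by the angle
    between the surface normal and the view direction (cos_theta)."""
    r0 = ((1.0 - ior) / (1.0 + ior)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# On a sphere seen head-on, cos_theta = dot(N, V) falls from 1 at the
# center to 0 at the silhouette, so the Fresnel value rises smoothly
# toward the rim -- a usable grayscale gradient. On a flat cube face the
# normal is constant, so the value is flat, which is why the trick fails
# there.
for cos_theta in (1.0, 0.7, 0.3, 0.0):
    print(round(schlick_fresnel(cos_theta), 4))
```

In other words, the gradient you get is a function of surface curvature relative to the camera, not of actual depth, which is why it only approximates a shallow displacement on rounded topology.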

If you find a solution, I would like to hear it. I solved the problem with fresnel, but I’m still curious.

Still not 100% sure what you are trying to achieve, but a per-object Z-pass can be done with the new AOV output in Blender 2.82. Well, you could do it before, but now it is more convenient.

In the following setup the distance from the camera to the object is measured. Based on this distance, a brightness is assigned to the whole object.
After that, the Z depth is multiplied onto the object.
All of this is routed into an AOV Output node, so you get a separate pass later on and can render your object with your regular shader.
In the example the emission shader uses the same input as the AOV output for visualisation reasons, but you could also make the emission shader something else.
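The per-object part of that setup boils down to one piece of arithmetic: camera-to-origin distance, remapped into a 0–1 brightness. Here is a minimal pure-Python sketch of that step; the function name and the near/far values of 1.0 and 20.0 are hypothetical, stand-ins for whatever range you dial into the nodes:

```python
import math

def object_depth_value(camera_pos, object_origin, near=1.0, far=20.0):
    """Mimic the node setup: measure the camera-to-object-origin distance
    and map it to a single flat brightness for the whole object
    (0 = near, 1 = far)."""
    dist = math.dist(camera_pos, object_origin)
    t = (dist - near) / (far - near)
    return min(max(t, 0.0), 1.0)  # clamp, as the node setup would

# Three objects at increasing distance from a camera at the origin:
camera = (0.0, 0.0, 0.0)
for origin in ((0, 5, 0), (0, 10, 0), (0, 15, 0)):
    brightness = object_depth_value(camera, origin)
    # In the shader this flat value is then multiplied by the per-pixel
    # Z depth and routed into the AOV Output node.
    print(brightness)
```

Because the distance is taken to the object origin, every pixel of one object shares the same base brightness, which is what makes each object read as its own depth level.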

Anyway, in the viewport you can see three different cubes with three different depth passes. As you can see by reference to the orange dots, the objects with closer origins have darker depth gradients.


Here is one with Map Range nodes instead of math nodes, which might be a bit more convenient.
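For anyone new to it, the Map Range node (in its default Linear mode) collapses the usual subtract/divide/multiply/add chain of Math nodes into one node. A quick sketch of the equivalent math, assuming the node's Clamp option is enabled:

```python
def map_range(value, from_min, from_max, to_min, to_max, clamp=True):
    """What the Map Range node does in Linear mode: remap value from the
    range [from_min, from_max] to the range [to_min, to_max]."""
    t = (value - from_min) / (from_max - from_min)
    if clamp:
        t = min(max(t, 0.0), 1.0)  # the node's Clamp checkbox
    return to_min + t * (to_max - to_min)

# e.g. a camera distance of 12 remapped from [5, 20] into [0, 1]:
print(map_range(12.0, 5.0, 20.0, 0.0, 1.0))
```

So instead of wiring four Math nodes for the remap, one Map Range node with From Min/Max and To Min/Max sockets does the whole job.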

Oh wow! I had no idea this node even existed!