Creating a Z-based B&W depth map

I’m trying to produce a B&W depth map of a 3D object, the sort of image one would use for an autostereogram or for controlling the depth of an automated carving tool. In other words, I want the place(s) on the object with the maximum Z coordinate to map to pure white and those with the minimum Z to map to pure black, with linear variation in between.

I’m really close, but not quite there, and I wonder if some kind person here can help me out. Here is what I am doing:

Set to Orthographic projection, Cycles rendering
Place the camera directly above the object
For the material, set:
Surface - Emission
Color - RGB
Strength - Separate XYZ, choose Z
Vector - Texture Coordinate > Generated (I have no idea why this works, but it is the only one that does)

For Texture, choose Blend.
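For reference, here is roughly that setup expressed as a Python sketch (node names are the ones recent Blender versions use in the bpy API; the material name and the use of the active object are just placeholders):

```python
import bpy

mat = bpy.data.materials.new("DepthMat")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

coords = nodes.new("ShaderNodeTexCoord")        # Input > Texture Coordinate
sep    = nodes.new("ShaderNodeSeparateXYZ")     # Converter > Separate XYZ
emit   = nodes.new("ShaderNodeEmission")        # color left at default white
out    = nodes.new("ShaderNodeOutputMaterial")

# Generated coordinates run 0..1 over the object's bounding box,
# so the Z component can drive the emission strength directly.
links.new(coords.outputs["Generated"], sep.inputs["Vector"])
links.new(sep.outputs["Z"], emit.inputs["Strength"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])

bpy.context.active_object.data.materials.append(mat)
```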

This appears to provide ALMOST what I want. The only problem is that the contrast is insufficient.
Instead of ranging from black to white, it ranges from dark gray to light gray.

Does anyone have a suggestion for how to do better? Thank you!

Tim

I’d rather use the Input > Camera Data node and its View Distance output: pipe this into an Emission shader and you get the actual distance. To scale it between black and white, use Math nodes.

You can’t, afaik, probe the minimum and maximum Z coordinates inside a shader; you’d have to use some kind of script to calculate them beforehand. If your object is static, the easiest way is to just scale the depth image by eye.
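Such a script could be as simple as this rough sketch, which assumes the object of interest is the active one and that its bounding box is a good enough proxy for the mesh:

```python
import bpy
from mathutils import Vector

obj = bpy.context.active_object

# Transform the eight bounding-box corners into world space.
# (The @ operator is Blender 2.8+; older versions use *.)
corners = [obj.matrix_world @ Vector(c) for c in obj.bound_box]
z_min = min(v.z for v in corners)
z_max = max(v.z for v in corners)

# Plug these into the Subtract/Divide nodes that do the scaling.
print("Z range:", z_min, z_max)
```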

Kesonmis - Thank you for that. It prompted me to explore the node method, which I had never tried before because I thought it was too intimidating. But after a few hours of reading the manual and playing around, I think I got it working. I do have two quick questions for you or some other kind person here.

First, I could not get your exact method to work, probably because in my ignorance I overlooked some subtle thing. If I click the ‘Shader’ button I can get your Input > Camera Data node and the Emission shader, but there is no output node that I can use for rendering. If instead I click the ‘Compositing’ button I get the compositing output for rendering, and the viewer to see the work in progress, but the Camera Data and Emission nodes vanish and do not appear in the ‘Add’ menu! So I can’t get all of what I need, just one or the other.

But after a lot of experimenting, I think I found the answer. It appears that the ‘Depth’ output of the ‘Render Layers’ node is the Z value. If this is really the case (and it appears to be), how simple! I just send that to a ‘Vector > Map Value’ node and then an ‘Invert’ node before sending it on to the Composite output. I have to play with the Offset and Size parameters in the Map Value node, but if I’m careful I can get excellent contrast.
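In case it is useful to anyone, here is a rough script version of the chain I ended up with (Blender 2.8+ names; the Offset and Size numbers are only the kind of values I had to tune by hand for my scene):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
bpy.context.view_layer.use_pass_z = True           # enable the Depth (Z) pass

nodes, links = scene.node_tree.nodes, scene.node_tree.links
nodes.clear()

rlayers = nodes.new("CompositorNodeRLayers")       # Render Layers
map_val = nodes.new("CompositorNodeMapValue")      # Vector > Map Value
map_val.offset[0] = -9.0                           # shift nearest depth toward 0 (tune by hand)
map_val.size[0]   = 0.5                            # scale the depth range to 0..1 (tune by hand)
invert    = nodes.new("CompositorNodeInvert")      # so near = white, far = black
composite = nodes.new("CompositorNodeComposite")

links.new(rlayers.outputs["Depth"], map_val.inputs["Value"])
links.new(map_val.outputs["Value"], invert.inputs["Color"])
links.new(invert.outputs["Color"], composite.inputs["Image"])
```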

So thank you for pointing me in the right direction, and if you or anyone else sees a fatal flaw in my method, I would appreciate a heads-up. Thanks!

Tim

It sounds like you are using the Material panel only. Open the material node graph; it is much easier to see what you are doing there.

The other thing is that from your first post I got the impression that you needed a material that produces a depth map, but now you write about the compositor. You can use the Z or Mist pass in the compositor too, no problem, but it is a slightly different approach. In that case, just scale the values in the compositor using any suitable Math node.

I’ll put a small example here of how to construct a material that produces a black-to-white image between two set distance values. The distances are just Value nodes with fancy names; use Input > Value for them, or just punch the numbers straight into the Subtract and Divide nodes:
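As a Python sketch of the same node graph (the 9.0 and 11.0 distances are placeholders; measure or estimate them for your own scene):

```python
import bpy

mat = bpy.data.materials.new("DistanceRangeMat")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

cam = nodes.new("ShaderNodeCameraData")                   # Input > Camera Data
min_dist = nodes.new("ShaderNodeValue")                   # "MinDist" Value node
min_dist.label = "MinDist"
min_dist.outputs[0].default_value = 9.0                   # placeholder distance
max_dist = nodes.new("ShaderNodeValue")                   # "MaxDist" Value node
max_dist.label = "MaxDist"
max_dist.outputs[0].default_value = 11.0                  # placeholder distance

sub = nodes.new("ShaderNodeMath"); sub.operation = 'SUBTRACT'   # distance - MinDist
rng = nodes.new("ShaderNodeMath"); rng.operation = 'SUBTRACT'   # MaxDist - MinDist
div = nodes.new("ShaderNodeMath"); div.operation = 'DIVIDE'     # normalize to 0..1
div.use_clamp = True                                            # clamp to black/white

emit = nodes.new("ShaderNodeEmission")
out  = nodes.new("ShaderNodeOutputMaterial")

links.new(cam.outputs["View Distance"], sub.inputs[0])
links.new(min_dist.outputs[0], sub.inputs[1])
links.new(max_dist.outputs[0], rng.inputs[0])
links.new(min_dist.outputs[0], rng.inputs[1])
links.new(sub.outputs[0], div.inputs[0])
links.new(rng.outputs[0], div.inputs[1])
links.new(div.outputs[0], emit.inputs["Strength"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])
```

This maps the nearest distance to black and the farthest to white; add one more Subtract node computing 1 - x at the end if you want it the other way around.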


Kesonmis - Thank you for that, especially the assurance that my alternative method is legitimate.

My problem is ignorance about much of Blender; I’m very new to it. I did something like the example you showed, except simpler: I connected Camera Data / View Distance to simple Math offsetting/scaling, then to Emission, then to the material’s Surface. But I must be missing a step, because every time I tried to render, I got an error saying there was no output node so it could not render, and I could not find an output node to add.

That’s how I ended up in the compositor; it has an output node that can render. From there, it was just a matter of searching the manual and experimenting until I discovered that the Depth output of Render Layers seemed to be just what I needed, apart from the offsetting/scaling and inverting.

I guess the bottom line is that in order to use your method, I need to find a way to render the material that is so nicely created. That’s where my ignorance comes into play.
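For anyone who hits the same wall: I now suspect the piece I was missing is the Material Output node (Add > Output > Material Output in the node editor). A minimal sketch of what I believe completes the chain, assuming the Emission node is already wired up:

```python
import bpy

mat = bpy.context.active_object.active_material
nodes, links = mat.node_tree.nodes, mat.node_tree.links

# Cycles refuses to render a material with no output node, so the
# emission result has to terminate in a Material Output node.
out  = nodes.new("ShaderNodeOutputMaterial")   # Add > Output > Material Output
emit = next(n for n in nodes if n.type == 'EMISSION')
links.new(emit.outputs["Emission"], out.inputs["Surface"])
```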