How to set up the Map Value node arguments and my camera settings to render a depth image as PNG

I need to render a depth image of a 3D scene, using a single camera. I don't care about the color image or the lighting; I only care about the depth image. The output format should be PNG.

My question is how to set the Map Value node's Size and Offset, and the camera's clip_start and clip_end, with respect to the camera and scene locations, so that I get a correct depth image.

The center of the scene is located at (0,0,0).

The camera is located at (20,0,6) and points at (0,0,0).

The scene's X and Y extents are both [-10,10], and its height (Z) is in the range [0,10].

If you could help me set the Size, Offset, clip_start, and clip_end, I would really appreciate it. It has already taken me a day of searching Google for an answer. Thank you.
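
For reference, here is how I think the distances work out from the geometry above (a quick sanity check on my part; corrections welcome):

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

cam = (20.0, 0.0, 6.0)
# Scene bounding box: [-10, 10] x [-10, 10] x [0, 10]
nearest = (10.0, 0.0, 6.0)     # camera position clamped to the box
farthest = (-10.0, 10.0, 0.0)  # the box corner farthest from the camera

print(dist(cam, nearest))   # 10.0  -> clip_start can be anything <= 10
print(dist(cam, farthest))  # ~32.2 -> clip_end should be >= ~32.2

# As I understand it, Map Value computes (depth + Offset) * Size, so:
near, far = 9.0, 33.0       # clip range with a little slack
offset = -near              # Offset = -9.0
size = 1.0 / (far - near)   # Size = 1/24
```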

Best,
baer

I did a much simpler version of this with a shadeless (emission) material with a gradient on it. I used an empty, and had it control the position on the gradient by using Object as the option in the Texture Coordinate node.
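
In case it helps, here is roughly what that looks like built in Python. A sketch only, assuming Cycles and the 2.8+ API, with 'Empty' standing in for whatever your empty is called:

```python
import bpy

mat = bpy.data.materials.new("GradientEmit")
mat.use_nodes = True
nt = mat.node_tree
nt.nodes.clear()

coord = nt.nodes.new('ShaderNodeTexCoord')
coord.object = bpy.data.objects['Empty']  # the empty drives the gradient position

grad = nt.nodes.new('ShaderNodeTexGradient')
emit = nt.nodes.new('ShaderNodeEmission')
out = nt.nodes.new('ShaderNodeOutputMaterial')

# Object-space coordinates of the empty feed the gradient; the gradient's
# factor drives a shadeless emission colour.
nt.links.new(coord.outputs['Object'], grad.inputs['Vector'])
nt.links.new(grad.outputs['Fac'], emit.inputs['Color'])
nt.links.new(emit.outputs['Emission'], out.inputs['Surface'])
```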

What I wanted to do was generate a heightmap for a Displace modifier, and I wanted it to both raise and lower the surface, but I wanted to be in control of which shade of grey neither raised nor lowered it.

If there are multiple objects in your scene, there's a Material override slot in the Render Layers settings, which means you can render everything with the same material.

The next stage is Colour Management: you definitely don't want Filmic - you don't even want sRGB. You want linear (Scene > Colour Management > Display Device = None). And even with linear, for some reason 808080 is mid grey rather than 888888 - it reads as 0.5, 0.5, 0.5 in the other option in the colour node.
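
For what it's worth, the hex field is interpreted as sRGB-encoded, which is why round hex numbers don't line up with linear values. A quick check with the standard sRGB decode (nothing Blender-specific):

```python
def srgb_to_linear(s):
    """Standard sRGB decode (display value to linear), s in [0, 1]."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0x80 / 255))  # hex 80 -> ~0.216 linear
print(srgb_to_linear(0xBC / 255))  # hex BC -> ~0.503 linear, i.e. mid grey
```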

Or you could look into the Depth output in the Compositor. I think this outputs Blender units (metres by default) away from the camera.

Just enable the Depth pass (Z-pass) in the render passes.
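
For what it's worth, the whole setup can be wired up in Python. This is a rough sketch assuming the 2.8+ API (the pass was named "Z" and the view transform "Default" in older versions), with clip values derived from the geometry stated in the question:

```python
import bpy

scene = bpy.context.scene
cam = scene.camera

# Clip range: the camera at (20, 0, 6) is 10 units from the nearest point of
# the [-10, 10] x [-10, 10] x [0, 10] box and ~32.2 units from the farthest
# corner, so leave a little slack on both ends.
near, far = 9.0, 33.0
cam.data.clip_start = near
cam.data.clip_end = far

# Enable the Depth (Z) pass on the active view layer.
scene.view_layers[0].use_pass_z = True

# Build the compositor graph: Render Layers -> Map Value -> Composite.
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new('CompositorNodeRLayers')
mv = tree.nodes.new('CompositorNodeMapValue')
comp = tree.nodes.new('CompositorNodeComposite')

# Map Value computes (depth + Offset) * Size, then clamps to [Min, Max],
# so Offset = -near and Size = 1 / (far - near) normalizes depth to [0, 1].
mv.offset[0] = -near
mv.size[0] = 1.0 / (far - near)
mv.use_min, mv.use_max = True, True
mv.min[0], mv.max[0] = 0.0, 1.0

tree.links.new(rl.outputs['Depth'], mv.inputs['Value'])
tree.links.new(mv.outputs['Value'], comp.inputs['Image'])

# PNG output; 16-bit greyscale keeps far more depth precision than 8-bit.
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'BW'
scene.render.image_settings.color_depth = '16'

# Per the colour management discussion below, avoid Filmic here so the
# depth values are not tone-mapped on the way out.
scene.view_settings.view_transform = 'Standard'

scene.render.filepath = '//depth.png'
bpy.ops.render.render(write_still=True)
```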

Set it to Non-Color. "Linear" means linear colour. "None" is bogus too; you always want colour management, even with data.

And stop using garbage hex codes. They are utterly meaningless and obfuscate the intention of the values. Use float ratios for clarity, set via the RGB inputs on the colour nodes and so on.
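
For example, from Python (a trivial sketch; 'Material' is a placeholder name):

```python
import bpy

# Set the colour as float ratios rather than via the hex field.
nt = bpy.data.materials['Material'].node_tree
rgb = nt.nodes.new('ShaderNodeRGB')
rgb.outputs['Color'].default_value = (0.5, 0.5, 0.5, 1.0)  # exact 0.5 ratio
```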

Can you explain this? If I do a math operation (or use a non-colour-data image texture), I don't want any kind of colour management to mess with that data. What I might do is turn colour management off for previewing and manipulating that data.

Buffers flagged as non-colour data are specifically kept out of OpenColorIO's colour transforms. This is a nuanced thing that not many people think about, but it makes perfect sense once you realize there are two general classes of information. Blender has a seriously deep design flaw here that leads to other bad flaws.

This very likely stems from a confusion over what "linear" means. "Linear" can be a facet both of data, such as depth, normals, or displacement, and of colour. Nonlinear can be a facet of both data and colour as well; it's perfectly feasible for there to be a log-based data encoding, just as there may be a log-based colour encoding.

Interestingly, it is also very possible that work requires transforms on data as well! Imagine a small output-referred encoding with a 0.0:1.0 range, stored in a TIFF, where the actual data requires -10:+10. A depth encoding from zero to infinity, which is very difficult to fit into an output-referred 0.0:1.0 encoding, could be handled more gracefully via a somewhat trivial data transform. OpenColorIO can indeed handle such a transform, and keep it "outside" the colour transformation chains for Displays / Views, for example.
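
As a toy example of such a data transform (my own arbitrary choice of function, not anything OpenColorIO mandates), depth on [0, inf) can be squashed invertibly into [0, 1):

```python
def encode_depth(d, k=10.0):
    """Invertibly squash depth in [0, inf) into [0, 1); k sets the mid point."""
    return d / (d + k)

def decode_depth(e, k=10.0):
    """Exact inverse of encode_depth for e in [0, 1)."""
    return k * e / (1.0 - e)
```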

The seriously deep design flaw is mostly hidden in current Blender, but having understood the above, it is quite clear. When OpenColorIO was first integrated into Blender, based on Xat's implementation, someone made the poor decision to encode the file according to the currently selected view. This is an absolutely awful idea, and in fact contributes to some of the insanity you see forwarded. Xat's original intention was to have a separate set of transforms for file encoding, not based on the currently selected Display / View combination, keeping the viewing and the output / file encoding quite distinct. TL;DR: Blender screwed up and wound a degree of the file encoding up into the viewing encoding.

With all of that said, file encoding in Blender should honour the "isdata" OpenColorIO flag and encode the data according to the transform listed, which in the case of the "Non-Color Data" transform is a linear-to-linear ratio no-operation. The "isdata" flag in OpenColorIO exists precisely for this sort of transformation handling. If Blender mishandles this, it's a bug and needs to be reported.
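
If you want to see how a given config flags its spaces, the OpenColorIO Python bindings expose this directly. A sketch, assuming PyOpenColorIO is installed and the OCIO environment variable points at a config:

```python
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()
for cs in config.getColorSpaces():
    # Spaces flagged isdata are kept out of the Display / View transforms.
    print(cs.getName(), cs.isData())
```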

There is another side of this worth noting, which is the visualization of the data. A good example is alpha, where we may want to see the units of occlusion as a percentage. How do we display this? Most folks likely believe they are "looking at" the alpha as a visualization of the linear percentage ratio. What should 50% occlusion look like? Should it output 50% "brightness" in the visualization? If we view it as the "linear ratio", an alpha code value of 0.5, for example, would be rolled through the display hardware's particular transfer function. In the case of an sRGB display, this means that your "linear ratio visualization" is actually the 0.5 data rolled through the hardware's output_transfer_function(0.5), which sort of happens to look correct! From this simple example, however, we can see that there are several facets to consider:

  1. Data and colour encodings. Both can be nonlinear, both can be linear, both can cover arbitrary ranges. Software needs to permit the pixel pusher enough granularity to tackle this complex and nuanced situation.
  2. Visualization of the internal colour and data models needs to be considered. How do we want to visualize normals? Depth? A false colour for depth is a useful visualization (see the sketch after this list), as are alternative visualizations for other data sets.
  3. Consider the output device. Always. As Doug cites, having your maximum occlusion alpha value of 1.0 blaring out 4000 nits of peak output from an HDR-capable monitor is far from ideal, and certainly not the intention. And this doesn't even begin to scratch the surface of actual colour representations.
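
On the false colour point in item 2, a minimal toy ramp for depth might look like this (my own arbitrary mapping, purely for visualization):

```python
import numpy as np

def depth_false_color(depth, near, far):
    """Map depth to a blue (near) -> red (far) ramp, for visualization only."""
    t = np.clip((depth - near) / (far - near), 0.0, 1.0)
    return np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1)
```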

More context can be read via this link, with a bit of feedback from the current team working on the V2 branch of OpenColorIO.

Troy, if you were to put together a tutorial series on colour spaces I’m sure it would be absolutely fantastic and really informative. What I do is basically poking around in the dark.

I’ve sort of started down that path with The Hitchhiker’s Guide to Digital Colour.

Around here, it’s important that the folks using the software understand what is going on under the hood enough to help out other folks. That is, the software relies on you folks having a darn firm grasp of what is happening, and understanding the contexts of the needs.

If we can’t get to that, we are hooped. Poking around in the dark is awesome. Pushing it further from “What is happening?” to “What should be happening and why?” is where we need to press onwards to.

Thank you - I will read through it, and hopefully get better at these things.