Render settings for depth, normal, albedo in 2.80+

I have a Python script used for batch rendering. I used it on 2.79 and have now upgraded to 2.81a in order to get some new features such as glTF/GLB loading, the Eevee renderer, etc.
As a first step, I am trying to get the current script to work with the new version, and despite going through the documentation, I can't get it right. Here is the function from my script for setting up the renderer settings. This works on 2.79:

def init_renderer_settings(self):
        # Set up rendering of depth map.
        bpy.context.scene.use_nodes = True
        self.tree = bpy.context.scene.node_tree
        links = self.tree.links
        # Add passes for additionally dumping albedo and normals.
        bpy.context.scene.render.layers["RenderLayer"].use_pass_normal = True
        bpy.context.scene.render.layers["RenderLayer"].use_pass_color = True
        bpy.context.scene.render.image_settings.file_format = 'OPEN_EXR'
        bpy.context.scene.render.image_settings.color_depth = '32'

        # Clear default nodes
        for n in self.tree.nodes:
            self.tree.nodes.remove(n)

        # Create input render layer node.
        render_layers = self.tree.nodes.new('CompositorNodeRLayers')

        self.depth_file_output = self.tree.nodes.new(type="CompositorNodeOutputFile")
        self.depth_file_output.label = 'Depth Output'
        links.new(render_layers.outputs['Depth'], self.depth_file_output.inputs[0])

        self.normal_file_output = self.tree.nodes.new(type="CompositorNodeOutputFile")
        self.normal_file_output.label = 'Normal Output'
        links.new(render_layers.outputs['Normal'], self.normal_file_output.inputs[0])

        self.albedo_file_output = self.tree.nodes.new(type="CompositorNodeOutputFile")
        self.albedo_file_output.label = 'Albedo Output'
        links.new(render_layers.outputs['Color'], self.albedo_file_output.inputs[0])

        self.scene = bpy.context.scene
        self.scene.render.resolution_x = self.config.image_size
        self.scene.render.resolution_y = self.config.image_size
        self.scene.render.resolution_percentage = 100
        self.scene.render.alpha_mode = 'TRANSPARENT'

        cam = self.scene.objects['Camera']
        cam_constraint = cam.constraints.new(type='TRACK_TO')
        cam_constraint.track_axis = 'TRACK_NEGATIVE_Z'
        cam_constraint.up_axis = 'UP_Y'
        b_empty = self.parent_obj_to_camera(cam)
        cam_constraint.target = b_empty
        cam.data.lens = self.config.cam_focal_len  # mm focal length
        cam.data.sensor_width = self.config.cam_sensor_sz
        cam.data.sensor_height = self.config.cam_sensor_sz

        self.scene.render.image_settings.file_format = 'PNG'  # set output format to .png

        for output_node in [self.depth_file_output, self.normal_file_output, self.albedo_file_output]:
            output_node.base_path = ''

        self.depth_file_output.format.file_format = "OPEN_EXR"
        self.normal_file_output.format.file_format = "OPEN_EXR"
        self.albedo_file_output.format.file_format = "PNG"

I am currently stuck with the lines:

 bpy.context.scene.render.layers["RenderLayer"].use_pass_normal = True
 bpy.context.scene.render.layers["RenderLayer"].use_pass_color = True

and can’t find the 2.80+ version of it. Would appreciate if anyone can help with this.

In 2.8 it’s view layers instead of render layer.

bpy.context.scene.view_layers["View Layer"].use_pass_normal = True
# or
bpy.context.view_layer.use_pass_normal = True

To check the names of each property, you can enable ‘Python Tooltips’ under Interface in your preferences. Once enabled, you can hover over properties and the tooltip should show a path you can use to access the property from python.
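If you prefer to enable that from a script rather than the preferences window, something like the following should work in 2.80+ (a small sketch; the property path is from the 2.80 Python API and only runs inside a Blender session):

```python
import bpy

# Enable Python tooltips so hovering over a property in the UI
# shows the RNA path you can use from a script.
bpy.context.preferences.view.show_tooltips_python = True
```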

Hi Ben,
Thanks so much for the tip. The problem is that I am not a graphics expert; I am an engineer and programmer, so I am pretty lost when opening the Blender interface, and couldn't find how to configure the depth, normal and albedo map outputs through the UI.
I also saw that use_pass_color has been removed, so the question is how to get an albedo map. I found online that I can set bpy.context.scene.render.bake.use_pass_color = True, but then I fail on the line links.new(render_layers.outputs['Color'], self.albedo_file_output.inputs[0]) with the error 'bpy_prop_collection[key]: key "Color" not found'. I have no idea which keys are available or whether that's the right way to get a rendered albedo map.
Any help or references to some useful material would be highly appreciated.

OK, so I fuddled around with it some more and made some progress. What I ended up with so far is

bpy.context.view_layer.use_pass_diffuse_color = True

which gives results similar to the albedo on 2.79, although the colors are a bit off. If there are any suggestions as to how I can get the true albedo of the model, that would be helpful. Another thing that works differently is the transparency. On the rendered image I get the right alpha channel. On the albedo (or, more correctly, now it's the diffuse color), I get an alpha channel but it is all 255s, so no real transparency.
I use:

bpy.data.scenes["Scene"].render.engine = 'CYCLES'
self.albedo_file_output = self.tree.nodes.new(type="CompositorNodeOutputFile")
self.albedo_file_output.label = 'Albedo Output'
links.new(render_layers.outputs['DiffCol'], self.albedo_file_output.inputs[0])

bpy.context.scene.render.film_transparent = True
bpy.context.scene.render.image_settings.file_format = 'PNG'  # set output format to .png
bpy.context.scene.render.image_settings.color_mode = 'RGBA'
self.albedo_file_output.format.file_format = "PNG"
self.albedo_file_output.format.color_mode = "RGBA"

It’s necessary to disable color management (set View Transform to ‘Raw’ and Look to ‘None’) when using compositing nodes to split passes.

You’ll need to copy the alpha to the albedo pass from the image pass using a ‘set alpha’ node.

Thanks so much… That’s really helpful and I got both suggestions working. Just for reference if anyone needs this, the code to disable color management is:

bpy.data.scenes["Scene"].view_settings.view_transform = 'Raw'
bpy.data.scenes["Scene"].view_settings.look = 'None'

And to add the Alpha channel to my albedo output:

alpha_node = self.tree.nodes.new(type="CompositorNodeSetAlpha")
links.new(render_layers.outputs['DiffCol'], alpha_node.inputs[0])
links.new(render_layers.outputs['Alpha'], alpha_node.inputs[1])
links.new(alpha_node.outputs[0], self.albedo_file_output.inputs[0])

Just one more question around this: disabling color management also affects my rendered image (as expected). Is there a way to have it affect only the diffuse color channel, or do I have to render twice, once for the combined image and once for the diffuse color? Asking since I am batch rendering and would like to avoid doubling the rendering time.

If you put the image pass through a gamma node, gamma set to 0.455, you should have the correct result. Color management is handled poorly in Blender so I’m not 100% on this.

As a side note, without color management disabled, the file output node will mangle any data passes (Depth, Normal, etc.), so it has to stay off. If this script has been used in the past, then the normal and depth passes being output were likely incorrect. Keep that in mind if comparisons seem odd.

Wow, thanks for this very useful information. The previous run I did with this code was on 2.79 using the Blender renderer and not Cycles, so I’ll make sure to make the comparison. I would have never thought that color management has anything to do with geometrical information such as normals and depth.
I’ll try to do the Gamma node (surprised that only Gamma is the difference) and if it won’t work then I’ll render twice - once for the combined image and one for the rest. It will definitely provide motivation to dig in and make it use the CUDA option.
Thanks again.

It shouldn’t. The issue is specifically with the File Output node not letting you specify if outputs have the color management settings applied, so they’re applied to everything. The proper way to get render layers out of Blender is to not use compositing nodes, but instead output a multilayer exr.

Well, did some tests and without color management the diffuse color is much darker than the albedo I was expecting. I actually get much closer to the result I got using 2.79 with color management on. The rendered image also looks dark, so the Gamma alone doesn’t do it, unless I did something wrong. The code I used:

self.image_node = self.tree.nodes.new(type="CompositorNodeGamma")
links.new(render_layers.outputs['Image'], self.image_node.inputs[0])
self.image_node.inputs[1].default_value = 0.455

self.image_file_output = self.tree.nodes.new(type="CompositorNodeOutputFile")
self.image_file_output.label = 'Image Output'
links.new(self.image_node.outputs[0], self.image_file_output.inputs[0])
self.image_file_output.base_path = ''
self.image_file_output.format.file_format = "PNG"
self.image_file_output.format.color_mode = "RGBA"

Regarding your comment on the multilayer exr - if you can point me to the right direction that would be great. I was using nodes since this is the code I found online and started with. Just so you’ll understand the context, I am a computer vision practitioner and preparing training data for my algorithms, so I can use any format, as I write my own dataloader. If I need I can always split the multilayer exr to a few files.

If you disable the compositing nodes and instead render a multipass exr, all your render layers will be contained within a single file.
bpy.context.scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
instead of
bpy.context.scene.render.image_settings.file_format = 'OPEN_EXR'

For each pass you’ve turned on (using the 2.80+ view-layer properties), e.g.

bpy.context.view_layer.use_pass_normal = True
bpy.context.view_layer.use_pass_diffuse_color = True

the rendered file will have named sets of color channels in linear floating-point values. You can then do whatever you want without the awkwardness of Blender’s compositing nodes.

The issue with images being too dark is that they need a gamma of 2.2 applied, i.e. a linear-to-sRGB conversion.
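Putting the multilayer-EXR suggestion together, a minimal 2.80+ setup might look like the sketch below (this is my reading of the advice, not the original poster's script; it only runs inside a Blender session, and the default view-layer name is "View Layer" in 2.8x):

```python
import bpy

scene = bpy.context.scene

# Skip the compositor entirely and render straight to a multilayer EXR.
scene.use_nodes = False
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.image_settings.color_depth = '32'

# Enable the passes you want; each becomes a named channel set in the EXR.
view_layer = scene.view_layers["View Layer"]
view_layer.use_pass_z = True
view_layer.use_pass_normal = True
view_layer.use_pass_diffuse_color = True

scene.render.filepath = '/tmp/render.exr'
bpy.ops.render.render(write_still=True)
```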

Thanks for this tip. Really helpful and also makes the code much more compact.
Was able to save the multilayer exr and parse it in python using OpenEXR.InputFile().
There are two questions that popped up now:

  1. The depth map is also antialiased, which is not really a problem for me (it’s actually nice), but it causes the very high background values to bleed into the non-zero region of the alpha map, producing very high depth values at the edges of the object. I am an image processing expert, so I can work around this, but the result will have jagged edges, as the values there are practically corrupt. Is there a way to render the depth map with a zero background rather than those very large values, or is there another solution for this?
  2. The combined image and albedo are now in floating point, which makes me unsure how to scale them to a [0, 1] range or [0, 255] or in general to a colorspace. If I have something white in the image, it is easy but I am looking for a robust solution here.
    I already tried a linear to sRGB conversion using:
import numpy as np
def lin2srgb(im):
    srgb = im*0.0
    ind = im <=  0.0031308
    srgb[ind] = im[ind]*12.92
    ind = np.invert(ind)
    srgb[ind] =  1.055*(im[ind]**(1.0/2.4)) - 0.055

    return srgb

And both the combined and the diffuse color come out too dark. Seems like I am missing some conversion here. Color management is NOT turned off. As a matter of fact, I just tested it and color management seems to have no effect on the combined and diffuse color layers in the multilayer exr. Now I’m confused.
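For the depth-edge problem, one hedged workaround (pure NumPy, assuming the depth and alpha passes are already loaded as float arrays of the same shape; not part of the original script) is to zero out depth wherever alpha falls below a threshold, so the huge far-plane values never reach downstream code:

```python
import numpy as np

def mask_depth(depth, alpha, alpha_thresh=0.5, background=0.0):
    """Replace depth values outside the object (low alpha) with a fixed
    background value, removing the huge far-plane values at the edges."""
    depth = depth.copy()
    depth[alpha < alpha_thresh] = background
    return depth

# Tiny example: a 2x2 depth map where one pixel is background.
depth = np.array([[1.5, 2.0], [1.0e10, 1.8]])
alpha = np.array([[1.0, 1.0], [0.0, 0.9]])
masked = mask_depth(depth, alpha)
# masked[1, 0] is now 0.0 instead of 1.0e10
```

The edge pixels with partial alpha still carry blended depth, so this only hides the background; the jagged-edge caveat above still applies.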

Depth map output isn’t anti-aliased though.

Yes, that’s the point. Turning off color management was specifically to prevent the file output node from inappropriately applying color transformations to your normal and depth passes. It should have no effect on a multilayer exr output.

Your conversion function looks correct and should be all you need to get the color passes to look correct on screen.
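As a numerical sanity check on that conversion (my own addition, not from the thread), the inverse transform can be written the same way and the pair verified as a round trip:

```python
import numpy as np

def lin2srgb(im):
    # Piecewise sRGB encoding, equivalent to the function above.
    return np.where(im <= 0.0031308, im * 12.92,
                    1.055 * np.power(im, 1.0 / 2.4) - 0.055)

def srgb2lin(im):
    # Inverse of lin2srgb (sRGB decoding back to linear light).
    return np.where(im <= 0.04045, im / 12.92,
                    np.power((im + 0.055) / 1.055, 2.4))

x = np.linspace(0.0, 1.0, 101)
assert np.allclose(srgb2lin(lin2srgb(x)), x)
```

A mid-gray linear value of 0.5 encodes to roughly 0.735, which is why linear EXR data viewed without the conversion looks too dark.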

OK. I was comparing wrong images, so here are my corrected findings:

  1. I thought the depth map is antialiased since I saw that when increasing the kernel size, there was more bleeding of the high background values into the alpha map. Could be a difference in the alpha map, now that I am thinking of it.
  2. The diffuse color compared to the color pass I rendered on 2.79 with the blender renderer is very comparable, up to 1-2 color levels (on a 255 scale).
  3. The combined is much darker compared to the one generated on 2.79 with the blender renderer. I guess it’s just the renderer that’s different. I can also see the shadows are much more diffuse compared to 2.79 with blender renderer. I’ll just turn the lights higher, as the energies are really different between the versions / renderers.
    Finally, I would like to convey my deepest gratitude. You’ve been amazingly helpful. No way I would have been able to do this myself.

Hi Tom,
i’m also a starting computer vision practitioner and would greatly appreciate it if you could share some of the code you used in the end to save the color and depth and the code used for loading it.
i imagine it would help us as a community further 3D computer vision research 🙂

David Karl

Hi Dudy,
I have a feeling we have more in common than you’d suspect …
In any matter, this rendering code has been written for a commercial company I work for and not as part of an academic project or similar, so I am not at liberty to share it at the moment.
That being said, I agree this is very much needed for computer vision R&D and this is why we intend to release it as open source in the near future for the community to collaborate and enjoy. If you DM me with your email address, I will add you to the distribution list so you’ll be notified when the code is released.
Best regards,

Hi Tom,
first of all good for you for that open source mentality.
i couldn’t find where to DM you so i’ll just post it here: [email protected]

thanks 🙂