How to generate Textures without Rendering to File?

I’m working on a task that involves generating a large number of textures in Blender using Python (bpy). While I can automate the creation of shader nodes, I’m struggling to extract the resulting texture images efficiently. Currently, I render the images, but this involves saving them to files, which isn’t necessary for my task and adds significant runtime overhead.

Since my textures are 2D and only depend on the shader nodes (lighting, camera angle, etc., are irrelevant), I’m wondering:

Is there a way to render an image and access the pixel values directly from a script, without saving to a file?

Alternatively, is there another method to directly sample the output of the shader nodes?

And are there any recommended settings for rendering with the lowest latency?

The simplest way is to have a plane in your scene and bake it with the bpy.ops.object.bake operator.
You can plug the texture directly into the Material Output node and bake 'Emit', which should be rather fast and doesn't need to execute closure operations (as a normal render does).
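
Roughly, the setup looks something like this (just a sketch; it assumes a plane with a node-based material already assigned, and my_tex_node is a placeholder for whatever node produces your texture):

nodes = material.node_tree.nodes
links = material.node_tree.links

# bypass the BSDF: feed the texture straight into the Material Output
links.new(my_tex_node.outputs["Color"], nodes["Material Output"].inputs["Surface"])

# the bake is written into the image of the active Image Texture node
img = bpy.data.images.new('baked', 1024, 1024)
img_node = nodes.new('ShaderNodeTexImage')
img_node.image = img
nodes.active = img_node

bpy.context.scene.render.engine = 'CYCLES'  # object.bake is a Cycles feature
bpy.ops.object.bake(type='EMIT')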

Though it’s possible to go even deeper and use Blender’s baking API, it’s far more complicated, as you need to deal with buffers, object IDs, primitive IDs, UVs, derivatives, etc.

That sounds very interesting, can you elaborate on how I can do that? From one attempt with baking, it seemed that I have to use the Cycles engine, and that was much, much slower than a full render.
This is the code I have:

import bpy

# create a plane and give it a fresh node-based material
bpy.ops.mesh.primitive_plane_add(size=2, enter_editmode=False, align='WORLD', location=(0, 0, 0), scale=(1, 1, 1))
material = bpy.data.materials.new(name='my_material')
material.use_nodes = True
bpy.data.objects['Plane'].data.materials.append(material)

# creating some texture
nodes = material.node_tree.nodes
links = material.node_tree.links
principled_bsdf_node = nodes["Principled BSDF"]
noise = nodes.new(type="ShaderNodeTexNoise")
color_ramp = nodes.new(type="ShaderNodeValToRGB")
color_ramp.color_ramp.elements[0].position = 0.5
color_ramp.color_ramp.elements[1].position = 0.52

links.new(noise.outputs["Fac"], color_ramp.inputs["Fac"])
links.new(color_ramp.outputs["Color"], principled_bsdf_node.inputs["Base Color"])

# the bake is written into the image of the active Image Texture node
img = bpy.data.images.new('img', 1024, 1024)
texture_node = nodes.new('ShaderNodeTexImage')
texture_node.image = img
nodes.active = texture_node
bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.bake(type="EMIT")

but again, this takes longer than a render, and I’m not sure where to read the pixel values from.

In this case, you don’t need that many samples for baking. The default of 1024 is way too high. If your texture doesn’t have a lot of small details that need to be oversampled (like very fine noise patterns), you can get away with 1 to 32 samples per pixel. That will render very fast (on my laptop, an 8K texture takes about 2 to 4 seconds to bake).

So you could add this to your script:
bpy.context.scene.cycles.samples = 16

Also, you don’t need the Principled BSDF (especially since you’re baking ‘Emit’).
So you can plug the color_ramp directly into the Material Output.

# replace the last link with:
links.new(color_ramp.outputs["Color"], nodes["Material Output"].inputs["Surface"])

After calling the bake operator, you can read the pixels from the img data.
For example, reading the color of pixel [512, 256] can be done as:

from mathutils import Color

# img.pixels is a flat RGBA float array, stored row by row from the bottom left
w = img.size[0]  # image width in pixels
coord = (512 + 256 * w) * 4
pixel_color = Color(img.pixels[coord:coord+3])
# the Color type doesn't hold alpha, but you can also read it with:
rgba = img.pixels[coord:coord+4]
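
If you need many pixels (or the whole image), indexing img.pixels from Python like this gets slow. Here’s a minimal sketch of bulk access with foreach_get into a NumPy array (NumPy ships with Blender; variable names are just illustrative):

import numpy as np

buf = np.empty(img.size[0] * img.size[1] * 4, dtype=np.float32)
img.pixels.foreach_get(buf)                       # copy all RGBA floats in one call
rgba = buf.reshape(img.size[1], img.size[0], 4)   # indexed as [row (y), column (x), channel]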

You could also save the image to a file:
img.save_render(filepath='C:\\tmp\\texture_baked.png')


Unfortunately, using this with Cycles is still slower than a full render with Eevee, even with samples = 1.

bpy.context.scene.render.engine = 'CYCLES'
bpy.context.scene.cycles.samples = 1
bpy.context.scene.cycles.bake_type = 'EMIT'
bake_image = bpy.data.images.new("BakeResult", width=512, height=512) 
image_node = nodes.new('ShaderNodeTexImage')
image_node.image = bake_image
image_node.select = True
material.node_tree.nodes.active = image_node
bpy.ops.object.bake(type='EMIT')

takes 260 ms, while

bpy.context.scene.render.engine = 'BLENDER_EEVEE_NEXT'
bpy.ops.render.render(write_still=True)

only takes 80 ms - and I assume a fair chunk of it is just saving to disk.
Is there no API to access a rendered image without saving to disk?

Unfortunately, no. The buffer for rendered images is far more complex than a single image, and it’s kept private in the source code.

And baking can only be done with Cycles? Is there any way to make it faster?

The only way I can imagine is to replicate all the nodes in OpenGL and use the gpu module to render the texture to an offscreen buffer (there’s a rough sketch below).

This is far more advanced than you might expect… If you download an older version of Blender (prior to Eevee), you can get the OpenGL code for most nodes (the older ones) from the material itself. But this option is no longer available (though that GLSL code would still work in current Blender versions).
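
Just to illustrate the idea, here’s a minimal sketch of the offscreen route. All the names are my own, and the checkerboard fragment shader is only a stand-in for whatever GLSL you’d write to replicate your node tree. Note that the gpu module needs a real GPU context (so it may not work in blender -b background sessions), and raw GLSL shaders like this target the OpenGL backend (newer versions prefer GPUShaderCreateInfo):

import bpy
import gpu
from gpu_extras.batch import batch_for_shader

SIZE = 512

vert_src = """
in vec2 pos;
out vec2 uv;
void main() {
    uv = pos * 0.5 + 0.5;
    gl_Position = vec4(pos, 0.0, 1.0);
}
"""
frag_src = """
in vec2 uv;
out vec4 fragColor;
void main() {
    // stand-in checkerboard; replace with a GLSL port of your node tree
    float c = mod(floor(uv.x * 8.0) + floor(uv.y * 8.0), 2.0);
    fragColor = vec4(vec3(c), 1.0);
}
"""

shader = gpu.types.GPUShader(vert_src, frag_src)
# full-screen quad in normalized device coordinates
batch = batch_for_shader(shader, 'TRI_STRIP',
                         {"pos": [(-1, -1), (1, -1), (-1, 1), (1, 1)]})

offscreen = gpu.types.GPUOffScreen(SIZE, SIZE)
with offscreen.bind():
    fb = gpu.state.active_framebuffer_get()
    fb.clear(color=(0.0, 0.0, 0.0, 1.0))
    batch.draw(shader)
    # read the result back as flat RGBA floats
    buffer = fb.read_color(0, 0, SIZE, SIZE, 4, 0, 'FLOAT')
offscreen.free()

buffer.dimensions = SIZE * SIZE * 4
result = bpy.data.images.new("gpu_texture", SIZE, SIZE)
result.pixels = [v for v in buffer]

With that, the per-texture cost is basically one draw call plus the read-back, but porting every node you use to GLSL is the hard part.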

Thanks… that does sound overly complicated. I’m surprised there isn’t a more straightforward way to accomplish what I’m trying to do. Looks like I’ll have to live with the 100 ms for each texture.