I’m working on a task that involves generating a large number of textures in Blender using Python (bpy). While I can automate the creation of shader nodes, I’m struggling to extract the resulting texture images efficiently. Currently, I render the images, but this involves saving them to files, which isn’t necessary for my task and adds significant runtime overhead.
Since my textures are 2D and only depend on the shader nodes (lighting, camera angle, etc., are irrelevant), I’m wondering:
Is there a way to render an image and access the pixel values directly from a script, without saving to a file?
Alternatively, is there another method to directly sample the output of the shader nodes?
And are there any recommended settings for rendering with the lowest possible latency?
The simplest way is to have a plane in your scene and bake it with the bpy.ops.object.bake operator.
You could plug the texture directly into the Material Output node and bake ‘Emit’, which should be rather fast and doesn’t need to execute closure operations (as a normal render does).
Though it’s possible to go even deeper and use the baking API from Blender, it’s far more complicated, as you need to deal with buffers, object IDs, primitive IDs, UVs, derivatives, etc.
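Something along these lines should work (assuming the plane is selected and active, and its material already has an active Image Texture node holding the image that will receive the bake):

import bpy

bpy.context.scene.render.engine = 'CYCLES'  # the bake operator needs Cycles
bpy.ops.object.bake(type='EMIT')            # bakes only the surface color, ignoring lighting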
That sounds very interesting; can you elaborate on how I can do that? From one attempt with baking, it seemed that I have to use the Cycles engine, and that was much slower than a full render.
This is the code I have:
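Roughly, it builds a noise texture, feeds it through a color ramp into the Principled BSDF, connects that to the Material Output, and bakes ‘Emit’ into an image (simplified here; the object and node names are placeholders and the real script sets more parameters):

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

plane = bpy.data.objects["Plane"]
mat = plane.active_material
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links

noise = nodes.new("ShaderNodeTexNoise")
color_ramp = nodes.new("ShaderNodeValToRGB")
principled = nodes["Principled BSDF"]

links.new(noise.outputs["Color"], color_ramp.inputs["Fac"])
links.new(color_ramp.outputs["Color"], principled.inputs["Emission"])  # "Emission Color" in Blender 4.x
links.new(principled.outputs["BSDF"], nodes["Material Output"].inputs["Surface"])

# image that receives the baked pixels; its Image Texture node must be the active node
img = bpy.data.images.new("texture_baked", 1024, 1024)
tex_node = nodes.new("ShaderNodeTexImage")
tex_node.image = img
nodes.active = tex_node

bpy.context.view_layer.objects.active = plane
plane.select_set(True)
bpy.ops.object.bake(type='EMIT')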
In this case, you don’t need many samples for baking. The default of 1024 is way too much. If your texture doesn’t have a lot of small details that need to be oversampled (like very fine noise patterns), you can get away with 1 to 32 samples per pixel. That will render very fast (on my laptop, an 8K texture takes about 2 to 4 seconds to bake).
So you could add this to your script: bpy.context.scene.cycles.samples = 16
Also, you don’t need the ‘PrincipledBsdf’ (especially if you’re baking ‘Emit’).
So you can plug the color_ramp directly into the Material Output.
# replace the last link with:
links.new(color_ramp.outputs["Color"], nodes["Material Output"].inputs["Surface"])
After calling the bake operator, you can read the pixels from the img data.
For example, reading the color of the [512, 256] pixel can be done as:
from mathutils import Color

w = img.size[0]  # size[0] is the width, which is the row stride of the flat pixel array
coord = (512 + 256 * w) * 4  # pixel (x=512, y=256), 4 floats (RGBA) per pixel
pixel_color = Color(img.pixels[coord:coord+3])
# The Color data type doesn't have alpha, but you can also read it with:
rgba = img.pixels[coord:coord+4]
You could also save the image to a file: img.save_render(filepath='C:\\tmp\\texture_baked.png')
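And if you need the whole image rather than individual pixels, copying everything into a numpy array with foreach_get is much faster than slicing img.pixels (a sketch; assumes a Blender version recent enough that pixels supports foreach_get):

import numpy as np

w, h = img.size
buf = np.empty(w * h * 4, dtype=np.float32)
img.pixels.foreach_get(buf)   # bulk copy of the flat RGBA float array
rgba = buf.reshape(h, w, 4)   # rows are stored bottom-to-top
pixel_color = rgba[256, 512]  # the same [512, 256] pixel as above, indexed [y, x]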
The only way I can imagine is to replicate all nodes in OpenGL and use the gpu module to render the texture to an offscreen buffer.
This is far more advanced than you might expect… If you download an older version of Blender (prior to Eevee), you can get the OpenGL code for most nodes (the older ones) from the material itself. But this option is no longer available (though that GLSL code would still work in current Blender versions).
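For reference, the offscreen part would look roughly like this with the current gpu module (a sketch against the Blender 3.x API; the GLSL body is only a placeholder, and reimplementing your node tree in it is the hard part):

import gpu
from gpu_extras.batch import batch_for_shader

WIDTH, HEIGHT = 1024, 1024

vert_src = '''
    in vec2 pos;
    out vec2 uv;
    void main() {
        uv = pos * 0.5 + 0.5;
        gl_Position = vec4(pos, 0.0, 1.0);
    }
'''
frag_src = '''
    in vec2 uv;
    out vec4 fragColor;
    void main() {
        // placeholder: the GLSL equivalent of the node tree goes here
        fragColor = vec4(uv, 0.0, 1.0);
    }
'''

shader = gpu.types.GPUShader(vert_src, frag_src)
# two triangles covering the whole viewport in normalized device coordinates
quad = [(-1, -1), (1, -1), (1, 1), (-1, -1), (1, 1), (-1, 1)]
batch = batch_for_shader(shader, 'TRIS', {"pos": quad})

offscreen = gpu.types.GPUOffScreen(WIDTH, HEIGHT)
with offscreen.bind():
    shader.bind()
    batch.draw(shader)
    fb = gpu.state.active_framebuffer_get()
    buffer = fb.read_color(0, 0, WIDTH, HEIGHT, 4, 0, 'FLOAT')
offscreen.free()

buffer.dimensions = WIDTH * HEIGHT * 4
pixels = list(buffer)  # flat RGBA floats, with no render step and no file involved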
Thanks… that does sound overly complicated. I’m surprised there isn’t a more straightforward way to accomplish what I’m trying to do. Looks like I will have to live with the 100ms for each texture.