Speed up assigning material in material_slots[0].material from python script

I'm modifying a Blender scene with a Python script.
I have many objects of the same type in the scene that I traverse in a loop (in the Python script), changing their material from one to another. I also set the hide_render property to True/False depending on some logic.
My code changes each object like this:

for i in range(0, 1000):
    obj.material_slots[0].material = glass_material
    obj.hide_render = True

I observe that this operation is quite time consuming, so I wonder whether it's possible to speed up this loop.
Is there some Blender mode that I can enable to speed up object update operations?
Thanks

I've tried switching to Edit Mode with
bpy.ops.object.mode_set(mode='EDIT')
before my operations, but no speedup was achieved.

Quick question: why are you using a loop from 0 to 1000? This feels arbitrary, and for loops are super slow anyway. Setting the for loop aside for now, you can do:


for obj in bpy.data.objects:
    if obj.type == "MESH":

Which will give you all mesh objects, much faster than using an arbitrary maximum cap :slight_smile:

Thank you for the reply!
I observe that the slowness of assigning a material into a slot far outweighs the cost of the for loop itself.
The range(0, 1000) is just an example. I have an array of references to objects (planes). I then iterate over the array to replace each object's material. Each object has one material slot with one material assigned.
Iterating over bpy.data.objects would add overhead in my case, since I have other objects in the scene as well.

If I understand you right, you have an array of objects? In that case, you should do this:

for obj in <obj_list>:
    obj.material_slots[0].material = glass_material
    obj.hide_render = True

There's no need for the range() function; it's extremely slow.
Welcome to BA, by the way :smiley:

Thank you for the reply.
The problem is the overall time spent in the loop, not writing the loop itself. I'm using 'for in' in my code, and used range() just to express the number of objects to be modified (actually, I have many more of them: 65,000).
This line

obj.material_slots[0].material = glass_material

called 65,000 times consumes a lot of time, so I wonder: is there a way to speed it up?
Is there some Blender mode that I'm not aware of yet that will not trigger any render/viewport updates until I finish modifying all the objects?

1 Like

Short version: nope.

When you change a material, everything recompiles: shaders, viewport display (each material has a unique viewport display, even if you don't do anything with it), etc. A more correct way of saying this is that when data changes, the depsgraph_update handler is called. There is no way to disable this handler; the dependency graph is hardcoded into the source, and disabling it would be extremely dangerous anyway.
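
To see how often that kicks in while a script runs, here's a minimal sketch that just registers a depsgraph_update_post handler and logs each evaluation (the handler signature varies a bit across Blender versions, hence the *args):

import bpy

def report_update(scene, *args):
    # Fires after every dependency-graph evaluation,
    # e.g. after each material assignment from a script.
    print("depsgraph update")

bpy.app.handlers.depsgraph_update_post.append(report_update)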

There’s an add-on for Blender that lets you change the material of all objects at once, and it’s a slow operation by necessity, but their code might be helpful to you. It’s called Material Utilities

Honestly, there's got to be a better way to do what you're doing. I'm not 100% sure of the intended use case, but unless you very specifically need to change a bunch of unique materials on unique objects to one material, there's for sure a faster way.

2 Likes

Are your objects different?
I don't think you can optimize material assignment, but you may set up your scene so that similar objects are instanced (Alt-D rather than Shift-D).
Put differently, all these objects will then share the same mesh data, so you only need to assign the material on one object and the change will propagate to the other instances.
That may lead to faster execution.

This needs to be tested in practice, and you also need a good way of finding instances to reduce the array.
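
A minimal sketch of that difference in Python, assuming an existing plane is the active object:

import bpy

src = bpy.context.active_object          # an existing plane to duplicate

# "Alt-D"-style linked duplicate: new object, same mesh datablock.
linked = src.copy()                      # linked.data still points to src.data
bpy.context.scene.collection.objects.link(linked)

# "Shift-D"-style full duplicate, for comparison: the mesh data is copied too.
full = src.copy()
full.data = src.data.copy()              # no longer shares src's mesh
bpy.context.scene.collection.objects.link(full)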

Feel free to ask if something isn’t clear !

1 Like

sozap, thank you for the reply.
How can I achieve the Alt-D effect from Python code?
I create my planes with this code:

        vert = [(x0, y0, 0), 
                (x1, y1, 0.0), 
                (x2, y2, 0.0), 
                (x3, y3, 0.0)]
        fac = [(0, 1, 3, 2)]

        pl_data = bpy.data.meshes.new("My_Mesh")
        pl_data.from_pydata(vert, [], fac)
        pl_obj = bpy.data.objects.new("My_Plane", pl_data)

I believe this code creates a separate mesh for each plane, doesn't it?
Each plane has its own set of X,Y coordinates, so my decision was to create a mesh per plane.
Is it possible to have one mesh that all objects share?
Is it possible to have many objects, each in its own unique position, with one mesh shared between all of them?

Yes, you need to create pl_data once, and then loop the creation of the 65,000 pl_obj, always assigning the same pl_data.

Yes, objects can have different location, rotation and scale but still share the same mesh data.

If all your 65,000 objects share the same data, then you'll only have to assign the material on one of them. The only thing you'll have to loop through is the render visibility, which is object dependent …
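
A minimal sketch of that approach, assuming glass_material already exists (as in the original snippet) and that the per-plane placement comes from a hypothetical positions list of (x, y) tuples:

import bpy

# Build the mesh datablock once.
vert = [(0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
fac = [(0, 1, 3, 2)]
pl_data = bpy.data.meshes.new("My_Mesh")
pl_data.from_pydata(vert, [], fac)
pl_data.materials.append(glass_material)     # one assignment covers every instance

# Create many objects that all share the same mesh data, each at its own location.
for i, (x, y) in enumerate(positions):
    pl_obj = bpy.data.objects.new(f"My_Plane_{i}", pl_data)
    pl_obj.location = (x, y, 0.0)
    bpy.context.scene.collection.objects.link(pl_obj)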

2 Likes

Then maybe you can do that at the creation stage to avoid looping twice.

sozap, thank you for this idea!
I have to try it and see how much performance gain can be squeezed out.

1 Like

Yes, sometimes optimizations lead to worse performance. :smiley:
Anyway, I think sharing the data makes more sense here.

Because you have a lot of objects to manage, it may not be enough. Maybe one solution is then to have fewer objects with more planes inside. It really depends on your needs; it's hard to give good advice without the big picture of what you're working on …
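
If the fewer-objects route fits your scene, here is a minimal sketch that packs every quad into a single mesh, again assuming a hypothetical positions list of (x, y) tuples and an existing glass_material:

import bpy

verts = []
faces = []
for x, y in positions:
    base = len(verts)
    verts += [(x, y, 0.0), (x + 1.0, y, 0.0), (x, y + 1.0, 0.0), (x + 1.0, y + 1.0, 0.0)]
    faces.append((base, base + 1, base + 3, base + 2))

mesh = bpy.data.meshes.new("All_Planes")
mesh.from_pydata(verts, [], faces)
mesh.materials.append(glass_material)        # single material slot for the whole mesh
obj = bpy.data.objects.new("All_Planes", mesh)
bpy.context.scene.collection.objects.link(obj)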

Good luck !

I noticed the obj.hide_render part is also very slow. You can leverage foreach_set to achieve incredible speeds.

You can use a regular Python list or the optimized numpy module's arrays to achieve even greater speeds. Of course you'll have to tweak the logic a bit, but try this script:

print(f"{len(bpy.data.objects)} objects")

start = time.time()
for obj in bpy.data.objects:
    obj.hide_render = True
print(f"for : {time.time() - start} sec")

start = time.time()
bpy.data.objects.foreach_set("hide_render", [True] * len(bpy.data.objects))
print(f"foreach_set : {time.time() - start} sec")

4320 objects
for : 1.016763687133789 sec
foreach_set : 0.0 sec
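
If the per-object value depends on your own logic, you can still batch it: read all the flags at once with foreach_get, compute the new values, then write them back with a single foreach_set call. A rough sketch using numpy (the bool dtype is assumed to match the boolean property):

import bpy
import numpy as np

objs = bpy.data.objects
flags = np.empty(len(objs), dtype=bool)

# Read the current flags in bulk, compute new values, write them back in bulk.
objs.foreach_get("hide_render", flags)
np.logical_not(flags, out=flags)             # placeholder logic: invert every flag
objs.foreach_set("hide_render", flags)

# Assumption: foreach_set can skip the usual update notifications,
# so force a refresh if the change doesn't show up right away.
bpy.context.view_layer.update()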

Of course you can't do it with material slots, since they're essentially a UI-focused feature, but it should help your script's speed overall. Also watch out for print statements in your for loops, they take a LOT of time to process.

3 Likes

A little experiment (YMMV):

import bpy
import time

# Variant 1: a new mesh datablock for every object.
start = time.time()
for i in range(100):
    vert = [(0, 1, 0), 
            (0, 0, 0),
            (1, 1, 0), 
            (1, 0, 0)]
    fac = [(0, 1, 3, 2)]

    pl_data = bpy.data.meshes.new("My_Mesh")
    pl_data.from_pydata(vert, [], fac)
    pl_obj = bpy.data.objects.new(f"My_Plane_{i}", pl_data)
    bpy.context.scene.collection.objects.link(pl_obj)
print(f"single data : {time.time() - start} sec")

# Variant 2: one mesh datablock shared by all objects.
start = time.time()
vert = [(0, 1, 0), 
        (0, 0, 0),
        (1, 1, 0), 
        (1, 0, 0)]
fac = [(0, 1, 3, 2)]

pl_data = bpy.data.meshes.new("My_Mesh")
pl_data.from_pydata(vert, [], fac)
for i in range(100):
    pl_obj = bpy.data.objects.new(f"My_Plane_{i}", pl_data)
    bpy.context.scene.collection.objects.link(pl_obj)
print(f"shared data : {time.time() - start} sec")

# Variant 3: one big mesh, then split with bpy.ops.mesh.separate().
start = time.time()
vert = [(0, 1, 0), 
        (0, 0, 0),
        (1, 1, 0), 
        (1, 0, 0)]
fac = [(0, 1, 3, 2) for i in range(100)]

pl_data = bpy.data.meshes.new("My_Mesh")
pl_data.from_pydata(vert, [], fac)
pl_obj = bpy.data.objects.new(f"My_Plane", pl_data)
bpy.context.scene.collection.objects.link(pl_obj)
bpy.ops.object.select_all(action='DESELECT')
bpy.context.view_layer.objects.active = pl_obj
bpy.ops.mesh.separate(type='LOOSE')
print(f"separated data : {time.time() - start} sec")

single data : 0.05600118637084961 sec
shared data : 0.01999974250793457 sec
separated data : 0.12400054931640625 sec

2 Likes

Referring to depsgraph_update gave me a good bunch of reading, thank you.

1 Like