Python slowing down over time

I’m generating mazes in Blender with a Python script, and I’ve noticed it gets REALLY slow. The script works by adding cubes with bpy.ops.mesh.primitive_cube_add() in copious amounts. Python can create about 100 cubes per second, but after several thousand it slows to a crawl, around 5 per second. It would probably get even slower when approaching the hundreds of thousands of cubes I am planning to create.

The script works first by generating the maze information, then it runs through a triple for-loop to place all the cubes in the right places. It is essentially this code:

    for x in range(m.w):
        for y in range(m.d):
            me.update()  # I have tried both with and without this line; it didn't really change much
            for z in range(m.h):
                # some stuff happens here to move the cubes to the right
                # location, but it is pretty simple
                bpy.ops.mesh.primitive_cube_add(location=(x, y, z))

I’m wondering if anyone has any tips on how to make it faster.

Here are some examples of what I’ve done so far. The maze took around 2 hours to add all the cubes.

This has been answered in some threads, but I can’t find any of them. So, short answer:

primitive_cube_add is an operator, and every operator call causes a scene update. The more objects you have already added, the more has to be updated after every addition, so the whole run becomes quadratically slower.

You can avoid that by using the low-level API. If you want to add the same mesh (cube) again and again, simply copy the object. If you don’t want this to be a linked duplicate, also copy the mesh data:

    import bpy

    scene = bpy.context.scene
    ob = bpy.context.object

    for i in range(-100, 100, 3):
        copy = ob.copy()
        # copy.data =  # uncomment for a full copy instead of a linked duplicate
        copy.location.x = i

    scene.update()  # only once

    # Move the original cube out of the array of cubes
    ob.location.z = 5

CodeManX: Why does the operator version slow down, but not the low-level version? In both cases, we add the same # of objects to the scene. So it can’t be that the scene itself takes a long time to process, but rather something to do with the operator itself. Any insight? I’ve run into this before with MeshLayers (adding a large number of them gradually slows down).

Is it because the operator does a lot of “clean-up” or pre-/post-processing, whereas the low-level interactions simply bypass that step? Are we taking any risks by bypassing the operators (in general)?

Using low-level functions is fine, but you should read the docs closely so you don’t miss important function calls, e.g. Mesh.validate() after polygons.add(). In the case of adding new objects, that means calling scene.update() once after all your additions. Operators implicitly run that update EACH TIME, which forces the dependency graph to do all required checks and tagging. It looks like it goes over all objects in the scene, thus:

  1. one object added, 1 object has to be updated
  2. another object added, 2 objects have to be updated
  3. another object, 3 objects

hence: the total number of objects checked is !n, where n = number of objects added

1 + 2 + 3 + … + (n − 1) = n(n − 1) / 2

adding 10 objects:
45 objects need processing

adding 100 objects:
4950 objects need processing

adding 1000 objects:
499500 (!) objects need processing
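The arithmetic above is easy to sanity-check in plain Python (nothing Blender-specific here; updates_needed is just an illustrative name):

    def updates_needed(n):
        # Each new object triggers a scene update that walks every object
        # already present: 0 + 1 + 2 + ... + (n - 1) checks in total.
        return sum(range(n))  # equals n * (n - 1) // 2

    print(updates_needed(10))    # 45
    print(updates_needed(100))   # 4950
    print(updates_needed(1000))  # 499500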

That’s why I avoid operators whenever I can, and - of course - you get better control over the data.

Thanks! This is much faster. Is there any way to do it in edit mode? I would like to create it all as one object, or have the script add them to a group and join them into one object if possible.

Well, you could use Select All and the Join operator; however, since you’re using cubes, you’re going to have overlapping faces. While there is an operator to remove overlapping vertices, I haven’t seen one for overlapping faces.

If you want to remove all the unnecessary geometry, you may need to rethink your algorithm to remove inside faces.
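One way to sketch that rethink in plain Python (no Blender API): treat the maze as a set of filled grid cells and only emit faces that border an empty cell. The function name and the set-of-tuples representation here are just illustrative assumptions, not the poster’s actual code:

    def exterior_faces(cells):
        """Count cube faces that border an empty neighbor cell.

        cells is a set of (x, y, z) integer tuples marking filled cells;
        a face shared by two filled cells is interior and skipped.
        """
        offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                   (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        count = 0
        for (x, y, z) in cells:
            for dx, dy, dz in offsets:
                if (x + dx, y + dy, z + dz) not in cells:
                    count += 1
        return count

    print(exterior_faces({(0, 0, 0)}))             # 6: a lone cube
    print(exterior_faces({(0, 0, 0), (1, 0, 0)}))  # 10: two cubes share a face pair

The same test would let you generate only the exterior faces in the first place, instead of creating full cubes and deleting interior geometry afterwards.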

Removing the overlapping geometry is easy.

  1. Join objects
  2. Remove Doubles
  3. Select → Interior Faces, delete
  4. Select Non-Manifold, press F (very occasionally, removing interior faces will remove an exterior face if there is a 4-way junction)

This works every time, and flawlessly.

My formula was wrong: it’s not !n but sum(i for i in range(n)).

If you don’t care about overlapping and that stuff, you could directly construct a new mesh. I recommend the bmesh module, since it is easier here.

    import bpy
    import bmesh
    from mathutils import Vector, Matrix

    bm =

    x = -5
    while x <= 5:
        z = x * x / 2
        loc = Vector((x, 0, z))
        mat = Matrix.Translation(loc)
        bmesh.ops.create_cube(bm, size=0.5, matrix=mat)
        x += 0.5

    me ="Cube Bmesh")
    bm.to_mesh(me)
    bm.free()

    ob ="Cubes", me)