Abysmal performance when generating large numbers of primitives

I’ve been using a Structure Synth -> Blender -> Luxrender workflow for generative art experiments, but I would love to drop Structure Synth and generate the geometry directly in Blender, thereby streamlining the workflow.

Many Structure Synth structures are composed of hundreds of thousands of primitives (usually cubes or spheres). To assess Blender’s ability to handle generating that many primitives, I whipped up this script, which generates a 40x40 grid of 1600 cubes:

import bpy, time

begin = time.time()

for i in range(40):
  for j in range(40):
    bpy.ops.mesh.primitive_cube_add(location=(i+i*2, j+j*2, 0))
end = time.time()
print('Execution took', end-begin, 'seconds.')

Using Blender 2.58.1 64-Bit on a 2.93GHz Core 2 Duo MacBook Pro, this script output:

Execution took 170.26286101341248 seconds.

(Edit: Further runs of this script have taken about 25-30 seconds, so the 170sec initial run must have been with a higher number of cubes, or I had something running in the background.)

This was a major letdown, as coming from Structure Synth, I’m used to generating hundreds of thousands of cubes or spheres without issue. The same code translated to Eisenscript (Structure Synth’s dialect) completes instantaneously:

40 * { x 2 } 40 * { y 2 } cube

rule cube { box }

Can the bpy code be optimized at all? Are there ways to get closer to the metal (so to speak) than the bpy.ops.mesh.primitive_xxx operators? Or is Blender just not well-suited to this task?

Well, primitives are slow… Running your code in edit mode will be about 4x faster, since everything then goes into one big mesh.
But try dupliverts in this simple example, or particles; make use of Blender’s strong points. This code does the same as yours in no time…

import bpy, time

begin = time.time()
verts = []

for i in range(40):
  for j in range(40):
    verts.append((i+i*2, j+j*2, 0))

# build one mesh whose vertices mark every cube position
mes = bpy.data.meshes.new('mesh')
mes.from_pydata(verts, [], [])
new = bpy.data.objects.new('dupli', mes)

bpy.context.scene.objects.link(new)  # the object must be linked to the scene
bpy.context.scene.objects.active = new
new.dupli_type = 'VERTS'  # any child parented to 'dupli' is instanced at each vertex

end = time.time()
print('Execution took', end-begin, 'seconds.')
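As a quick sanity check of the grid logic outside Blender (no bpy needed), the vertex list that from_pydata receives can be verified with plain Python; note that i+i*2 is just a spacing of 3 units:

```python
# Build the same 40x40 vertex grid that the dupliverts script feeds
# to from_pydata, and check its size and spacing without opening Blender.
verts = []
for i in range(40):
    for j in range(40):
        verts.append((i + i * 2, j + j * 2, 0))

assert len(verts) == 1600            # one vertex per cube position
assert verts[0] == (0, 0, 0)
assert verts[1] == (0, 3, 0)         # i + i*2 == 3*i, so the grid step is 3 units
assert verts[-1] == (117, 117, 0)    # 39 * 3 == 117
print('grid ok')
```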

import bpy, time

begin = time.time()
cube = bpy.data.meshes['Cube']  # reuse the mesh data of the default Cube

for i in range(40):
  for j in range(40):
    # every object shares the same mesh datablock, so no geometry is copied
    ob = bpy.data.objects.new('foo', cube)
    ob.location = (i+i*2, j+j*2, 0)
    bpy.context.scene.objects.link(ob)
end = time.time()
print('Execution took', end-begin, 'seconds.')

Execution took 9.99325704574585 seconds.
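The reason this approach is so much faster is that all 1600 objects reference the same mesh datablock instead of each getting its own geometry. The same sharing idea, sketched in plain Python (the Mesh/Object classes here are illustrative stand-ins, not the bpy API):

```python
# Illustration of datablock sharing: many lightweight objects, one mesh.
class Mesh:
    def __init__(self, name, verts):
        self.name = name
        self.verts = verts          # the heavy geometry lives here, once

class Object:
    def __init__(self, name, mesh, location):
        self.name = name
        self.mesh = mesh            # just a reference, not a copy
        self.location = location

cube = Mesh('Cube', [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
objects = [Object('foo', cube, (i + i * 2, j + j * 2, 0))
           for i in range(40) for j in range(40)]

assert len(objects) == 1600
assert all(ob.mesh is cube for ob in objects)   # one mesh datablock, 1600 users
print('shared ok')
```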

Both answers are great improvements. I’m encouraged to continue on in my quest. Thanks!

Good. Anyway, there is something wrong with your numbers; see the times here on an old Windows PC:

yours in object mode: 30.3429 seconds
yours in edit mode: 8.360 seconds
Uncle Entity’s one: 0.2967 seconds
dupliverts script: 0.0 seconds

Huh, you’re right. I just ran it again and it took about 25 seconds. Maybe the 170sec run was with a higher number of cubes, or I had something else running that was eating up CPU cycles.

One of the nice things about Structure Synth is the colorization it provides. This was always a problem in 2.49 because you were limited to 16 materials. In 2.5 this limit is raised into the thousands.
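For reference, diffuse_color takes RGB components in the 0.0 to 1.0 range, which is exactly what random.random() produces; a minimal, Blender-free sketch of the colour generation:

```python
import random

# One random RGB triple per cube, as later assigned to mat.diffuse_color.
colors = [(random.random(), random.random(), random.random()) for _ in range(25)]

assert len(colors) == 25
assert all(0.0 <= c < 1.0 for rgb in colors for c in rgb)  # valid diffuse_color range
print('colors ok')
```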

Here are my material additions to Uncle Entity’s code.

import bpy, time, random

def returnCubeMesh(passedName, width, height, depth):
    me = None   #fetchIfMesh(passedName)
    if me == None:
        # flat list of coordinates: x,y,z, x,y,z, ... as foreach_set expects
        vertices = [1.0, 1.0, -1.0,
                    1.0, -1.0, -1.0,
                    -1.0, -1.0, -1.0,
                    -1.0, 1.0, -1.0,
                    1.0, 1.0, 1.0,
                    1.0, -1.0, 1.0,
                    -1.0, -1.0, 1.0,
                    -1.0, 1.0, 1.0]
        # flat list of vertex indices, four per quad face
        faces = [0, 1, 2, 3,
                 4, 7, 6, 5,
                 0, 4, 5, 1,
                 1, 5, 6, 2,
                 2, 6, 7, 3,
                 4, 0, 3, 7]
        # apply size
        for i in range(0, len(vertices), 3):
            vertices[i] *= width
            vertices[i + 1] *= depth
            vertices[i + 2] *= height
        me = bpy.data.meshes.new(passedName)
        me.vertices.add(len(vertices) // 3)
        me.faces.add(len(faces) // 4)
        me.vertices.foreach_set("co", vertices)
        me.faces.foreach_set("vertices_raw", faces)
    return me

begin = time.time()
cube = returnCubeMesh('Cube',1.0,1.0,1.0)   #bpy.data.meshes['Cube']

# Turn off undo for this operation.
restoreState = bpy.context.user_preferences.edit.use_global_undo
bpy.context.user_preferences.edit.use_global_undo = False 

for i in range(5):
  for j in range(5):
    ob = bpy.data.objects.new('cube_no.000', cube)
    ob.location = (i+i*2, j+j*2, 0)
    bpy.context.scene.objects.link(ob)
    mat = bpy.data.materials.new('mat_cube_no.000')
    r = random.random()
    g = random.random()
    b = random.random()
    mat.diffuse_color = (r,g,b)
    if len(ob.material_slots) == 0:
        # slots mirror the mesh's material list; append once so a slot exists
        ob.data.materials.append(mat)
    # link the slot to the object so each cube keeps its own material
    ob.material_slots[0].link = 'OBJECT'
    ob.material_slots[0].material = mat

# Restore Undo.
bpy.context.user_preferences.edit.use_global_undo = restoreState

end = time.time()
print('Execution took', end-begin, 'seconds.')
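The flat lists that foreach_set expects, x,y,z triples for "co" and four vertex indices per quad for "vertices_raw", can be sanity-checked outside Blender:

```python
# The cube data from returnCubeMesh, checked as plain lists: 8 vertices
# (24 floats) and 6 quad faces (24 indices), matching foreach_set's layout.
vertices = [1.0, 1.0, -1.0,   1.0, -1.0, -1.0,
            -1.0, -1.0, -1.0, -1.0, 1.0, -1.0,
            1.0, 1.0, 1.0,    1.0, -1.0, 1.0,
            -1.0, -1.0, 1.0,  -1.0, 1.0, 1.0]
faces = [0, 1, 2, 3,  4, 7, 6, 5,  0, 4, 5, 1,
         1, 5, 6, 2,  2, 6, 7, 3,  4, 0, 3, 7]

assert len(vertices) // 3 == 8          # matches me.vertices.add(len(vertices) // 3)
assert len(faces) // 4 == 6             # matches me.faces.add(len(faces) // 4)
assert all(0 <= idx < 8 for idx in faces)
# every face uses four distinct corners
assert all(len(set(faces[k:k+4])) == 4 for k in range(0, len(faces), 4))
print('cube data ok')
```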

@Atom linking materials to OBJECT is nice; however, something is wrong with your script…