I’m currently trying to write an import script that takes a map of a voxel landscape and creates a 3D model out of it.
As a first, rudimentary setup to see if it works, I created a cube for every wall and a plane for every floor tile. The relevant code looks like this:
    for t in tiles:
        if t.type == Tile.WALL or t.type == Tile.FLOOR:
            locx = bigX + t.x
            locy = bigY + t.y
            if t.type == Tile.WALL:
                bpy.ops.mesh.primitive_cube_add(location=(locx * 2, locy * 2, bigZ * 2), enter_editmode=True)
            else:
                bpy.ops.mesh.primitive_plane_add(location=(locx * 2, locy * 2, bigZ * 2 - 1), enter_editmode=True)
The map I tested this on had a total of ~20,000 wall and floor tiles, which took at least 15 minutes to import. Of course there is much room for improvement (e.g. unnecessary faces and vertices when two walls are next to each other), but even then it seems to me like it would take quite long, especially since these maps can easily be much larger. Also note that these maps do not only contain walls and floors; I just used these for testing.
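To illustrate what I mean by unnecessary faces: a cube face shared between two adjacent wall tiles is never visible, so it could be skipped with a neighbour check. A rough sketch of that idea (simplified; `wall_at` is a hypothetical set of wall coordinates, not something from my actual script):

```python
# Offsets to the four horizontal neighbours of a tile, and the name of
# the cube face that would point towards each of them.
NEIGHBOURS = {
    (1, 0): "east",
    (-1, 0): "west",
    (0, 1): "north",
    (0, -1): "south",
}

def visible_faces(x, y, wall_at):
    """Return the side faces of the wall cube at (x, y) that border open space.

    A face towards another wall tile is hidden and can be skipped.
    `wall_at` is a set of (x, y) coordinates occupied by wall tiles.
    """
    return [name for (dx, dy), name in NEIGHBOURS.items()
            if (x + dx, y + dy) not in wall_at]
```

For two walls side by side, each cube would then only get three side faces instead of four.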
Now, my question is: what would be the fastest way to automatically create these kinds of meshes?
Does the enter_editmode part maybe cause significant overhead? Are the primitive_xxx_add functions for some reason inefficient? Would it help if I split the mesh into multiple objects? Should I first calculate all the vertex coordinates and only then create the mesh with a single function call? Or is it just a technical limitation of Blender that such large meshes can’t be created quickly?
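To clarify the last idea: what I have in mind is collecting all vertices and faces in plain Python lists first and then handing them to Blender in one go (e.g. via Mesh.from_pydata). A rough sketch of that approach; the geometry-building part is plain Python, and Tile, bigX/bigY/bigZ etc. just stand in for the names in my actual script:

```python
def build_geometry(tiles, wall_type, floor_type, big_x=0, big_y=0, big_z=0):
    """Collect vertices and quad faces for all wall cubes and floor planes.

    Walls become 2x2x2 cubes centred on the tile position, floors become
    2x2 planes one unit below, matching the operator-based version.
    """
    verts = []
    faces = []
    # Corner offsets and quad faces of a cube with side length 2.
    cube_corners = [(-1, -1, -1), (1, -1, -1), (1, 1, -1), (-1, 1, -1),
                    (-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]
    cube_faces = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
                  (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]
    for t in tiles:
        if t.type not in (wall_type, floor_type):
            continue
        cx = (big_x + t.x) * 2
        cy = (big_y + t.y) * 2
        base = len(verts)  # index of this tile's first vertex
        if t.type == wall_type:
            verts.extend((cx + dx, cy + dy, big_z * 2 + dz)
                         for dx, dy, dz in cube_corners)
            faces.extend(tuple(base + i for i in f) for f in cube_faces)
        else:
            z = big_z * 2 - 1
            verts.extend([(cx - 1, cy - 1, z), (cx + 1, cy - 1, z),
                          (cx + 1, cy + 1, z), (cx - 1, cy + 1, z)])
            faces.append((base, base + 1, base + 2, base + 3))
    return verts, faces

# Inside Blender, the collected data would then become one mesh in a
# single call, instead of one operator call per tile:
#
#     verts, faces = build_geometry(tiles, Tile.WALL, Tile.FLOOR, bigX, bigY, bigZ)
#     mesh = bpy.data.meshes.new("landscape")
#     mesh.from_pydata(verts, [], faces)
#     mesh.update()
#     obj = bpy.data.objects.new("landscape", mesh)
#     bpy.context.collection.objects.link(obj)
```

Is this roughly the right direction, or is there an even faster way?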
Any help is greatly appreciated.
Thanks in advance,
EDIT: I just found out that it can’t only be the sheer number of vertices, because creating a 1000×1000 grid mesh takes less than a second.