I have written a very straightforward script that imports many obj files one by one and renders them. The imported meshes have ~10k to ~120k vertices. After rendering, I completely remove the imported mesh (along with its data blocks) before importing the next one. However, as the for loop proceeds, the import procedure becomes extremely slow. I noticed that the import function starts taking much longer than it should, and I am not sure why. Initially I suspected a memory issue, but I would expect removing the data blocks to resolve any memory leakage. This happens even without doing any rendering or any operations on the imported objects. Here is an instance of what the import function prints while importing a mesh:
    ( 0.0002 sec | 0.0002 sec) Importing OBJ '/data/e1f2651d55aecd7d8f2b6fca0ec9a39dn7a9/models/model.obj'...
    ( 0.0308 sec | 0.0306 sec) Parsing OBJ file...
    ( 1.8534 sec | 1.8511 sec) Done, loading materials and images...
    ( 2.0450 sec | 2.0426 sec) Done, building geometries (verts:72707 faces:137005 materials: 44 smoothgroups:0) ...
    ( 5.4944 sec | 5.4921 sec) Done.
    ( 5.4946 sec | 5.4945 sec) Finished importing: 'data/e1f2651d55aecd7d8f2b6fca0ec9a39dn7a9/models/model.obj'
    Progress: 100.00%
As more objects are imported, the import function gets slower and slower; even for simpler shapes (e.g. ~12k vertices) I get something like this:
    ( 0.0002 sec | 0.0002 sec) Importing OBJ '/data/jjgd2e3f46f7cc1e8ba69a5e14689f7b974/models/model.obj'...
    ( 0.0266 sec | 0.0263 sec) Parsing OBJ file...
    ( 0.7060 sec | 0.6793 sec) Done, loading materials and images...
    ( 3.0993 sec | 3.0726 sec) Done, building geometries (verts:12668 faces:43914 materials: 28 smoothgroups:0) ...
    ( 18.6672 sec | 18.6405 sec) Done.
    ( 18.6673 sec | 18.6671 sec) Finished importing: '/data/jjgd2e3f46f7cc1e8ba69a5e14689f7b974/models/model.obj'
    Progress: 100.00%
However, if the same object with ~12k vertices is imported first, I get the following:
    ( 0.0001 sec | 0.0001 sec) Importing OBJ '/data/jjgd2e3f46f7cc1e8ba69a5e14689f7b974/models/model.obj'...
    ( 0.0025 sec | 0.0023 sec) Parsing OBJ file...
    ( 0.5541 sec | 0.5516 sec) Done, loading materials and images...
    ( 0.5572 sec | 0.5547 sec) Done, building geometries (verts:12668 faces:43914 materials: 28 smoothgroups:0) ...
    ( 1.0660 sec | 1.0635 sec) Done.
    ( 1.0663 sec | 1.0662 sec) Finished importing: '/data/jjgd2e3f46f7cc1e8ba69a5e14689f7b974/models/model.obj'
    Progress: 100.00%
Here’s my code:
    # blenderClass.py
    import bpy, math, timeit
    import numpy as np

    class Blender(object):
        def __init__(self):
            self.bpy = bpy
            self.scene = self.bpy.context.scene
            self.scene.render.use_sequencer = False
            # Some memory management
            self.scene.render.use_free_image_textures = True
            self.bpy.context.user_preferences.edit.undo_steps = 0
            self.bpy.context.user_preferences.edit.undo_memory_limit = 60
            self.bpy.context.user_preferences.edit.use_global_undo = False

        def setupScene(self):
            self.removeCamera()
            self.removeMesh()
            self.bpy.ops.object.camera_add(location=(1, -0.5, 0.3))
            # My objects are all centered on (0, 0, 0)
            self.pointObjTo(self.scene.objects.active, (0.0, 0.0, 0.0))

        def render(self, objPath):
            self.bpy.ops.import_scene.obj(filepath=objPath)
            self.removeMesh()
            self.removeDataBlocks()

        def removeDataBlocks(self, removeAll=False):
            # Removes unlinked data blocks and prevents memory leakage
            for block in self.bpy.data.meshes:
                if block.users == 0:
                    self.bpy.data.meshes.remove(block)
            for block in self.bpy.data.materials:
                if block.users == 0:
                    self.bpy.data.materials.remove(block)
            for block in self.bpy.data.textures:
                if block.users == 0:
                    self.bpy.data.textures.remove(block)
            for block in self.bpy.data.images:
                if block.users == 0:
                    self.bpy.data.images.remove(block)

        def removeMesh(self, layer=-1):
            for obj in self.scene.objects:
                if obj.type == 'MESH':
                    obj.select = True
                else:
                    obj.select = False
            self.bpy.ops.object.delete()

        def removeCamera(self):
            for obj in self.scene.objects:
                if obj.type == 'CAMERA':
                    obj.select = True
                else:
                    obj.select = False
            self.bpy.ops.object.delete()

        def pointObjTo(self, obj, xyzTarget):
            # This function operates directly on the input object (obj)
            from mathutils import Vector
            xyzTarget = Vector(xyzTarget)
            direction = xyzTarget - obj.location
            rot_quat = direction.to_track_quat('-Z', 'Y')
            obj.rotation_euler = rot_quat.to_euler()
And this is how I run the code:
    # main.py
    from blenderClass import Blender

    blender = Blender()
    blender.setupScene()

    objPaths = ['obj1.obj', 'obj2.obj', 'obj3.obj', 'obj4.obj']
    for objPath in objPaths:
        blender.render(objPath)
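To quantify the slowdown, I time each iteration of the loop with a small helper. This is a stripped-down sketch of what I do (`timed` and `fake_render` are my own hypothetical names; in the real script the stub is replaced by `blender.render(objPath)`):

```python
import time

def timed(label, fn, *args, **kwargs):
    """Call fn(*args, **kwargs); print and return (elapsed_seconds, result)."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    elapsed = time.monotonic() - start
    print("%s took %.4f sec" % (label, elapsed))
    return elapsed, result

# Stub standing in for blender.render(objPath) so this runs without bpy.
def fake_render(path):
    time.sleep(0.01)
    return path

durations = []
for objPath in ['obj1.obj', 'obj2.obj', 'obj3.obj', 'obj4.obj']:
    elapsed, _ = timed(objPath, fake_render, objPath)
    durations.append(elapsed)
```

With the real import in place, the printed durations grow steadily from one iteration to the next, matching the log output above.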
Unfortunately, I cannot monitor the system resources precisely (I am running this on a server), but I am afraid that the import function does not let go of some resources, or that memory is somehow filling up. On my desktop computer I tried importing many shapes into Blender and then running the data block removal function after removing the meshes manually. From what I could observe, doing that brings memory consumption back down to around 10 MB, even when it had grown to roughly 400 MB after importing many 3D shapes. If you want to try out the code above, an easy way to reproduce it is to take primitive shapes such as spheres or cubes, subdivide them until they have lots of vertices (maybe ~50-70k), and store them as obj files. Around 10-15 obj files should be enough; for me, things start getting slow pretty quickly after importing the 3rd or 4th object.
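To make reproduction easier without my data set, here is a minimal pure-Python sketch (no bpy required; `write_sphere_obj` is my own hypothetical helper, not part of Blender) that writes a dense UV sphere of roughly 65k vertices as a Wavefront OBJ:

```python
import math

def write_sphere_obj(path, n_lat=180, n_lon=360):
    """Write a unit UV sphere with (n_lat + 1) * n_lon vertices as OBJ."""
    with open(path, "w") as f:
        # Vertices: rings of latitude from the north pole (theta = 0) down.
        for i in range(n_lat + 1):
            theta = math.pi * i / n_lat
            for j in range(n_lon):
                phi = 2.0 * math.pi * j / n_lon
                f.write("v %.6f %.6f %.6f\n" % (
                    math.sin(theta) * math.cos(phi),
                    math.sin(theta) * math.sin(phi),
                    math.cos(theta)))
        # Quad faces between adjacent rings; OBJ vertex indices are 1-based.
        for i in range(n_lat):
            for j in range(n_lon):
                a = i * n_lon + j + 1
                b = i * n_lon + (j + 1) % n_lon + 1
                c = (i + 1) * n_lon + (j + 1) % n_lon + 1
                d = (i + 1) * n_lon + j + 1
                f.write("f %d %d %d %d\n" % (a, b, c, d))

# Generate a handful of dense test meshes (~65k vertices each).
for k in range(4):
    write_sphere_obj("sphere_%d.obj" % k)
```

Feeding files like these to the loop above should be enough to trigger the slowdown.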
I’m not sure if this is related, but I am not running Blender in the background. Instead, I manually compiled Blender 2.79 as a Python module and import its API via import bpy in the Python installed on my machine.
Although I am fairly certain that removing the data blocks frees up the memory, I have also tried using Python’s garbage collector, and it does not help.
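For completeness, the garbage-collection attempt was essentially this, called once per loop iteration after removeDataBlocks():

```python
import gc

# Force a full collection across all generations; gc.collect() returns
# the number of unreachable objects it found and freed.
unreachable = gc.collect()
print("gc.collect() freed %d unreachable objects" % unreachable)
```

The reported count stays small, and the import times keep growing regardless.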
Does anyone know what I might be doing wrong? I am very confused.