I did some investigation into the overhead involved in copying data between Blender meshes and Numpy arrays. Python itself is quite fast when used properly, but when you work with millions of vertices some operations can be done far more efficiently with Numpy (which comes bundled with Blender). However, you will need to copy a lot of data, so in this article I investigate those costs and try to find the optimal code to perform those copies. I have also provided sample code and the full benchmark code so you can repeat these measurements and see whether this might save time in your specific situation. It is quite a long and technical article, I am afraid (but it does have some pictures).
@ambi: Thanks! Linus Yng also pointed me to the foreach_get and foreach_set methods and they are a huge improvement on the ‘classic’ approach. I updated the article (and the benchmark) to reflect it.
As for vertex color layers: the data in each layer is a bpy_prop_collection too, so it should be possible to access it using foreach_get/foreach_set, or is that not what you mean?
@varkenvarken: The problem is that vertex colors are determined by face loop indices, and my vertex colors are calculated by vertex, so something like this is required:
mloops = mesh.loops
colors = np.zeros((len(color_layer.data), 3))
for poly in mesh.polygons:
    for idx in poly.loop_indices:
        colors[idx] = retvalues[mloops[idx].vertex_index]
colors = colors.flatten()
color_layer.data.foreach_set("color", colors)
@ambi: I am not behind a machine right now, but I would guess most of the time is spent in
colors[idx] = retvalues[mloops[idx].vertex_index]
With mloops being a bpy_prop_collection, the [idx] indexing is probably quite slow. But could you not first retrieve all the vertex indices for the loops with foreach_get (i.e. make mloops an ndarray instead of a collection)?
Yeah, it seems a bit faster. Not a lot, since Python lists are dynamic arrays and the get is O(1) IIRC. What actually looks to be the costly part is get_attrib in the Blender data structure.
set_colors() went from 0.6 to 0.55 seconds. As this is the optimized version, at this point the inner calculation loop is where most of the time is spent. The unoptimized one spent most of its time reading and setting the colors.
Here’s the end result for anyone interested. By making a lot of assumptions about how the data is ordered, I was able to shave off 75% of the time. retvalues is color by vertex (vertex -> color).
# write vertex colors
colors = np.zeros((len(color_layer.data), 3))
# np.int32 instead of the removed np.int alias; vertex_index is a C int
mloops = np.zeros(len(mesh.loops), dtype=np.int32)
mesh.loops.foreach_get("vertex_index", mloops)
# FIXME: Making a lot of completely horrific assumptions on how
# the data is ordered on the Blender side of things
colors = retvalues[mloops]
colors = colors.flatten()
color_layer.data.foreach_set("color", colors)
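For readers unfamiliar with the trick in that last snippet: the assignment colors = retvalues[mloops] is a single Numpy fancy-indexing gather that replaces the whole per-loop Python loop. A tiny self-contained sketch with made-up data (the names mirror the snippet above, but the arrays are invented):

```python
import numpy as np

# Hypothetical stand-ins for Blender data: per-vertex colors (what
# retvalues holds above) and a loop -> vertex index mapping (what
# foreach_get fills mloops with). Three vertices, two triangles.
retvalues = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
mloops = np.array([0, 1, 2, 2, 1, 0])

# one fancy-indexing gather replaces the per-loop Python loop
colors = retvalues[mloops]

# flattened, this is the shape foreach_set("color", ...) expects
flat = colors.flatten()
print(flat.shape)  # (18,)
```

Loops 0 and 5 both reference vertex 0, so they end up with identical colors, which is exactly the vertex-to-loop spreading the original inner loop performed.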
Paste your code to a pastebin or GitHub and let us have a look at it. What are you trying to do? On which line does the error occur? Look at C.active_object.data.polygons[0] in Blender’s Python console to see what is available for polygons.
Mesh doesn’t seem to have a foreach_get, so I would assume that using this method to read entire polygon structures is impossible.
The documentation says about foreach_get: “This is a function to give fast access to attributes within a collection.” So I would think that getting an actual bpy_prop_collection with it is not something you’re supposed to do.
import bpy
import numpy as np

def read_polygons(mesh):
    fastpolygons = np.zeros(len(mesh.polygons), dtype=bpy.types.MeshPolygon)
    # Blender will err and say "polygons[...]' elements have no attribute 'polygon'"
    mesh.polygons.foreach_get("polygon", fastpolygons)
    return fastpolygons

mesh = bpy.data.meshes[0]

# this classic method is ok
for var in mesh.polygons:
    print(var)
    print(dir(var))

# I expected this to work like the classic one above, only much faster
for var in read_polygons(mesh):
    print(var)
    print(dir(var))
Extra warning: the docs also state that foreach_get/foreach_set only work for attributes that are bool, int or float (or arrays of those), so getting any other type of attribute will probably not work.
I rewrote my randomvertexcolors add-on to use Numpy and I get about a one-third reduction in running time (from 3.1 seconds to 2.0 seconds on a mesh with almost 1M polygons) with the following code:
# execute() method of the add-on's operator class; assumes at module level:
# import bpy; import numpy as np; from random import random; from time import time
def execute(self, context):
    bpy.ops.object.mode_set(mode='OBJECT')
    mesh = context.scene.objects.active.data
    vertex_colors = mesh.vertex_colors.active.data
    polygons = mesh.polygons
    verts = mesh.vertices
    npolygons = len(polygons)
    nverts = len(verts)
    nloops = len(vertex_colors)
    if self.usenumpy:
        start = time()
        # np.int32 instead of the removed np.int alias
        startloop = np.empty(npolygons, dtype=np.int32)
        numloops = np.empty(npolygons, dtype=np.int32)
        polygon_indices = np.empty(npolygons, dtype=np.int32)
        polygons.foreach_get('index', polygon_indices)
        polygons.foreach_get('loop_start', startloop)
        polygons.foreach_get('loop_total', numloops)
        colors = np.random.random_sample((npolygons, 3))
        loopcolors = np.empty((nloops, 3))
        for s, n, pi in np.nditer([startloop, numloops, polygon_indices]):
            loopcolors[slice(s, s + n)] = colors[pi]
        loopcolors = loopcolors.flatten()
        vertex_colors.foreach_set("color", loopcolors)
    else:
        start = time()
        for poly in polygons:
            color = [random(), random(), random()]
            for loop_index in range(poly.loop_start, poly.loop_start + poly.loop_total):
                vertex_colors[loop_index].color = color
    if self.timeit:
        print("%s: %d/%d (verts/polys) in %.1f seconds" % (
            "numpy" if self.usenumpy else "plain", nverts, npolygons, time() - start))
    bpy.ops.object.mode_set(mode='VERTEX_PAINT')
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.object.mode_set(mode='VERTEX_PAINT')
    context.scene.update()
    return {'FINISHED'}
As you can see, I retrieved all the indices from both loops and polys first and then assigned the random colors using Numpy’s nditer(). Now, I am not a Numpy expert, so I guess that even better results might be possible by creating index arrays instead of all those slice objects.
After some more tinkering, I can reduce the timing even more, to 0.8s, by doing away with the innermost Python loop that creates the slice objects and doing all the indexing in Numpy:
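The exact code that went with this comment is not reproduced here, but assuming loops are stored consecutively in polygon order (polygon i owns loops loop_start[i] through loop_start[i] + loop_total[i] - 1, which is what foreach_get on loop_start/loop_total indicates), the slice loop can be collapsed into a single np.repeat. A sketch with made-up arrays:

```python
import numpy as np

# made-up data for three polygons: a triangle, a quad and a triangle
numloops = np.array([3, 4, 3])        # loop_total per polygon
colors = np.array([[1.0, 0.0, 0.0],   # one random color per polygon
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])

# repeat each polygon's color once per loop of that polygon; this is
# valid only when loops are stored consecutively in polygon order
loopcolors = np.repeat(colors, numloops, axis=0)

print(loopcolors.shape)  # (10, 3): one color per loop
```

All the per-polygon "loops" now run in parallel inside one Numpy call, which is where the drop from 2.0s to 0.8s comes from.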
In effect we now have as many parallel loops as we have polygons. So we can now assign vertex colors (on my machine) to over 1 million faces per second (and that includes generating 3M random floats), which is not too bad, I guess.
@Oyster: The idea with foreach_get and foreach_set is optimization. I suggest writing the algorithm in pure Python first, because it’s a lot more readable and manageable, and only then, if you need performance, moving to Numpy with foreach_get/foreach_set.
@ambi: your script would assign a different color to each vertex (but the same color to all loops that share that vertex), if I read it correctly. That is not what I am aiming for: I want each polygon to have a uniform (but random) color, so I have to assign the same color to each loop of a given polygon (and different colors to loops that share a vertex).
BTW, I don’t see anything that could break, although I guess there is no need to initialize mloops to zeros, as it gets overwritten immediately.
A small change suggested by Linus Yng: switching everything to 32 bits gives another 2x speed increase for my random vertex colors example. (foreach_get/foreach_set by itself already gives a 14x speed increase.)
Does anybody know how to generate an array of 32-bit random floats in Numpy without producing an intermediate 64-bit array first?
I ask because random_sample (and related functions) do not take a dtype argument. I am not sure generating 32-bit numbers would be that much faster (most 64-bit operations in Numpy cost only about 40% more on my machine than 32-bit ones), but avoiding a potentially very large temporary array would still be interesting.
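For what it’s worth, newer Numpy versions (1.17 and up, which postdate this discussion) answer this directly: the Generator API’s random() method takes a dtype argument, so the samples are produced in 32-bit precision with no 64-bit intermediate:

```python
import numpy as np

rng = np.random.default_rng()

# Generator.random accepts dtype=np.float32 (or float64, the default),
# generating the samples directly in 32-bit precision
samples = rng.random(1_000_000, dtype=np.float32)

print(samples.dtype)  # float32
```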
You could use ctypes to allocate the memory and then numpy.ctypeslib.as_array to use that allocated memory as a numpy array. If you really wanted to, that is.
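A minimal sketch of that idea: the buffer is float32 from the start, and numpy.ctypeslib.as_array wraps it without copying. With the legacy random API you would still fill it in chunks so any 64-bit temporary stays small; the chunk size here is an arbitrary illustration.

```python
import ctypes
import numpy as np

n = 1000
buf = (ctypes.c_float * n)()        # raw 32-bit float storage allocated via ctypes
arr = np.ctypeslib.as_array(buf)    # numpy view on that memory, no copy made

# fill the float32 view in chunks; each 64-bit temporary is only
# chunk elements long instead of n elements
chunk = 100
for i in range(0, n, chunk):
    arr[i:i + chunk] = np.random.random_sample(chunk)

print(arr.dtype)  # float32
```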