Context override bpy.ops.mesh.select_linked?

Context overrides are neat and useful.

But can you trick bpy.ops.mesh.select_linked into only operating on a specific mesh, or into using a specific, designated vert/face as its ‘source’, instead of its standard behavior of operating on all selected components in edit mode? I’m guessing the context override system won’t work for this because (I assume) the edit-mode mesh and the current selection are not part of the context, but I figured I would ask anyway in case I’m missing something. (I might be missing something. Doesn’t it get the objects from the view layer?)
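For reference, this is the kind of override I mean (the temp_override API from Blender 3.2+; target is just a hypothetical stand-in for whichever object you’d want the operator restricted to):

import bpy

# A minimal context-override sketch (Blender 3.2+ temp_override API).
# Whether select_linked actually honors any of these members is exactly
# what I'm asking; 'target' is a hypothetical stand-in.
target = bpy.context.view_layer.objects.active
with bpy.context.temp_override(active_object=target, selected_objects=[target]):
    bpy.ops.mesh.select_linked()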

For example, if you had 5 separate objects in edit mode (or 1 object with 5 loose pieces) and each of them had 1 face selected, running select_linked would select linked for all 5 objects simultaneously. But if you only wanted select_linked to operate on 1 of the objects, without altering the selection of the other 4, could that be done?

I could iterate over each mesh to build a list of every selected component:

selected_components = [v for v in bm.verts if v.select]

then run:

bpy.ops.mesh.select_all(action='DESELECT')

then select only the one component I’m interested in and run select_linked so that it only operates on the loose piece connected to that component, and finally re-select everything from selected_components.

That would have the desired effect, but that first step of iterating over every component in every mesh is a heavy operation on a high-poly mesh, which is exactly what I want to avoid in the first place.
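Put together, the plan would look something like this rough sketch (seed_vert is just a placeholder for however I end up picking the component of interest):

import bpy, bmesh

obj = bpy.context.active_object
bm = bmesh.from_edit_mesh(obj.data)

# 1. Cache the current selection (the heavy pass I'd like to avoid).
selected_components = [v for v in bm.verts if v.select]

# 2. Deselect everything, then select only the seed component.
bpy.ops.mesh.select_all(action='DESELECT')
bm.verts.ensure_lookup_table()
seed_vert = bm.verts[0]  # placeholder for the vert I actually care about
seed_vert.select = True
bm.select_flush(True)

# 3. Grow the selection across the connected loose piece only.
bpy.ops.mesh.select_linked()

# 4. Restore the cached selection on top of the result.
for v in selected_components:
    v.select = True
bm.select_flush(True)
bmesh.update_edit_mesh(obj.data)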

The alternative, I guess, is to write my own select_linked operator from scratch since the one built into Blender doesn’t have the functionality I desire.

I’ve tried solving this same problem myself, and I can tell you this:

  1. select_linked uses bmesh, and bmesh has no concept of ‘context’, so there’s unfortunately nothing to override. There are other mesh operators that take an ‘index’ argument (and will default to the selected face if it’s not set), but select_linked is not one of them, sadly.

  2. I can tell you from personal experience that ‘faking user input’ the way you’re doing (using a list comprehension to cache your selection, etc.) is already the fastest method, as slow as it seems. I’ve tried using recursion, queues, etc. to essentially ‘flood fill’ outward from a starting point, and it’s all significantly slower than using the select_linked op.

  3. The caching of your selection should be pretty quick; even on a mesh with 100k+ verts, list comprehension is extremely efficient. I’m assuming you’re seeing the slowdown when restoring the cached selection once you’ve finished using select_linked. If that’s the case, I would recommend building a numpy array from your cached verts and then using numpy to set the select flag back to True. You’ll pay a very small cost for creating the numpy array, but setting the flags will be nearly instant (rough sketch below).
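Something along these lines (untested sketch; selected_components is the cached list from your snippet, and it assumes you can afford a quick mode toggle, since foreach_get/foreach_set work on the object-mode mesh rather than the live bmesh):

import bpy
import numpy as np

obj = bpy.context.active_object

# Indices of the verts whose selection you want restored
# (selected_components is the list you cached earlier).
cached_indices = np.array([v.index for v in selected_components], dtype=np.int64)

# foreach_get/foreach_set talk to the object-mode mesh, so hop out of
# edit mode first (this also flushes the edit-mode selection state).
bpy.ops.object.mode_set(mode='OBJECT')

flags = np.zeros(len(obj.data.vertices), dtype=bool)
obj.data.vertices.foreach_get("select", flags)
flags[cached_indices] = True  # nearly instant, even for 100k+ verts
obj.data.vertices.foreach_set("select", flags)

bpy.ops.object.mode_set(mode='EDIT')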

And yeah, ideally we’d just have some sort of Python API that could give us a list of components (any devs want to create a bmesh.ops.get_linked for us?), but until that happens, hopefully this helps 🙂

  1. I thought so. I did try some silly hacks but as expected they didn’t work.
  2. Bummer.
  3. Interesting idea. I’d only considered building arrays to get the selection, not to set select back to True at the end. How would that work with bmesh, since there’s no foreach in bmesh? (This is the first time I’m dipping my toe into numpy.)

Thank you, sir. You’re my favorite person on the forum. There’s always something to learn from a good testure comment.

Pretty pleeeeeease?

Ah yes, that’s another thing: bmesh component iterators have no foreach, so you have to push the edit-mode changes back to the object with object.update_from_editmode(). This could be prohibitively expensive if you’re working with multiple extremely dense meshes, but since the performance impact appears to be O(n), it’s not too bad on single high-poly objects. It really does boil down to what your operator needs to do: if you’re doing multiple bulky operations on lots of vertices, it’s a very fair price to pay for the speed individual numpy operations get you.

For my purposes it was an acceptable hit: I measured it at around 40ms to push the changes to the object for a 150k vert mesh, then only 3ms to build the numpy array; a 300k vert mesh clocked in at around 80ms to push the changes and 6ms to build the array. If you’re just iterating over the components once, it’s faster to do it in straight Python, but if you need to iterate more than once, that’s where numpy shines.

So devs, if you’re listening: we’d also like a bmesh for_each component iterator, please!

I’m including a quick numpy benchmark script I threw together, but first, the results from a 300k vert mesh:

python iteration, 1 operations:
----------------------------
total time: 51.80692672729492 ms

numpy iteration, 1 operations:
----------------------------
  - update_from_editmode: 95.16286849975586 ms
  - build the numpy array: 1.9741058349609375 ms
total time: 108.72936248779297 ms

python iteration, 20 operations:
----------------------------
total time: 1009.4561576843262 ms

numpy iteration, 20 operations:
----------------------------
  - update_from_editmode: 93.11532974243164 ms
  - build the numpy array: 1.995086669921875 ms
total time: 190.7522678375244 ms

As you can see from those results, for a single pass normal iteration is faster, but numpy quickly pays for itself the more you use it. It’s probably worth mentioning that a mesh with 300k verts is completely unusable in edit mode, so this is a highly esoteric example, but since we’re talking performance it seemed like a worthwhile experiment.

import bpy, bmesh
import numpy as np
from mathutils import Vector
import time

# Run this with the active object in Edit Mode.
obj = bpy.context.active_object
bm = bmesh.from_edit_mesh(obj.data)

def calc_average():
    # Average vertex position via a pure-Python loop over the bmesh verts.
    av = Vector()
    for v in bm.verts:
        av += v.co
    av /= len(bm.verts)
    return av

def normal_test(count):
    print(f"\n\npython iteration, {count} operations:\n----------------------------")
    start_time = time.time()
    for i in range(0, count):
        calc_average()
    time_spent = (time.time()-start_time)*1000
    print(f"total time: {time_spent} ms")
    return time_spent

def numpy_test(count):
    print(f"\n\nnumpy iteration, {count} operations:\n----------------------------")
    start_time = time.time()
    last_update = start_time
    # Push the edit-mode state back to the object so foreach_get sees it.
    obj.update_from_editmode()
    print(f"  - update_from_editmode: {(time.time()-last_update)*1000} ms")

    last_update = time.time()
    # Pull all coordinates into a flat float32 buffer, then reshape to (n, 3).
    v_count = len(obj.data.vertices)
    v_co = np.empty(v_count * 3, dtype=np.float32)
    obj.data.vertices.foreach_get("co", v_co)
    v_co.shape = (v_count, 3)
    print(f"  - build the numpy array: {(time.time()-last_update)*1000} ms")

    for i in range(0, count):
        av = np.average(v_co, 0)  # per-axis mean over all verts

    time_spent = (time.time()-start_time)*1000
    print(f"total time: {time_spent} ms")
    return time_spent

normal_test(1)
numpy_test(1)

normal_test(20)
numpy_test(20)

Ah, my nemesis. We meet again.

Funnily enough, my “stress test” object is actually 333k verts. It’s what I measure my “worst case” performance against, since it has some intentionally bad topology (it’s all quads, nothing non-manifold, but some of the edge loops are ~34,000 edges long and crisscross all over themselves, lol).

That being said, it’s not a typical use case, and I only need to iterate once to get and once to set, so I may simply stick with straight Python. It only takes about 60-75ms to run this:

for v in new_sel:
    v.select = True

for 302k out of the total 333k verts (the test mesh is 5 loose pieces with 1 piece having the vast majority of verts).

Very insightful. Thank you for the tutelage.

Yes please!