Subdivide modifier makes selection take longer but it shouldn't...

As you know, any object with a lot of polys or subdivisions will take longer to select, and OpenSubdiv makes this worse, even though it obviously makes everything else faster and more responsive.

Thing is… using the outliner to select is instantaneous in the same circumstances…
What’s going on here, and why is selecting via the viewport computed a different way? It MUST be, if the outliner is perfectly fine with the same result… On that train of thought, why can’t selecting in the viewport be done the same way, as a shortcut to selecting in the outliner? This seems like common sense.

Perhaps I’m stupid or ignorant, but it seems pretty simple to solve this problem.

This is not a bug report type thread or anything (looking at you Fweeb). I’m simply asking why this is the way it is, and wondering what can be done to make it better, if not the seemingly simple solution. I want to know what’s going on here and what the plans are, if any.

It’s not the selecting itself that’s taking longer, it’s the figuring out what to select.

If you select something in the outliner, it’s a question of which of a handful of rectangles your mouse cursor is in.

If you select something in the 3d view, it’s a question of which of potentially millions of polygons happens to best match the projection of your mouse cursor coordinate into the scene.
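
To make the cost concrete, here’s a deliberately naive sketch of ray picking (illustration only, not Blender’s actual code; the Ray/Tri types and pick_triangle() are made up): unproject the cursor into a ray, test every triangle, keep the closest hit. The work is linear in the triangle count, which is exactly why dense meshes get slow.

#include <float.h>
#include <stdbool.h>

typedef struct { float o[3], d[3]; } Ray;                 /* origin, direction */
typedef struct { float v0[3], v1[3], v2[3]; } Tri;

static void cross3(const float a[3], const float b[3], float r[3])
{
    r[0] = a[1] * b[2] - a[2] * b[1];
    r[1] = a[2] * b[0] - a[0] * b[2];
    r[2] = a[0] * b[1] - a[1] * b[0];
}

static float dot3(const float a[3], const float b[3])
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

/* Standard Möller–Trumbore ray/triangle test; writes the hit distance to *t. */
static bool intersect_ray_tri(const Ray *ray, const Tri *tri, float *t)
{
    float e1[3], e2[3], p[3], s[3], q[3];
    for (int k = 0; k < 3; k++) {
        e1[k] = tri->v1[k] - tri->v0[k];
        e2[k] = tri->v2[k] - tri->v0[k];
        s[k]  = ray->o[k] - tri->v0[k];
    }
    cross3(ray->d, e2, p);
    const float det = dot3(e1, p);
    if (det > -1e-7f && det < 1e-7f) return false;        /* ray parallel to triangle */
    const float inv_det = 1.0f / det;
    const float u = dot3(s, p) * inv_det;
    if (u < 0.0f || u > 1.0f) return false;
    cross3(s, e1, q);
    const float v = dot3(ray->d, q) * inv_det;
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot3(e2, q) * inv_det;
    return *t > 0.0f;
}

/* Brute force: O(n) in the triangle count, so a multi-million poly mesh
   means millions of intersection tests for a single click. */
int pick_triangle(const Ray *cursor_ray, const Tri *tris, int tri_count)
{
    int best = -1;
    float best_t = FLT_MAX;
    for (int i = 0; i < tri_count; i++) {
        float t;
        if (intersect_ray_tri(cursor_ray, &tris[i], &t) && t < best_t) {
            best_t = t;
            best = i;
        }
    }
    return best;                                          /* front-most hit, or -1 */
}

(Real implementations use acceleration structures or GPU-side tricks rather than a flat loop, but the scaling problem is the same idea.)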

There are various ways to approach this, with various tradeoffs. You can do it very efficiently by rendering all elements with an “ID color” and then reading back that color to figure out which screen pixel corresponds to which object. This method has limitations that the current system doesn’t have, however. For example, you can only ever select the front-most primitive, which is why it’s not useful for edit mode.
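
To illustrate, an ID-color pick might look roughly like this in legacy OpenGL (a minimal sketch under assumptions, not Blender’s implementation; draw_object() and pick_object() are hypothetical helpers):

#include <GL/gl.h>

void draw_object(int index); /* hypothetical: submits one object's geometry */

/* Render every object flat-shaded in a unique color encoding its index,
   then read back the pixel under the cursor to find the front-most object. */
int pick_object(int cursor_x, int cursor_y, int viewport_height, int object_count)
{
    /* Anything that alters raw color output must be off, or IDs get mangled. */
    glDisable(GL_LIGHTING);
    glDisable(GL_DITHER);
    glDisable(GL_BLEND);

    glClearColor(1.0f, 1.0f, 1.0f, 1.0f); /* white background = "no hit" */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int i = 0; i < object_count; i++) {
        /* Pack the index into RGB: good for ~16 million distinct IDs. */
        glColor3ub((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF);
        draw_object(i);
    }

    /* GL's origin is bottom-left, so flip the cursor's y coordinate. */
    unsigned char rgb[3];
    glReadPixels(cursor_x, viewport_height - cursor_y - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, rgb);

    const int id = (rgb[0] << 16) | (rgb[1] << 8) | rgb[2];
    return (id == 0xFFFFFF) ? -1 : id; /* -1: clicked empty space */
}

Note how the depth buffer resolves occlusion for free here, which is also why only the front-most primitive can ever be reported.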

My assumption is that Blender just uses the same selection code for everything, even though for object selection a simpler and more efficient method would be sufficient.

Short version: the outliner doesn’t have the possibility of overlapping objects, so when a user clicks in the Outliner, it’s pretty easy to deduce exactly what’s being clicked/selected. In contrast, clicks in the 3D View need to go through some logic/mechanism to determine what exactly is being clicked. The mechanism that’s currently in use is very old (and deprecated, IIRC) and not as efficient as more modern approaches. This is a big reason for the viewport development we’ve been seeing.

There’s actually a setting for this under System->Selection: the old OpenGL selection buffer method (deprecated), the newer OpenGL occlusion query method, and “Automatic” (the default).

I’d assume that “Automatic” uses the newer method on hardware with occlusion queries (i.e. anything that isn’t ancient). To be sure, you can try to force occlusion queries and see if that improves things.
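
For reference, occlusion-query picking boils down to something like the following (again a rough sketch rather than Blender’s actual code; draw_object() is a hypothetical helper, and cursor coordinates are assumed to already be in GL’s bottom-left convention): restrict drawing to a few pixels around the cursor and ask the GPU, per object, whether any fragments made it through.

#define GL_GLEXT_PROTOTYPES /* expose glGenQueries & co. in Mesa's headers */
#include <GL/gl.h>
#include <GL/glext.h>
#include <stdlib.h>

void draw_object(int index); /* hypothetical: submits one object's geometry */

void pick_with_occlusion_queries(int cursor_x, int cursor_y, int object_count)
{
    GLuint *queries = malloc(sizeof(GLuint) * object_count);

    /* Limit rasterization to a tiny box around the cursor so each query
       only counts fragments near the click. */
    glEnable(GL_SCISSOR_TEST);
    glScissor(cursor_x - 2, cursor_y - 2, 5, 5);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* no color writes needed */
    glDisable(GL_DEPTH_TEST); /* also count occluded objects, for click-cycling */

    /* One GL_SAMPLES_PASSED query per object. */
    glGenQueries(object_count, queries);
    for (int i = 0; i < object_count; i++) {
        glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
        draw_object(i);
        glEndQuery(GL_SAMPLES_PASSED);
    }

    for (int i = 0; i < object_count; i++) {
        GLuint samples = 0;
        glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &samples);
        if (samples > 0) {
            /* object i has pixels under the cursor; a depth-aware follow-up
               pass (or nearest-hit bookkeeping) can pick the final winner */
        }
    }

    glDeleteQueries(object_count, queries);
    free(queries);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDisable(GL_SCISSOR_TEST);
}

The upside over reading back ID colors is that every object gets its own answer, so overlapping objects aren’t lost; the cost is one query (and one draw) per candidate object.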

Ok, this is starting to make sense. Am I to understand as well that this is being addressed? Because the current system is pretty bad. It seems like some ID system, like what BeerBaron described, would make a lot of sense.

You seriously should make sure you use Occlusion Queries where possible, because the old method is about as primitive and as slow as one can possibly get in 3D software (high-poly meshes can take maybe half a minute to actually select).

I did see in the commit logs, though, what appears to be a branch devoted to overhauling the selection code (probably as part of the 2.8 project).

So I switched it from Automatic to Occlusion Queries and the problem has been significantly reduced. Strange that it picks the slower one on Automatic…

Occlusion query selections take 128ms on a 48 million poly object here, in a scene with nearly a billion total (instanced) polygons. You’re either not using the newer selection method, or you’re on some kind of ancient computer.

I didn’t know about this, so I tried it, and it also improved selection speed. I also wonder why Automatic picks the slower one, as Occlusion Queries seems to work better. My computer is an LGA 1366 platform from 2009, and my graphics card for display is a GTX 580 3GB.

Wow, that is weird indeed. I have quite an up-to-date system: X99, 5930K, 3x GTX 1080, latest drivers, Linux (eOS). And still, Automatic seems to pick OpenGL Select, which is like 10 times (or more) slower than the other one. Might it even be a bug? I mean, what system specs do I have to have so that Automatic picks the faster one?

Looks like a bug to me. In “gpu_select.c”:

bool GPU_select_query_check_active(void)
{
    return ((U.gpu_select_method == USER_SELECT_USE_OCCLUSION_QUERY) ||
            ((U.gpu_select_method == USER_SELECT_AUTO) &&
             (GPU_type_matches(GPU_DEVICE_ATI, GPU_OS_ANY, GPU_DRIVER_ANY) ||
              /* unsupported by nouveau, gallium 0.4, see: T47940 */
              GPU_type_matches(GPU_DEVICE_NVIDIA, GPU_OS_UNIX, GPU_DRIVER_OPENSOURCE))));
}

The comment indicates that occlusion queries don’t work with FOSS drivers on NVIDIA, yet for NVIDIA the code activates occlusion queries only on FOSS drivers (i.e. the opposite of what you’d want).