The reason I write this is experience from trying to work with a lot of polygons in editmode. Judging by how the tools behave, it seems to me that the bottlenecks won’t just disappear with the fast drawing code in 2.8.
My impression when using tools on very high poly meshes is that some seem to take every triangle into account when invoked (meaning that even the most basic tools become extremely slow on such meshes).
Now look at sculpt mode in comparison (and the GSoC branch adding PBVH to vertex and weight painting): look how fast those tools are and how quickly they initiate. Would it be possible to bring that performance to every part of Blender by applying the PBVH concept wherever it fits?
Now, as implied, areas like editmode and armature deformation on very high poly meshes could go from super slow to super responsive if, as in sculpt mode and soon the painting modes, the tool code only takes into account the faces that would be affected by the operation (an extrude, for example, would more or less not acknowledge anything other than the selection and its immediate surroundings). It might not work as well when the affected area is unknown (path select) or potentially extends far beyond the selection (proportional edit and bone deformation), but that could just be a matter of bringing larger parts of the geometry into play and accepting that such operations will be slower (due to more nodes being updated).
The question, then, is whether it’s possible to apply PBVH to everything in Blender (the 3D-specific parts at least). It seems to be working miracles where it’s been applied so far, so I wonder whether areas like object mode and edit mode can benefit too, and whether this is a feasible idea for 2.8 (so that the speed of the tools matches the speed of the new drawing code).
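To make the idea concrete, here is a hypothetical sketch using Blender’s Python mathutils.bvhtree API (not the internal C tool code, and the radius value is arbitrary): it collects only the faces within some distance of the selected vertices, which is roughly the working set such a “local” tool would need to touch.

```python
# Hypothetical sketch using the Python mathutils.bvhtree API (not Blender's
# internal tool code): build a BVH from the edit-mesh and collect only the
# faces within some radius of the selected vertices -- roughly the working set
# a "local" tool would need to touch.
import bmesh
import bpy
from mathutils.bvhtree import BVHTree

obj = bpy.context.edit_object
bm = bmesh.from_edit_mesh(obj.data)
tree = BVHTree.FromBMesh(bm)

radius = 0.5  # arbitrary "area of effect" around the selection
nearby_faces = set()
for v in bm.verts:
    if not v.select:
        continue
    # find_nearest_range() yields (location, normal, face_index, distance) tuples
    for hit in tree.find_nearest_range(v.co, radius):
        nearby_faces.add(hit[2])

print("faces a local tool would need to touch:", len(nearby_faces))
```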
As part of the viewport refactor, unless there was a change of plan, Blender 2.8 will make use of BVHs for the viewport selection, which will of course make selecting faster on huge scenes. In this plan, each object will have its own default BVH, so I guess they could also be used in the various edit modes.
This was what I was asking about, whether or not it’s practical or possible to use a BVH for everything related to editing objects.
It’s good to hear about the per-object BVH in 2.8, now it remains to see if it can also be used to vastly accelerate the tools in editmode, pose mode, and (possibly) particle edit mode as well (since we’ll already have sculpt mode and the painting modes covered when that GSoC branch is merged).
Well yes, the rebuilds wouldn’t be too different from what already happens in sculpt mode, I guess, though I don’t know that part of the code.
On another note, I just remembered that Severin briefly worked on viewport BVH selection a few months ago. Not sure what the status is, or whether it was already merged in 2.8 or is pending viewport API improvements and such.
BVH stands for ‘Bounding Volume Hierarchy’. It’s a tree in which the bounds of the children of each node are fully contained in the bounds of its parent. Its purpose is to accelerate spatial queries (such as ray/object intersection) by skipping over elements that are guaranteed to be outside the bounds of the query.
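As a minimal illustration of the idea (a toy sketch, nothing to do with Blender’s actual PBVH implementation):

```python
# Toy axis-aligned-bounding-box BVH to illustrate the definition: a query only
# descends into nodes whose bounds it overlaps, so everything guaranteed to be
# outside the query region is skipped without ever being visited.
class Node:
    def __init__(self, bounds, children=(), items=()):
        self.bounds = bounds          # (min_xyz, max_xyz), fully contains the children
        self.children = list(children)
        self.items = list(items)      # leaf payload, e.g. face indices

def overlaps(a, b):
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def query(node, region, out):
    if not overlaps(node.bounds, region):
        return                        # whole subtree skipped
    out.extend(node.items)
    for child in node.children:
        query(child, region, out)
```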
If your operation can make use of such acceleration, using a BVH may improve performance. However, building or updating a BVH has a cost of its own and different implementations/configurations have different tradeoffs. Maintaining a BVH that you rarely use will rather slow things down.
If you want to know why some operation is slow, you need to profile the application to see which parts are slow. That’s likely going to be different for every tool.
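For a very rough first look, something like this from Blender’s Python console can time and profile a single operator call (a hypothetical quick check, not a substitute for profiling the tool’s C code with a native profiler):

```python
# Hypothetical quick check from Blender's Python console: profile a single
# operator call. This only shows the Python-side picture; profiling the C code
# of the tool itself needs a native profiler, as suggested above.
import cProfile
import bpy

cProfile.run("bpy.ops.mesh.subdivide()", sort="cumtime")
```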
Also, even with faster drawing, the BMesh data structure used in edit mode needs to be converted to something the GPU can actually use. BMesh has to be much more complex than what the sculpting code works with, since it can modify the topology. For sculpting, you only need to update the vertex positions.
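A toy illustration of that difference (plain Python lists stand in for GPU buffers here, this is not Blender’s draw code): when only positions change, the existing vertex buffer can be updated in place, while a topology change forces the triangulated index buffer to be rebuilt as well.

```python
def update_positions(vertex_buffer, moved):
    # moved: {vertex_index: (x, y, z)} -- sculpt-style, sizes and indices unchanged
    for i, co in moved.items():
        vertex_buffer[i] = co

def rebuild_index_buffer(faces):
    # faces: list of vertex-index loops -- edit-mode-style, everything re-derived
    triangles = []
    for face in faces:
        for k in range(1, len(face) - 1):   # simple fan triangulation
            triangles.append((face[0], face[k], face[k + 1]))
    return triangles
```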
Last I read, the Dyntopo portion of sculpt-mode makes use of PBVH and allows you to change topology as well as add/subtract geometry (so it appears to me it can accelerate tools that don’t just change vertex positions).
BPR: please don’t treat this thread as an opportunity to request features (this is about a possible way to increase the performance of existing ones and to accelerate the workflow when very high polygon counts are involved).
I was making a general statement on why BMesh is always going to have more GPU conversion overhead than something simpler; this has nothing to do with whether the tools use a BVH.
You certainly can modify the mesh and rebuild/update the BVH. An update may be cheaper than a rebuild, but it may also degrade the quality of the BVH. The more time you spend on building/updating the BVH, the faster the queries tend to be. One size doesn’t fit all; that’s why you have the option to use Spatial Splits in Cycles, for instance. For modeling/realtime, the BVH should trade quality for faster builds. For offline rendering, it’s rather the opposite.
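As a rough sketch of the “update” half of that tradeoff, reusing the toy Node class from the earlier example (item_bounds is a hypothetical mapping from item to its bounds): a refit recomputes bounds bottom-up after vertices move, which is cheap but keeps the old tree shape, so query quality can degrade over time, whereas a rebuild reconstructs the tree from scratch.

```python
def refit(node, item_bounds):
    # Recompute bounds bottom-up without changing the tree structure.
    if node.children:
        child_bounds = [refit(c, item_bounds) for c in node.children]
    else:
        child_bounds = [item_bounds[i] for i in node.items]  # assumes non-empty leaves
    mins = tuple(min(b[0][i] for b in child_bounds) for i in range(3))
    maxs = tuple(max(b[1][i] for b in child_bounds) for i in range(3))
    node.bounds = (mins, maxs)
    return node.bounds
```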
Again, the question is what part of your modeling operation is actually slow? If it isn’t spatial queries, using a BVH won’t help. If the bottleneck is GPU-conversion (and your tool doesn’t handle it), then that’s what you need to look at optimizing.