Results 1 to 20 of 151
  1. #1
    Member Herbert123's Avatar
    Join Date
    Apr 2009
    Location
    Vancouver
    Posts
    1,383

    Why is GL_SELECT still used to select objects in Blender?

    My main peeve when working in Blender is the INFURIATINGLY slow selection. Even in a moderately sized scene, selecting objects and points in the viewport becomes unbearably slow; e.g. it can take up to 6 seconds to select a 500k-poly object.

    Selecting in the outliner is immediate, but with more complex scenes finding the object in question there can be just as slow.

    With more complex scenes (tens of millions of polys) it can take 40 seconds or more to select objects. Blender locks up during these times.

    Here's my hardware: Core i7 920 @ 3.6GHz, 12GB RAM, ATI 5870 1GB, Win7, RevoDrive 240GB. The machine is rock solid. This is no driver issue: the behaviour is identical under Vista, and even with modded (and working) FireGL V8800 drivers the same selection issue persisted.

    I've used GLIntercept (http://code.google.com/p/glintercept/) to check which OpenGL method Blender uses for selecting vertices and objects. It seems to use GL_SELECT mode. Since I have been struggling with this issue for a year now, I have been reading up on it. I am no OpenGL coder, but GL_SELECT is reputedly an incredibly slow selection method, and more or less deprecated. Far more efficient ways to resolve selection exist in OpenGL.

    Now that I am working with very complex objects (modelling an age-of-sail ship), I am seriously considering giving up on Blender. I do not encounter this issue in Cinema 4D (v8.5) or LightWave. Maybe this is Blender-and-ATI related - I don't care; my other 3D apps are unaffected. So Blender's selection code is at fault, and it can be improved.

    Please check:
    http://lists.freedesktop.org/archive...ne/012668.html

    Someone already worked on resolving this (and only spent a couple of hours on it), initially reducing selection times from 30 seconds to about a second. SO IT CAN BE DONE.

    I am surprised that no-one seems to care about this. Honestly, I am baffled that something seemingly as simple to fix as this selection bug (and yes, it IS a bug in my view) is still unresolved, even after the rewrite for 2.5.

    The entire workflow revolves around selecting objects - if this basic core functionality is broken, the rest collapses like a house of cards. IT IS UNWORKABLE for me at the moment. Anything over a couple of hundred thousand polys is utterly infuriating; at 3.5 million polys it takes 15 seconds to select an object.

    This is a heart-felt plea from an enthusiastic Blender user:

    Please, PLEASE OH PLEASE fix this behaviour / code. I am no OpenGL coder, otherwise I would have attempted to fix it ages ago. Someone, ANYONE out there with some OpenGL coding experience: please address this.



  2. #2
    Donating Member arexma's Avatar
    Join Date
    Jul 2008
    Location
    Austria
    Posts
    4,814
    I've encountered this as well, especially when visualizing construction plant prototypes and cleaning up meshes converted from solid CAD data, which quickly goes into the millions of polys.

    My rule of thumb by now is that GL_SELECT drops performance to about a tenth.

    For better understanding: as far as I know, selection with GL_SELECT works rather "easily":

    You need two of the rendering modes OpenGL offers, GL_RENDER and GL_SELECT (set via glRenderMode(GLenum mode)). There's a third one, GL_FEEDBACK, but as the name suggests it doesn't draw, it generates info.
    Let's imagine your 3D scene is being drawn; that happens in GL_RENDER. Now you want to select something.
    You switch to GL_SELECT. In GL_SELECT nothing is shaded or textured; the geometry is only "virtually" drawn, and a "counter" is incremented for each object drawn. OpenGL does not know what an object is - you have to define that while rendering your scene.
    So while drawing you tell OpenGL "object_foo is about to be drawn", push out your geometry data, then announce the next object in turn; this way you build a name stack for the objects in your scene.
    When you want to select something you switch to GL_SELECT. In GL_SELECT only things within the defined viewport are "counted", so the idea is to restrict the viewport to the area around your mouse cursor, or simply to the pixel under the mouse. Now you draw your scene again - which should be lightning fast, as there's no shading or texturing whatsoever - and you get the number of "hits". Then you process the hits and look up in the name stack which objects were hit, or for that matter drawn, under your mouse cursor.

    Now you know what objects are under your mouse. Why does it have such a performance hit? No idea - it should be fast, as GL_SELECT only has a viewport of one pixel and only draws geometry, no textures or shading...
    It was first noticed on Radeon cards and said to be a driver bug; NVIDIA was not affected, but was later on, so there's a rumor that it was deliberate crippling to sell professional cards, since you barely need this for gaming. Your V8800 seems to disprove that theory, unless the driver is aware it's not a real FireGL V8800.

    One technique I know to steer the ship around the cliff is good old ray intersection.
    You shoot a ray through the mouse coordinates via gluUnProject and check for intersections - never done it myself though. I guess you could even write the intersection test in a shader and run it completely on the GPU.

    Another option is to set the viewport to a single pixel, draw the scene without textures or shading, giving each entity a distinct flat color. Then use glReadPixels to read back the color of your 1x1 viewport. Obviously that's the color of the object with the closest Z that was drawn - your "selected object".

    And guess what? glReadPixels is one of the things crippled on GF400/500 cards - it's much slower than on GF200-series cards. Which leads me back to one of my favourite topics: the deliberate crippling of graphics hardware via drivers or hardware locks to sell professional cards.
    I wouldn't mind buying a Quadro, for instance, if there was a driver that properly supported Blender, but as it is now... meh.

    So yeah, ultimately there are much better methods than GL_SELECT, but there are a few problems:

    1) You'll see it in the participation in this thread soon enough: not many people know or use OpenGL. It's better by now, but not long ago OpenGL was like Fight Club - if you know OpenGL, you don't talk about OpenGL. It was like a mystic religion, and those who really knew it were not exactly chatty about it; unless you understand finite state machines and know your way around linear algebra, it's unlikely you'll grasp it just by reading some books or tutorials.

    2) GL_SELECT worked, and it worked in Blender; now it is slow. Blender is a FOSS project, and like all other FOSS projects it tends to be somewhat slow at adapting working code to new technology. It's more interesting to implement new stuff than to maintain (or constantly adapt) old stuff that actually works, just not at peak performance.

    3) The issue might be written in red capital letters on the whiteboard at the BF, but there's simply no one to do the job. OpenGL is rather important for Blender, and overhauling it requires a lot of knowledge about the "few" lines of Blender code that exist - and then changing *everything*.

    But I agree, it's an issue that has to be addressed. As far as I know, one of the next planned projects is overhauling the scenegraph to enable faster animation playback; maybe this is on the todo list somewhere alongside it.
    My superpower? Common sense. It seems so rare these days, it has to be a superpower..
    "Computers are like Old Testament gods; lots of rules and no mercy.” - Joseph Campbell



  3. #3
    Yup... GL_SELECT and GL_FEEDBACK were officially deprecated as of OpenGL 3.0 and removed from hardware (Radeon) since the X800 cards. That is, Radeon cards since that era do it IN SOFTWARE using the CPU. The NVIDIA guys removed it in the Fermi architecture and now do it in software too; it is only supported (as deprecated) in the Quadro line of cards.

    The workaround in the R600 drivers is what applications should be doing in software... As usual, that requires an OpenGL guru, and these are pretty hard to find, especially in the open-source world...



  4. #4
    Member ideasman42's Avatar
    Join Date
    Mar 2004
    Location
    Australia
    Posts
    5,331
    Hi, a few points.

    - Optimizing select with OpenCL/CUDA is all well and good, but the geometry data still needs to be moved onto the card and some acceleration structure built (which is normally the slow part) before you can do a ray cast. This also uses memory on the card, unless you copy and throw it away every time. If you imagine playing back an animation of a character, every redraw invalidates the ray tree - so for deforming objects this probably wouldn't work well, unless you have some very cleverly written 3D tool where the data lives on the graphics card.

    It could probably work to have some GPU acceleration structure which is used for multiple purposes - not just selection, else it would waste memory - and to lazily initialize it so it doesn't kill performance when deforming meshes, for example.


    Had a think about how I'd go about solving this:
    - not use OpenGL
    - 1st pass: test against object bounding boxes
    - 2nd pass: go over the geometry, which could easily be threaded (one thread per object);
    if the ray-tree structure Blender keeps on objects is initialized, use it - otherwise just do a 2D projection and z-depth tests.

    For selecting what's under the mouse this should be fairly easy/fast; border/lasso select would need some more work, though I think it's still possible.



    Also, Ctrl+RMB in object mode selects an object from its center point, which skips the z-buffer slowness - no good in edit mode of course :S.
    Podcast * dotfiles * My Wiki * Blender/Stackexchange
    ideasman42<at>gmail.com



  5. #5
    Member Psy-Fi's Avatar
    Join Date
    May 2011
    Location
    Frankfurt, Germany
    Posts
    1,033
    One thing: OpenGL occlusion queries. They can replace the current way of doing things with minimal changes, and they are hardware accelerated. They require a somewhat recent GPU, but not to the point where a 7-year-old PC doesn't support them. The only thing I am not sure about is whether issuing many of them is good practice. I will post a patch here soon so people can test, if Campbell doesn't do it first. I will expect some testing though. In theory, selection should be about as fast as drawing a frame, maybe a little slower.



  6. #6
    Member ideasman42's Avatar
    Join Date
    Mar 2004
    Location
    Australia
    Posts
    5,331
    Hey Psy-Fi, I was about to reply or edit my own post.

    The drawback of software/ray-cast based select is that you have to guess at what OpenGL does.

    Advantages of using OpenGL select:
    - When displaying textures, areas with alpha 0 don't get selected.
    - Non-geometry - names, arrows, lamps etc. - can be used for selection.
    - Clipped-out parts of the view automatically don't get selected (Alt+B clipping, for example).

    Advantages of software select:
    - Predictable: runs the same on all systems, no driver-support CRAP! We had problems for years, and I bet with a different method we'd still run into driver compatibility problems - though for occlusion queries I can't say, never touched them.
    - We have control over the methods used - we could add fancy options which wouldn't be possible with OpenGL.
    - Much nicer for macro record/playback, since you can play back an action at any screen size and always get the same results - you could even run headless with no GPU and do editing operations.
    - Geometric accuracy: with OpenGL, if a face is smaller than a pixel it won't get selected, for example; ray-trace selection avoids this.

    - Psy-Fi, I'm going to keep on with my current projects for now, so feel free to write a patch using occlusion buffers. It's most likely a lot less work than full ray-cast support, which would need to specifically account for lattice, armature, lamp and empty draw types - that could end up being a pain, since we would need to make sure ray cast and draw have matching representations of each object type. Meshes/curves/metaballs etc. should be simple enough, however.



  7. #7
    Member Herbert123's Avatar
    Join Date
    Apr 2009
    Location
    Vancouver
    Posts
    1,383
    Chiming in to say "thank you" from the bottom of my heart - I started this thread expecting no reaction, and was about to just... give up. Give up on Blender. But you guys gave me some hope. Since I can't be of use with OpenGL programming, let me know if a donation would be helpful.



  8. #8
    Member Psy-Fi's Avatar
    Join Date
    May 2011
    Location
    Frankfurt, Germany
    Posts
    1,033
    Hey Campbell, sure, I agree that ray casting would be predictable and better in some respects. I would do it if only I had some proficiency with ray casting in general (I may have to do it some day to earn some XP and levels), but not now, I'm afraid - between studies and other stuff there's not really time for that. Well then, I shall try it, and maybe add some option under System like "GPU-based selection" for testing it. Herbert123, if you are willing to test, please let me know your system so I can post a build on graphicall.org.



  9. #9
    Member Herbert123's Avatar
    Join Date
    Apr 2009
    Location
    Vancouver
    Posts
    1,383
    @Psy-Fi: Thanks - my system: Core i7 920 @ 3.6GHz, 12GB, Win7, ATI 5870, Catalyst v11.8, OpenGL v6.14.10.11005, driver packaging v8.881.



  10. #10
    Member Psy-Fi's Avatar
    Join Date
    May 2011
    Location
    Frankfurt, Germany
    Posts
    1,033
    OK, working on it - as soon as I am finished I'll post a link.



  11. #11
    Member Psy-Fi's Avatar
    Join Date
    May 2011
    Location
    Frankfurt, Germany
    Posts
    1,033
    Herbert123, the first prototype is ready. It doesn't work for armatures in edit mode right now, but everything else should work fine. Please load a heavy scene and try selecting stuff. The ideal is that selecting should take no longer than navigating the scene. The build is at

    graphicall.org/747/

    To enable the functionality, make sure "GPU selection support" under User Preferences -> System is checked. If the option is grayed out, your GPU does not support the feature - though your system is sure to support it.

    Campbell: I have discovered that selection buffers are widely used, with lots of hacks to support armature selection too. Solving this properly may require a selection-buffer-like interface that uses occlusion queries behind the scenes, but first I'd like to see results from testers - whether this is indeed faster, whether it crashes or hangs, etc. - and study the architecture some more.
    Last edited by Psy-Fi; 03-Dec-11 at 08:55.



  12. #12
    Member OL77's Avatar
    Join Date
    May 2011
    Location
    I'll tell you if you tell me.
    Posts
    1,469
    Can you share the patch? Some of us don't use Windows.



  13. #13
    Member Psy-Fi's Avatar
    Join Date
    May 2011
    Location
    Frankfurt, Germany
    Posts
    1,033



  14. #14
    Donating Member arexma's Avatar
    Join Date
    Jul 2008
    Location
    Austria
    Posts
    4,814
    Amazing stuff - I'll get to test it as soon as I am home.
    This thread is great, as is the devs' participation in picking it up. I guess it's because this thread has a certain level of professionalism to it, as opposed to "devs crank up the clicky-dicky and add the new feature I saw on youtube!!!111".

    @Psy-Fi: Basic raycasting really is a walk in the park when it comes to intersection testing; you should be able to read into it within a day - less if you're up to speed with vector math/linear algebra.

    Theoretically it should not be necessary to run ray-cast selection solely software-side. It should be possible to offer hardware-accelerated selection via raycasting, since the raycasting + intersection testing could be written as a GLSL shader. In the demoscene there are plenty of prods that do raytracing in shaders; it should be quite doable and fast if you "just" need to shoot one ray through the mouse position or a selection area and check for intersections with objects - no shading or lighting required.
    I don't know, though, whether dragging the scene data into device memory wouldn't make it a futile attempt, so that in the end it would be just as "slow" as pure software intersection.

    I wish I had more time, or knew Blender's code better... maybe during the xmas holidays I'll get some time to dig into the code - I'd like to tinker around a bit.
    Anyway, thanks for all efforts so far and to come.



  15. #15
    Donating Member arexma's Avatar
    Join Date
    Jul 2008
    Location
    Austria
    Posts
    4,814
    I just tried your first build.

    My test scene is from an older project: 51 meshes, 7.2 million triangles.

    In the official 2.60, selecting meshes takes a few seconds; going to edit mode and selecting vertices is painful, especially with "C".
    With your build, selecting meshes is more or less instant. One problem, though: it seems to have trouble with depth. Some parts are not selectable until I turn the viewport in random directions; it selects through the obvious object and mostly picks the ground plane, or selects random objects around it. And going into edit mode either crashes Blender (I guess because it's a 32-bit build), or sometimes selecting in edit mode crashes; if I use "C" to select vertices it's laggy but usable compared to the default build.



  16. #16
    Psy-Fi,
    Thanks for the patch, great work
    I've got a Win7.x64 build up on ga for testing. (http://graphicall.org/750)
    Cheers,
    ~Tung



  17. #17
    Member OL77's Avatar
    Join Date
    May 2011
    Location
    I'll tell you if you tell me.
    Posts
    1,469



  18. #18
    Member Psy-Fi's Avatar
    Join Date
    May 2011
    Location
    Frankfurt, Germany
    Posts
    1,033
    Arexma, thanks for testing. I think edit mode uses another method altogether for selection (the color-based back-buffer approach, I believe). The optimization covers object mode, since that is where GL_SELECT was being used. Crashes are not good, so I will definitely have to look at that, and selection always picking the same object is also not great. I will improve the method somewhat, though I fear that selecting the closest object on the first try might not be easy. Can you please state the steps to reproduce the crash and post the .blend that causes it?



  19. #19
    Member OL77's Avatar
    Join Date
    May 2011
    Location
    I'll tell you if you tell me.
    Posts
    1,469
    Crashes or not, it's still an improvement. I tested: an object that took 30 seconds to select (about 800k polys) took barely a second with this patch. It hasn't crashed once, and I've been using it for standard work - and even if it did, having Blender lock up for 30 seconds or more is almost as bad as a crash.



  20. #20
    Donating Member arexma's Avatar
    Join Date
    Jul 2008
    Location
    Austria
    Posts
    4,814
    Unfortunately I can't post the blend - it's full of a client's prototype data worth a two-digit million euro sum; if that data left my machine, I'd better volunteer for the next Mars mission.
    It just happened to be one of the heaviest scenes I've had to use under production conditions.

    It's odd: if your method does not affect edit mode selection, I wonder why your version is noticeably faster there.
    However, I think the crashes happened because your version was a 32-bit build. When going into edit mode I was at around 5GB total memory usage; subtracting the 1.5-2GB Windows 7 uses, I'd say 32-bit Blender simply ran out of memory.

    I'll create a similar scene with 51 meshes / 7.2M tris - just subdivided whatevers - and see if it crashes there too, but like I said, I think it was just an OOM crash.



