Blender's UI library?

Hi, I really like the look of Blender's UI: the buttons, panels, widgets, etc. I know this is internal to Blender, and as I understand it, it's all rendered with OpenGL. I was wondering whether the UI was written so that it could easily be extracted from the project and used by other projects. Would it be difficult to use for my own graphics app? Is there a name for the UI, or is it just embedded deep in Blender?

Thanks.

I looked into this very thing once. From what I found, it's pretty embedded. Extracting it would probably still be easier than coding a clone on your own, but it's not as simple as including a .h file and linking to a library.

Blender's UI library is pretty embedded in Blender itself. However, you could extract certain components by doing the following:

  1. Most of the widget (buttons/sliders/panels) handling/drawing is in:
    /cvsroot/blender/source/blender/src/ — interface.c, interface_draw.c, interface_icons.c, interface_panels.c
  2. Structs and so forth for these can be found in the includes folder
  3. As for the window/spaces/headers stuff… it’s scattered all over the place and gets quite messy.
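To make the structure of those files a bit more concrete: Blender's 2.4x UI rebuilds its widgets on each redraw rather than keeping a persistent widget tree. Here is a rough pure-Python sketch of that build-a-block, then draw-and-dispatch pattern. All names here (Block, def_but, etc.) are illustrative analogues, not Blender's actual C API.

```python
# Illustrative sketch of a build-then-draw widget block, loosely modeled on
# the pattern in interface.c. Names are hypothetical, not Blender's real API.

class Button:
    def __init__(self, label, x, y, w, h, callback):
        self.label, self.callback = label, callback
        self.x, self.y, self.w, self.h = x, y, w, h

    def hit(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

class Block:
    """A block of widgets, rebuilt from scratch on every redraw."""
    def __init__(self):
        self.widgets = []

    def def_but(self, label, x, y, w, h, callback):
        self.widgets.append(Button(label, x, y, w, h, callback))

    def draw(self):
        # Real code would issue OpenGL calls here; we just collect labels.
        return [w.label for w in self.widgets]

    def handle_click(self, px, py):
        for w in self.widgets:
            if w.hit(px, py):
                w.callback()
                return True
        return False

clicked = []
block = Block()
block.def_but("Render", 10, 10, 80, 20, lambda: clicked.append("render"))
block.def_but("Cancel", 100, 10, 80, 20, lambda: clicked.append("cancel"))
block.draw()
block.handle_click(15, 15)   # lands inside the "Render" button
print(clicked)               # ['render']
```

The point of the sketch is that widget definition, drawing, and event handling all live together in the same code path, which is part of why the real C files are hard to pull out cleanly.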

Aligorith

Well, I asked that myself once. My question now is: what will it be like after the UI code refactor for Blender 2.5?

Tor_Gott: we’ll just have to wait and see :wink:

Aligorith

I don't know, but I don't imagine that much of the GUI code (buttons/screens/etc.) will change very much; the plans so far have been more about an API to access tools. And even if the GUI code does get changed, the aim of modifying it will be to improve Blender, not necessarily to make it more library-like. I think any efforts to make it more separated are going to have to come from the people who want it to be separated. Blender developers are interested in working on Blender, not other apps.

It seems there are so many other toolkits better suited for this kind of use: GTK, wxWidgets, FOX, and FLTK, to name a few.

Somebody was working on a separate Blender UI toolkit, “BlenderX”, but I don't think they finished it.

Hello!

Sorry for bumping this thread. First of all, thanks for 2.43! It is really good! (Though I do have some bugs to report on Windows, sculpt-related and iconv-related. But that is another question.)

This is a reply to Broken, whose post put my brain in action :smiley:

I was just wondering how (not when) the UI refactor is planned. What are the main constraints that will guide this refactoring? What problems should it solve? What possibilities should it open?

In my spare time (sometime between putting my head down on my pillow and actually falling asleep) I give some thought to the “improving the UI” question. I focus on the extensibility aspect of the UI and of Blender. I see two possibilities (there are probably a lot more):

  • A two-layer architecture: Blender's core would take care of starting up and setting up an environment for Python GUI components to run in, i.e. Blender handles the screen areas, and Python would draw what's in them (through BGL or an extended Draw module that mimics Blender's current interface). Blender would give the Python layer reasonable access to the underlying structures to ensure that the screen and the data stay in sync. This is a sort of extension of the SpaceHandlers concept.
    • Advantages:
      • Easy integration of external modules into the UI (well, I guess). They get registered on startup and appear in the window type selection menu. The new modules can themselves link to compiled code to do heavy tasks that are not already included in the Blender source code.
      • Easy extension of existing module UIs, since they would be coded in Python.
    • Disadvantages:
      • A lot of work needed to rewrite the whole UI in Python, to expand the existing Draw module into a full-blown BlenderUI toolkit (or alternatively, to write a BGL module that mimics the BlenderUI), and probably a lot more, such as exposing the entry points of every operation to Python (doExtrusion(), unwrapMePlease(unwrapMethod), etc.), though that could be addressed by the “API to access tools” Broken talks about.
      • Slowdown? I'm not sure, since it's OpenGL after all…
  • A modularised Blender (think .so or .dll). This would mean revisiting the data structures, I guess. I don't know enough about this, so I won't write much about it, although I read somewhere that this would be a huge task and that Blender isn't at all structured in a way that can be easily modularised.

That was for extensibility… For usability I do have some more ideas, but they are not part of the question here. I'm extremely curious about how the UI refactor is planned, because I will be doing some coding in the next six months (Java Swing, and probably Blender, XML, X3D and JavaScript), and I'm just very curious to learn how other people tackle their problems. By the way, feel free to comment on the concepts above! They're not meant to be implemented, but to be criticized, so I can learn from that!
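The “registered on startup and appear in the window type selection menu” idea from the first option can be sketched in a few lines of plain Python. Everything here (register_space, WINDOW_TYPES, the space names) is hypothetical, invented for illustration; it is not Blender's Draw module or any real Blender API.

```python
# Hypothetical sketch of the registration idea: GUI modules register a draw
# function at startup, and the host lists them in a "window type" menu.

WINDOW_TYPES = {}

def register_space(name):
    """Decorator that adds a Python-defined space to the selection menu."""
    def wrap(draw_fn):
        WINDOW_TYPES[name] = draw_fn
        return draw_fn
    return wrap

@register_space("UV Editor")
def draw_uv_editor(area):
    return f"drawing UV editor in {area['w']}x{area['h']} area"

@register_space("Node Graph")
def draw_node_graph(area):
    return f"drawing node graph in {area['w']}x{area['h']} area"

# The host (Blender's core, in this scheme) owns the screen areas and
# delegates drawing to whichever registered module the user selected.
area = {"w": 640, "h": 480}
print(sorted(WINDOW_TYPES))             # ['Node Graph', 'UV Editor']
print(WINDOW_TYPES["UV Editor"](area))  # drawing UV editor in 640x480 area
```

The design point is that the core never needs to know the modules in advance: anything that registers itself at startup shows up in the menu, which is exactly the extensibility property the two-layer proposal is after.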

Thanks
Dani

I would vote for the modular approach, as you can do a bit more with it. While I would love to see the interface pulled out as a separate library, I think the other big advantage would be to clean up Blender itself by modularizing a lot of the other functions.
Several people have noted that, with all the (awesome) new features being added, Blender is slowly heading towards bloatware. A person doing car models doesn't need a motion capture utility built in, and a person doing low-poly game characters doesn't need hair simulations. By modularizing Blender, people can choose which tools to include. It also means that people can add new functionality by dropping a new DLL into their plugin directory, rather than having to patch and recompile the whole program.
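The drop-a-plugin-in-a-directory idea above can be sketched as follows. A C host would use dlopen()/LoadLibrary() on shared libraries; here the same shape is shown with Python's importlib so it is runnable. The plugin filename, its register() convention, and the "hair simulation" tool are all made up for the example.

```python
# Hedged sketch of a plugin directory: every file dropped in the directory
# contributes its tools, with no changes to the host program.
import importlib.util
import os
import tempfile
import textwrap

def load_plugins(plugin_dir):
    """Load every *.py file in plugin_dir and collect its register() result."""
    tools = {}
    for fname in sorted(os.listdir(plugin_dir)):
        if not fname.endswith(".py"):
            continue
        path = os.path.join(plugin_dir, fname)
        spec = importlib.util.spec_from_file_location(fname[:-3], path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        tools.update(mod.register())  # each plugin exposes register()
    return tools

# Demo: write a throwaway "hair simulation" plugin to disk and load it.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "hair_sim.py"), "w") as f:
        f.write(textwrap.dedent("""
            def simulate(strands):
                return f"simulated {strands} strands"
            def register():
                return {"hair_sim": simulate}
        """))
    tools = load_plugins(d)
    print(tools["hair_sim"](500))  # simulated 500 strands
```

Users who never model hair simply would not have hair_sim.py (or the equivalent DLL) in their plugin directory, which is the bloat-reduction argument in a nutshell.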