Tips for Coding Scalable Addons

I have made a couple of addons for the Blender Market, most notably the Shot Matcher. I have updated that addon almost a dozen times, and as I’ve applied coding techniques learned in college and on my own, I thought I’d share some tips on writing addons in a maintainable way (and Blenderartists felt like the best place to post them). Keep in mind, this is not a “how-to” or a beginner’s look at Python - just techniques that make code easier to write in the long run, while keeping it structured and optimized.

And any other advanced developers, please share things you have learned as well!

Code Design and Structure

Honestly, instead of lecturing you for a page, I’ll point you to this website: Refactoring Guru. It covers refactoring and several design patterns, using various programming techniques to make code easier to manage and maintain. It is better to prevent than to repair, so be sure to plan how you will write your code. This will save you time down the road and make feature updates easier.

Managing Multiple Files (or multiple folders)

If your addon is small enough, putting all your code into one text file is appropriate. However, for a medium-sized or bigger addon, you should separate it into files. If there are lots of files, separate those files into folders!

As an organizing principle, try to have every file and folder do only one thing. For example, instead of one file with multiple operator classes for exporting different formats, make a file for each operator and put them all in an “export” folder. If needed, put any common dependencies (such as shared functions) in their own file and import them into the others.
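For illustration, a layout following this principle might look like the tree below (the names are hypothetical, not from any particular addon):

my_addon/
    __init__.py
    export/
        __init__.py
        export_obj.py
        export_fbx.py
    utils/
        __init__.py
        file_helpers.py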

Even with an organized addon, (un)registering may seem like a headache. Thankfully, Python can help with that. For multiple files in one folder, you can put your classes into a list and loop over it to register them:

import bpy

# list classes in the order they need to be registered
ordered_classes = [MyOperatorClass, MyPanelClass]

def register():
    for cls in ordered_classes:
        bpy.utils.register_class(cls)

def unregister():
    # unregister in reverse order to respect dependencies
    for cls in reversed(ordered_classes):
        bpy.utils.unregister_class(cls)

For multiple folders, it is a similar idea. Take a look at Jacques Lucke’s code on importing multiple folders (if you use it, be kind and give him credit). Also make sure to include an “__init__.py” file in each folder, even if it is empty - Python needs it to recognize the folder as a package.
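If you prefer to roll your own, here is a minimal sketch of that idea, assuming each submodule exposes its own classes list; the module names are made up, and Jacques Lucke’s version is more robust:

import importlib
import bpy

# hypothetical submodules of this addon package
module_names = ["export.export_obj", "export.export_fbx", "ui.panels"]
modules = [importlib.import_module("." + name, __package__) for name in module_names]

def register():
    for mod in modules:
        for cls in getattr(mod, "classes", []):
            bpy.utils.register_class(cls)

def unregister():
    for mod in reversed(modules):
        for cls in reversed(getattr(mod, "classes", [])):
            bpy.utils.unregister_class(cls)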

Reusing Functions for Similar Processes

As you expand your addon, you may notice code duplicated throughout the files. Maybe a few of your operators use the same process (or a similar one with minor differences). Maybe a few panels show information the same way. I noticed it with some of my UI panels, which displayed the same PropertyGroup of settings in the same format. I hated having the same code in two places. My answer? Put it all into a function. If anything is unique, pass it in as a function argument. I did this in my Shot Matcher addon:

#panel_utils.py

def draw_panel(itself, context):
    layout = itself.layout

    layout.prop(context.scene, 'layer_context', expand=True)
    #...

And then my panel classes are just this:

import bpy
from .panel_utils import draw_panel

class SM_PT_image_analyzer(bpy.types.Panel):
    bl_space_type = 'IMAGE_EDITOR'
    bl_label = "Shot Matcher"
    bl_category = "Shot Matcher"
    bl_region_type = 'UI'
    
    def draw(self, context):
        draw_panel(self, context)

class SM_PT_video_analyzer(bpy.types.Panel):
    bl_space_type = 'CLIP_EDITOR'
    bl_label = "Shot Matcher"
    bl_category = "Shot Matcher"
    bl_region_type = 'TOOLS'
    
    def draw(self, context):
        draw_panel(self, context)

Now I have the same information for multiple panels, without having duplicate code!

Managing Space Complexity

There are many ways algorithms can take up space: loading images or movie clips into memory, storing colors and presets, or caching simulation results. Some calculations require lots of data. To programmers, the amount of data matters, but so does how that amount grows with the input - that growth is the algorithm’s space complexity.

For example, a hypothetical render of n objects might require n megabytes of memory, or even 10 * n megabytes. Wouldn’t that be nice? A more spatially complex render would require n * n megabytes. Not bad. But what if it were 2 ^ n megabytes? Twenty objects would require a terabyte of memory! That is why space complexity matters: the memory used by an algorithm must scale reasonably with its input.

Granted, this only becomes a worry for huge amounts of data, such as multi-dimensional arrays and media. But other stored or generated data can become unused, unnecessary, or quickly outdated. So keep an eye on variables and storage; make sure they do not grow too big, too fast.

Managing Time Complexity

I can use an example similar to the one above for time complexity. It would be terrible if a render engine, given n objects, rendered in 2 ^ n seconds: twenty objects would take about twelve days to render! Iterating over lots of data and doing heavy calculations can slow down operators, and this adds up, especially for users on low-end laptops. Because Blender usually blocks until an operator finishes, it is important for these processes to run efficiently.

One performance issue with lots of data is searching it. To speed up searches, I have found Blender’s collection properties useful: as long as you know an element’s key, lookup is virtually instantaneous. The only requirement is that each key is unique. Some data in Blender is forced to be unique, such as image and movie clip names - use this to your advantage.
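Here is a minimal sketch of the idea - the property group and names are hypothetical, not the Shot Matcher’s actual data - using an image’s unique name as the key into a collection property:

import bpy

class LayerSettings(bpy.types.PropertyGroup):
    # 'name' is built into PropertyGroup and doubles as the collection key
    factor: bpy.props.FloatProperty(default=1.0)

# registered elsewhere, e.g.:
# bpy.utils.register_class(LayerSettings)
# bpy.types.Scene.my_layers = bpy.props.CollectionProperty(type=LayerSettings)

def get_layer(scene, image):
    # image names are unique in Blender, so lookup by key is near-instant
    layer = scene.my_layers.get(image.name)
    if layer is None:
        layer = scene.my_layers.add()
        layer.name = image.name
    return layer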

Keep the user experience in mind. One expensive process I’ve coded in my Shot Matcher addon is the video analysis, which reads multiple frames’ pixels. I knew that analyzing every frame was unnecessary, and could take several minutes to process. So I gave the user control of that level of complexity by presenting input for the range of frames to iterate over. That way, they choose the number of frames to calculate. In short, a simple way to manage time complexity is to leave it up to the user (but be sure to give helpful default values to guide their decision).
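A rough sketch of that approach, with made-up property names rather than the addon’s real ones, could look like this:

import bpy

class AnalysisSettings(bpy.types.PropertyGroup):
    # sensible defaults guide the user toward a reasonable amount of work
    start_frame: bpy.props.IntProperty(name="Start Frame", default=1, min=1)
    end_frame: bpy.props.IntProperty(name="End Frame", default=50, min=1)
    frame_step: bpy.props.IntProperty(name="Frame Step", default=5, min=1)

# register this class and attach it to the scene as a PointerProperty elsewhere

def frames_to_analyze(settings):
    # the cost now scales with the user's chosen range, not the whole clip
    return range(settings.start_frame, settings.end_frame + 1, settings.frame_step)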

Another method worth considering is trading accuracy for speed. One component of Blender does this very well: Eevee. We tend to think accuracy is always essential, but users only need so much (and how much varies from user to user). If a somewhat less accurate algorithm satisfies the user’s needs, it may be worth leaving it at that.
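For instance (this is just an illustration, not the Shot Matcher’s actual algorithm), sampling a subset of pixels instead of all of them trades a little accuracy for a lot of speed:

import numpy as np

def approximate_value_range(pixels, sample_step=16):
    # 'pixels' is a flat RGBA float sequence, e.g. list(image.pixels)
    rgb = np.asarray(pixels, dtype=np.float32).reshape(-1, 4)[::sample_step, :3]
    return float(rgb.min()), float(rgb.max())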

If the process takes a long time even on fast computers, consider designing it to run on multiple threads or processes. Python has plenty of documentation on how to do this; be sure to read Blender’s notes on threading, and make sure you clean up threads and processes properly.
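A minimal sketch of that, assuming the heavy work is pure computation that never touches bpy data from the workers (Blender’s data is not thread-safe), and keeping in mind that threads only help when the work releases the GIL (NumPy, I/O, etc.):

from concurrent.futures import ThreadPoolExecutor

def analyze_frame(frame_pixels):
    # placeholder for an expensive, bpy-free calculation
    return min(frame_pixels), max(frame_pixels)

def analyze_frames(all_frames):
    # the executor's context manager joins (cleans up) the worker threads
    with ThreadPoolExecutor(max_workers=4) as executor:
        return list(executor.map(analyze_frame, all_frames))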

Code Comments

People get into strangely heated debates over whether code should be commented - from “everything should be commented” to “comments need to be cast out like the default cube”. Regardless of where you sit on that spectrum, your code should still be readable. That means reasonable names for functions and variables, consistent indentation, and a sensible overall structure. Comments should not need to explain what the code is doing. However, comments can explain why the code is doing it. For example, you may be perusing your old code and find a strange bit of arithmetic; you can see the math, but you don’t know why it’s there. A helpful comment could be # this ensures the number is within the valid input range.
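A tiny, made-up illustration of a “why” comment:

def clamp_input(value):
    # clamp to [0, 1] because values outside this range would break the
    # color analysis step that runs later
    return max(0.0, min(1.0, value))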

Again, I’d love to hear others’ thoughts! What workflows do you use to develop addons?


Hey, I remember you mentioning this article on the creator Slack. I always wondered whether you had finished and published it - happy to have stumbled upon it today. Some really great tips. As a self-taught Blender addon developer, I find your article very helpful right now.

Especially the Refactoring Guru link - that’s a huge boon. I just heard about it for the first time; I’ve always wanted a proper resource for maintaining and refactoring medium-sized code bases. This is superb.

Looking forward to more resources and discussions from your side. Beautiful addon too! Keep up the good work!

Regards,
-Sayan.

Hey, Sayan! Thanks for the feedback about the article and addon. Yeah, refactoring is super helpful in making addons easy to maintain, fix, and scale.

No, I haven’t published it anywhere else - apart from LinkedIn, I don’t really have any domains I could post it to. Maybe I’ll petition the BlenderMarket leaders to post it on their site as an article.


Just read your tutorial and I will definitely have a closer look at design patterns. When it comes to reusing code, I have two add-ons that need the same operator. Currently, I have to put the operator in both add-ons in case someone installs just one.

Do you see a way to avoid having the same stuff in different places?

I know how to check if an Add-on is installed, but that doesn’t help me save code.

import bpy
import addon_utils

if addon_utils.check("GLBTextureTools")[1]:
    save_preview_lightmap_setting = bpy.context.scene.texture_settings.preview_lightmap
    bpy.ops.object.preview_bake_texture(connect=False)
    bpy.ops.object.preview_lightmap(connect=False)
    bpy.ops.object.lightmap_to_emission(connect=True)

Hmm, great question :thinking: I’ll assume these two add-ons must be independent, such that you cannot assume a customer always has both. That rules out keeping the operator only in addon A and having B call A’s operator. The only other options I see are:

  • Creating a third addon C with the operator and its dependencies. Have C be installed alongside either A or B (Blender relies on unique addon names, so C won’t be installed twice), and have both A and B check for the operator and call it (see the sketch after this list). If C isn’t there, A or B throws a clear error that addon C must be installed (or ignores it, depending on how essential the operator is). This option is based on the architectural pattern of microservices. From IBM’s definition: “Microservices (or microservices architecture) are a cloud native architectural approach in which a single application is composed of many loosely coupled and independently deployable smaller components, or services.” It works best if addon C can exist by itself, i.e. it has everything it needs without A or B. If C does need to communicate with the other two addons, keep it as loosely coupled as possible. However, if the operator is heavily dependent on the other addons and their data, this option can become less reasonable than the next one:
  • Just keeping the duplicate code. Is that up to the ideal standard? No. But for smaller pieces of code, or code that never changes, it may not be worth the trouble of refactoring.
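Here is a rough sketch of how A or B might guard the call to C’s operator - the addon and operator names are made up:

import bpy
import addon_utils

def call_shared_operator():
    # addon_utils.check(...)[1] is True when the addon is loaded/enabled
    if not addon_utils.check("shared_addon_c")[1]:
        raise RuntimeError("The 'shared_addon_c' addon must be installed and enabled.")
    bpy.ops.object.shared_operation()  # hypothetical operator registered by addon C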

I’d recommend the first option, especially if this is something you’re continuing to develop and scale. It also makes it simpler to update just C’s operator - no need to update A or B. You know your code best.


Thanks for the quick response!
Your solution would mean that someone has to install and update three Add-ons manually. I think that is too much effort, even if I implement the auto-updater.

A more complex way would be some hosted external library (Add-on C) that the other Add-ons fetch from the web. That way I would just need an interface to load the functions or operators, then register them and the rest of the Add-on once the download has finished. It would also be much easier to maintain, because users don’t have to install the library in Blender; I can just push an update and all of my Add-ons will load the newest version.

Do you think this can be accomplished?

For pure Python functions it shouldn’t be too difficult, but I don’t know about all the stuff that needs to be registered or that uses the bpy module.

Joining everything into one Add-on seems like a good solution too, but I need multiple panels in the sidebar to have some separation and enough space :slight_smile:

If you can merge the addons together, that would also be a great solution - I didn’t know whether your marketing and product setup would make sense with that. Even if you sell it as one product, I’d still try to loosen the dependencies between them. Multiple panels is something I’ve been wanting to do as well - as a developer it seems better to just merge them, but as a user I’ve realized separate panels are a better experience.
A web-based fetch will work - just be sure to handle the case where the user can’t get it (no Internet, the hosted library is down, etc.). I’d lean away from this if the product doesn’t already rely on the Internet, but if your addons do, then the user won’t expect anything else.
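Something along these lines, sketched with a placeholder URL and no claim about how the real thing should be structured, keeps the failure case graceful:

import os
import urllib.error
import urllib.request

LIBRARY_URL = "https://example.com/shared_library.py"  # placeholder URL

def fetch_shared_library(target_dir):
    target_path = os.path.join(target_dir, "shared_library.py")
    try:
        urllib.request.urlretrieve(LIBRARY_URL, target_path)
        return target_path
    except (urllib.error.URLError, OSError):
        # offline or host down: let the addon register without the shared parts
        return None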

Wanted to make one more addition, since I have found it so helpful: unit testing. Because the bpy library only works inside a running Blender instance, unit testing addons is very clumsy. However, I came across a Python library that helps make this easier:

nangtani/blender-addon-tester: a test harness to enable pytest hook to allow addons to be tested inside a defined version of blender

It is still a very young library and took a while to understand and set up. However, the reward has been well worth it. Regression testing becomes a matter of clicking a button in my IDE - I immediately know whether my changes break any existing functionality, without even opening Blender myself. It does run Blender without the UI, so any visual changes still need to be validated manually, but it has saved me an immense amount of time.
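For a taste, a regression test that the harness executes inside Blender can be as plain as ordinary pytest code - the addon module name below is hypothetical:

import bpy

def test_addon_registers():
    bpy.ops.preferences.addon_enable(module="my_addon")  # hypothetical module name
    # a panel the addon is expected to register
    assert hasattr(bpy.types, "SM_PT_image_analyzer")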
