Multi-threading has an effect, but it doesn't improve performance

Here is my threaded code. It balances the processing load evenly across all four of my cores (compared to the unthreaded code which only ran on one), but the overall processor usage remains at 25%! Am I doing something wrong here, or is this to do with Blender?

As far as I know, Python doesn’t support threads in a performance-gaining sense (no native threading). I think it’s an issue with Python, not Blender.

Interesting concept, but I guess all of your threads still end up running on a single core. It’s a bummer that Python is effectively single-threaded, and that compositing and baking are single-threaded too.

I think the only thing that is actually multi-threaded in Blender is raytracing inside the internal renderer. Even the parsing of the scene seems to be single threaded.

I’m getting a bit mixed up with Ruby here. Python does have native threads, but a little research seems to indicate that something called the global interpreter lock (GIL) prevents more than one thread from executing Python code at once.
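
For example, a little standalone test like this (nothing Blender-specific, the loop size is arbitrary) shows it pretty clearly: the threaded version takes roughly as long as the sequential one for CPU-bound work, because only one thread can run Python code at a time.

import threading, time

def burn():
    # pure CPU work; the GIL means only one thread executes this at a time
    n = 0
    for i in range(5000000):
        n += i

t1 = time.time()
threads = [threading.Thread(target=burn) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print('4 threads:  %.2f' % (time.time() - t1))

t1 = time.time()
for _ in range(4): burn()
print('sequential: %.2f' % (time.time() - t1))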

That really does suck.

I think you can try the multiprocessing module, which spawns real processes and lets you use them much like threads. I am not sure if it works from inside Blender though. You can always try :wink:
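
Something along these lines is how the module is normally used outside of Blender (just a rough sketch; whether the worker processes behave when launched from inside Blender is exactly the part I haven’t verified):

import multiprocessing, hashlib

def work(word):
    # each call runs in its own process, so the GIL is not an issue
    a = word.encode()
    for i in range(100000):
        a = hashlib.md5(a).hexdigest().encode()
    return word, a.decode()

if __name__ == '__main__':
    pool = multiprocessing.Pool(4)
    for word, digest in pool.map(work, ['Blender', 'multithreading', 'test']):
        print(word + ':', digest)
    pool.close()
    pool.join()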

I heard elsewhere that you end up with multiple instances of Blender when you do that. :stuck_out_tongue:

I’ve also experimented with this:

import threading, subprocess, bpy, queue, time

retdict = {}

def runsub(arg):
    # launch a background Blender on the current .blend and run the helper
    # script written below; the child prints repr() of its result to stdout
    params = [bpy.app.binary_path, "-b", bpy.data.filepath, "-P", bpy.data.filepath + '.exec.py', arg]
    out = subprocess.check_output(params).decode()
    retdict[arg] = eval(out)

class ShaThread( threading.Thread ):
    def __init__ (self,q):
        threading.Thread.__init__(self)
        self.q = q
    def run(self):
        # block until a value is available, run it through runsub, mark it done
        runsub(self.q.get())
        self.q.task_done()

print('\n' * 3)

# write a small helper script next to the .blend for the background Blenders to run
f = open(bpy.data.filepath + '.exec.py', 'w')
f.write("""
import sys, hashlib
a = sys.argv[-1].encode()
for i in range(100000):
    a = hashlib.md5(a).hexdigest().encode()
print(repr(a.decode()), end="")
exit()
""")
f.close()

t1 = time.time()
q = queue.Queue()
vals = ['Blender', 'multithreading', 'test', 'by', 'aothms']
# one worker thread per value; each one spawns its own background Blender
for _ in vals:
    t = ShaThread(q)
    t.daemon = True
    t.start()
for v in vals: q.put(v)
q.join()
print('Threaded runs in %.2f' % (time.time() - t1))
for k, v in retdict.items(): print(k + ':', v)

print('\n' * 3)

t1 = time.time()
for v in vals: runsub(v)
print('Sequential runs in %.2f' % (time.time() - t1))
for k, v in retdict.items(): print(k + ':', v)

It works by firing up multiple instances of Blender in background mode, each executing a Python script that is written to disk. Command-line arguments can be passed to that script, and the script writes its result to stdout, which the parent script reads back in. Tedious at best, but it does work :yes: (example is for Blender 2.5.3)

Awesome. :evilgrin: At least it works. It would be really nice if it wasn’t forking Blender for each subprocess and instead just forked the interpreter for each process spawned through Blender. But again… if the interpreter is well integrated, then the Blender that’s actually spawned is just doing the interpreting and may not be too much overhead.

Good to know it’s possible, but I’m not that desperate just yet! :wink:

yeah, I chose to spawn Blenders because that’s an interpreter you can be sure is installed, and this way you get read access to the current scene in all subprocesses. But if you know the install path, spawning regular Pythons would be just as easy and wouldn’t require as much memory.
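
Something like this would be the plain-Python variant (the interpreter and script paths below are just placeholders, and the helper script has to get everything it needs as arguments, since there is no bpy in the child):

import subprocess

python_path = '/usr/bin/python3'   # assumed install path of a stand-alone interpreter
script_path = '/tmp/exec.py'       # same sort of helper script as above

def runsub_plain(arg):
    # run the helper with a plain interpreter instead of a full background Blender;
    # the child cannot see the current scene, so pass it whatever it needs as args
    out = subprocess.check_output([python_path, script_path, arg]).decode()
    return eval(out)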

haha, good to hear that, I also abandoned this idea quite early. But it would still be possible to wrap all this code into a module: get the Python source of a function with inspect.getsource, write it to a file and handle all the hassle transparently.
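
Something in this direction, for example (just an untested sketch, the function name is made up):

import inspect

def write_worker_script(func, path):
    # dump the function's source plus a tiny driver that calls it with the last
    # command line argument and prints the repr() of the result to stdout
    src = inspect.getsource(func)
    with open(path, 'w') as f:
        f.write(src)
        f.write("\nimport sys\nprint(repr(%s(sys.argv[-1])), end='')\n" % func.__name__)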