multi-core CPUs and python

i noticed recently that python is only using one core at a time. is there a way, maybe special builds, commands or hacks, to make it use all available resources? i don’t know much about how python manages things internally, but maybe some of you do.

thanks in advance. :slight_smile:

interesting question. i was thinking about this too, even though i don’t have a multi-core cpu yet.

it seems like the python “global interpreter lock” (whatever that is?) prevents python from using multiple cores. i am not sure though if i understood this correctly.

maybe this works differently with stackless python or pypy?

You may want to look into Parallel Python.

there is no “switch” to make a programming language or execution environment use more than one processor. The algorithms used have to be parallelizable, and then they have to be parallelized. This is typically a non-trivial task and only pays off in really long-running programs.
Python has threading modules which allow this at the “language” level (well, in the standard library). If a program uses them, more than one processor should be utilized. But it makes programs harder to write and much more error prone.
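A minimal sketch of what the standard threading module looks like in use (modern Python 3 syntax; the worker function and names here are made up for illustration, not from any real script):

```python
import threading

def work(name, results):
    # some CPU-bound busywork standing in for real work
    results[name] = sum(range(1000))

results = {}
threads = [threading.Thread(target=work, args=(f"t{i}", results)) for i in range(2)]
for t in threads:
    t.start()   # both workers now run concurrently
for t in threads:
    t.join()    # wait for both to finish
print(results)
```

Keep in mind that in standard CPython the GIL still serializes bytecode execution, so this gives you concurrency but not automatically a multi-core speedup.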

for more info on threading, read this quick article:

I don’t get where this comes from, but it’s not the first time I have read it. I find it much easier to use a few thread classes and split a program up into parallel parts than to deal with blocking I/O, for example. I tend to use the Qt threading libraries; they are just more complete than the standard ones, and have nice callback and wait functions etc.

Some stuff I have written would be hideous to use if I hadn’t parallelized it, and I can’t think how I could do some things if I hadn’t.

I was just thinking about how one of my importers could use threading, and realized that it would be quite trivial:
At the moment it parses the source file, poking values into a class object until that ‘section’ of the source is complete; then it runs the import method on the class, which writes itself to the Blender scene. Just make the class object a subclass of the threading library of your choice, and instead of waiting for the import method to finish, start it running on its own and go back to parsing the source file. I’m wondering if this ‘harder to write’ thing is because there are so many people still writing essentially procedural programs, because OOP-based stuff seems easy to thread like this.
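The pattern described above might look roughly like this in modern Python 3 (class and method names are invented for the sketch; a simple `sorted` call stands in for the real write-to-scene work):

```python
import threading

class Section(threading.Thread):
    """One parsed 'section' of the source file that imports itself on its own thread."""
    def __init__(self, data):
        super().__init__()
        self.data = data
        self.result = None

    def run(self):
        # stand-in for the real "write myself to the Blender scene" method
        self.result = sorted(self.data)

def parse_and_import(sections):
    threads = []
    for raw in sections:        # the "parsing" loop
        t = Section(raw)
        t.start()               # kick off the import, go straight back to parsing
        threads.append(t)
    for t in threads:
        t.join()                # wait for all imports to finish at the end
    return [t.result for t in threads]

print(parse_and_import([[3, 1, 2], [5, 4]]))
```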

Another thing I have noticed is that most of the tutorials I have seen on python and threading don’t deal with inter-thread communication, and most of the time I fail to see a real-world use for any of the example code they use.

I don’t get where this comes from, but it’s not the first time I have read it.

Have you seen a badly multithreaded app written by someone who doesn’t quite get what threads are for? It’s almost painful!

If you think about it properly, threads are fine and easy. If you don’t, it can be a real mess.

Oh right, things instantly become much easier when you ignore data dependencies and race conditions…unfortunately, they then stop working too :wink:

Since threads may access the same memory location concurrently, you have to prevent exactly that in many places to avoid undefined behaviour, and the more extensively you use critical sections, the more parallelism you remove. This has little to do with OOP in my opinion…OOP just makes the errors and overhead less obvious…
If you miss a race condition, your code may appear to work fine 999 times out of 1000, but totally screw up the other time. And finding the cause of that is the “fun” of multithreaded programming…
For two trivial loops correctness may be easy to prove, but definitely not for non-trivial asynchronous processes…
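The critical-section point can be illustrated with a shared counter; this is a generic sketch (names invented), not code from anyone’s script. With the lock, the count is always exact; remove it and concurrent `counter += 1` updates can silently be lost:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # the critical section protecting the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```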

And don’t forget that even i = i + 1 doesn’t have to be an atomic operation; a thread switch (if # of threads > # of cpus) or concurrent access to i (otherwise) can happen in the middle of it…
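That i = i + 1 is not one atomic step can actually be seen by disassembling it with the standard dis module: CPython compiles it to several separate load/add/store instructions, and a thread switch can in principle happen between any two of them (exact instruction names vary by CPython version):

```python
import dis

def increment(i):
    i = i + 1
    return i

# collect the bytecode instruction names for the function body
ops = [ins.opname for ins in dis.Bytecode(increment)]
print(ops)  # several separate instructions, not one atomic step
```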

A lot of the good old proven algorithms aren’t so good for parallelization. There are still a lot of academic degrees to be earned in the field of reinventing them.

Python has threading modules which allow this at the “language” level (well, in the standard library). If a program uses them, more than one processor should be utilized.
are you sure about this? if i understand correctly, the python global interpreter lock makes using more than one core impossible with standard python.

No. That’s why I wrote “should”. I don’t know about this specific point of python (I am using it mainly via cgi). But this is a problem at the implementation level and not the language level, right?

>>> def dosomething(*args):
...     while 1:
...         x = 5
...         x = x * x
...
>>> import thread
>>> thread.start_new_thread(dosomething, (0,))
>>> thread.start_new_thread(dosomething, (0,))

Using Windows and Python 2.5.1, this shows significant use of both cores.

Q. Multi-core processors will be standard even on laptops in the near future. Is Python 3.0 going to get rid of the GIL (Global Interpreter Lock) in order to be able to benefit from this feature?

A. No. We’re not changing the CPython implementation much. Getting rid of the GIL would be a massive rewrite of the interpreter because all the internal data structures (and the reference counting operations) would have to be made thread-safe. This was tried once before (in the late '90s by Greg Stein) and the resulting interpreter ran twice as slow. If you have multiple CPUs and you want to use them all, fork off as many processes as you have CPUs. (You write your web application to be easily scalable, don’t you? So if you can run several copies on different boxes it should be trivial to run several copies on the same box as well.) If you really want “true” multi-threading for Python, use Jython or IronPython; the JVM and the CLR do support multi-CPU threads. Of course, be prepared for deadlocks, live-locks, race conditions, and all the other nuisances that come with multi-threaded code.

so i guess simply using python threads isn’t supposed to help much? by “forking off processes” does he mean running a separate interpreter on each core?

Sounds exactly like that. It may work for web applications that use different servers for different tasks, just using different ports. With all the problems of splitting a web application across two servers, which are basically the same problems mentioned in the answer above for multithreading.
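For what it’s worth, Guido’s “fork off as many processes” advice later got direct standard-library support: the multiprocessing module (added in Python 2.6, i.e. after the 2.5.1 mentioned above) runs a separate interpreter per worker process and so sidesteps the GIL. A minimal sketch in modern Python (the function names are illustrative):

```python
from multiprocessing import Pool

def square(x):
    return x * x

def run_pool():
    # four worker processes, each with its own interpreter and its own GIL
    with Pool(4) as pool:
        return pool.map(square, range(10))

if __name__ == "__main__":          # guard required on platforms that spawn processes
    print(run_pool())
```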

It is an implementation issue. Using IronPython (Python on .net) or Jython (Python on Java Virtual Machine) may be a solution for standalone apps, but as soon as it is embedded it does not work.

Btw, the problems mentioned arise from using multiple threads, which is possible in Python, not from multiple threads running on different cores. The latter adds complexity at the level of implementing the interpreter, not when writing applications.

Python just lost something for me…

What kind of tasks in Blender require parallel execution on multiple cores? Are there some scripts that take a significant amount of time to complete? The most performance-hungry operation is rendering, and it’s already multithreaded…

Do you mean Blender itself or Python scripts? For Blender, there are some parts that could use every bit of power (fluids, particles & physics).

And even some scripts could use this, especially scripts generating things procedurally. At the moment I am thinking of A.N.T.

But, as said before, that means a rewrite of the underlying algorithms so they can use multiple cpus, not just switching something on.

IMHO, making Blender run on multiple cores is a better strategy than requiring a rewrite of the Python interpreter. If you ask for parallel execution of scripts, Blender has to provide multithreaded access to its interfaces anyway (no matter whether Python has a GIL or not). But then we can easily add an option to run multiple Python interpreters in Blender to allow running scripts in parallel.

I think that Python should remain simple, because it’s just a scripting language aimed at (relatively) simple tasks. Run multiple Python interpreters if you need to.

That’s not how it works. If a script like A.N.T. is supposed to use the full power of a computer, it has to have access to it; you can’t just run the script twice. And it’s just a sloppy part of the Python interpreter’s implementation that prevents this; there are a lot of languages, even scripting languages, that support this quite easily. Even Python does, when you don’t use the standard implementation…

Any script has to be rewritten to run in parallel; it doesn’t matter whether it runs in a multithreaded interpreter or on multiple interpreters. The Blender Python interfaces have to be implemented as multithreaded in both cases too.

Running a script across different interpreters is MUCH more complicated than running different threads. Data exchange? Method calling? Singletons? Locks? Thread groups? Who starts the different interpreters at the same time? This multiplies the problems of thread programming. And then you have two instances of Python running instead of one. Doesn’t sound like a good idea.
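In modern Python, the data-exchange part of that list is what multiprocessing’s Queue addresses for separate interpreter processes; a hedged sketch (names invented, a sentinel value marks the end of the stream):

```python
from multiprocessing import Process, Queue

def producer(q):
    # runs in a child process: its own interpreter, talking only via the queue
    for i in range(5):
        q.put(i * i)
    q.put(None)                     # sentinel: "no more data"

def consume():
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    items = []
    while True:
        item = q.get()              # blocks until the child sends something
        if item is None:
            break
        items.append(item)
    p.join()
    return items

if __name__ == "__main__":
    print(consume())
```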

If a script is called that starts different threads, it doesn’t matter how the Blender/Python interface works. In fact, it works right now! You can start threads as you want. The only thing that doesn’t work now is Python itself, which doesn’t allow using more than one core/cpu; that is an interpreter implementation issue, as there are other Python implementations where this works.

If you meant that the blender python api has to be really threadsafe - yes, that’s right.

I’ll try to make a simple test running a secondary Python interpreter in Blender to see how easy it is and how much performance can be gained. I believe I’ll have some results after this weekend.