Blender project needs C/C++.

Hi everyone.

So, to make my question very clear, let me walk you through it. I'm going to use Blender to simulate my human baby AI. I haven't yet decided whether I'll go with the BGE or with Cycles. I know you're wondering why Cycles, but don't ask me why. My latest way of deciding between the BGE and Cycles is to figure out which of them, or whether both, will allow me to run parallel AI code. Later on in the project, let's say my code gets so big that it needs GPU parallelism. Python is sequential. Uh oh. So how can I have the coders I employ write C/C++ and use it in my Blender simulation? Can I call out to code outside of Blender and return the results to Blender? Or can I use C/C++ IN Blender, since Blender is C/C++ at its core?

If I can't use parallel code, I'll probably go with Cycles so I can just wait for it to render, because the BGE would lag and ruin the physics, etc. And yes, I can make Cycles "work", so long as roughly two things hold true.

This is important because: if neither lets me use C/C++, I may settle on Cycles; if both allow it, I may choose either the BGE or Cycles; if only the BGE allows it, I'll likely choose the BGE; and if only Cycles allows it, I'll likely choose Cycles.

You should have a look at Cython. Cython lets you call C/C++ code from Python inside Blender. I'm not sure whether you can run the C++ routines in parallel that way, but I'm no expert on the matter, so you should have a look yourself.
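To make the idea concrete, here is a minimal sketch using Python's built-in ctypes module instead of Cython (the library name `libai.so` and the `step_simulation` function are made up for illustration); you would run this from Blender's Python console or a script once your coders have built the shared library:

```python
import ctypes

# Hypothetical shared library built by your C/C++ coders, e.g.:
#   gcc -shared -fPIC -O2 -o libai.so ai.c
lib = ctypes.CDLL("./libai.so")

# Hypothetical C function:  double step_simulation(double dt);
lib.step_simulation.argtypes = [ctypes.c_double]
lib.step_simulation.restype = ctypes.c_double

# Call into compiled C code from Blender's embedded Python.
result = lib.step_simulation(1.0 / 60.0)
print("one AI step returned:", result)
```

Cython works the same way in spirit, but compiles a .pyx file into an extension module, which usually gives nicer syntax and better performance for tight loops.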

Python code is not sequential

OK, let's use the proper terminology here:

Concurrency and parallelism

Concurrency is basically the ability to make progress on several tasks over the same period of time. Python can do this via its threading module. Concurrent code still executes one instruction at a time, but it interleaves instructions from the different tasks, so they all advance together.
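A tiny sketch of that interleaving with the threading module (the worker function is just an illustration):

```python
import threading

def worker(name, steps):
    # Both threads share one interpreter, so only one of them runs
    # Python code at any instant, but their output interleaves.
    for i in range(steps):
        print(f"{name} step {i}")

t1 = threading.Thread(target=worker, args=("task-A", 3))
t2 = threading.Thread(target=worker, args=("task-B", 3))
t1.start(); t2.start()
t1.join(); t2.join()
```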

Parallelism, on the other hand, gives you the ability not only to have multiple tasks in flight but also to execute multiple instructions at literally the same time. Python can do this using the multiprocessing module, which essentially starts multiple Python interpreters as different processes. When you execute an application (start it by double-clicking, etc.), the OS assigns it a process, and that process gets scheduled onto a CPU core. The multiprocessing module uses an API very similar to the threading module, which makes communicating with and controlling those processes relatively easy.
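A minimal multiprocessing sketch along those lines (the heavy() function is a stand-in for real CPU-bound work):

```python
from multiprocessing import Pool

def heavy(n):
    # Stand-in for CPU-bound work; each call runs in its own process
    # and can therefore occupy its own CPU core.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(heavy, [10**6] * 4)
    print(results)
```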

Parallelism in Python does not stop there. Python comes with a huge array of libraries, because Python is fundamentally a scripting language for C/C++, and C/C++ is what the vast majority of parallelism APIs are written in for performance reasons.
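NumPy is the classic example of such a library (assuming it is available in your Blender's Python; recent builds bundle it, but check yours): the matrix multiply below is dispatched to compiled C/BLAS code, and depending on how that BLAS was built it may already spread the work across several cores.

```python
import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)

# The numeric work happens in compiled BLAS code, not in the
# Python interpreter; Python only orchestrates the call.
c = a @ b
print(c.shape)
```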

This is also the reason why you won't see many people use pure Python for parallel coding. No, it's not because of the GIL: when it comes to high performance, you will drop down to C/C++ no matter what language you start from.

GPU parallelism can be even faster, but it is far less versatile than CPU parallelism, which is why it's preferred only for very specific kinds of tasks. In that case you would be using CUDA or OpenCL, and again, Python can get access to those as well.
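As a sketch of what that can look like from Python, here is a tiny PyCUDA example (PyCUDA is a third-party package that needs an NVIDIA GPU and the CUDA toolkit; the kernel is purely illustrative):

```python
import numpy as np
import pycuda.autoinit              # sets up a CUDA context
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# Illustrative kernel: scale every element of the array on the GPU,
# one thread per element.
mod = SourceModule("""
__global__ void scale(float *a, float factor)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    a[idx] *= factor;
}
""")
scale = mod.get_function("scale")

a = np.ones(256, dtype=np.float32)
scale(drv.InOut(a), np.float32(2.0), block=(256, 1, 1), grid=(1, 1))
print(a[:5])  # -> [2. 2. 2. 2. 2.]
```

PyOpenCL offers a very similar workflow for non-NVIDIA hardware.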

Usually, people who don't understand how Python works obsess over the GIL, but the GIL is rarely a problem when it comes to high performance. Because Python's dynamism comes with a high performance cost, it's usual to write the hot code in C/C++ and wrap it back into Python.

Contrary to what the same people may believe, there is nothing inherently slow about Python either. If you try to make C/C++ just as dynamic, you will be kissing your performance goodbye, which is why, for example, C++ programmers are careful with the use of templates and smart pointers, features that try to emulate some of Python's dynamic behaviour.

Last but not least, I have no clue what Cycles has to do with AI or with the BGE. In any case, there is no simple answer to these matters; if you decide to code in C/C++, you can implement whatever solution you wish.

If you want to run a real-time simulation with high precision, I would say use Unreal Engine; it excels at real-time work and it's far more powerful and efficient than the BGE.