C-Python Support finally lands

That user was me and I first want to point out that I am not a lawyer.
The argumentation is straightforward. Let’s assume I implement one of the features MartinZ mentioned, e.g. decimation. I implement it completely independently of Blender, without relying on it at all. First, I implement an API for the decimator and release it under a permissive license, e.g. BSD/MIT/… . Then I implement the actual decimator on top of that API, but decide to make it proprietary.
Now you decide that you would like to use it in Blender, and for that you implement a Python addon. Because you use the Blender Python API, you are forced to release the addon under the GPL. Your addon communicates only with the permissively licensed open source API.

So far, neither of us has violated any license. Bundling the proprietary decimator with your addon or with Blender is not allowed, because that would violate the GPL.
Nevertheless, you could let users download the proprietary decimator separately. Technically, the user is then the one violating the license.
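
To make the layering concrete, here is a minimal sketch of the split being described. Everything in it is hypothetical (the module name decimator_api, the library name libdecimator.so) and it is of course not legal advice. The idea is simply that the GPL addon only ever talks to the permissively licensed wrapper, which in turn loads the separately installed proprietary library at runtime:

# decimator_api.py -- hypothetical wrapper, released under MIT/BSD.
# It defines the interface only; the proprietary implementation is
# loaded at runtime if the user installed it separately.
import ctypes

_lib = None

def load(path="libdecimator.so"):
    """Try to load the separately distributed decimator library."""
    global _lib
    try:
        _lib = ctypes.CDLL(path)
    except OSError:
        _lib = None
    return _lib is not None

def decimate(vertices, target_ratio):
    if _lib is None:
        raise RuntimeError("proprietary decimator not installed")
    # a real wrapper would marshal `vertices` into C arrays here
    ...

# addon side (GPL, because it imports bpy):
#   import decimator_api
#   if decimator_api.load():
#       decimator_api.decimate(verts, 0.5)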

As far as I can see (though I am still not a lawyer), C-Python does not change anything regarding the licensing. Once your code touches GPL code, you are forced to release it under the GPL as well. It does not matter whether it is compiled or not.

Great explanation, Dantus. However, there is one point that everyone keeps missing with this scenario: if you wanted to keep your API private (on the decimator side), that is now impossible, since the Python “glue” tying together the Blender API and your decimator API must be under the GPL.

No, I was talking about joahua’s post that links to this comment

the user can do whatever they want without violating the license, as long as it isn’t redistributed.

I consciously made the API for the decimator open and released it under a permissive license. There is no issue with using it in a GPL context, and just because it is used in such a context does not mean that the original code automatically becomes GPL.

As has been mentioned, this doesn’t change anything WRT licensing/plugins.

It simply means if we want some of the addons distributed with Blender to have code moved to C/C++, we can do it.
(we would likely keep the Python implementation around for reference)

An obvious use for this is having faster parsers for formats such as OBJ / PLY / STL,
where there is a lot of room for speed improvements.

I do wonder, though, whether in some admittedly special cases just making use of Python’s multithreading and multiprocessing modules wouldn’t help with performance? I mean scene importers more than individual model importers.
https://www.ploggingdev.com/2017/01/multiprocessing-and-multithreading-in-python-3/

Anyway, I’m definitely excited to see more performant IO operations, no matter how that is achieved. :slight_smile:

Python’s multi-threading isn’t useful for performance unless you’re waiting on IO (typically networking). Multi-processing can be, but then you need to copy data between the processes (in most cases).
Once C/C++ is used, these can multi-thread as any other C/C++ code does.
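
As a standalone illustration of that point (nothing Blender-specific, and the workload is just a stand-in for CPU-bound parsing), threads give no speed-up because of the GIL, while processes do, at the cost of pickling the chunks over to the worker processes:

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def parse_chunk(lines):
    # stand-in for CPU-bound parsing work (e.g. splitting OBJ lines)
    return sum(len(line.split()) for line in lines)

if __name__ == "__main__":
    data = ["v 1.0 2.0 3.0"] * 200_000
    chunks = [data[i::4] for i in range(4)]

    for pool_cls in (ThreadPoolExecutor, ProcessPoolExecutor):
        t = time.time()
        with pool_cls(max_workers=4) as pool:
            results = list(pool.map(parse_chunk, chunks))
        print(pool_cls.__name__, sum(results), time.time() - t)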

We won’t prevent it. It’s just not the first thing to try for speeding up import/export.
Having maintainable code is a big factor in writing format support, so if it’s fast enough, it’s unlikely we would spend more time to eke out an extra few % improvement at the cost of less readable/maintainable code. There may be some exceptions, of course.

Thanks for taking the time and answering.

I have a suspicion that most of the time spent waiting on OBJ export/import is actually IO bound and has nothing to do with Python performance. Sometimes import and export take a really long time in a way that doesn’t make sense to me, even for Python. Let’s say Python dealt only with memory and wrote the entire file in one go, and the same for import. That would be a lot faster than reading and writing the file piecemeal. Please do correct me if I’m wrong.
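
For what it’s worth, this is roughly what I mean by doing it in one go (a sketch with made-up vertex data, not the actual exporter code):

# export: build everything in memory, then write the file in one go
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]  # made-up data
lines = ["v %.6f %.6f %.6f" % v for v in verts]
with open("out.obj", "w") as f:
    f.write("\n".join(lines) + "\n")

# import: read the whole file at once, then parse from memory
with open("out.obj", "r") as f:
    text = f.read()
for line in text.splitlines():
    pass  # parse each line here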

Async/await, executors, etc. are nice for maintaining 60 fps while crunching heavy tasks,

popleft() and deque are also super helpful
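
Roughly the pattern I mean, as a generic sketch (plain Python, not Blender’s modal timers, and the heavy task is just a stand-in): the work runs on an executor thread, results land in a deque, and the main loop drains it with popleft() between ticks.

import time
from collections import deque
from concurrent.futures import ThreadPoolExecutor

results = deque()                       # append/popleft are atomic in CPython
pool = ThreadPoolExecutor(max_workers=1)

def heavy_task(n):
    time.sleep(0.5)                     # stand-in; real CPU work would still fight the GIL
    return n * n

for n in range(4):
    pool.submit(lambda n=n: results.append(heavy_task(n)))

for frame in range(60):                 # pretend main/UI loop at ~60 fps
    while results:
        print("got result", results.popleft())
    time.sleep(1 / 60)

pool.shutdown(wait=True)
while results:                          # drain anything that finished late
    print("got result", results.popleft())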

Strongly doubt this unless you’re reading a file over a network or slow external drive.

IO is buffered by default in Python, so many reads/writes won’t cause significant performance loss.

I just tested importing a 23 MB OBJ: it took 10.5 seconds, while reading the same file line by line took 0.2 seconds.

eg:


import time
t = time.time()
# time how long it takes just to read the whole file line by line
with open('model.obj', 'r') as f:
    print(len(f.readlines()))
print(time.time() - t)

What I was thinking of is something like this:


import time
import bpy
import numpy as np

filepath = bpy.path.abspath("//Anvilhead.obj")

t = time.time()

# match "v x y z" lines and capture the three float coordinates
regx = r"v\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)\s+(-?\d+\.\d+)"
a = np.fromregex(filepath, regx, dtype=[('x', '<f8'), ('y', '<f8'), ('z', '<f8')])

# flatten into one coordinate array, converting OBJ's Y-up to Blender's Z-up
verts = np.zeros(a.shape[0] * 3, dtype=np.float64)
verts[0::3] = a['x']
verts[1::3] = -a['z']
verts[2::3] = a['y']

# write data, delete old if it exists
if "testing" in bpy.data.objects:
    bpy.data.objects.remove(bpy.data.objects['testing'], do_unlink=True)
if "testmesh" in bpy.data.meshes:
    bpy.data.meshes.remove(bpy.data.meshes['testmesh'], do_unlink=True)

me = bpy.data.meshes.new("testmesh")
ob = bpy.data.objects.new("testing", me)

me.vertices.add(count=a.shape[0])
me.vertices.foreach_set("co", verts)

bpy.context.scene.objects.link(ob)

print('--end', time.time() - t)

This takes only about twice as long as readlines(). But it is still only one order of magnitude faster than Blender’s default OBJ import, while heavily stripping features. Most of what I need OBJ import/export for is simple meshes, for example remeshing. Using an external remesher is a pain precisely because of how long the OBJ export/import takes. There seems to be no good way (that I know of) to foreach_set polygons, unfortunately. Almost, but not quite, enough for my needs.

Maybe a C implementation will indeed be a really good idea. Improving NumPy/Blender interchange won’t be wasted either.

Python’s popularity is mainly due to how easily it integrates with C. So there is nothing new about this feature other than some extra convenience.

Blender Python can already use any CPython library, and there is no need to build them with Blender. It’s probably a bad idea to do so anyway, as it would make updating them a nightmare.
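
For example, an addon can simply make a library that was installed outside Blender importable; a small sketch, assuming the package was installed with pip --user for a Python version matching Blender’s:

import sys
import site

# e.g. ~/.local/lib/pythonX.Y/site-packages
user_site = site.getusersitepackages()
if user_site not in sys.path:
    sys.path.append(user_site)

import scipy  # or any other regular CPython library installed outside Blender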

My advice to people is: if you can do it outside Blender or independently of Blender, then do it. It will make your life a ton easier.

This also applies to closed-sourcing code. If you want to close-source code, and you make it so it does not depend on Blender directly but only indirectly via a backend, then you can move the front end to a closed-source licence. Of course, if you use the front end from an addon, that addon will have to be GPL licensed.

So essentially I am talking about closing the source outside Blender, because that code no longer depends on Blender code and as such is not bound to the GPL license.
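
A minimal sketch of what I mean, with a hypothetical executable name (my_backend) and again no legal advice implied: the GPL addon code talks to the independent program only through a pipe, so that program never touches Blender code at all.

# inside the GPL addon
import json
import subprocess

def remesh(vertices, faces):
    # "my_backend" is a standalone executable that knows nothing about Blender
    proc = subprocess.run(
        ["my_backend", "--remesh"],
        input=json.dumps({"vertices": vertices, "faces": faces}),
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)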