The problem is that the script grabs the color information from bpy.data.images[0].pixels, which is very slow. The calculation itself is not the problem; the conversion from the pixels array to a numpy array accounts for 99% of the loading time. So before improving the search algorithm with Canny edge detection or something similar, I decided I need to speed up the array loading process. I also plan to make it work in the movie clip editor.
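For reference, a minimal sketch of the slow path next to a faster one. The faster variant assumes a Blender build new enough to expose foreach_get on the pixels array; everything else is standard bpy/numpy:

```python
import bpy
import numpy as np

img = bpy.data.images[0]

# Slow path: img.pixels[:] builds a huge Python list, one float object per
# channel value, before numpy ever sees the data.
slow = np.array(img.pixels[:], dtype=np.float32)

# Faster path on newer Blender builds: foreach_get copies the whole buffer
# into a pre-allocated array in a single C-side call.
fast = np.empty(len(img.pixels), dtype=np.float32)
img.pixels.foreach_get(fast)

# Reshape to (height, width, channels) for per-pixel work.
fast = fast.reshape(img.size[1], img.size[0], img.channels)
```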
So how can I get the image data efficiently, and also get it from a movie clip for the current frame?
I thought about just grabbing the file path and loading everything in the script itself rather than through Blender … but do I need additional libraries to load the image (preferably as a numpy array)?
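One possible route, as a sketch: read the datablock's file path and load the file with an external imaging library. Here I assume Pillow, which Blender does not ship with, so it would have to be installed into Blender's Python:

```python
import bpy
import numpy as np
from PIL import Image  # assumption: Pillow is available in Blender's Python

img = bpy.data.images[0]
# Resolve the datablock's (possibly relative, //-prefixed) path to an absolute path.
path = bpy.path.abspath(img.filepath)

# Load straight from disk, bypassing Image.pixels entirely.
arr = np.asarray(Image.open(path))  # shape (height, width, channels), dtype uint8
```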
It took a lot of whining to even get the current slow Image.pixels method, so I wouldn’t hold out too much hope…
…but Python 3.3 (or maybe it was 3.2) added new and improved buffer object support that wouldn’t be too terribly hard to implement for pixel data; it’s just that I’m super lazy and don’t really feel like fighting that particular fight when I have a bunch of other projects to poke at. I actually had it mostly working with the old buffer object implementation, but gave up after spending two days tracking a bug down to a comment in the Python source that basically said ‘not yet implemented’ even though the docs said otherwise.
It would be stupid easy to implement if someone were so motivated, though: it just needs a function in the Blender Python folder that takes an ImBuf and returns a PyObject* wrapping the pixel data in a buffer object (which Python has a handy function for), and then the makesrna Image.pixels access method modified to call that function instead of copying the pixel data into a Python list.
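For illustration only, this is roughly what scripts would gain if pixels were exposed that way. This is hypothetical, not current API, since pixels does not support the buffer protocol today:

```python
import bpy
import numpy as np

image = bpy.data.images[0]

# Hypothetical: if pixels exposed its ImBuf memory through the buffer
# protocol, numpy could view it with zero per-element copying.
pixel_view = np.frombuffer(image.pixels, dtype=np.float32)
pixel_view = pixel_view.reshape(image.size[1], image.size[0], image.channels)
```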