Feature/plugin request: Core Image filter integration

Idea: Mac OS X has an API called Core Image, which is used for fast 2D graphics filters. It can apply effects like bloom, blur, and distortion to an image in real time. Why not take advantage of this in Blender? It could be a faster method for applying image effects in the compositor, or be used for faster procedural textures.
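For those who haven't seen it, here's a rough, untested sketch of what driving Core Image looks like from Python via PyObjC (the path and radius are just example values):

from Foundation import NSURL
from Quartz import CIImage, CIFilter, CIContext

# Untested sketch: apply a Core Image Gaussian blur to an image on disk.
url = NSURL.fileURLWithPath_("/tmp/input.png")  # example path
image = CIImage.imageWithContentsOfURL_(url)

blur = CIFilter.filterWithName_("CIGaussianBlur")
blur.setValue_forKey_(image, "inputImage")
blur.setValue_forKey_(8.0, "inputRadius")  # example radius
result = blur.valueForKey_("outputImage")

# Core Image is lazy: it builds a filter graph and only computes pixels
# when you render, typically on the GPU.
ctx = CIContext.contextWithOptions_(None)  # 10.6+; older systems need a CGContext
cg_image = ctx.createCGImage_fromRect_(result, image.extent())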

I tried writing a plugin for this myself, but failed. It looks like it may need integration with Blender itself. Maybe this could be a feature of Cocoa-Blender.

Technically speaking that's true, but this would only work on OS X, and Blender is being coded so that you do NOT need individual code bases for different platforms.

Yes, and I agree that Blender should remain cross-platform. That’s why I think this could be a plugin or a feature in a patched version of Blender. Alternatively, Blender’s plugin architecture could change to enable this type of plugin authoring (specifically, a one-time function call with all of the pixels, rather than a call for each pixel; see the sketch below). Maybe in 2.5?
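Roughly what I mean, with Python standing in for whatever the real plugin API would be (neither of these entry points exists in Blender today):

# Today's style: one call per pixel. The plugin never sees neighbouring
# pixels, so whole-image effects like blur or distort are impossible.
def filter_pixel(r, g, b, a):
    return 1.0 - r, 1.0 - g, 1.0 - b, a  # trivial invert

# Hypothetical style: one call with the whole buffer. The plugin can read
# neighbours, or hand the entire image to something like Core Image.
def filter_buffer(pixels, width, height):
    # pixels: flat RGBA float buffer, 4 floats per pixel
    return [p if i % 4 == 3 else 1.0 - p  # invert RGB, keep alpha
            for i, p in enumerate(pixels)]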

What is Blender’s implementation like for plugin support?

You’ve got texture plugins and compositor plugins, both conveniently located in the ‘plugin’ directory of your .blender folder.

The best bet on this, IMHO, would be composite nodes. You just have to get the image buffer (compbuf) into a format that Core Image can understand, let it do its magic, and get the result back into another compbuf to pass on down the line. If one is really lucky, Core Image might be able to just get a pointer to the existing pixel buffer and need no translating (copying the pixels back and forth would probably kill the performance gains).
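Something like this on the Core Image side (untested PyObjC sketch; how you’d actually get the raw float RGBA bytes out of the compbuf and back in is hand-waved away):

from Foundation import NSData
from Quartz import (CIImage, CIFilter, CIContext,
                    CGSizeMake, kCIFormatRGBAf)

def run_ci_filter(raw_rgba_floats, width, height):
    # Compbufs are float RGBA, 4 floats per pixel.
    row_bytes = width * 4 * 4
    data = NSData.dataWithBytes_length_(raw_rgba_floats,
                                        len(raw_rgba_floats))

    # Wrap the pixels as-is: kCIFormatRGBAf lets Core Image read the float
    # buffer directly, so with luck there's no per-pixel translation.
    image = CIImage.imageWithBitmapData_bytesPerRow_size_format_colorSpace_(
        data, row_bytes, CGSizeMake(width, height), kCIFormatRGBAf, None)

    blur = CIFilter.filterWithName_("CIGaussianBlur")  # any filter would do
    blur.setValue_forKey_(image, "inputImage")
    output = blur.valueForKey_("outputImage")

    # Render back into a plain buffer, to be copied into the output compbuf.
    result = bytearray(height * row_bytes)
    ctx = CIContext.contextWithOptions_(None)
    ctx.render_toBitmap_rowBytes_bounds_format_colorSpace_(
        output, result, row_bytes, image.extent(), kCIFormatRGBAf, None)
    return bytes(result)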

If I had a Mac I could probably knock out a bunch of these ‘Core Image wrapper nodes’ in an hour or so, but…

From what I can see of the available APIs, there is no easy way to do it. What I have found:
- A C compositor plugin has one method that takes in a pixel and outputs a pixel.
- A C texture plugin has no input (and no pixel coordinates either I believe), and so would be pretty useless.
- Python can only make shader nodes, where the input is a single pixel and the output is the modified pixel, though it can access Core Image (through PyObjC, which comes with 10.5).
- Core Image needs the entire image as input, since most filters are not just per-pixel (distortion, blur).
If anybody has any solutions, I’d like to hear them.

Is there anything missing from composite nodes compared to Core Image, or are you just wanting an (alleged) speedup?

Blender’s image manipulation routines aren’t really that slow to begin with – or at least not slow enough to go to all the trouble of ripping them out and replacing them with something else.

I also kind of doubt you’ll find someone to maintain a Mac-only branch of Blender, so you’re probably on your own on this one. An argument could be made for integrating all the image functions (imbuf, compbuf) under one external library, but then you run into the ‘Mac-only’ problem again if you used Core Image. I was thinking Imlib2 would be a good lib to use, but I’m much too lazy to do anything about it.