Simulating Bayer Filter in Compositor?

I want to take the pristine full-color image sequences that Cycles is giving me and simulate the artifacts from a camera with a Bayer filter. Basically, a 640x480 camera gives you 640x480 color pixels, but it doesn’t actually have separate red, green, and blue sensors at each pixel. Usually the sensors alternate in a grid pattern, with 1/4 of the overall pixels sensitive to red, 1/4 to blue, and 1/2 to green. The camera or your software then interpolates to fake the missing receptors and give you a decent full-color picture. There are some variations and alternatives to this, but that’s the gist of it. The result is that on the edges of objects or at transitions between colors, if you zoom in all the way, you can often make out pixels with the wrong color. Anyway, I’d like to simulate this, and I’d prefer it to be as flexible as possible so I can easily try it at different resolutions and such.
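In case it helps to see the layout concretely, here’s a minimal sketch in plain NumPy (nothing Blender-specific, and the function name is just mine) of the standard RGGB arrangement I mean:

```python
import numpy as np

def bayer_channel_map(height, width):
    """Mark which channel (0=R, 1=G, 2=B) each pixel of an RGGB sensor samples."""
    cmap = np.empty((height, width), dtype=np.uint8)
    cmap[0::2, 0::2] = 0  # red on even rows, even columns
    cmap[0::2, 1::2] = 1  # green on even rows, odd columns
    cmap[1::2, 0::2] = 1  # green on odd rows, even columns
    cmap[1::2, 1::2] = 2  # blue on odd rows, odd columns
    return cmap

print(bayer_channel_map(4, 4))
# [[0 1 0 1]
#  [1 2 1 2]
#  [0 1 0 1]
#  [1 2 1 2]]
```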

My first thought was to create a compositing node with a Python script that would pick through the image, discard any color information that shouldn’t be present, and then re-assemble it using one of the common algorithms (bilinear interpolation, etc.). But it doesn’t appear that there is currently any support for scripted compositing nodes?
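To give an idea of what I had in mind for such a node, here’s roughly the logic in plain NumPy/SciPy. This is just a sketch of the textbook bilinear approach, not anything Blender exposes, and the function names are my own:

```python
import numpy as np
from scipy.ndimage import convolve

def mosaic_rggb(img):
    """Discard the two channels each RGGB pixel wouldn't actually sense."""
    raw = np.zeros_like(img)
    raw[0::2, 0::2, 0] = img[0::2, 0::2, 0]  # red sites
    raw[0::2, 1::2, 1] = img[0::2, 1::2, 1]  # green sites (red rows)
    raw[1::2, 0::2, 1] = img[1::2, 0::2, 1]  # green sites (blue rows)
    raw[1::2, 1::2, 2] = img[1::2, 1::2, 2]  # blue sites
    return raw

def demosaic_bilinear(raw):
    """Fill the missing samples by averaging the nearest captured neighbours."""
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # red/blue: 1/4 of pixels
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # green: 1/2 of pixels
    out = np.empty_like(raw)
    out[..., 0] = convolve(raw[..., 0], k_rb, mode='mirror')
    out[..., 1] = convolve(raw[..., 1], k_g, mode='mirror')
    out[..., 2] = convolve(raw[..., 2], k_rb, mode='mirror')
    return out
```

The kernels pass captured samples through unchanged and average the two or four nearest samples everywhere else, which is exactly the bilinear interpolation I mentioned.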

My next thought was to use a Separate RGBA node to split out the colors, apply a unique mask to each color channel to discard the extra color information, and then do something (exact steps undecided) to get a passable interpolation that recovers the lost color. The first problem I ran into was making the masks. I could make them in Gimp or something, but then I’d be committing to a resolution (640x480 or whatever). I messed around a little with creating a Cycles checker texture and scaling it appropriately, but it wasn’t clear to me how to make that robust against things like re-rendering at higher resolutions. I need to be able to suppress things at the pixel level.
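One way I’ve considered around the fixed-resolution mask problem is to generate the three mask images procedurally at whatever resolution I’m rendering, then load them into the compositor. A rough sketch, assuming NumPy and Pillow are available (the file names are just placeholders):

```python
import numpy as np
from PIL import Image

def save_bayer_masks(height, width, prefix="bayer_mask"):
    """Write one mask image per channel (white = keep) for an RGGB grid."""
    masks = {"r": np.zeros((height, width), np.uint8),
             "g": np.zeros((height, width), np.uint8),
             "b": np.zeros((height, width), np.uint8)}
    masks["r"][0::2, 0::2] = 255  # red passes at 1/4 of the pixels
    masks["g"][0::2, 1::2] = 255  # green passes at 1/2 of the pixels
    masks["g"][1::2, 0::2] = 255
    masks["b"][1::2, 1::2] = 255  # blue passes at 1/4 of the pixels
    for name, mask in masks.items():
        Image.fromarray(mask, mode="L").save(f"{prefix}_{name}.png")

save_bayer_masks(480, 640)  # just rerun at whatever the render resolution is
```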

So here I am. Does anyone have any other suggestions? I would love to just write a little Python compositing node that tweaks the pixels, but it appears that isn’t possible at this time?

Thanks for any help or suggestions you can give me, I appreciate it!

-Dave

Try using the Filter > Pixelate node. It removes anti-aliasing from scaled images.


Thanks for your reply, 3pointEdit. I actually ended up getting some fairly convincing results. I’d still prefer to do it as a Python script, since that would be cleaner and probably faster, but I’ve gotten what I needed, so I’m moving on. Basically, I created a Bayer color filter array image in Gimp (a repeating 2x2 tile with one red, one blue, and two green pixels) that was larger than any frame I planned to render, then split that into its RGB components and used each one as a mask for the corresponding channel of the rendered image (so the red channel is effectively blocked from 3/4 of the pixels, green from 1/2, etc.). This gives you a realistic Bayer-mosaicked image, like what a real camera sensor picks up before processing. Then, to demosaic it, I did a little filtering and shifting of the different color components before recombining them (remember that there’s twice as much green). I think the effect looks pretty good.
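For anyone who’d rather skip the node setup entirely, here’s roughly the same pipeline as an offline Python script. The nearest-neighbour fill is only a crude stand-in for the filtering and shifting I did with nodes, the masks are 0/1 float arrays like the ones generated above, and all the names are my own:

```python
import numpy as np

def mosaic_with_masks(img, r_mask, g_mask, b_mask):
    """Multiply each channel by its mask, like the Gimp-made CFA masks."""
    raw = np.zeros_like(img)
    raw[..., 0] = img[..., 0] * r_mask
    raw[..., 1] = img[..., 1] * g_mask
    raw[..., 2] = img[..., 2] * b_mask
    return raw

def demosaic_nearest(raw):
    """Copy each captured sample into its 2x2 tile's empty sites (even dims assumed)."""
    out = raw.copy()
    # red: one sample per tile, at (even row, even column)
    out[0::2, 1::2, 0] = raw[0::2, 0::2, 0]
    out[1::2, 0::2, 0] = raw[0::2, 0::2, 0]
    out[1::2, 1::2, 0] = raw[0::2, 0::2, 0]
    # green: checkerboard, shift a horizontal neighbour into each gap
    out[0::2, 0::2, 1] = raw[0::2, 1::2, 1]
    out[1::2, 1::2, 1] = raw[1::2, 0::2, 1]
    # blue: one sample per tile, at (odd row, odd column)
    out[0::2, 0::2, 2] = raw[1::2, 1::2, 2]
    out[0::2, 1::2, 2] = raw[1::2, 1::2, 2]
    out[1::2, 0::2, 2] = raw[1::2, 1::2, 2]
    return out
```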

Was this for illustrative purposes, to demonstrate the effect of Bayer filtering? I’m not sure what you achieve by deconstructing and then reconstructing a captured image?

I am generating synthetic imagery for testing machine vision algorithms. A real color camera (with rare exceptions) doesn’t have the spatial resolution most people think it does. Normally it does a good enough job of faking it that the eye doesn’t catch it, but that information isn’t there in the camera to begin with; it’s lost and then guessed at. It’s kind of like creating a JPEG of an image: it simply doesn’t contain all the information of the original, and although it’s usually good enough for the eye (which is more sensitive to intensity than to color), if you’re trying to do sensitive, high-quality image processing, you don’t want to mess with JPEGs. If you train or tune an algorithm on perfect rendered imagery, there’s a good chance it won’t work when fed realistic sensor data that’s been dirtied by the real world. (Sorry for the long answer!)