[Cycles Expert] - help me setup a virtual microlens array

Hey there geniuses!

Advice is welcome, though professional expertise is needed…

I’ve been tasked with building a piece of software that processes “plenoptic/light-field camera” data. Put simply, in real life this means we’d add a “microlens array” on top of our camera’s sensor to take unique pictures. A microlens array is a clear piece of glass with little bumps on it (like the eyes of a housefly), and there are a number of different ways we can arrange these bumps.
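For illustration, here’s a minimal sketch (plain Python; the function name and parameters are my own, not from any existing tool) of how the two common lenslet packings, square grid and hexagonal close packing, could be parameterised before instancing the bumps in Blender:

```python
import math

def microlens_centers(pitch, rows, cols, hexagonal=False):
    """Return (x, y) centres for a microlens grid.

    pitch: centre-to-centre spacing of the lenslets.
    hexagonal: offset every other row by half a pitch and use a
    row spacing of pitch * sqrt(3)/2 (close packing); otherwise
    a plain square grid.
    """
    row_step = pitch * math.sqrt(3) / 2 if hexagonal else pitch
    centers = []
    for r in range(rows):
        x_offset = pitch / 2 if (hexagonal and r % 2 == 1) else 0.0
        for c in range(cols):
            centers.append((c * pitch + x_offset, r * row_step))
    return centers
```

A list like this could then drive instancing of a single lenslet mesh (e.g. via geometry nodes or a small `bpy` loop) parented in front of the camera, making the configurations swappable.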


What I want to do is create a Blender file that has a virtual camera with a microlens array attached to it. With a few clicks, you should be able to swap out a range of different microlens configurations and environments to photograph.

The photos taken by the virtual camera must be perfectly photorealistic and mimic the real light-field dataset enough to be interpreted by my software. All resources here.

Who can best help:

Someone who understands, inside and out, how light and cameras work in Cycles. We can’t afford a dataset that only roughly replicates the real thing; it needs to be indistinguishable from a real plenoptic camera’s output. I’ve chosen Blender because I’ve built large photographic datasets with it in the past, with good results.

If you think you can help:

Send me a message and we can arrange a deal. We will start with a one hour consultation to discuss the problem and your possible approaches. From there we’ll hopefully finalise a solution for mass-acquisition of data within a week.

Please reach out with any questions

Hi Tom,

As someone experienced in the same domain, I can surely help out.

contact me: rachel at cisinlabs dot com

I created something like this before using the Blender compositor and GIMP.

You could use the compositor to create the effect and then alpha out the background. You might also be able to do it in the node tree and put a normal or bump map on it, but I might be misunderstanding what you’re saying. :thinking::no_mouth:

You have a good idea: what if some kind of filter could be added over the top? The problem is that as the engine raytraces, each ray must land on a very specific pixel based on its distance and direction. I think that without a proper light-transport simulation, the results won’t be photorealistic.
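As a rough sanity check when modelling the bumps as actual plano-convex glass refracted by Cycles (rather than faking it with a filter), the thin-lens lensmaker’s equation gives the focal length each lenslet needs. This is just an illustrative sketch under a thin-lens assumption; the radius and IOR values below are examples, not from the original post:

```python
def planoconvex_focal_length(radius_of_curvature, ior):
    """Thin-lens lensmaker's equation for a plano-convex lenslet:
    f = R / (n - 1), where R is the radius of curvature of the
    curved face and n is the index of refraction of the glass.
    """
    return radius_of_curvature / (ior - 1.0)

# e.g. a lenslet with a 0.1 mm radius of curvature in glass with
# IOR 1.5 focuses roughly 0.2 mm behind the lens, which suggests
# the scale at which the array must sit above the virtual sensor.
```

In a plenoptic-1.0 setup the lenslets are typically focused on the main lens’s image plane, so a number like this constrains the array-to-sensor distance in the Blender scene.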


I think I can create the lens, but I’m working on another project right now; it might be 2-3 weeks before I can build it and start testing.

Hi, I would also be interested in using such an add-on. Would it be possible to do it the other way around, i.e. have a “display” with microlenses over it and observe the resulting 3D image?

This topic was automatically closed after 90 days. New replies are no longer allowed.