3D camera for post-processing depth of field in Blender (help needed!!)

Recently I had the opportunity to mess around with some 22nd-century camera technology capable of real-time 3D reconstruction! Visit the Elphel blog for more information |here|. If you have no idea what 3D images are, then check out some examples |here|
3D photos are way cooler than flat images: for example, depth of field (DOF) can be post-processed in Blender, and it actually looks real. Unfortunately, I have some issues that appear to be fairly complicated (but solvable), so if you would like a logic challenge, then give this a shot!
***Note: I was able to get a DOF result, but the method is slow, clunky, and impractical if many images need processing.

All of the example 3D images can be downloaded and imported into Blender. Check out this link for instructions.

Description:
-The ultimate goal is to effectively composite the 3D scene with depth of field.
-The main issue is that I cannot render shadeless without losing transparency data. I cannot simply set the materials to emission either, even though an emission material would give the same result as a shadeless render (point 3 below explains why).
-I was using Blender’s internal engine and OpenGL.

***What I Know/(think I know)

  1. The imported mesh looks great when opened in Blender; textures and geometry appear to have no issues.
  2. Each individual surface has its own UV-mapped texture, meaning that all 500+ surfaces have separate, unique image textures (with transparency).
  3. Each individual surface has its own material (this is problematic because manually changing settings for 500+ materials is unrealistic).
  4. Enabling “Z” in the render passes tab adds a depth output to the Render Layers node (usually very effective for DOF when properly combined with a blur node in compositing).
  5. I cannot render shadeless unless the material is set to emission (point 3 explains the material issue).
  6. Adding a color pass in the render passes almost works!:mad: But the transparency in the image textures cuts a hole through to the background (all 3D information behind the transparent areas is lost).
  7. OpenGL render does not have as many options for compositing, and it has the same transparency issue as in point 6.
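Side note on point 3: editing 500+ materials by hand is unrealistic, but a short script can loop over all of them. Here is a minimal sketch of what I mean, assuming Blender 2.7x with the Internal engine; the exact alpha settings are my best guess at what keeps the texture transparency, so treat it as a starting point, not a tested fix:

```python
# Sketch: batch-set every Blender Internal material to shadeless while
# keeping the image texture's alpha. Assumes Blender 2.7x (Internal engine).
try:
    import bpy  # only available when run inside Blender
except ImportError:
    bpy = None

def make_shadeless(mat):
    """Make one material shadeless with texture-driven transparency."""
    mat.use_shadeless = True                    # ignore lighting, show raw texture color
    mat.use_transparency = True                 # honor alpha when rendering
    mat.transparency_method = 'Z_TRANSPARENCY'  # cheap alpha, no raytracing needed
    mat.alpha = 0.0                             # base alpha 0; the texture's alpha fills it in
    for slot in mat.texture_slots:
        if slot is not None:
            slot.use_map_alpha = True           # let the image's alpha drive transparency

if bpy is not None:
    for mat in bpy.data.materials:
        make_shadeless(mat)
    print("updated %d materials" % len(bpy.data.materials))
```

Run it from the Text Editor or Python console inside Blender; it touches every material in the file in one go.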

***Questions:

  1. Could I somehow bake all of the 500+ image textures onto one large image (preserving UV data)? That way I would have only one material to edit.
  2. Is there some method to render shadeless that I simply overlooked?
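On question 1: Blender Internal does have texture baking, so in principle the answer seems to be yes. Something along these lines might work (untested sketch with 2.7x operator names; the atlas size and names are made up, and the UV/bake details will probably need tweaking):

```python
# Hedged sketch for question 1: bake all per-surface textures into one atlas.
# Assumes Blender 2.7x Internal, with all the surface objects selected and one active.
import bpy

scene = bpy.context.scene

# 1. Join every selected surface into a single mesh (keeps the active object).
bpy.ops.object.join()
obj = bpy.context.active_object

# 2. Add a second UV map and unwrap everything into it without overlaps,
#    leaving the original UV map (and its per-surface textures) intact.
obj.data.uv_textures.new(name="atlas")
obj.data.uv_textures["atlas"].active = True
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')

# 3. Create the target image and assign it to the new UV map's faces.
atlas = bpy.data.images.new("atlas", width=4096, height=4096, alpha=True)
for face in obj.data.uv_textures["atlas"].data:
    face.image = atlas

# 4. Bake the existing material textures into the atlas and save it.
scene.render.bake_type = 'TEXTURE'
bpy.ops.object.bake_image()
atlas.save_render("//atlas.png")
```

After that, a single material using the atlas (with alpha) should replace the 500+ originals, which would make the shadeless/emission change a one-material edit.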

***My “ghetto” method:
Simply put, I had to screenshot the viewport (gasp! :eek:) to capture the color data without losing transparency. Then I rendered a black-and-white version of the scene with a gradient from light (near) to dark (far). This depth map was the strength input for the blur node in the compositor.
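Per pixel, the depth-map trick boils down to this: blur strength grows with distance from the focal plane and is clamped at some maximum. A throwaway Python illustration (the numbers are made up; this is not Blender's internal formula):

```python
def blur_radius(depth, focus_depth, max_radius=20.0, depth_range=10.0):
    """Map a pixel's depth to a blur radius: 0 at the focal plane,
    growing linearly with distance from it, clamped at max_radius."""
    offset = abs(depth - focus_depth) / depth_range  # normalized distance from focus
    return min(offset, 1.0) * max_radius

print(blur_radius(5.0, 5.0))    # 0.0  (at the focal plane: sharp)
print(blur_radius(10.0, 5.0))   # 10.0 (5 units away: half blur)
print(blur_radius(100.0, 5.0))  # 20.0 (far away: clamped to the maximum)
```

This is roughly the job a Map Value node does when feeding a depth pass into the blur node's Size input.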
This post is already long, so if someone is interested in a detailed description of my method, then leave a comment!
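For completeness, the Z-pass-to-blur setup from point 4 can also be wired up with a script instead of by hand. A sketch assuming Blender 2.7x; the focal distance and depth range below are placeholder values, and "RenderLayer" is just the default layer name:

```python
# Sketch: build a Z-pass driven depth-of-field node setup in the compositor.
# Assumes Blender 2.7x; focal distance / range values are placeholders.
import bpy

scene = bpy.context.scene
scene.render.layers["RenderLayer"].use_pass_z = True  # point 4: enable the Z pass
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new('CompositorNodeRLayers')

map_val = tree.nodes.new('CompositorNodeMapValue')    # normalize Z to roughly 0..1
map_val.offset[0] = -5.0     # subtract the focal distance (placeholder value)
map_val.size[0] = 0.1        # 1 / depth range (placeholder value)
map_val.use_min = True
map_val.min[0] = 0.0
map_val.use_max = True
map_val.max[0] = 1.0

blur = tree.nodes.new('CompositorNodeBlur')
blur.use_variable_size = True  # per-pixel blur driven by the Size input
blur.size_x = blur.size_y = 20 # maximum blur in pixels

comp = tree.nodes.new('CompositorNodeComposite')
tree.links.new(rl.outputs['Image'], blur.inputs['Image'])
tree.links.new(rl.outputs['Z'], map_val.inputs[0])
tree.links.new(map_val.outputs[0], blur.inputs['Size'])
tree.links.new(blur.outputs['Image'], comp.inputs['Image'])
```

This is the node graph I would have used if the shadeless render had worked; with the screenshot method, the depth map image replaces the Z output as the blur's Size input.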


This is the image I was working with, everything was in focus initially.