Using Blender for 3D Surface Reconstruction

Hello fellow blenderheads,

I would like to use Blender to test out some 3D surface reconstruction algorithms I am working on for my degree.

What I need is a series of renders of an object from different orientations (Cycles does a great job at that) and the corresponding projection matrix of the camera for each image (camera parameters + rotation + translation).

I was thinking about animating the camera’s motion around the object, then writing a script to render the animation and store the projection matrices and filenames into an XML file.

My biggest problem is that I have never programmed in Python (I am fluent in C++, though) and have no experience with the Blender Python API.

Could anyone point me into the right direction or at least where to start? Please :slight_smile:

Good news is Python and C are rather similar aside from some minor syntax differences. Two notable differences, however, are:

  • Python code blocks are defined solely by indentation, while C uses braces { }

# C
if (a == b) {
    ...
}

# Python
if a == b:
    ...

  • Python doesn’t have predefined data types for its variables

int exVar;   /* C: declared with a type */
exVar = 0    # Python: created by assignment, no type declaration

Python’s file IO is pretty similar to C’s standard file ops:

# something similar to the following, if I recall
hFile = open(filepath, "w")
hFile.write("some text\n")
hFile.close()
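Since you mentioned dumping the matrices and filenames to XML: the standard library’s `xml.etree.ElementTree` makes that painless. A minimal sketch — the tag names and the `write_capture_xml` helper are just placeholders I made up, not anything Blender-specific:

```python
import xml.etree.ElementTree as ET

def write_capture_xml(path, captures):
    """Write (image filename, 3x4 projection matrix) pairs to an XML file.
    `captures` is a list of (filename, P) with P a nested list of floats."""
    root = ET.Element("captures")
    for filename, P in captures:
        view = ET.SubElement(root, "view", image=filename)
        # flatten the matrix row by row into whitespace-separated text
        ET.SubElement(view, "projection").text = " ".join(
            str(v) for row in P for v in row)
    ET.ElementTree(root).write(path)
```

Reading it back in C++ later is then just a matter of parsing 12 floats per `view` element.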

Anywho, regarding the Blender API: assuming you have a model and a camera that you have already animated, the snippets below should be of use:

import bpy

# find the camera in the scene
for iCam in bpy.data.objects:
    if iCam.type == "CAMERA":
        break

# iterate through the frames in your animation in some way..
for fNum in range(50):
    bpy.context.scene.frame_set(fNum)

    # camera's world affine transformation (rotation/scale/translation)
    # is available as iCam.matrix_world

    # can decompose it into individual elements if you'd like etc.
    loc, rot, scale = iCam.matrix_world.decompose()

    # camera's parameters can be found in its data variable;
    # e.g. the camera's focal length:
    focal = iCam.data.lens

    # can set up render params using the following; below sets the output file
    bpy.context.scene.render.filepath = "/home/bilbo_baggins/image_%03d" % fNum

    # can render using blender's internal renderer with
    bpy.ops.render.render(animation=False, write_still=True)

and what not :eyebrowlift2:

P.S. Blender’s console has a nice auto-completion feature accessed with CTRL+SPACE. E.g. type bpy.context.scene.render. and press CTRL+SPACE to see all its member variables/functions.

interesting project !

do you need to calculate the distance between the camera and different parts of some 3D objects in the scene?

not certain why you need to render here

the render itself won’t give you anything about the distances etc., only the rendered images!

there are many scripts being written right now
I’ve seen several for cameras etc…

good luck with this project

did you look at the book from Thomas?

happy 2.5

Thanx a lot for these snippets, this should get me well started :slight_smile:

As soon as I get this running I’ll post some results :slight_smile:


Like promised, here are some first results.

With your help, and that of a colleague of mine, I put together this code. It runs over a range of frames and, for each camera in that frame, renders the scene and extracts the intrinsic and extrinsic matrices (I removed the XML output code for clarity):

# import stuff
import bpy
from mathutils import *
from math import *

# get the intrinsic camera matrix
def intrinsic( cName ):
    # init stuff
    scn    = bpy.context.scene
    width  = scn.render.resolution_x * scn.render.resolution_percentage / 100.0
    height = scn.render.resolution_y * scn.render.resolution_percentage / 100.0
    camData = bpy.data.objects[cName].data
    ratio = width/height
    # assemble intrinsic matrix
    K = Matrix().to_3x3()
    K[0][0] = (width/2.0) / tan(camData.angle/2.0)
    K[1][1] = (height/2.0) / tan(camData.angle/2.0) * ratio
    K[0][2] = width  / 2.0
    K[1][2] = height / 2.0
    K[2][2] = 1.0
    return K
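For what it’s worth, the K assembled above reduces to the textbook pinhole intrinsics with fx = fy when pixels are square (the `ratio` factor cancels out). A quick bpy-free sanity check of the same formula — `intrinsic_from_fov` is a name I made up, and it assumes `camData.angle` is the horizontal field of view:

```python
from math import tan

def intrinsic_from_fov(width, height, fov_x):
    """Pinhole intrinsics from image size and horizontal FOV in radians.
    Mirrors the K assembled in intrinsic() above, assuming square pixels:
    fx = fy = (width/2) / tan(fov_x/2), principal point at the image centre."""
    fx = (width / 2.0) / tan(fov_x / 2.0)
    return [[fx,  0.0, width  / 2.0],
            [0.0, fx,  height / 2.0],
            [0.0, 0.0, 1.0]]
```

E.g. a 640x480 render with a 90° horizontal FOV gives fx = fy = 320 pixels, since tan(45°) = 1.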

# get the extrinsic camera matrix
def extrinsic( cName ):
    Rt = bpy.data.objects[cName].matrix_world.copy()
    return Rt

# capture a frame
def capture( iPath, cName, frame ):
    # assemble filename
    filename = "image_" + cName + "_" + str(frame)
    # render the image
    bpy.context.scene.render.filepath = iPath + filename
    bpy.context.scene.camera = bpy.data.objects[cName]
    bpy.ops.render.render(animation=False, write_still=True)
    image = filename + bpy.context.scene.render.file_extension
    # get the camera matrices
    K = intrinsic(cName)
    Rt = extrinsic(cName)
    return (image,K,Rt)

# perform an acquisition
def acquire( path, first, last ):
    for fNum in range(first, last):
        # go to that frame
        bpy.context.scene.frame_set(fNum)
        # iterate over all cameras
        for iCam in bpy.data.objects:
            if iCam.type == "CAMERA":
                # capture the frame
                img, K, Rt = capture(iPath=path,, frame=fNum)

I’m still trying to “interpret” these matrices right, as I still get them pointing all over the place.

My goal was to get P = K*(Rt^-1) to satisfy the usual projection rule x ~ P*X, i.e. a homogeneous world point X maps to the homogeneous pixel x.
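One gotcha that may explain the matrices “pointing all over the place”: Blender cameras look down their local -Z axis with +Y up, while the usual computer-vision convention has the camera looking down +Z with +Y down, so K*(Rt^-1) alone flips things. A numpy sketch of building P with that axis flip applied — `projection_matrix` and `project` are hypothetical helpers, not part of the code above:

```python
import numpy as np

def projection_matrix(K, cam_world):
    """Build a 3x4 projection matrix P = K * [R|t] from a camera-to-world
    matrix. Blender cameras look down their local -Z axis with +Y up,
    while the usual CV convention looks down +Z with +Y down, so the
    camera-space Y and Z axes are flipped first."""
    world_to_cam = np.linalg.inv(np.asarray(cam_world))  # extrinsic: world -> camera
    flip = np.diag([1.0, -1.0, -1.0])                    # Blender cam axes -> CV cam axes
    return K @ flip @ world_to_cam[:3, :]

def project(P, point_world):
    """Project a 3D world point to pixel coordinates (homogeneous divide)."""
    x = P @ np.append(point_world, 1.0)
    return x[:2] / x[2]
```

With the camera at the origin in its rest orientation (cam_world = identity), a point 5 units straight in front of it, at (0, 0, -5), lands exactly on the principal point. Inside Blender you would feed in the matrix from iCam.matrix_world.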

Here is the first recon I got (really) of an arm model using a variation of a color voronoi material.

Will post a complete mesh when I correct these issues.

Cheers :slight_smile:

And here’s the correctly aligned model: