Blender 3D to 2D Perspective Transform to Match Geometry to Video Footage

Hi,

I have some 720p footage that has been object-tracked, with a plane assigned to the markers in 3D space via an Object Solver constraint (it overlays the video correctly).
I want to export the geometry per frame and transform it into 2D space so I can do perspective texture mapping in a live application (dynamic textures supplied at runtime).
To do this, I need to accurately match the 3D-to-2D transform Blender performs (either by exporting x, y, z from Blender, or by doing my own transforms).
I just need to get the math right.
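For clarity, the chain I am trying to reproduce is: local vertex -> world space (object matrix) -> clip space (view-projection matrix) -> perspective divide -> pixel coordinates. Here is a minimal NumPy sketch of that chain with placeholder values (identity object matrix, made-up projection scales; none of these numbers are from my actual scene):

```python
import numpy as np

# Placeholder data only -- identity object matrix and made-up projection
# scales, NOT my scene values. This just illustrates the chain of transforms.
width, height = 1280, 720
near, far = 0.1, 100.0
sx, sy = 1.0857, 2.1111                   # cot(fov/2) per axis (placeholders)

local = np.array([-1.0, 1.0, -5.0, 1.0])  # homogeneous local-space vertex
matrix_world = np.eye(4)                  # placeholder object matrix

proj = np.array([
    [sx,  0.0,  0.0,                           0.0],
    [0.0, sy,   0.0,                           0.0],
    [0.0, 0.0,  -(far + near) / (far - near),  -2.0 * far * near / (far - near)],
    [0.0, 0.0,  -1.0,                          0.0],
])

world = matrix_world.dot(local)           # local -> world
clip = proj.dot(world)                    # world -> clip space (homogeneous)
ndc = clip[:3] / clip[3]                  # perspective divide -> [-1, 1] range
px = (ndc[0] + 1.0) * 0.5 * width         # NDC -> pixel, origin lower left
py = (ndc[1] + 1.0) * 0.5 * height
print(px, py)
```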

I am basing this on a previous Blender Artists thread - http://blenderartists.org/forum/archive/index.php/t-98626.html

My version is Blender 2.64.6 r51727 on Win7 x64.

My Camera is as follows:
>>> bpy.data.cameras['Camera'].type: 'PERSP'
>>> bpy.data.cameras['Camera'].angle: 1.4886507987976074
>>> bpy.data.cameras['Camera'].angle_x: 1.4886507987976074
>>> bpy.data.cameras['Camera'].angle_y: 0.8847484588623047
>>> bpy.data.cameras['Camera'].clip_start: 0.10000000149011612
>>> bpy.data.cameras['Camera'].clip_end: 100.0
>>> bpy.data.cameras['Camera'].lens: 19.0 (mm)
>>> bpy.data.cameras['Camera'].sensor_height: 18.0 (mm)
>>> bpy.data.cameras['Camera'].sensor_width: 35.0 (mm)
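Given those values, my understanding is that the projection scales come straight from the angles, i.e. cot(angle/2) per axis. This is my reading of the standard OpenGL-style perspective matrix, not something exported from Blender:

```python
import math

# Build a perspective projection matrix from the camera values above.
# This is my reading of the standard OpenGL-style projection, not a
# matrix exported from Blender itself.
def projection_matrix(angle_x, angle_y, near, far):
    sx = 1.0 / math.tan(angle_x / 2.0)   # cot(fov_x / 2)
    sy = 1.0 / math.tan(angle_y / 2.0)   # cot(fov_y / 2)
    return [
        [sx,  0.0,  0.0,                           0.0],
        [0.0, sy,   0.0,                           0.0],
        [0.0, 0.0,  -(far + near) / (far - near),  -2.0 * far * near / (far - near)],
        [0.0, 0.0,  -1.0,                          0.0],
    ]

m = projection_matrix(1.4886507987976074, 0.8847484588623047, 0.1, 100.0)
# For a 19mm lens on a 35mm-wide sensor, tan(angle_x/2) = 35 / (2 * 19),
# so m[0][0] should come out to 38/35, roughly 1.0857.
```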

So the upper left corner in 3D space is (-1,1,0).

The script is as follows:

import bpy, os
from mathutils import *
from math import *

o = bpy.context.active_object

locvec = o.data.vertices[3].co.copy().to_4d()
locvec[3] = 1.0
print( 'localvec: ' + str( locvec ) )

# this matches the "Transform->Vertex->Global" in Properties Panel
worvec = locvec * o.matrix_world
print( 'world: ' + str( o.matrix_world ) + '\nworldvec ' + str( worvec ) )

# do the perspective transform (apply clipping)
# hard-coded indices that point at the 3D View in my screen layout
view3d = bpy.data.screens[2].areas[3].spaces[0]
persvec = worvec * view3d.region_3d.perspective_matrix
print( 'persp ' + view3d.type + ': ' + str( view3d.region_3d.perspective_matrix ) + '\nperspvec ' + str( persvec ) )

# normalize the vector (apply perspective)
persvec[0] /= persvec[3]
persvec[1] /= persvec[3]
persvec[2] /= persvec[3]

print( 'normvec: ' + str( persvec ) )

halfw = 1280/2
halfh = 720/2

x = halfw + persvec[0] * 0.5 * 1280
y = halfh + persvec[1] * 0.5 * 720

print( 'screen: ' + str(x) + ',' + str(y) )

The output is as follows:

localvec: <Vector (-1.0000, 1.0000, 0.0000, 1.0000)>
world: <Matrix 4x4 ( 6.5002, -0.3093, 0.1295, -1.7361)
( 3.1379, 0.6487, 0.0196, 1.5013)
(-0.9110, 0.0275, 0.9914, -6.1830)
( 0.0000, 0.0000, 0.0000, 1.0000)>
worldvec <Vector (-3.3623, 0.9580, -0.1098, 4.2374)>
persp VIEW_3D: <Matrix 4x4 ( 1.2346, 1.3097, -0.0195, -0.6087)
(-1.0378, 1.0218, 2.9278, -1.2318)
(-0.6562, 0.6119, -0.4461, 11.0747)
(-0.6549, 0.6107, -0.4452, 11.2523)>
perspvec <Vector (-7.8480, -0.9044, 1.0326, 47.3308)>
normvec: <Vector (-0.1658, -0.0191, 0.0218, 47.3308)>
screen: 533.8801860809326,353.1213562935591

When I render out the frame, the pixel is at position 61, 427 (with the image origin in the upper left).
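One thing I am unsure about: Blender's NDC presumably has y pointing up, while the rendered image origin is in the upper left, so a y flip may be needed. A quick sketch of the pixel mapping with that flip (the flip is my guess, and even with it, feeding in my normalized values above gives roughly 533.9, 366.9 rather than 61, 427, so I suspect something upstream as well):

```python
# NDC -> pixel mapping with a y flip for an upper-left image origin.
# The flip is a guess on my part, not confirmed against Blender.
def ndc_to_pixel(nx, ny, width, height):
    px = (nx + 1.0) * 0.5 * width
    py = (1.0 - (ny + 1.0) * 0.5) * height   # flip y: image origin upper left
    return px, py

# Using the normalized x, y values from my output above:
print(ndc_to_pixel(-0.1658, -0.0191, 1280, 720))
```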

So is there something else I need to take into account?

Thanks,
Lorne