how to find the coordinates in image space of a given vertex using a given camera

hello. i’ve been trying to figure this out for a while but so far i’ve come up with nothing.

i have the coordinates of a point in world space and i need to transform this point to normalized device coordinates using the active camera (the lower left corner maps to (-1, -1) and the upper right to (1, 1)). how would one go about this? if it helps, the camera is assumed to have a perspective projection.

thanks in advance

add an empty, move it to where you want it…
object properties will give the location.
i think that’s what you are asking… ?

i want to achieve this automatically with a python script. the idea, in pseudo code, is this:

#get a vertex in world coordinates, (x, y, z)
vertex = getTheVertexSomehow()

#get the active camera
cam =

#express the vertex in the normalized device coordinates
#(or image coordinates) that would result if rendering with the active camera
vertexNDC = this is where i'm stuck
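in case it helps to see the math outside Blender first: here is a minimal pure-Python sketch of the pipeline (hypothetical `world_to_ndc` helper, simple pinhole model, camera looking down its local -Z axis as Blender cameras do — the matrix layout and FOV handling are assumptions, not Blender API):

```python
from math import tan, radians

def world_to_ndc(cam_matrix_inv, point, fov_x, aspect):
    # cam_matrix_inv: inverse of the camera's 4x4 world matrix (row-major
    # nested lists); point: (x, y, z) in world space; fov_x: horizontal
    # field of view in radians; aspect: render width / height.
    x, y, z = point
    # world -> camera space (homogeneous point, w = 1)
    cx, cy, cz = (sum(cam_matrix_inv[r][c] * v
                      for c, v in enumerate((x, y, z, 1.0)))
                  for r in range(3))
    # the camera looks down -Z, so the depth in front of the lens is -cz
    t = tan(fov_x / 2.0)
    return cx / (-cz * t), cy * aspect / (-cz * t)

# identity camera at the origin, 90 degree horizontal FOV, square image:
I = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(world_to_ndc(I, (1.0, 0.0, -1.0), radians(90), 1.0))  # ~ (1.0, 0.0)
```

a point on the optical axis lands at (0, 0), and a point at the edge of the horizontal field of view lands at x = ±1, matching the NDC convention above.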

I think you could work like this:

import bpy
obj =['Plane']  # naturally adjust!!
objpoints =[:]
wm = obj.matrix_world
for i, el in enumerate(objpoints):
    print(i, wm *

the vectors of the points have to be left-multiplied by the object’s matrix_world
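for readers outside Blender, that left-multiplication looks like this in plain Python (hypothetical `transform_point` helper; row-major 4x4 matrix, point treated as a column vector with w = 1):

```python
def transform_point(m, p):
    # m: 4x4 matrix as row-major nested lists; p: (x, y, z) local coords.
    x, y, z = p
    return tuple(sum(m[r][c] * v for c, v in enumerate((x, y, z, 1.0)))
                 for r in range(3))

# a matrix_world that only translates by (10, 0, 0):
M = [[1, 0, 0, 10],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
print(transform_point(M, (1.0, 2.0, 3.0)))  # (11.0, 2.0, 3.0)
```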

that would just give me the position of the vertices of ‘Plane’ in world coordinates, which i already have, as i described.

i need to use the active camera’s transform matrix and projection matrix to compute the position of the vertex in the coordinate frame of an image rendered with that camera.


Sorry, I read your first post wrongly …

I was just using the AddMesh Torus AddOn and I noticed it has an entire section for aligning the mesh to the current viewport. Perhaps there is some code in there that might help? It is part of the standard Blender download so you’ll have to dig around to find the .py file.

Hi technoestupido,

I suggest you look at the function location_3d_to_region_2d() in the file:


in your Blender directory. I think this is similar enough to the calculation you want.


This code snippet from here works for me in 2.61:

import bpy 
from mathutils import * 
from math import * 

# Getting width, height and the camera 
scn =['Scene']
w = scn.render.resolution_x * scn.render.resolution_percentage / 100.
h = scn.render.resolution_y * scn.render.resolution_percentage / 100.
cam =['Camera']
camobj =['Camera']

# Some point in 3D you want to project, v
v = bpy.context.scene.cursor_location

# Getting camera parameters 
# Extrinsic 
RT = camobj.matrix_world.inverted() 
# Intrinsic 
C = Matrix().to_3x3() 
C[0][0] = -w/2 / tan(cam.angle/2) 
ratio = w/h 
C[1][1] = -h/2. / tan(cam.angle/2) * ratio 
C[0][2] = w / 2. 
C[1][2] = h / 2. 
C[2][2] = 1. 

# Projecting v with the camera 
p = C * RT * v 
p /= p[2]
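note that after the divide, p[0] and p[1] are pixel coordinates in the w×h image, while the original question asked for NDC in [-1, 1]. assuming the pixel origin sits at the lower left corner (an assumption — check the sign conventions of the intrinsic matrix above against a test render), the remap is just a linear rescale:

```python
def pixel_to_ndc(px, py, w, h):
    # Map pixel coordinates (assumed origin: lower-left corner) to
    # normalized device coordinates in [-1, 1] on both axes.
    return 2.0 * px / w - 1.0, 2.0 * py / h - 1.0

print(pixel_to_ndc(320.0, 240.0, 640.0, 480.0))  # (0.0, 0.0) - image center
print(pixel_to_ndc(640.0, 480.0, 640.0, 480.0))  # (1.0, 1.0) - upper right
```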