3D to 2D transformations

I’m trying to transform vertices from Blender (vert.co) to a 2D projection - can one of you wizards explain which matrices I need to multiply, and in which order? I’m trying
Window.GetPerspMatrix() * Window.GetViewMatrix() * Object.matrix * vert.co, but it doesn’t look right… :confused:

I know somewhere I need to switch from world-units to device-units, but first things first.


Does this help?


I find Geom Tool indispensable - can’t imagine why this functionality isn’t included in Blender.

You’re almost right, but you don’t need the view matrix (it’s already contained in the persp matrix).
To convert a vertex v to screen coordinates you need to do the following:

# convert the coordinates of v to a 4D vector with 1.0 in the w-component
co = v.co.copy().resize4D(); co[3] = 1.0
# multiply with the matrices (it's vector * matrix - not matrix * vector!)
hsc = co * object.getMatrix() * Window.GetPerspMatrix()
# now you have unnormalized (homogeneous) window coordinates
# the scaling factor is in the w-component (index 3) -> divide x,y by w
# (if you also need z, e.g. to compare against a z-buffer value, do the same for hsc[2])
hsc[0] /= hsc[3]
hsc[1] /= hsc[3]
# now hsc[0] is the normalized x-coordinate and hsc[1] the normalized y-coordinate,
# both in the range [-1..1]; (0,0) corresponds to the center of the window
# next convert to screen coordinates, i.e. multiply by half the window size and add the window center
w = Window.GetScreenInfo(Window.Types.VIEW3D)[0]["vertices"] # the window vertices
sx = w[2] - w[0] # size_x
sy = w[3] - w[1] # size_y
mx = 0.5*(w[0]+w[2]) # center_x
my = 0.5*(w[1]+w[3]) # center_y
x = mx+hsc[0]*0.5*sx # center_x + hsc_x  * half size_x
y = my+hsc[1]*0.5*sy # center_y + hsc_y * half size_y
# here we are - screen coordinates x,y

hope it helps!
This code is not optimized at all - it’s just to illustrate what to do. I’m sure it can be optimized quite a bit. Most apparent: if you need to convert many vertices, move the constant stuff out of the loop (retrieving the window size, center, …).
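By the way, the divide-and-map arithmetic above doesn’t need Blender at all, so here’s a self-contained plain-Python sketch of those last steps (the function name and the window-rectangle tuple are just my illustration, not Blender API):

```python
def project_to_screen(hsc, win):
    """Map an unnormalized homogeneous coordinate (x, y, z, w) to screen
    pixels, given the window rectangle (x_min, y_min, x_max, y_max)."""
    x, y, z, w = hsc
    # perspective divide -> normalized device coordinates in [-1..1]
    nx = x / w
    ny = y / w
    x_min, y_min, x_max, y_max = win
    # window center + ndc * half window size (same as mx + hsc[0]*0.5*sx above)
    sx = 0.5 * (x_min + x_max) + nx * 0.5 * (x_max - x_min)
    sy = 0.5 * (y_min + y_max) + ny * 0.5 * (y_max - y_min)
    return sx, sy

# a point in the middle of the view lands on the window center:
print(project_to_screen((0.0, 0.0, 0.5, 1.0), (0, 0, 640, 480)))  # (320.0, 240.0)
```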


Fantastic! I’ll give that a try.


There’s also an alternative solution using OpenGL (faster for sure) - but I thought I’d show you the other version first:

# get all necessary matrices
om  = object.getMatrix('worldspace').copy()
vm  = Window.GetViewMatrix().copy()
vmi = vm.copy(); vmi.invert()
pm  = Window.GetPerspMatrix().copy()
wm  = vmi * pm # this is what opengl calls the projection matrix
mm  = om * vm # this is what opengl calls the modelview matrix
# to use opengl we need to store everything in BGL buffers...
# get the viewport (similar to window["vertices"])
gl_vp = BGL.Buffer(BGL.GL_INT, 4)
BGL.glGetIntegerv(BGL.GL_VIEWPORT, gl_vp)
# move the matrices for ogl into buffers
gl_wm = BGL.Buffer(BGL.GL_DOUBLE, [4, 4], list(wm)) 
gl_mm = BGL.Buffer(BGL.GL_DOUBLE, [4, 4], list(mm)) 
# create buffers to hold return values
gl_x = BGL.Buffer(BGL.GL_DOUBLE, 1, [0.0])
gl_y = BGL.Buffer(BGL.GL_DOUBLE, 1, [0.0])
gl_z = BGL.Buffer(BGL.GL_DOUBLE, 1, [1.0])  
# do the projection using gluproject
BGL.gluProject(v.co[0], v.co[1], v.co[2], gl_mm, gl_wm, gl_vp, gl_x, gl_y, gl_z)   
# to read the returned values from the buffers we need to use indexing (even though the buffers only have size 1)
print gl_x[0], gl_y[0]



Everything is working beautifully, but I need to do some polygon splitting, depth sorting, etc. (I’m using the CrystalSpace engine for the more difficult math). The depth sorting worked fine with the homogeneous coordinates (x,y,z,w), but none of the splitting functions use 4D vectors, so I’ve been converting them to 3D vectors. The problem is that when I split a polygon, I end up with two new vertices that only have 3D locations. How do I add the proper w-component to these, or should I do all this before transforming to homogeneous coordinates? Would I need to transform them to camera-space coordinates first?


ps - or do I multiply them by the inverse of the perspective matrix, split them, and then transform them back?

Depends on what you’re trying to do.
I’m not familiar with the CrystalSpace engine, but from what you’re writing it sounds like you’re preparing your scene for BSP or portal rendering (or just a painter’s algorithm?)
I’d do the polygon splitting in local or global space and transform the vertices afterwards. If you need to do the transformation first (maybe to check which polygons to split), it shouldn’t make much difference: once you know which polygons to split, you can still do the splitting in local space, transform the new vertices, and sort them into your depth-correct list.
…and BSP trees work entirely in local space anyway - no need to transform to camera space to build the tree.
Sorry if I can’t help you much more here.
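If it helps, splitting in local space can look roughly like this - a minimal, self-contained sketch (the function name, the plane form dot(normal, p) = d, and the convex-polygon assumption are mine, not CrystalSpace API):

```python
def split_polygon(poly, normal, d):
    """Split a convex polygon (list of (x, y, z) tuples) by the plane
    dot(normal, p) = d. Returns (front, back); either may be empty."""
    def dist(p):
        return sum(n * c for n, c in zip(normal, p)) - d
    front, back = [], []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        da, db = dist(a), dist(b)
        if da >= 0.0:
            front.append(a)
        if da <= 0.0:
            back.append(a)
        if da * db < 0.0:
            # the edge a-b crosses the plane -> add the intersection to both parts
            t = da / (da - db)
            p = tuple(ac + t * (bc - ac) for ac, bc in zip(a, b))
            front.append(p)
            back.append(p)
    return front, back
```

The two new vertices are the interpolated points p; since they are computed in local space, you can transform them afterwards like any other vertex, w-component included.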

I didn’t understand - can you please make it simple for everyone to understand how to use this script?


I’m trying to avoid the cost of a complete BSP tree by doing a depth sort at the same time (I’m calling it ‘BSP sort-of’). That way I only have to check/split polygons that actually intersect. It’s working so far, but I’m still trying to get the splitting working for actual intersections.

By way of background, this is for a vector-rendering engine for technical drawing - because real-time output isn’t important, a full BSP tree seemed like overkill.



What exactly do you mean by “intersections”? A 2D overlap test on screen?
For depth sorting you compare two polygons (say triangles): you need to check which one is in front of the other as seen from the camera. First check whether they overlap on screen (a 2D overlap test). If they do not overlap, the order of the two triangles is unimportant (or just undefined for the moment) - they can be interchanged, and their order only depends on the other triangles you haven’t inserted yet. If the two triangles do overlap on screen, their order is fully determined. (But you can still run into trouble with cyclic overlaps.)
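A cheap first pass for that 2D overlap test is to compare screen-space bounding boxes - a sketch (helper names are mine):

```python
def bbox(points):
    """Axis-aligned bounding box of 2D points: (min_x, min_y, max_x, max_y)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def bboxes_overlap(a, b):
    """True if two boxes share any area - a cheap, conservative screen test."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]
```

If the boxes don’t overlap, the triangles can’t either, so the order is free; only when the boxes do overlap do you need the exact (and more expensive) triangle-triangle test.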


Thanks, I think I can work through it.

Do you know how to get the view matrix for a camera in Blender? I’ve been using Blender.Window.GetViewMatrix(), but that seems to be specific to the window, not necessarily the camera. Because the window dimensions aren’t necessarily the image dimensions, I’m getting distortion - being able to use the camera would make it easier to get consistent framing and a consistent viewpoint every time.


camera 2 matrix - ao2 has code for this in his well commented VRM code.
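For reference, a camera’s view matrix is just the inverse of the camera object’s world matrix (in the API I believe that’s something like the camera object’s getMatrix('worldspace') inverted - check ao2’s code for the exact calls). Since a camera transform is rigid, the inverse is cheap; a plain-Python sketch, assuming column-vector convention with translation in the last column (Blender’s row-vector convention is the transpose):

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices given as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(m):
    """Invert a 4x4 rigid transform (rotation + translation, no scale),
    using R^-1 = R^T and the inverse translation -R^T * t."""
    rt = [[m[j][i] for j in range(3)] for i in range(3)]  # transposed rotation
    t = [m[0][3], m[1][3], m[2][3]]
    it = [-sum(rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [rt[0] + [it[0]],
            rt[1] + [it[1]],
            rt[2] + [it[2]],
            [0.0, 0.0, 0.0, 1.0]]

# example: a camera rotated 90 degrees about z and moved to (1, 2, 3)
cam = [[0.0, -1.0, 0.0, 1.0],
       [1.0,  0.0, 0.0, 2.0],
       [0.0,  0.0, 1.0, 3.0],
       [0.0,  0.0, 0.0, 1.0]]
view = rigid_inverse(cam)  # cam * view is the identity
```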


I’ve been thinking about using the OpenGL method, but I’m wondering - is there any way of transforming vectors only into clip space (I think that’s what OpenGL calls it), rather than doing the full transformation to window coordinates? It seems like a simple thing, but OpenGL really is optimized for a rasterization pipeline…


Hey rocketship!
Sorry - I don’t know off the top of my head. But I guess you can’t use gluProject for that, since it transforms vertices all the way to window coordinates. You might want to check the OpenGL docs though - maybe I’m wrong. Or you could ask in an OpenGL forum for help with this problem(?)
Good luck!
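For what it’s worth: following the first snippet in this thread, clip-space coordinates are simply the homogeneous result after the matrix multiplies but before the perspective divide and viewport mapping, so you can compute them without gluProject at all. A plain-Python sketch (function names are mine):

```python
def vec_mat(v, m):
    """Row vector times 4x4 matrix (nested lists), as in the snippets above."""
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

def to_clip_space(co, modelview, projection):
    """Transform a 3D point to clip space: multiply by the modelview and
    projection matrices, but skip the divide-by-w and viewport mapping."""
    v = [co[0], co[1], co[2], 1.0]
    return vec_mat(vec_mat(v, modelview), projection)
```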