# Perspective in Blender

Hi all!
For a project I’m doing I need to recreate a 3D scene from images rendered in Blender. I use the rendered perspective sequence for the textures, and I use the Z-buffer of orthographic renders to get the depth information. From the depth information I build the 3D space, but to apply the texture correctly I have to perspective-correct the 3D data. Since Blender does this too (when I select or deselect the orthographic projection option), I wonder if anyone knows the exact formulas Blender uses for this calculation. At the moment I’m using these formulas, but the values I’ve tried don’t give a perfect fit of the images. I’ve tried searching the source code too, without luck. Does anyone know which formulas I should use to correctly convert orthographic images to perspective ones?

thanks

So no one can help me? I just need to know how to modify the orthographic images so that they superimpose onto the perspective images created with Blender.
Thanks again!

If you have the distance from the camera (z-depth), the screen coordinates (x, y) and the viewing angle (lens), then getting the 3D coordinates in camera-local space should be simple trigonometry…

Actually what I need is to transform the orthographic coordinates into perspective coordinates. The basic step is to divide the x, y coordinates by their z-depth; then some other multiplications should be done, as described in the link posted above. I tried various values for the focal length, but some work when the objects are far from the camera and others when the objects are near, so I think Blender does some extra calculation. That is exactly what I’m looking for: I would like to know how Blender converts the orthographic coordinates into perspective ones.
Thanks

Ah, I see, try

(px, py) = (x·w, y·h) / (2·z·sin v)

where x, y and z are your known coordinates, (w, h) are the dimensions of your canvas and v is the viewing angle.
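In code, that formula might look like the following Python sketch (the function name is mine, and I’m assuming v is the full viewing angle, passed in radians):

```python
import math

def ortho_to_persp(x, y, z, w, h, fov):
    """Project orthographic coordinates (x, y) at depth z to
    perspective pixel coordinates, per the formula above."""
    s = 2.0 * z * math.sin(fov)
    return (x * w / s, y * h / s)
```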

So x, y and z are the coordinates of the 3D space (I get those from a Blender render using the Z-buffer, right?), w and h are the dimensions of the image, and v is the field of view as specified in Blender?

PS: to be more precise, I get the actual x and y by using the scale value from Blender.

Yes, you correctly understood my intentions
Does it work?

I cannot test it today because I’m on a different computer, I will try that tomorrow or at worst the next week. Thanks a lot for your help for the moment, I’ll keep you up to date!

Ok, cool, hope it turns out to be useful

Just one question: should x and y be centered? I mean, if x goes from 10 to 30 BU, should I subtract 20 (the x coordinate of the image center) from the coordinates?

I would think not, since your original coordinate is an image coordinate, but you might have to. If so, you should add half the image width back to the centered x-value at the end, I suppose…
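If centering does turn out to be necessary, the round trip could be sketched like this (entirely hypothetical names; the depth scaling here just stands in for whichever projection formula you end up using):

```python
def project_centered(x, z, img_width, scale):
    """Center x on the image middle, apply a depth-dependent
    scaling, then shift back into image coordinates."""
    xc = x - img_width / 2.0       # center on the image middle
    px = xc * scale / z            # stand-in perspective scaling
    return px + img_width / 2.0    # shift back to image coordinates
```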

Hi, I tried the formula you proposed on an image of 768 × 575 pixels. It does not seem to work well; if I remove the 2 it’s much better, but still not perfectly superposable on the Blender-generated perspective file. The x, y and z I used are in Blender units, matching the image (i.e. in orthographic I have a scale of 50, so x goes from −25 to 25; the Z limits are 0 to 100, so z goes from 0 to 100). Also, I have a lens of 35, which is 49.13 degrees, and that’s the value I used in the sine.
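As a side note on where the 49.13 comes from: Blender derives the field of view of a 35 mm lens from its default 32 mm sensor width, i.e. 2·atan(16/35). A quick Python check (assuming that default sensor width):

```python
import math

def lens_to_fov_deg(focal_mm, sensor_mm=32.0):
    """Full horizontal field of view, in degrees, for a given
    focal length and sensor width (Blender's default is 32 mm)."""
    return math.degrees(2.0 * math.atan((sensor_mm / 2.0) / focal_mm))

print(round(lens_to_fov_deg(35.0), 2))  # → 49.13
```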

I’ve also found this discussion, which deals with a similar problem. In the Blender source code I found a couple of matrices, perspective_m4 and orthographic_m4, which are similar to the ones in the link but add a few things. They look like this:

perspective_m4:

```
nearClip*2.0f/Xdelta   0                      0                                 0
0                      nearClip*2.0f/Ydelta   0                                 0
(right+left)/Xdelta    (top+bottom)/Ydelta    -(farClip+nearClip)/Zdelta       -1
0                      0                      (-2.0f*nearClip*farClip)/Zdelta   0
```

orthographic_m4 (the source initializes it with unit_m4, so the last diagonal entry is 1):

```
2.0f/Xdelta            0                      0                             0
0                      2.0f/Ydelta            0                             0
0                      0                      -2.0f/Zdelta                  0
-(right+left)/Xdelta   -(top+bottom)/Ydelta   -(farClip+nearClip)/Zdelta    1
```
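For experimenting, the two matrices can be rebuilt in, say, NumPy. A sketch (the function names mirror the ones in the source; the point is treated as a row vector to match the row layout above, and the perspective result still needs the divide by its w component):

```python
import numpy as np

def perspective_m4(left, right, bottom, top, nearClip, farClip):
    """Frustum matrix, laid out like Blender's perspective_m4."""
    Xdelta, Ydelta, Zdelta = right - left, top - bottom, farClip - nearClip
    m = np.zeros((4, 4))
    m[0, 0] = nearClip * 2.0 / Xdelta
    m[1, 1] = nearClip * 2.0 / Ydelta
    m[2, 0] = (right + left) / Xdelta
    m[2, 1] = (top + bottom) / Ydelta
    m[2, 2] = -(farClip + nearClip) / Zdelta
    m[2, 3] = -1.0
    m[3, 2] = (-2.0 * nearClip * farClip) / Zdelta
    return m

def orthographic_m4(left, right, bottom, top, nearClip, farClip):
    """Orthographic matrix, laid out like Blender's orthographic_m4."""
    Xdelta, Ydelta, Zdelta = right - left, top - bottom, farClip - nearClip
    m = np.zeros((4, 4))
    m[0, 0] = 2.0 / Xdelta
    m[1, 1] = 2.0 / Ydelta
    m[2, 2] = -2.0 / Zdelta
    m[3, 0] = -(right + left) / Xdelta
    m[3, 1] = -(top + bottom) / Ydelta
    m[3, 2] = -(farClip + nearClip) / Zdelta
    m[3, 3] = 1.0
    return m

# A camera-space point (the camera looks down -z), as a row vector:
p = np.array([1.0, 2.0, -5.0, 1.0])
clip = p @ perspective_m4(-1.0, 1.0, -1.0, 1.0, 0.1, 100.0)
ndc = clip[:3] / clip[3]  # the perspective divide
```

Note that after the perspective matrix, clip[3] equals −z, which is what produces the divide-by-depth you described earlier; the orthographic matrix leaves w at 1, so no such divide happens.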