Convert model made in orthographic mode to perspective


I’ve found this topic discussed on this forum already, but only in dated threads.

I’ve made the big mistake of modelling my human head in orthographic mode for a game, based on a picture, and it doesn’t work well in perspective (as expected): the proportions are all pretty messed up.

I’ve found a particular post on this forum which had a script that might be able to help; however, it returns a 403 for me.

My question is: could anyone suggest (other than redoing my model, which is probably the answer, but here’s hoping) where to find another such script that might be able to do something like this (based on parameters like focal length, etc.), or perhaps other ideas for changing such a model to look OK in perspective?

Thanks in advance.

Can you provide the .blend file? It’s an interesting problem, and I’d like to test a possible method or two before making any recommendations (or trying to describe them). :thinking:

I don’t have exact precision in mind nor can I do any scripting, but am thinking of certain modifiers and mesh editing functions.

And welcome to the BA forums!

I don’t get how a head modelled in orthographic view can turn out so badly because of this; the opposite is more often true.
Could it be instead that in perspective view you are looking at it with too short a focal length, and so you get a distorted aspect?
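To see why a short focal length by itself can make a head look wrong, here is a toy pinhole-camera calculation (not Blender code; the working distances are assumed values, just typical distances you’d stand at to frame a head with a wide vs. a portrait lens). Apparent size scales as 1/distance, so the nose, being closer to the camera, gets magnified more when the camera is near:

```python
# Rough illustration of why short working distances distort faces:
# apparent size scales as 1/distance to the camera, so a feature
# 10 cm closer than the rest of the head is magnified more the
# nearer the camera is (simple pinhole model, no lens effects).

def magnification_ratio(camera_distance_m, depth_offset_m=0.10):
    """How much larger a feature 10 cm closer to the camera appears,
    relative to features at camera_distance_m."""
    return camera_distance_m / (camera_distance_m - depth_offset_m)

# Assumed working distances to frame a head:
for label, dist in [("wide lens, ~0.5 m away", 0.5),
                    ("portrait lens, ~1.5 m away", 1.5)]:
    print(f"{label}: nose appears {magnification_ratio(dist):.2f}x larger")
```

At half a metre the nose is exaggerated by about 25%, at 1.5 m only by about 7%, which is why raising the viewport focal length (and thus the implied viewing distance) flattens the distortion.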


I can’t share the .blend file, unfortunately, because it is a commercial project and I’m not allowed to.

I’ve made some screenshots; maybe you can give me some advice on making it look less… stretched? Both were made in Blender; in the game engine it looks very much like the perspective one.

Sorry I can’t share a file. If you can’t help me, that’s OK; so far I’ve ended up trying to correct the model I have to look better in perspective (pulling the ears out a bit, and the back of the head out).

I think the problem comes from me working with a reference image (made with a camera), and that causes the distortion (since the photo is not orthographic, but perspective, based on the camera). I don’t know what a good workflow is to avoid this; I would appreciate some advice for the future :).

Try setting your Focal Length in the View panel to something higher, like 50mm, and see if it still looks too distorted before you “fix” the model.

I’ve set it to 50 already (heard about that too), but it still looked distorted.

No worries, just might have been quicker that way.

My first idea was to scale down just the central ‘region’ or volume of the head in edit mode, while experimenting with different Proportional Editing falloff curves. In theory this could reintroduce a more correct shape to the mesh that was lost during orthographic modelling, since the camera’s lens would have ‘inflated’ objects more in the middle than at the edges of a reference photo.

It would just be an approximation of course, but may be good enough?
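The idea above can be sketched numerically. This is a minimal toy model, not Blender API code: it assumes a pinhole camera at an assumed distance along the depth axis, and that each vertex stores how far it protrudes toward that camera. A photographed point at distance d projects with scale proportional to 1/d, so tracing the photo in orthographic view bakes in an inflation of D/d for the nearer parts; multiplying the traced coordinates by d/D undoes it:

```python
# Hedged sketch: undo the perspective "inflation" baked in from
# tracing a photo in orthographic view. Assumes a pinhole camera
# at camera_distance along +Y, and that each vertex's y coordinate
# is its protrusion toward the camera (all values are assumptions).

def unbake_perspective(verts, camera_distance):
    """verts: list of (x, y, z) tuples, y = depth toward the camera.
    A photographed point at distance d was inflated by D/d relative
    to points on the reference plane, so scale x and z by d/D."""
    corrected = []
    for x, y, z in verts:
        d = camera_distance - y           # actual distance to camera
        s = d / camera_distance           # inverse of the photo's inflation
        corrected.append((x * s, y, z * s))
    return corrected

# Toy example: a nose tip protruding 0.10 toward a camera 0.6 away
# gets shrunk, while a point on the reference plane is unchanged.
head = [(0.0, 0.10, 0.02), (0.08, 0.0, 0.0)]   # nose tip, ear
print(unbake_perspective(head, 0.6))
```

In practice this is roughly what a centre-weighted scale with a suitable Proportional Editing falloff approximates by hand, since the shrink factor grows with protrusion toward the camera.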

As for a workflow to avoid this, I would just model in perspective mode, especially when trying to achieve anatomical ‘likeness’ of an organic shape from a photo. Modelling in orthographic mode might be best reserved for mechanical, hard-surface objects, or when the output is something like a game that will be viewed the same way.

Hope that helps!

(Edited to correct the direction of the scaling operation mentioned above.)

Have you tried using a Lattice modifier around the head and scaling/stretching out the back of the head? This should proportionally scale the rest of the model so it looks OK.

This is absolutely irrelevant; you have to look at your Camera view.

As you can see, Suzanne seems to have no ears in my User Persp view, but through the Camera everything is fine.