Hello, I’m working on an animation about a cycle journey I completed from Berlin to Istanbul last summer. I have very few photographs from the trip, especially of the people we met (at the time I wasn’t planning to make a film at all, so I didn’t really prepare), but I need to build models of many of these people, and I would like them to be as lifelike as possible so I can include the few real photos I have within the animation.
The problem I have is this: modelling from photographs in side and front ortho views is fine, until you try to use photos that were taken at close range or with a wide-angle lens. Inevitably, the images of high enough quality to use as background references and for UV remapping are the ones taken close up, as we only had cheap hand-held cameras on us. What I end up with is a mesh that looks like it is being viewed in perspective mode when you are in ortho, and when you switch to perspective view, the head looks like it is being viewed at an extremely wide angle, even on, say, a 90mm lens (almost as if a sphere cast has been applied to it). I have attached a render of what I mean. These were two children we stopped and played football with at the side of the road - I only had a front-view photo.
So essentially: when you are modelling in ortho with reference images taken with a wide-angle perspective lens, is there any correction you can apply to the mesh to compensate for the fact that your photos were not taken with a longer lens, which would be closer to the ortho we model in?
All the best, and happy Christmas,
Do you use projection paint? That SHOULD take into account the stretch if you do the UV map correctly.
Hmm, not quite what I mean. Sorry, I’ll explain myself again:
Is there any way to compensate, when building a mesh in ortho (i.e. non-perspective) space, for the fact that the background reference images were taken in the real world with a perspective (usually quite wide) lens? There is inevitably a lot of distortion in the mesh shape itself. I was wondering if people just ignored this, or always redrew their reference material as orthos?
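To make the distortion concrete, here is a toy pinhole-camera calculation (the distances and feature sizes are my own invented numbers, not from anyone’s actual photos): a point at lateral offset x and distance d from the camera lands at x·f/d on the film, so features at different depths are scaled by different amounts, and the effect is far stronger close up.

```python
# Toy illustration of why a close-up photo distorts proportions.
# A pinhole camera projects a point at lateral offset x and distance
# dist to (x * f / dist) on the film, so features at different depths
# end up at different scales in the same photo.
def apparent_width(x, dist, f=0.035):   # 35mm lens, distances in metres
    return x * f / dist

# Invented example: a cheekbone 0.07 m wide at the face's centre plane,
# and an ear of the same width sitting 0.09 m further from the camera.
for cam in (0.4, 3.0):                  # close-up vs. long shot
    cheek = apparent_width(0.07, cam)
    ear = apparent_width(0.07, cam + 0.09)
    print(f"camera at {cam} m: ear/cheek ratio = {ear / cheek:.2f}")
```

With these numbers the ear reads about 18% too narrow in the close-up but only about 3% too narrow in the long shot, which is exactly the “sphere cast” look described above: the depth-dependent scaling gets baked into any mesh traced over the close-up in ortho view.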
Not sure, but I think there’s a “lens” script in Gimp. Or perhaps with the “perspective” tool, you could try to correct the distortion.
Just a suggestion
It will still look off if you “fix” it: there will be more resolution on one part and less on the other. But it would be best to correct the texture first.
You’re right, but he wanted it only as a reference for modelling.
In such cases, though, the normal practice is to take photos of as many different angles as possible - with a turntable or something, iirc.
The turntable idea seems the most logical for modelling heads. I find it odd, though, that Blender uses the background image in ortho front and side views, but if you press 4 or 6 to rotate round in steps you can’t use one. As far as I can see, you have to have a camera position set up to use background images in any view other than numpad 1, 3 and 7.
But the underlying problem is that my reference material is bad in the first place. I’ve used the lens distortion tool in Photoshop, but I’m not sure that’s solving it. To take it to extremes: if you were so close to someone’s face that you couldn’t see their ears, no amount of lens correction would bring their ears back. I guess the solution is just to find better reference material in the first place, or to learn to be a better modeller!
Sorry for the late answer.
I think there are a lot of links - on the Texturing forum - to sites with (normally free) reference images.
If not, I could post some: while I don’t do character modelling, I’ve been through a few of them looking for images suitable for “billboard” humans. It’s been quite some time, though.
I use a lens distortion correction plugin for Corel Photo-Paint. In your case, the correction amount needs to be calibrated for the particular camera/lens and camera-to-subject distance: take one image at close range, and take a “corrected”, distortion-free image from far away to guide you.
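A hedged sketch of how that two-photo calibration might be expressed numerically (pinhole model only, ignoring radial lens distortion; the function name and numbers are invented for illustration): measure the size ratio between a feature behind the face’s centre plane and one on it, in both the close-up and the long shot, divide out the long shot’s (near-true) ratio, and solve for the effective camera distance that explains the close-up.

```python
# Hypothetical sketch of calibrating perspective "squash" from two
# photos of the same subject.  Pinhole model: a feature sitting
# depth_offset behind the centre plane appears scaled by
# d / (d + depth_offset) relative to the centre plane, where d is the
# camera-to-subject distance.  Rearranging that gives d.
def solve_camera_distance(ratio_close, depth_offset):
    """ratio_close: (rear feature / centre feature) size ratio measured
    in the close-up, after dividing out the true ratio taken from the
    distortion-free long shot.  depth_offset: how far the rear feature
    sits behind the centre plane, in real-world units.
    Returns the implied camera distance d, from
    ratio_close = d / (d + depth_offset)."""
    return ratio_close * depth_offset / (1.0 - ratio_close)

# Invented example: the ear measures 0.82x its true relative size in
# the close-up and sits 0.09 m behind the cheek plane.
print(solve_camera_distance(0.82, 0.09))
```

Once you have that distance, the same ratio formula tells you how much to stretch each depth slice of the reference (or the mesh) to undo the close-up’s exaggeration.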
Thanks for all the ideas, guys. ridix’s suggestion looks like a great way to get the high image quality of a close-up and the geometrical accuracy of a long shot. In the long run it might be interesting to wait and see how this feature turns out: http://www.youtube.com/watch?v=J1-5Y4RAZR4 and http://www.youtube.com/watch?v=ZQ4ghCNGOxA
Here’s a quick view of the model I ended up with…
It’s much more rectangular than it should be. The discrepancies with the images are really obvious when you’re working with something that you know should be a perfect sphere and already has its topology marked out.
This explains why it’s always such a struggle to fit the eyeballs into a part of the face which is much flatter than it really is, due to the image distortion!
What I’m thinking is: if I take this object and add a shape key where the vertices match the positions on a perfect sphere, I could then use it as a Mesh Deform to correct the distorted meshes of the characters I’ve built. But then again, this shape is the specific deformed shape of a 35mm lens - do I need a whole array of shapes? Is there an algorithm that could be coded in Python, a kind of 3D equivalent of the lens correction found in Photoshop, GIMP, etc.?
distortsphere.blend (154 KB)
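Under a simple pinhole assumption, the “3D lens correct” asked about above could be a per-vertex scale rather than a library of shape keys. This is only a sketch with an invented function name; it assumes you know (or have calibrated) the camera-to-subject distance, and it ignores radial lens distortion entirely:

```python
# Hedged sketch (my own function, not a Blender API): undo the
# perspective foreshortening baked into a mesh that was modelled over a
# wide-angle reference photo.  Assumes a pinhole camera with a known
# camera-to-subject distance and no radial lens distortion.
# Blender front-view convention: X and Z are the image-plane axes, +Y
# is depth away from the viewer, and the mesh origin sits on the
# subject's centre plane (y = 0).

def unperspect(verts, cam_dist):
    """verts: list of (x, y, z) vertex positions.  cam_dist: distance
    from the camera to the y = 0 centre plane, in mesh units.

    A vertex at depth offset y was traced off the photo at a scale of
    cam_dist / (cam_dist + y); multiplying by the inverse restores its
    true lateral position."""
    out = []
    for x, y, z in verts:
        s = (cam_dist + y) / cam_dist   # per-vertex correction factor
        out.append((x * s, y, z * s))   # depth itself is left untouched
    return out

# Toy check: two vertices at the same lateral offset but different
# depths.  The photo exaggerated the nearer one (y = -0.1) and shrank
# the farther one (y = +0.1); the correction pulls them back apart.
print(unperspect([(0.5, -0.1, 0.0), (0.5, 0.1, 0.0)], cam_dist=0.5))
```

Inside Blender this could be run from the Python console over each vertex’s `co`, and applying the inverse scale would reproduce the distorted shape instead; so one function would cover the “whole array of shapes” just by varying `cam_dist`, rather than needing a separate shape key per lens.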