Hmmm… Chrome says stereophotogrammetry is spelled wrong. Go figure. Anyway, I recently stumbled upon a program (while searching the great Instructables) called 123D Catch (Link). 123D Catch, in case you don’t know, is a 3D stereophotogrammetry program. Photogrammetry (since I know you’re wondering what it is) is a method of making 3D models from a series of photographs. When I went to said link, the first thing I saw was the Autodesk logo at the top, so I thought “crap, another really expensive piece of Autodesk software.” But I was surprised to find out that this program is actually free (as of now). It got me thinking: is it possible (well, it is, but is it feasible) to have stereophotogrammetry in Blender, an open-source program? It seems like it isn’t too unlike the motion tracker.
I requested this as a GSoC project — there are also insight3d and ARC 3D.
There are already open-source versions: http://www.archeos.eu/wiki/doku.php?id=start — and Blender is part of the distro.
As it happens, libmv is already planning to do this. Whether it has actually been implemented in the library yet, I don’t know.
Sergey Sharybin would be the man to ask.
I actually did, half a year back or so, and as already mentioned, he said this is planned as part of libmv.
From the roadmap, manual setup seems to be part of v2.0, while dense models from several photos seem to be part of v3.0.
Libmv has yet to reach 1.0, so things might have changed (or might still change), and I may not have understood the roadmap descriptions correctly, but it seems it will come to Blender eventually.
Personally, I would love a manual setup like PhotoModeler or insight3d.
PS: PhotoModeler uses a special technique where you print out a paper with a pattern, photograph it with your camera, and the program then reverse-calculates the lens distortion from the photos, so the final model is very accurate. Pity that the program has the worst interface in the world.
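For the curious, here is a toy sketch of the kind of lens distortion model that calibration-from-a-printed-pattern estimates (a simple radial model); real tools fit the coefficients from many pattern photos, but the coefficients below are just made up for illustration:

```python
# Toy radial (Brown-Conrady-style) lens distortion model.
# k1, k2 are the distortion coefficients a calibration tool would
# estimate from photos of a printed pattern; these values are invented.

def distort(x, y, k1, k2):
    """Apply radial distortion to normalized image coordinates."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def undistort(xd, yd, k1, k2, iterations=20):
    """Invert the distortion by simple fixed-point iteration."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        factor = 1 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y

k1, k2 = -0.25, 0.05                  # hypothetical barrel distortion
xd, yd = distort(0.4, 0.3, k1, k2)    # where the point lands on the photo
xu, yu = undistort(xd, yd, k1, k2)    # recovered undistorted position
```

Once the coefficients are known, every photo can be undistorted before reconstruction, which is why the final model comes out so accurate.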
So. What about manual stereophotogrammetry? Would it be possible to take a video and use the motion tracker to track different points on the object, then use those (now 3D) points to recreate a mesh, then use the video to create a texture?
You can do that in Blender right now. That’s called 3D scene reconstruction; you can find some good tutorials on Blendercookie (the tutorials are about camera tracking, but they also cover reconstruction).
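The step that turns tracked 2D points into 3D points is triangulation. Not Blender code, but a minimal NumPy sketch of linear (DLT) triangulation under assumed, invented camera setups:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Triangulate one 3D point from two 3x4 projection matrices
    and its tracked 2D coordinates in each view (linear DLT)."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The point is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two invented cameras: one at the origin, one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

This is essentially what the solver does for every track across the video; with enough triangulated points you have the raw material for a mesh.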
What makes stereophotogrammetry different is that you have unassociated pictures.
Is this what they used to call Photofly?
Yes, that’s what Photofly does (did?).
How about the BLAM script?
You need to add some perspective lines, then it can rebuild your 3D object!
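For anyone wondering what those perspective lines buy you: two image lines that are parallel in 3D meet at a vanishing point, which constrains the camera. A tiny sketch of that geometry (the segment endpoints are invented; this is not BLAM’s actual code):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D image points (cross product)."""
    return np.cross([*p, 1.0], [*q, 1.0])

def intersect(l1, l2):
    """Intersection point of two homogeneous lines."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# Two annotated edges that are parallel in 3D (e.g. the top and
# bottom of a wall) converge to a vanishing point in the image:
vp = intersect(line_through((0, 0), (4, 1)),
               line_through((0, 2), (4, 2.5)))
```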
Well, it’s a thought. Although, as I understand it, BLAM only recreates cuboid or cuboid-based shapes.