Modelling from photos using the tracker in Blender?

Hi Blenderboys and girls :slight_smile:
I use Blender for arch viz and would like to know if anybody has experience using Blender as a basis for precise photogrammetry (modelling from still images taken from different angles)?

At our school we use PhotoModeler Link
It is great because you can adjust and calibrate your camera and get very (seriously, very) accurate point-to-point 3D models out of your images.
BUT it has the most terrible user interface of the 21st century, which makes it barely usable.
It works by manually (that's a good thing) placing points on corners and edges of the geometry in the photos, and the program outputs the full model.
There is also insight3d, which is open source and works in much the same way.

Back to the question: any input?

You can also use this: http://my3dscanner.com

The mesh you get needs some polishing, but it works.

Try the BLAM camera matching addon.

@roofoo: Yes, I saw the BLAM addon :slight_smile: It is very nice and I believe I am going to use it a lot. I still miss the ability to set points and calibrate them across different photos, as I think that technique gives more accuracy.

There is an OS that has the tools to do it automatically for you. http://www.archeos.eu/wiki/doku.php?id=start

Mingus is making a CAD version using 2.49b as the base, so it's not broken frequently by updates. http://www.cad4arch.com/cadtools/index.htm “2011 October and November:
I am implementing stereophotogrammetry method for Blender for precise reconstruction of 3d models from images: with help of SNAP-option “RAY” you can set 3d points in regard to two cameras calibrated previously with CAMERA-MATCH tool”

For newer Blender versions, check out Sebastian's Tomato branch tutorials, but I believe those require video, not stills. http://cgcookie.com/blender/2011/07/14/gsoc-tomato-branch-camera-tracking/

Blender's tracker is basically a libmv integration; libmv can use stills, and the results can obviously be brought into Blender. https://github.com/libmv/libmv
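A minimal sketch of what that looks like on the Blender side, assuming your stills are saved as a numbered sequence (the path here is a placeholder, and the API shown is the bpy one):

```python
# Sketch only: load a numbered series of photos as an image-sequence clip,
# which the tracker then treats like any other footage.
# "//photos/img_0001.jpg" is a placeholder path relative to the .blend file.
import bpy

clip = bpy.data.movieclips.load("//photos/img_0001.jpg")  # first still of the series
print(clip.name, clip.source, clip.frame_duration)  # source should report 'SEQUENCE' for stills
```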

“For newer Blender versions, check out Sebastian's Tomato branch tutorials, but I believe those require video, not stills.”

No, you can use stills too. In fact, on the Track Match Blend DVD he gives an example of using two photos taken from different angles and then matching points between them. If you are patient enough, you can place more points to get a rough estimation of the objects in the scene. You need a minimum of 8 points for the 3D reconstruction to work.
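To illustrate why 8 points is the classic minimum, here is a rough sketch of the two-view math involved. This is not Blender's or libmv's actual code; it uses NumPy and OpenCV (my choice, not something from this thread) on a synthetic pair of views, just to show that enough correspondences plus camera intrinsics let you recover the relative pose and triangulate a point cloud, up to scale:

```python
# Illustration only: two-view reconstruction on synthetic data with OpenCV.
import numpy as np
import cv2

# Synthetic scene: some 3D points in front of two cameras.
rng = np.random.default_rng(0)
pts3d_true = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))

# Placeholder camera intrinsics (focal length and principal point in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])

# Camera 1 at the origin, camera 2 translated and slightly rotated.
R2, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))
t2 = np.array([[0.5], [0.0], [0.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R2, t2])

def project(P, X):
    """Project 3D points X (N, 3) through a 3x4 camera matrix P into pixels."""
    x = P @ np.hstack([X, np.ones((len(X), 1))]).T
    return (x[:2] / x[2]).T

pts1 = project(P1, pts3d_true)   # "clicked" points in photo 1
pts2 = project(P2, pts3d_true)   # matching points in photo 2

# The classic eight-point algorithm needs at least 8 correspondences to estimate
# the fundamental/essential matrix (OpenCV's findEssentialMat actually uses a
# 5-point solver internally). From E we recover the relative pose...
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# ...and triangulate the matches back into a 3D point cloud (up to scale).
pts4d = cv2.triangulatePoints(P1, K @ np.hstack([R, t]), pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T  # (N, 3) reconstructed points
```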

So libmv is the tracking library (I actually knew that), and it can track still images (I did not know that). We might actually get this tool in Blender with future development of the tracker?

According to roofoo it's already in; I didn't know that. In libmv, tvr means two-view reconstruction, and nview is for more than two images.

Interesting. So with tvr and nview it works roughly the same way internally? (It takes some input points on the images and outputs 3D coordinates, or does it work some other way?)

Yes. It'll create a point cloud out of your matched points / track points. You then connect those to create faces. It should basically be like PhotoModeler.
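For anyone who wants to script that step, here is a small sketch (assuming a clip that has already been tracked and solved; the clip and object names are placeholders, and the API is the newer 2.8+ one) that pulls the reconstructed bundles into a vertex-only mesh, similar to what the built-in "3D Markers to Mesh" button does:

```python
# Sketch: turn reconstructed track bundles into a point-cloud mesh.
# Assumes a clip named "shots" has already been tracked and solved.
import bpy

clip = bpy.data.movieclips["shots"]          # placeholder clip name
tracking = clip.tracking

# One 3D point per track that the solver managed to reconstruct.
# Coordinates live in the clip's reconstruction space, not yet aligned to the scene.
verts = [track.bundle[:] for track in tracking.tracks if track.has_bundle]

# Build a vertex-only mesh; faces get connected by hand afterwards.
mesh = bpy.data.meshes.new("TrackPointCloud")
mesh.from_pydata(verts, [], [])
mesh.update()

obj = bpy.data.objects.new("TrackPointCloud", mesh)
bpy.context.collection.objects.link(obj)      # 2.8+; older versions used scene.objects.link
```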

Nice… So if it uses a point cloud, it could be used to capture the forms of surfaces as well?
I would like to try it, but is there no program with a graphical interface that uses it?