Well, I think it won’t be easy to explain my idea, because I don’t know how to upload pictures, but I’ll try.
When you use blix’s camera calibration script, you need to get some dimensions of a real object. Sometimes it’s not possible to do that precisely enough, or it isn’t possible at all.
Some wise people invented, some time ago, a technique called stereo-photogrammetry. Using this technique (via sophisticated hardware and software) you can get the exact dimensions of any object from nothing more than a stereo pair of photos. All you have to know is: the focal length of the camera used, the dimensions of the film/CCD sensor, and the distance between the position of the first shot and the second one (parallel shots).
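The relation behind this, for a perfectly parallel pair, is that depth equals focal length times baseline divided by disparity (the horizontal shift of a point between the two images). A tiny Python sketch of it, with all names and numbers being placeholders of mine, not anything from an existing script:

```python
def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    """Distance from the cameras to a point, given the horizontal shift
    (disparity) of that point between the two photos. All values must
    share one unit; here everything is in millimetres on the sensor."""
    if disparity_mm == 0:
        raise ValueError("zero disparity means the point is at infinity")
    return focal_length_mm * baseline_mm / disparity_mm

# Example: 50 mm lens, shots taken 200 mm apart, point shifted 5 mm
# on the sensor between the two photos -> 2000 mm, i.e. 2 m away.
print(depth_from_disparity(50.0, 200.0, 5.0))
```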
I’m an architect and often need to make a model of the surroundings of the space I work with, so I thought it would be nice to simulate stereo-photogrammetry in Blender to make my work easier and more efficient, i.e. to get dimensions that are hard or impossible to get on the spot, and moreover to be able to make any measurement I might need.
So I took two pictures of a table (a stereo pair), then launched Blender and created two planes whose dimensions were the exact dimensions of my camera’s sensor, with the distance between their centres equal to the distance between the centres of my two shots. Next I UV-mapped them with the photos I had taken, and in front of each one, exactly at its centre, I placed a camera whose focal length was the same as the one I used to take those pictures. I had to set the correct relation between the cameras and the planes by hand: each plane should exactly cover the area bordered by the dotted line (the area masked by the passepartout).
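In case someone wants to automate that setup, here is a rough sketch of how it could look in Blender’s Python API (bpy). All the numbers are example values to be replaced with your own camera data, and I’m assuming both cameras look along +Y:

```python
import bpy

SENSOR_W = 0.036   # sensor width in metres (36 mm, example value)
SENSOR_H = 0.024   # sensor height in metres (24 mm, example value)
FOCAL    = 0.050   # focal length in metres (50 mm lens, example value)
BASELINE = 0.200   # distance between the two shot positions (example)

for i, x in enumerate((-BASELINE / 2, BASELINE / 2)):
    # "Sensor" plane: a default 2x2 plane rotated upright (facing -Y)
    # and scaled to the sensor dimensions, to be UV-mapped with the
    # corresponding photo afterwards.
    bpy.ops.mesh.primitive_plane_add(location=(x, 0.0, 0.0),
                                     rotation=(1.5707963, 0.0, 0.0))
    plane = bpy.context.object
    plane.name = "photo_plane_%d" % i
    plane.scale = (SENSOR_W / 2, SENSOR_H / 2, 1.0)

    # Camera one focal length in front of the plane's centre, rotated
    # to look straight at it, so the plane exactly fills the view.
    bpy.ops.object.camera_add(location=(x, -FOCAL, 0.0),
                              rotation=(1.5707963, 0.0, 0.0))
    cam = bpy.context.object
    cam.name = "camera_%d" % i
    cam.data.lens = FOCAL * 1000.0            # Blender expects mm
    cam.data.sensor_width = SENSOR_W * 1000.0 # needs a recent Blender
```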
I mentioned this before, but it’s very important that the two pictures are perfectly parallel; I mean that any given point should lie on the same horizontal line in both photos.
Now the key part: I made a line with one vertex at the apex of the left camera’s pyramid (I’ll call it the “camera’s centre”) and the second one on the left plane’s surface. In orthogonal front view, in edit mode, I moved the second vertex to a specific location (let’s call it “point A”), then duplicated both vertices and grabbed them to the right camera. I again selected the second vertex, the duplicated one this time (belonging to the pair “right camera – right photo”), and moved it horizontally onto the same specific “point A” on the right photo. I ended up with one object containing two edges (four vertices), the first being “left camera’s centre – left photo’s point A” and the second “right camera’s centre – right photo’s point A”, lying on one hypothetical surface. I just found the intersection of those edges using a Python script, et voilà!
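The intersection step boils down to a bit of line geometry. Here is a sketch of it (not my actual script, and the function name is made up): two rays in 3D almost never cross exactly, so you take the midpoint of the shortest segment between them, which is the standard way to triangulate a point from two views.

```python
from mathutils import Vector  # Blender's vector maths module

def triangulate(p1, d1, p2, d2):
    """p1, p2: the two camera centres; d1, d2: direction vectors from
    each centre toward point A on its photo plane. Returns the 3D
    point closest to both rays."""
    d1, d2 = d1.normalized(), d2.normalized()
    b = d1.dot(d2)
    denom = 1.0 - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel, cannot triangulate")
    r = p2 - p1
    # Ray parameters of the two closest points, obtained by setting
    # the derivative of the squared distance between rays to zero.
    t1 = (r.dot(d1) - r.dot(d2) * b) / denom
    t2 = (r.dot(d1) * b - r.dot(d2)) / denom
    c1 = p1 + t1 * d1
    c2 = p2 + t2 * d2
    return (c1 + c2) / 2.0  # midpoint of the shortest segment
```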
I repeated the steps above to find the rest of the points needed to recover the geometry of the top of my table (55 cm / 55 cm / 45 cm), and I got the following measurements: 54.12; 53.88; 51.32; 56.59. The results aren’t very precise, but I think it’s quite impressive considering that the photos were not taken parallel enough: I didn’t level my tripod at all.
I think it’d be nice to write a py-script to make this technique easier and cleaner, i.e. you could enter your camera data, load the pictures, and point at corresponding points on the two photos (the first one chosen freely, the second one placed on the horizontal line crossing the first), and the script would recover the point’s coordinates and create an empty. Then you could use those empties to finally calibrate the two cameras and model whatever you want, UV-map it with the photos… pure photogrammetry. It’s just my dream, but I don’t know scripting at all (except GDL) and don’t have time to learn it, so maybe one of you will give it a try? What do you think?
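Just to make the idea concrete, the core of such a script might look something like the sketch below, reusing the triangulate() function from above. Everything here is hypothetical: the names, the assumption that both cameras look along +Y as in the setup sketch, and that the clicked points come in sensor units with (0, 0) at the image centre. A real tool would still need a UI for picking the points on the photos.

```python
import bpy
from mathutils import Vector

def recover_point(cam_left, cam_right, uv_left, uv_right, focal):
    """cam_left, cam_right: world positions of the two camera centres.
    uv_left, uv_right: the clicked point on each photo, in sensor
    units, centred on the image. Creates an empty at the triangulated
    3D position and returns it."""
    def ray(uv):
        # Direction through the point on a photo plane lying one
        # focal length in front of the camera (looking along +Y).
        return Vector((uv[0], focal, uv[1]))

    point = triangulate(cam_left, ray(uv_left),
                        cam_right, ray(uv_right))
    bpy.ops.object.empty_add(location=point)
    return bpy.context.object
```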