First of all, I just wanted to start by saying that Blender development is more amazing than ever right now with Cycles, the Camera Tracker, etc. We have exciting times ahead of us, indeed.
As probably all of you have done, I downloaded 2.62 and tried the object tracking, which is a fantastic addition to the camera tracker. I have one problem, though, which unfortunately makes it, to be frank, worthless for the project I am working on at the moment.
The way it works right now is that it tries to calculate the position of each tracker in relation to the others. This means that Blender not only has to calculate the final position and rotation of the object itself, but also how the trackers are positioned relative to each other in 3D space. This is possible thanks to smart algorithms, but what isn't possible with this method is getting a good track where there is very little movement or perspective shift. You need large movements and rotations, and tracking points at different depths, to give the solver enough information to do its job.
Since we who made the movie almost always already know the positions of the markers (we obviously have access to the object being tracked), why should Blender have to try to calculate their relative positions at all? Therefore, I would like to request a feature that would take the object tracker to a completely new level: create a 3D object in Blender and assign the trackers to different points on this model. In other words, tell the software all the trackers' positions before anything is solved. (A concrete example: the user tells Blender that tracker A is 5 cm above tracker B and 8 cm to the left of tracker C, and so on.)
This could either be done by manually placing empties in the 3D view and then linking each empty to the right tracker, or perhaps by simply creating a mesh where each vertex is a tracking point and letting the user see it on top of the movie clip, lining the shape up with the trackers. Alternatively, let the tracking software try to figure out which vertex corresponds to which tracker by analysing the movement.
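To give a feel for the first variant, here is a rough Blender Python sketch (written against the 2.6x API) that just places a named empty at each marker's measured position on the object. The track names and offsets are made-up examples, and the last step, actually telling the solver to use these empties, is exactly the feature that does not exist yet:

```python
# Rough sketch (Blender Python, 2.6x-era API): place one empty per tracker
# at its hand-measured position on the real object.  Names and numbers below
# are purely illustrative.
import bpy

# Measured marker positions on the object, in metres, keyed by track name
# (the naming convention is an assumption for this example).
measured_positions = {
    "Track.B": (0.00, 0.00, 0.00),
    "Track.A": (0.00, 0.00, 0.05),   # 5 cm above B
    "Track.C": (-0.08, 0.00, 0.00),  # 8 cm to the left of B
}

clip = bpy.data.movieclips[0]
scene = bpy.context.scene

for track in clip.tracking.tracks:
    if track.name not in measured_positions:
        continue
    empty = bpy.data.objects.new("Marker_" + track.name, None)
    empty.location = measured_positions[track.name]
    scene.objects.link(empty)  # bpy.context.collection.objects.link() in 2.8+

# The missing piece, and the actual feature request, is a way to tell the
# object solver "these empties ARE those tracks", so it only has to recover
# the object's rotation and location instead of the full 3D layout.
```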
Note that I am not a developer, so the actual execution could probably be done in a much better way; this is just my two cents on how it could work.
Anyway, by doing so, Blender should be able to solve very fine movement of objects within a scene, since it only has to figure out the object's rotation and location to match the tracker points on screen.
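For what it's worth, once the 3D layout of the markers is known, the per-frame problem becomes classic pose estimation (PnP): find the rotation and translation that project the known 3D points onto the 2D tracks. As a rough illustration outside Blender, OpenCV already exposes this via cv2.solvePnP; all of the numbers below are placeholders, not from a real shot:

```python
# Minimal sketch: pose-only solve when the 3D geometry of the markers is known.
import numpy as np
import cv2

# Known 3D positions of the markers on the object, measured by hand (metres).
object_points = np.array([
    [0.00, 0.00, 0.00],   # tracker B
    [0.00, 0.05, 0.00],   # tracker A, 5 cm above B
    [0.08, 0.00, 0.00],   # tracker C, 8 cm to the right of B
    [0.00, 0.00, 0.06],   # tracker D, 6 cm in front of B
], dtype=np.float64)

# 2D positions of the same markers in one frame, in pixels (from the 2D tracks).
image_points = np.array([
    [960.0, 540.0],
    [958.0, 470.0],
    [1075.0, 542.0],
    [961.0, 515.0],
], dtype=np.float64)

# Pinhole camera intrinsics (focal length and principal point in pixels).
camera_matrix = np.array([
    [2100.0,    0.0, 960.0],
    [   0.0, 2100.0, 540.0],
    [   0.0,    0.0,   1.0],
], dtype=np.float64)
dist_coeffs = np.zeros(4)  # assume an undistorted image

# With the 3D layout known, only the object's rotation and translation
# relative to the camera remain to be solved.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
print("rotation (Rodrigues vector):", rvec.ravel())
print("translation (metres):", tvec.ravel())
```

The point of the sketch is that this kind of solve works even on a single frame with hardly any parallax, as long as a handful of non-coplanar marker positions are known, which is exactly the situation where the current solver struggles.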
Please tell me what you think. If this can be solved in some other way, or is not needed because of other methods I am not aware of, I will happily stand corrected. I just wanted to share an idea for an improvement.
Thank you to all the awesome developers!