The 2D tracking feature looks like it will be very useful for a lot of tasks. One thing I’m thinking about (for the future, of course) is motion capture, either for basic facial animation (a single-camera track) or, if libmv/Blender ever allows multi-camera tracking, full 3D motion capture.
Well, that’s just one of the possibilities; as the videos above show, there are many.
One thing I’m wondering about, though, is the integration of the tracking. At the moment (and I understand it’s still in heavy development) there seems to be no clear way of using the tracking data effectively in a workflow. I use the word clear because it feels disjointed and slightly complicated to get things to match up, especially when working on a complex project that needs a 2D track integrated into a scene.
Now, I’m sure things will change quite drastically, but in what direction (if currently known)? Will a 2D track still be linked to empties in the 3D view, or will the user be given options: send the tracking marker data to a node (or nodes), manipulate it in a 2D window such as the Movie Clip editor itself, or send the data to the 3D view (hopefully via a slightly easier method, i.e. not having to scale to massive scene sizes for HD video, and so on)?
As I’m writing this I’ve had an idea about the workflow, at least for getting tracking marker data onto empties in the 3D view: would it be possible to have Blender create and link an empty in 3D space for each tracking point? The empties could be generated with one click after a solve has been completed, and if the user wants to remove an empty or a tracking marker, deleting the marker would also remove the linked empty.
Automatic naming would also help: markers could be named Marker001, Marker002, Marker003, etc., so seeing which marker belongs to which empty would be quite easy. Even easier would be some kind of container in the outliner holding all of the tracking markers for a tracked data block (it could just be a group, named Solve001, Solve002, etc.); that way a user could use the outliner’s parenting feature to simply drag and parent objects in the 3D view to specific markers.
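To make the idea concrete, here is a rough sketch of what such a one-click "empties from solve" step might look like through Blender’s Python API. This is only an illustration of the suggestion above, not how the branch actually works: the property names (`clip.tracking.tracks`, the `'FOLLOW_TRACK'` constraint type) are my assumptions based on the regular Python API and may well differ in the tracking branch.

```python
# Sketch only: create one named Empty per tracking track and tie it to the
# track with a Follow Track constraint. The bpy attribute names used here
# (clip.tracking.tracks, 'FOLLOW_TRACK') are assumptions, not the branch's API.

def marker_empty_name(index):
    # Naming scheme suggested above: Marker001, Marker002, Marker003, ...
    return "Marker%03d" % index

def link_empties_to_tracks(clip):
    import bpy  # only available when run inside Blender
    scene = bpy.context.scene
    for i, track in enumerate(clip.tracking.tracks, start=1):
        # Passing None as object data creates an Empty.
        empty = bpy.data.objects.new(marker_empty_name(i), None)
        scene.objects.link(empty)  # older-style API; newer Blenders use collections
        con = empty.constraints.new(type='FOLLOW_TRACK')
        con.clip = clip
        con.track = track.name
```

Something like this could be run from Blender’s text editor against a loaded clip (e.g. `bpy.data.movieclips[0]`); deleting a marker would then just need a matching cleanup pass over the `MarkerNNN` empties.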
Also, is 3D tracking and solving a target for this GSoC? If so, up to what stage: camera generation, point-cloud generation, automatic estimation of the camera’s focal length?
Sorry for all the questions, and thank you for all of the work so far; it’s bringing me a step closer to having one less external application in the pipeline!