Isn’t there any A.I. tool that, if I input/upload video footage, can automatically extract the camera movement information for use in Blender? I.e., I don’t want to deal with this manual motion tracking business.
To human eyes, the camera movement in a simple piece of footage is obvious. But Blender’s motion tracking tools don’t seem to figure it out easily, and they get distracted by any small moving object. In fact, they seem to get confused even when nothing in the video moves except the camera. I have wrestled with all the blue and red lines, but in the end I could not position the floor correctly. It’s time-consuming, and with the recent advent of A.I. technology, I feel like this could easily be done by A.I.…
I have been looking into this for quite a while. There are plenty of machine learning applications that require computing camera positions: NeRFs, self-driving cars, and many more.
To my surprise, all of the ones I have looked at rely on conventional SfM (structure from motion) algorithms.
Photogrammetry also seems to rely on conventional algorithms at this point.
I have seen some approaches that apply machine learning to the tracking itself, but not to computing the camera positions directly.
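For what it’s worth, those conventional SfM pipelines can be run on an extracted frame sequence outside Blender and their camera poses imported afterwards. COLMAP, a common one, writes each frame’s pose to `images.txt` as a quaternion plus translation in world-to-camera convention. Here is a minimal sketch of parsing that file and recovering each camera’s world position (the sample file name in the usage note is made up; the `images.txt` layout itself is COLMAP’s documented text format):

```python
import numpy as np

def quat_to_rotmat(qw, qx, qy, qz):
    """Convert a unit quaternion (COLMAP order: w, x, y, z) to a 3x3 rotation matrix."""
    q = np.array([qw, qx, qy, qz], dtype=float)
    q /= np.linalg.norm(q)
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def parse_colmap_images_txt(text):
    """Parse COLMAP's images.txt. Each image entry spans two lines; the first is:
    IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME
    COLMAP stores world-to-camera poses, so the camera center is -R^T t."""
    poses = {}
    lines = [l for l in text.splitlines() if l.strip() and not l.startswith('#')]
    for entry in lines[::2]:  # every second line is the 2D-point list; skip it
        fields = entry.split()
        name = fields[9]
        qw, qx, qy, qz, tx, ty, tz = map(float, fields[1:8])
        R = quat_to_rotmat(qw, qx, qy, qz)
        t = np.array([tx, ty, tz])
        cam_center = -R.T @ t             # camera position in world coordinates
        poses[name] = (R.T, cam_center)   # camera-to-world rotation + position
    return poses
```

From there, each rotation/position pair could be keyframed onto a Blender camera via `bpy` (mind the axis-convention difference: COLMAP cameras look down +Z, Blender cameras down −Z, so an extra 180° rotation about the camera’s X axis is needed).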