Difficult footage - identifying problems with tracking and camera solving


I am trying to track hang glider footage. It should be simple, but is proving difficult. I need help in identifying the problem.

Edit: any ideas and experience you have may be helpful at this point. If this reminds you of an error you have faced before, just say so, even if the text below says I did it correctly; maybe there was something I missed.

The end result is that the point cloud is completely off, even after setting the floor and the correct scale. I need a way to convince the computer that these points are all at roughly the same height.


In the linked image you can find a screen capture. As you can see, all points are far away; they move slowly and are often occluded by the cables and the pilot himself. When this happens I leave the previous point deactivated and place a new one at a good location. Should I have reused the same one? I did that once, but it didn’t help. I am willing to try again if you have good reason to believe it should work.

I have placed many tracking points that cover from 100 to 600 frames, with a lot of overlap.

Most points are set to track by LocRotScale. The camera was a GoPro Hero 5; the sensor is set to 6.160 mm and the optical center to 1352 by 60 pixels - in case someone knows what they should be set to.

The camera solution has never been better than what you see in the picture. Notice there are two spikes in the blue line; I don’t know how to fix that, or how to find out what is causing it. In contrast, a normal camera solve usually went haywire, with the blue line jabbing up to infinity and back like crazy. I tried to play with the K settings, but I don’t understand what they do and can’t really tell whether they would help.

The track is cleaned as much as possible. The massive changes in X and Y are due to the glider turning 90 degrees and losing sight of the current view, but the tracks there are good (to my knowledge).

You can see the point cloud on the right - notice how it is basically vertical and jittery, while it should be horizontal and more uniform, since most of those points differ in height by only a few metres.

I have watched many tutorials but they all say the same thing and use basically similar footage. I can’t find any theory that would help me deal with this difficult footage.

I tried After Effects, but I can’t control the points it finds, and trust me, AE is just about as baffled by this as Blender is.

Solving difficult but solvable shots (meaning you don’t need to manually create or modify parts of the track) usually comes down to three things:

  1. how good are your 2D feature tracks;
  2. are the 2D feature tracks actually where they are supposed to be;
  3. are the parameters that constrain the solve correct.

2D feature tracks
Feature-track quality means that each individual track must be as accurate as possible. It shouldn’t wobble too much, it must always track the same feature, and the feature itself must be stationary (for a camera track). This is usually relatively easy to achieve when you have plenty of stuff in view. The problematic parts are heavy (motion) blur and areas where there are no visible features. Just crunch through these with educated guessing, watching the overall movement and the tracker’s motion curve. For motion blur, always place the tracker at the center of the streak.
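To make "wobble" concrete: a quick way to score a track is the spread of its frame-to-frame displacements. This is a plain-Python sketch, not part of any Blender API; the function name and sample data are made up for illustration.

```python
import math

def track_jitter(positions):
    """Score tracker wobble as the standard deviation of
    frame-to-frame displacements (in pixels).
    `positions` is a list of (x, y) marker positions, one per frame."""
    deltas = [math.hypot(x2 - x1, y2 - y1)
              for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
    mean = sum(deltas) / len(deltas)
    var = sum((d - mean) ** 2 for d in deltas) / len(deltas)
    return math.sqrt(var)

# A smooth track: constant motion, so every delta is identical.
smooth = [(i * 2.0, 100.0) for i in range(10)]
# A wobbly track: same motion plus an occasional 3 px vertical jump.
wobbly = [(i * 2.0, 100.0 + (3.0 if i % 3 == 0 else 0.0)) for i in range(10)]

print(track_jitter(smooth))  # → 0.0
print(track_jitter(wobbly))  # noticeably larger: the wobble shows up as spread
```

A steady track can still have large deltas (fast camera motion is fine); it is the *variation* between consecutive deltas that indicates wobble.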

2D tracker actual location
This means that the tracker in screen space must point at the actual feature in the 3D scene. Why? Because the 3D location of the feature is projected through the camera onto the screen plane, which gives the 2D location. The 3D solver tries to invert this process, so it is very important that the 2D location in screen space is the actual projected location.
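The projection the solver inverts can be sketched in a few lines with a simple pinhole model. The focal length and optical-center values below are illustrative only, not the ones from this shot:

```python
def project(point3d, focal_px, cx, cy):
    """Project a 3D camera-space point to 2D pixel coordinates using a
    pinhole model. The solver's job is to invert exactly this mapping,
    which is why the 2D tracker must sit on the true projected location."""
    X, Y, Z = point3d
    return (focal_px * X / Z + cx, focal_px * Y / Z + cy)

# A point 2 m in front of the camera and 1 m to the right,
# with an assumed 1000 px focal length and an HD frame center:
u, v = project((1.0, 0.0, 2.0), focal_px=1000.0, cx=960.0, cy=540.0)
print(u, v)  # → 1460.0 540.0
```

If a tracker reports a point even a few pixels away from where this projection would put it, every such error gets folded into the solve.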

What can throw the locations off? The main culprits are lens distortion and rolling shutter. Lens distortion bends the rays, so the line from the 3D point to the 2D location is no longer straight - but the solver expects it to be straight, so the distortion must be removed to meet that expectation. Rolling shutter violates another solver expectation: that all feature trackers report their location at the same point in time, i.e. that the whole frame is captured at once. A rolling-shutter camera scans the sensor top to bottom, so the bottom lines are captured later than the top lines. The solution is to remove the rolling shutter or tell the solver to take it into account.
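For reference, the polynomial radial model behind K1/K2/K3-style distortion sliders looks roughly like this. This is a sketch of the general Brown model; the exact normalisation and sign conventions vary between implementations, so the values here are illustrative:

```python
def distort(xu, yu, k1, k2, k3=0.0):
    """Apply polynomial radial distortion to an undistorted point.
    Coordinates are normalized, with the origin at the optical center;
    the radius is scaled by a polynomial in r^2."""
    r2 = xu * xu + yu * yu
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return xu * scale, yu * scale

# With a negative K1 (barrel-like in this convention), an off-center
# point is pulled toward the optical center:
xd, yd = distort(0.5, 0.5, k1=-0.2, k2=0.0)
print(xd, yd)  # → 0.45 0.45
```

Note how the effect grows with distance from the center: points near the frame edge (large r) move much more than points near the middle, which is why wide-angle footage is hit hardest.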

Solver parameters
Usually the solver tries to calculate all the necessary parameters automatically, but sometimes it does not give good results and we must constrain some of them. These are usually the FOV of the lens, the optical center of the lens (this offset is usually removed in the lens-undistortion step, but might not be), the sensor size (it is related to the FOV - one can be solved from the other, but when we set both, we constrain both), and possibly the general path of the camera (left, right, tripod pan only, etc.). If we have to constrain a parameter, we had better be sure the value is correct, because a wrong value can throw the solve off. For example, the FOV value written on the lens or in the specs is usually (99% of the time) wrong: a 50 mm lens is practically never exactly 50 mm, and forcing it on the solver will give a bad solve.
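The relation between focal length, sensor size, and FOV mentioned above is just trigonometry. The 3 mm focal below is an assumed ballpark for an action camera, not a measured value; the 6.16 mm sensor width is the one from this thread:

```python
import math

def hfov_deg(focal_mm, sensor_width_mm):
    """Horizontal field of view from focal length and sensor width,
    pinhole approximation (distortion ignored)."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

# Action-cam territory: a very short focal on a small sensor gives a
# very wide FOV, which is why GoPro footage is tricky to solve.
print(hfov_deg(3.0, 6.16))   # roughly 91.5 degrees
# Mistakenly entering a "full-frame equivalent" focal for the same
# sensor gives an absurdly narrow FOV and will wreck the solve:
print(hfov_deg(15.0, 6.16))  # roughly 23 degrees
```

This is also why setting both focal length and sensor size constrains the solve twice over: given either one plus the FOV, the other is already determined.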

So, long story short: are your 2D tracks good? Have you removed lens distortion and the other problems that violate the solver’s expectations? Are the constraints you are forcing on the solver (if any) correct?

Looks like a GoPro, so the FOV is very wide and the lens will have barrel distortion. It may also already be stabilised, which would throw off the center of the frame.

It is a GoPro; the base resolution was probably 2.7K. What I don’t know is whether in-camera stabilisation was turned on, and what K values to use to fix the distortion. Following FreelanderTrail’s (https://www.youtube.com/watch?v=zREPFFLPMjQ) … “tutorial” gave completely different results.
Comparable values (his | mine):
Focal length: ~6 | ~27
K1: <1 | 20+
K2: <0.1 | 40+

  • can I just ask which “track” you mean? I guess the camera track, but I had no control over that directly…?
  • There is very little blur ([email protected]), but a lot of shake from the glider, and a lot of detail change as the glider approaches features. In these cases I have noticed small (under ~4 px) vibrations in the track locations unless they break; does this strongly influence the solution? How do I tell the solver about the rolling shutter? Although the frame rate should be sufficient, the camera still works by scanning lines.
  • Between which actions should the undistort step come in? Sorry for the stupid question, but it doesn’t look very intuitive at this point. Moreover, I have only seen a focal length setting (in mm; I don’t know whether that means the physical focal length of the camera or the full-frame equivalent) in the motion tracking window. Do you mean the FOV under the camera object settings?

Other than that:
Is there a way to find out what exactly is causing the spikes in the blue line?
Does rotating the tracking point do anything, and is the horizontal line on that square relevant?

Have you looked at the individual track errors for the markers? You may find just a couple stand out like a sore thumb, remove those, or hand track them and you may get better results. However, if you can’t get good parallax due to not having distant / med / close points of reference, there’s only so far you can get and then, even if you do each marker by hand, you’ll probably get too much error.
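The "sore thumb" culling described above can be sketched like this. It is plain Python with made-up error values, purely to illustrate the idea; in Blender the per-track numbers come from the marker display's info overlay, not from any function shown here:

```python
def cull_worst(tracks, keep_at_least=8):
    """Given (name, average_error) pairs, drop outlier tracks whose error
    is far above the median, but never drop below a minimum track count.
    Mirrors the manual 'remove the ones that stand out' workflow."""
    tracks = sorted(tracks, key=lambda t: t[1])
    median = tracks[len(tracks) // 2][1]
    kept = [t for t in tracks if t[1] <= 3.0 * median]
    return kept if len(kept) >= keep_at_least else tracks[:keep_at_least]

# Eight reasonable markers and two sore thumbs (errors invented):
markers = [("t1", 0.3), ("t2", 0.4), ("t3", 0.5), ("t4", 0.4),
           ("t5", 0.6), ("t6", 0.5), ("t7", 0.4), ("t8", 0.3),
           ("t9", 4.8), ("t10", 6.1)]
print([name for name, _ in cull_worst(markers)])  # t9 and t10 are gone
```

Using the median rather than the mean as the baseline matters here: a couple of huge outliers drag the mean up and can hide themselves, while the median stays put.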

If you mean the green/red lines, Colkai, then yes, I have. Especially in the bottom image it all seems peachy.


I started from scratch with minimalism in mind, and the above was the result. Setting keyframes A and B was interesting: leaving them at 1-31 gave a solve error of 1.3, while setting them anywhere else made it go as high as 350. The paradox is that the default 1-31 has almost no movement… but I thought I was supposed to choose parts with more movement? How many frames are recommended?

Okay, I found the individual error display (the Info checkbox under Marker Display) and just started erasing the highest errors. Two things happened:

  1. After removing a bad tracker, solving yielded a higher error! By luck I found that I had to solve twice for it to drop (I had to solve again because I couldn’t believe what I was seeing; turns out Blender was joking…).
  2. The footage now has a nice red strip over the last few seconds. However, adding more trackers, even when only a few from the old setup remain, seems to increase the error significantly.

Okay, I decided to be satisfied with a 0.6 solve from about 8 tracking points. The track visibly shakes and the distances are incorrect, but it is survivable. Regardless, I’m marking this as solved; here was the solution:

First: solve multiple times (Shift-S) to compensate for a false solution (the increased error after removing a problematic track).
Second: keep removing the tracks with the highest error (Marker Display - Info); add new ones only to satisfy the A/B keyframes

  1. Import original HQ footage
  2. set markers on prominent features
  3. track individually
  4. solve for camera, K1, K2
  5. put keyframes A and B at a position with enough movement
    5.1. shortening the span between the keyframes substantially lowers the error
  6. solve a few times
    6.1. add a new track if the A/B keyframes fail to calculate
    6.2. change the motion model on critical tracks (Loc, LocRot, Perspective…)
  7. remove the track with the highest error
  8. repeat steps 6-7 until the error is acceptably low
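Steps 6-8 amount to a cull-and-resolve loop, sketched below. The `solve` callable is a toy stand-in for the actual camera solve (in a real Blender script that step would go through the clip editor's solve operator), and all the error values are invented for illustration:

```python
def refine(tracks, solve, target_error=1.0, min_tracks=8):
    """Steps 6-8 as a loop: solve, drop the worst track, re-solve,
    and stop when the error is acceptable or too few tracks remain.
    `solve` must return (overall_error, per_track_errors)."""
    while True:
        error, per_track = solve(tracks)
        if error <= target_error or len(tracks) <= min_tracks:
            return tracks, error
        worst = max(tracks, key=lambda t: per_track[t])
        tracks = [t for t in tracks if t != worst]

# Toy solver: overall error is just the mean of fixed per-track errors.
errors = {"t1": 0.4, "t2": 0.5, "t3": 0.6, "t4": 0.4, "t5": 0.5,
          "t6": 0.4, "t7": 0.6, "t8": 0.5, "t9": 3.0, "t10": 5.0}
toy_solve = lambda ts: (sum(errors[t] for t in ts) / len(ts),
                        {t: errors[t] for t in ts})
tracks, err = refine(list(errors), toy_solve)
print(len(tracks), round(err, 2))  # → 9 0.77
```

The `min_tracks` floor matters: below roughly eight well-spread markers the solve becomes underconstrained, so blindly culling forever makes things worse, which matches the experience above of errors rising after removals.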

Again, the track I got is not very good, and after the 90° turn the solver is unable to recover anything, but it is accurate enough to at least do some basic stuff.

To be honest, I’d settle for a 0.6 solve all day long. Anything under 1.0 is a result in my book. :wink:
Unless the solve error is over 2.0 you can probably work with it, but 0.6 I would class as very acceptable. If you look at many of the videos on the subject, a lot say anything under 3.0 is a win, but I always aim for at least sub-1.0.