Track first, undistort later?

The standard procedure in many 3D-tracking packages is to undistort the shot first and render out the undistorted version before doing any tracking. I haven't found any info about this particular point in relation to Blender, and judging from the various tutorials out there, people seem to do it the other way around - that is, track first and undistort later. To me this is very strange, as most 3D-tracking software (including Blender's libmv) - as far as I know - assumes that the shot is undistorted while solving it. Is Blender an exception to this?

Thanks!

In the tutorials I've seen on SynthEyes, it is also track first, undistort after - IF there is fairly minor lens distortion. You could try both ways and see if it affects your MSE. I can think of arguments for both methods, but whether one way or the other produces better results might depend on the particular camera-solving algorithm the software uses.

But, as I said, I think the degree of lens distortion might be the more important factor.

There are two stages to the process:

  1. Tracking.
  2. Solving the camera motion.

When tracking, it's better to track points on the original, untouched footage. This way no data are lost. Then we let the algorithm solve the motion. If we set some undistortion values (automatically or manually), the solver will simply translate the tracked point positions to the new circumstances (see the sketch below).
So the best approach IMHO would be: track the original footage, then undistort and solve.
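
To make the "translate the tracked points" idea concrete, here is a minimal sketch (not Blender's actual code) that maps tracked marker positions from the distorted plate into undistorted image space before solving. It uses OpenCV purely for illustration; note that OpenCV's coefficient layout (k1, k2, p1, p2, k3) differs from libmv's polynomial model (k1, k2, k3 only), and the camera values below are made up:

```python
import numpy as np
import cv2

w, h = 1920, 1080
K = np.array([[1800.0, 0.0, w / 2.0],      # assumed focal length and principal point
              [0.0, 1800.0, h / 2.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3 (placeholder values)

# Marker positions in pixels, tracked on the original (distorted) footage.
tracked = np.array([[[120.0, 90.0]],
                    [[960.0, 540.0]],
                    [[1800.0, 1000.0]]])

# Map them into undistorted image space; the solver can then proceed as if
# the footage itself had been undistorted, without re-rendering the plate.
undistorted = cv2.undistortPoints(tracked, K, dist, P=K)
print(undistorted.reshape(-1, 2))
```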

My workflow is:
Track, solve the original to see the error value, and then, if I get something around 1 or 2, I undistort and solve again.

In Blender you don't have to render an undistorted version of the footage. You can simply force the "engine" to take the distortion into account.
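
From Python this boils down to setting the distortion coefficients on the clip's tracking camera; the solver reads them from there. A minimal bpy sketch, assuming a clip called "shot.mov" is loaded and using placeholder coefficient values:

```python
import bpy

# Distortion settings live on the movie clip's tracking camera.
camera = bpy.data.movieclips["shot.mov"].tracking.camera
camera.k1 = -0.05   # polynomial radial distortion coefficients (placeholders)
camera.k2 = 0.01
camera.k3 = 0.0
```

In the compositor, a Movie Distortion node set to Undistort applies the same coefficients, so you never have to pre-render an undistorted copy of the footage.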

@blenderjourney: Yes, a minor lens distortion might not make a huge difference. But then again, it really depends on the shot.

@Bartek: So libmv automatically takes the distortion into account while solving? Are you sure about this? If so, then that is pretty cool!

@Daccy: I don't know the code; I'm only going by what I see. I did a simple test: I tracked the original footage and then played with the K1, K2, K3 values (undistortion). What you see when you change them is not only the footage changing but also the positions of the track points.
That leads me to the conclusion that the solution is based on tracked points that were "translated" to match the undistorted footage.
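
That matches how the polynomial radial model behind those sliders works: the same feature projects to a different position depending on K1/K2/K3, so the markers have to move along with the image. A rough sketch with made-up focal length and coefficients (not Blender's code):

```python
def distort(xn, yn, k1, k2, k3):
    # Polynomial radial model: scale a normalized point by 1 + k1*r^2 + k2*r^4 + k3*r^6.
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    return xn * scale, yn * scale

focal_px = 1800.0
xn, yn = 900.0 / focal_px, 500.0 / focal_px   # a marker near the frame corner

for k1 in (0.0, -0.05, -0.10):
    xd, yd = distort(xn, yn, k1, 0.0, 0.0)
    print(k1, (xd - xn) * focal_px, (yd - yn) * focal_px)   # marker shift in pixels
```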

Thank you, I will try that later. :slight_smile: