I’ve recently written an add-on that may be useful to those of you using Blender’s camera tracker for VFX, or to anyone interested in photogrammetry.
I’ve been attempting to reconstruct scenes from sparse, low-resolution images where a normal automated pipeline fails. The usual fix is to take more photos, but I’m unable to do so and must work with what I have. This is where Blender’s motion tracker comes in: it lets you create manual feature matches and a sparse 3D reconstruction, which can then be fed to dense-reconstruction software such as COLMAP or PMVS.
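One practical detail when handing Blender tracks to other software: Blender stores marker positions normalized to [0, 1] with the origin at the bottom-left of the frame, while COLMAP expects pixel coordinates with the origin at the top-left and y increasing downward. A minimal sketch of that conversion (plain Python; inside Blender the input would come from a track's `marker.co`, and `marker_to_pixels` is just an illustrative name, not part of the add-on):

```python
def marker_to_pixels(co, width, height):
    """Convert a normalized Blender marker coordinate (origin bottom-left)
    to pixel coordinates with the origin at the top-left, y pointing down."""
    x = co[0] * width           # scale normalized x to pixels
    y = (1.0 - co[1]) * height  # flip the vertical axis and scale
    return x, y

# A marker at the center of a 1920x1080 frame:
print(marker_to_pixels((0.5, 0.5), 1920, 1080))  # → (960.0, 540.0)
```

Note that COLMAP also has a sub-pixel center convention, so depending on how strict you need to be, an extra half-pixel offset may be involved; check the COLMAP documentation for the exact definition.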
After dense reconstruction, the point cloud can be imported directly back into Blender. This lets you position objects accurately within your scene using the tens or hundreds of thousands of new points. Alternatively, you could use MeshLab to generate a mesh and an associated texture from the point cloud and then import that into Blender. This would give you textured geometry that can be reflected or simulated in your VFX shot with minimal effort!