Issue with object tracking

Hi, I have an issue, but I don't know if it's a bug or just something I'm missing.
I'm tracking a face and would like to have some of the tracking points on one object track and some on another object track. So I made a "hard track" and a "soft track" below the camera track, which is empty (there is no camera movement in the shot).
I used the same camera settings (sensor width and focal length), but when I solve the tracking, the points of the two objects are not in the same place, although they should be.
Here are some pictures to help you understand what I'm talking about.

Did I do something wrong?

I can't help with Blender tracking, as I have never used it (I use Syntheyes), but I think your problem lies in trying to solve two separate sets of features with no connection between them. I can't quite tell whether the camera position is locked and shared between the two solves, but even if it is, it seems that you have quite a long lens, and if the points are not widely spread spatially and in depth, you may get very inaccurate distances from the camera. I'd suggest including some solved points from the other object (if that is possible in Blender) and constraining the solve using them as guides. But first try to get a proper solve for at least one set: both images show widely spaced points that don't form the shape of a face as they should. The RMS error the solver gives you is useless, even if it is small, when the points are not where they should be.

Oh yeah, I think I get what you mean, but I don't think you can include solved points from another object. You can copy-paste, but I think only the 2D track points get copied, without any solving info. Maybe I need to track with something other than Blender, but I feel I'm close to getting it working here. Maybe a trick with modifiers and constraints might do the job.
I did another try on another shot with a better result, with more depth like you said, and the hard track is much better and works fine. I'm still struggling to put some soft tracks on the face to drive bones and deform the face afterwards.


I think the problem is about understanding what object tracking is.
If I get what you are trying to achieve:

  • You want the "hard tracking" to track the face's position
  • You want the "soft tracking" to track the mouth and eye movements…?
    Is that correct?

Because if it is, it's quite impossible (at least, done this way). Let me explain:
A video is only 2D, so tracking a single point in 3D is theoretically impossible. BUT, if we assume that the object is hard and rigid, then tracking multiple 2D points on a 2D video allows us, after some calculations (called a "solve" in your case), to reconstruct the 3D motion of the object. That works only because the object is rigid, not soft: the software assumes that even though the 2D points are moving, the matching 3D points are not moving relative to each other.

To sum it up, if you want to track the face, using only the "hard" points is good. But avoid using soft, moving points. Points which move relative to each other are not trackable with this method.
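To make the rigidity assumption concrete, here is a minimal sketch in plain Python (hypothetical coordinates, not the Blender API): under a rigid motion every pair of points keeps its 3D distance, which is exactly what the solver exploits, while a soft point like a lip breaks that constraint.

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def rotate_y(p, angle):
    """Rigid rotation of a point around the Y axis (a head turn)."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

# Frame 1: hypothetical landmarks on a face ("lip" is the soft one).
frame1 = {
    "brow": (0.0, 1.0, 0.2),
    "chin": (0.0, -1.0, 0.2),
    "lip":  (0.0, -0.5, 0.5),   # mouth closed
}

# Frame 2: the whole head turns (a rigid motion on every point)...
frame2 = {k: rotate_y(p, 0.4) for k, p in frame1.items()}
# ...but the lip also drops, because the mouth opens (non-rigid).
lx, ly, lz = frame2["lip"]
frame2["lip"] = (lx, ly - 0.3, lz)

# Hard pairs keep their distance, so a solve can reconstruct them:
d1 = dist(frame1["brow"], frame1["chin"])
d2 = dist(frame2["brow"], frame2["chin"])
print(math.isclose(d1, d2))   # True

# The soft lip violates the constant-distance assumption:
s1 = dist(frame1["chin"], frame1["lip"])
s2 = dist(frame2["chin"], frame2["lip"])
print(math.isclose(s1, s2))   # False
```

This is why the solver happily handles any head motion but has no consistent 3D answer for a marker whose distance to its neighbours changes from frame to frame.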

++ :slight_smile:

Ok, thanks, I think you're right, but I thought that even if the soft track points can't help the solving of the shot (solving with the hard track object only), they could still exist in 3D space just to move some empties or bones. That would mean the soft track is a child of the hard one and wouldn't affect the solving (since if the mouth or eyes move a lot, they can perturb the calculation of the movement).

Yes, I understand, but this "child" setup is not possible. Soft points are 2D. Going from 2D to 3D is "solving", and in this case that's not possible because they are soft.

I know it's hard to grasp. Our brain is so "smart" that we understand the positions of moving 3D points even in a 2D video. But when it comes to calculations, it's not so easy ^^

If you really want to achieve this with Blender, you have to solve only the hard points. Then, for moving the mouth and eyes, you have these solutions:

  • Doing it manually (with some bones or shape keys, etc.)
  • Using drivers, with the landmarks as 2D points, and computing their distances to each other to drive mouth opening, eye opening, etc.
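The second option can be sketched like this in plain Python (hypothetical coordinates and calibration values, not the Blender API): measure the 2D gap between an upper-lip and a lower-lip marker, normalize it by a rigid reference distance (brow to chin) so the value is independent of the face's size on screen, and clamp it to the 0..1 range a shape key expects.

```python
import math

def dist2d(p, q):
    """2D distance between two marker positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_open(upper_lip, lower_lip, brow, chin,
               closed_ratio=0.05, open_ratio=0.25):
    """Map the lip gap to a 0..1 value usable as a shape-key driver.

    closed_ratio / open_ratio are calibration assumptions: the
    lip-gap / face-height ratio with the mouth closed and fully open.
    """
    ratio = dist2d(upper_lip, lower_lip) / dist2d(brow, chin)
    t = (ratio - closed_ratio) / (open_ratio - closed_ratio)
    return min(max(t, 0.0), 1.0)   # clamp to the shape-key range

# Markers in normalized 2D clip coordinates (one hypothetical frame):
value = mouth_open(upper_lip=(0.50, 0.42),
                   lower_lip=(0.50, 0.36),
                   brow=(0.50, 0.70),
                   chin=(0.50, 0.30))
print(round(value, 2))   # 0.5 -> mouth roughly half open
```

In Blender you would put this kind of expression in a driver (or a small registered driver function) reading the marker positions, and feed the result into a shape key or a bone's location. The exact hookup depends on your rig, so treat the function above as the idea, not a drop-in script.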

Other solutions exist that merge 3D and deep learning (which is actually my job :slight_smile: ), but that would be off-topic, as this website is for Blender purposes ^^

++ :slight_smile: