I am working on motion tracking with a 3D helmet model. I used this tutorial to help;
Before putting my helmet in, I wanted to test it out with the standard cube.
As you can see, I cannot rotate to the left and right. It's true my solve error was 16, and I guess that's pretty high, but the rest of the motion seems right, just not the L/R rotation. This leads me to believe there may be something missing that I didn't do? Any advice?
I had a better look at the tutorial you used as a basis and at your clip. If you are using a moving camera and a static object, forget what I wrote earlier; that applies to a moving-object track.
Currently your issue seems to lie in a bad solve; it is definitely not correct. It looks like all the features you tracked are on the same plane, and it is not possible to get a 3D solve from points lying on a single plane (or close to it). You need to spread the points out in three dimensions.
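To make that degeneracy concrete, here is a small pure-Python sketch (with made-up point coordinates, not your actual track data) that measures how far a set of points deviates from a single plane. If the deviation is near zero, the solver has essentially no depth information to work with:

```python
# Quick coplanarity check for tracked feature points (hypothetical data).
# If every point lies (nearly) on one plane, a camera solver cannot
# recover depth reliably -- the reconstruction is degenerate.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def max_plane_deviation(points):
    """Distance of the farthest point from the plane through the first three."""
    p0 = points[0]
    n = cross(sub(points[1], p0), sub(points[2], p0))
    norm = dot(n, n) ** 0.5
    return max(abs(dot(sub(p, p0), n)) / norm for p in points[3:])

flat = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (2, 3, 0)]      # all on z = 0
spread = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0.8), (0.5, 0.5, 1.2)]

print(max_plane_deviation(flat))    # 0.0 -> degenerate, no depth to solve from
print(max_plane_deviation(spread))  # 1.2 -> features have real depth
```

The actual solver does something far more involved, of course, but the failure mode is the same: with zero spread out of the plane, many different camera positions explain the footage equally well.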
By the way, what are your solved camera FOV and sensor size? Did you set these manually or let the solver calculate them for you? And what does the camera movement look like? With a correct track, the camera should make a rounded motion around the empties as the hand turns.
I see, so if I did the same thing on a face (which is 3-dimensional) maybe I wouldn't have the same problem. My hand is too flat.
I have no idea what the camera FOV and sensor size are. I didn't set them manually, and I am not sure if the solver calculated them for me. When you say camera movement, do you mean the camera I used to record the footage, or the camera inside the program? The camera was pretty stationary, although I was holding it with my other hand while recording.
Sorry I don’t know all of these terms; I’ve been trying to study this but I fear this is beyond my understanding.
Yes, a face has more "depth", and solving will be more accurate. Matchmoving is based on the relative movement of features, and if they all lie on the same plane, it is hard to deduce the correct perspective. This is why a solve might at first look relatively reasonable but still be very wrong. Always look at what the solved camera does and whether it makes sense: does the virtual camera move the way the actual camera moved (is the direction the same, etc.)?
Ok, no problem. The solver can calculate these for you, but if the solve is wrong, so are these values. Blender expresses the FOV through the combination of focal length and sensor size. If you know the focal length of your actual camera lens and the measurements of its sensor, you can check whether the solved value is relatively close. It will never be exact, because the numbers printed on lenses are not accurate, and lens distortion, solve errors, etc. all affect the calculated values. But if you know your camera's equivalent focal length is 35mm and the solver gives 100mm, something is clearly wrong.

Also, keep in mind that focal length and filmback (sensor size) go together: a proportionally smaller filmback with a proportionally smaller focal length produces the same image, which is why it is useful to know the 35mm (full-frame) equivalent values. For example, a handycam with a small 1/3" sensor might have focal lengths in the 4-10mm range, yet it still gives the same field of view as normal lenses on a full-frame sensor. On full frame, such small focal lengths would be very, very wide.
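If it helps, here is a short Python sketch of that equivalence. The 1/3" sensor dimensions below (roughly 4.8 x 3.6 mm) are the commonly quoted nominal values, so treat the numbers as approximate:

```python
import math

# Sketch of the focal-length / filmback (sensor size) relationship.
# Sensor dimensions in mm; the 1/3" values are nominal approximations.

FULL_FRAME = (36.0, 24.0)  # full-frame ("35mm") sensor, mm

def diagonal(w, h):
    return math.hypot(w, h)

def equiv_35mm(focal_mm, sensor_w, sensor_h):
    """35mm-equivalent focal length via the diagonal crop factor."""
    crop = diagonal(*FULL_FRAME) / diagonal(sensor_w, sensor_h)
    return focal_mm * crop

def horizontal_fov_deg(focal_mm, sensor_w):
    """Horizontal field of view from focal length and sensor width."""
    return math.degrees(2 * math.atan(sensor_w / (2 * focal_mm)))

# A 4-10mm zoom on a 1/3" sensor lands in the "normal lens" range:
for f in (4.0, 10.0):
    print(f'{f} mm on 1/3" sensor ~ {equiv_35mm(f, 4.8, 3.6):.0f} mm full-frame equivalent')

# Proportionally smaller filmback + focal length = identical field of view:
print(horizontal_fov_deg(4.8, 4.8))    # 1/3" sensor, ~53 degrees
print(horizontal_fov_deg(36.0, 36.0))  # full frame, same ~53 degrees
```

So a solved focal length only means something together with the sensor size it was solved against, which is why comparing full-frame equivalents is the safest sanity check.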
By movement I meant the movement of the virtual camera inside Blender. The virtual camera's movement must make sense in the context of the video and how the real camera moved.
Thank you, it worked! I didn't use my face or my helmet yet. I took a ball, stuck toothpicks in it, and then put small clay balls on the tips of the toothpicks.
I had 8 markers and a solve error of 2.35. It came out nice. I will look into my camera's focal length and sensor size to make the tracking even more accurate.
I will now attempt to import a helmet that someone else created for me. Wish me luck, and thanks again!