I’ve been using Blender at the studio I’m at to track some footage. It’s extremely blurry and has ended up needing to be tracked manually. We get a decent result in Blender (getting a box to stick to a chimp’s face), but upon importing the FBX file into Maya, the camera no longer lines up with the imported image planes.
I noticed that in Blender the camera shift has been changed by the solve, resulting in an angled camera shape. Does anyone have any ideas on how to fix or prevent this?
I have actually never seen a camera look like that. There must be more going wrong here than just the export. How did you solve the track? What solve error did you get? What refinement did you use in the solve?
I’ve just solved the issue this morning.
The solve error was 0.3623. If you place a camera and go into the camera settings, there is something called “Shift”
The shift on the solved camera was:
These settings caused the camera to look like this. If I set the shift to 0 on both axes, the camera would look normal, but it would be looking in the wrong direction. The first 32 frames of my footage were extremely blurry on the object I was tracking, so I cut those 32 frames from the image sequence and just tracked the rest of the non-blurry footage. That gave me a non-shifted camera that worked fine. The object was moving in an arc during its motion blur, so we roughly hand-animated the 3D object along the arc and used motion blur in comp to blend it in decently.
I had painstakingly manually tracked all the trackers every frame for those 32 frames, so there was probably something throwing the solve off.
So shift in the Blender camera simulates a tilt-shift or bellows lens for architectural rendering. That would really stuff up a 3D solve that expects the center of frame to stay in place. I wonder why Blender would treat it as a solve variable?
Check this explanation of the feature when added back in 2.49
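To see why shift confuses a solve, here's a minimal pinhole-projection sketch (illustrative numbers, not Blender's actual internals): the camera doesn't move, but the projected frame slides because the optical axis no longer passes through the center of frame.

```python
# Minimal pinhole sketch of what a lens "shift" does: the camera stays
# put, but the projected image slides because the optical axis no longer
# hits the center of frame. Units and values are illustrative.

def project(point, focal=1.0, shift_x=0.0, shift_y=0.0):
    x, y, z = point
    # Perspective divide, then offset by the shift (normalized units).
    return (focal * x / z + shift_x, focal * y / z + shift_y)

p = (0.0, 0.0, 5.0)             # a point straight ahead of the camera
print(project(p))               # -> (0.0, 0.0): lands at frame center
print(project(p, shift_x=0.2))  # -> (0.2, 0.0): same camera, frame slid
```

A solver that assumes the principal point sits at the frame center will misattribute that slide to camera rotation or position, which is why a shifted camera ends up "looking in the wrong direction" when the shift is zeroed.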
@3pointedit: The link you sent says that you can change the framing without changing the camera position. I think this is equivalent to taking a picture with a camera and then cropping something off: the optical center is no longer the middle pixel. If you do full refinement, the camera solve will treat the pixel corresponding to the optical center as a fit parameter, and I can imagine that resulting in such a shifted camera. But it is generally a bad idea, because movie clips will usually not be cropped, at least not statically. They are sometimes cropped dynamically if you have a consumer camera with image stabilization. That will also badly mess with your solve, because it makes the optical center move across the image between frames.
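A rough sketch of the relationship, assuming Blender's convention that shift is expressed as a fraction of the larger image dimension (the helper name and the sign choices here are my assumptions, so double-check against your build before relying on them):

```python
# Hypothetical helper: convert a camera shift into an optical-center
# position in pixels. Assumes shift is normalized by the larger image
# dimension (Blender's convention); the signs are an assumption and
# may be flipped relative to your version.

def shift_to_optical_center(shift_x, shift_y, width, height):
    norm = max(width, height)  # both axes normalized by the larger side
    cx = width / 2.0 - shift_x * norm
    cy = height / 2.0 + shift_y * norm
    return cx, cy

# Zero shift puts the optical center at the exact middle pixel.
print(shift_to_optical_center(0.0, 0.0, 1920, 1080))  # -> (960.0, 540.0)
```

So a refined solve that lands on a nonzero shift is effectively saying the footage's optical center is off-middle, which is plausible for cropped or stabilized clips but surprising for a straight camera original.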
@Blayke: Good to hear that the mystery was solved! Can you mark the thread as solved?