Camera tracking multiple videos

I’ve been playing with motion tracking in 2.6 for a few weeks now. I know how to do it for single videos, but was curious if it’s possible to camera track multiple videos and add them one after another. For example, if I filmed two videos and converted each to a png sequence, I first wanna load the first video then motion track it and add some 3D to it. Then maybe add a black screen or image or something else, followed by the second video which I also motion-track and add some 3D to.

I have two issues here. One is that the Background Images feature (when you press N in the 3D view), which is used to show the video after you solve the camera motion, cannot be keyframed or animated, and it shows the same video globally. I looked for a way to set each camera’s background individually, but that doesn’t work; supposedly selecting the camera and adding a texture to it should act as a background. The only workaround seems to be creating a plane and parenting it in front of the camera, but then I can’t adjust it to be exactly the video’s size, and if it isn’t scaled exactly the tracks won’t match the correct spots.
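(For reference, the scaling itself is just frustum geometry; a rough Python sketch of what a plane’s dimensions would need to be at a given distance in front of the camera is below, assuming a landscape clip and a camera object named "Camera". But that’s exactly the kind of manual fiddling I’d rather avoid.)

```python
import bpy, math

cam = bpy.data.objects["Camera"]   # the solved tracking camera (name assumed)
render = bpy.context.scene.render
distance = 10.0                    # how far in front of the camera the plane sits

# For a landscape clip, camera.data.angle is the horizontal field of view in radians,
# so a plane that exactly fills the view at 'distance' needs roughly these dimensions:
width = 2.0 * distance * math.tan(cam.data.angle / 2.0)
aspect = (render.resolution_y * render.pixel_aspect_y) / (render.resolution_x * render.pixel_aspect_x)
height = width * aspect
print(width, height)
```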

The second issue is that I’m confused about whether the camera tracking system supports multiple videos at all. After I solve the camera motion from a video, it sets up a tracking scene in the 3D view. But if I load another video and track that as well, would Blender know to set up an entirely new tracking scene, with another camera containing a new set of tracking points?

I could merge my videos together and track the whole thing, but that would make it very hard to add something between the two that I can easily edit (like some non-tracked 3D). Also, the part at the end and the part at the beginning might not solve well with the same camera settings, as the two clips could be from different areas and angles. On the other hand, if I track and 3D-fy each video in its own blend file, there’s no way to export everything in a single render at once. I’d probably need to render each animation and then merge the resulting video files in a separate blend, which could cost me quality and make the process more complex.

So a way to fix this would be helpful, if anyone knows one. If not, I’ll probably go with separate animations. Perhaps linking / appending blend files together could do the trick.

Am I missing something here? You do have each video sequence, right? Why aren’t you using the Video Sequence Editor to just stitch those videos together?

I haven’t tried it like that yet. From what I know, you load each sequence in the Movie Clip Editor, and the place where you stitch them together is the Video Sequence Editor. Can the clip editor use that stitched result, or only the videos you select in it?

I also don’t know if I can / should track each sequence separately and generate a new camera with a set of tracking points for each. Like I said, the same camera data might not work for each video, and I don’t know how the camera tracker handles this.

Ok… I tested this quickly, as I’m not going to motion track two full videos just for this. The Movie Clip Editor allows selecting multiple videos, and stores the individual tracks for each one as well as the camera data. I didn’t test a real scene reconstruction, since I only added 8 tracks on one frame to see what happens, but setting up the tracking scene for each video didn’t create two cameras.

I can, however, create two cameras manually, set one as active, solve the first video, then select the other, set it as the active camera and solve the second video. This appears to put each solve into its own camera (setting each camera’s focal length accordingly), although I’m not sure whether it works in practice with a finished track. I also don’t know how I’d get around the “set as background” option, since it’s a global setting which can’t be animated.
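Roughly, what I did looks like this as a Python sketch (object names are made up; the tracking and solving itself still happens in the Movie Clip Editor):

```python
import bpy

scene = bpy.context.scene

# two cameras, one per clip (names are placeholders)
cam_a = bpy.data.objects.new("Camera.ClipA", bpy.data.cameras.new("ClipA"))
cam_b = bpy.data.objects.new("Camera.ClipB", bpy.data.cameras.new("ClipB"))
scene.objects.link(cam_a)
scene.objects.link(cam_b)

scene.camera = cam_a   # active camera while solving the first clip
# ... solve clip A in the Movie Clip Editor, then:
scene.camera = cam_b   # switch before solving the second clip
```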

I finally figured this out, thanks to the people on the Blender IRC. The easiest way to do this is to track each video in its own scene, then render the scenes in order. Because each scene uses its own render nodes, the video backgrounds will not interfere with each other either (the global Background Images option I mentioned earlier is for display purposes only).

After camera tracking the first clip and adding your 3D to it, create a new empty scene, go to the clip editor, load your second video and track that too. Once you solve the camera motion, a new camera with a new set of tracking points will be added, not interfering with the one in the other scene.
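If you want to script the scene/clip bookkeeping, a rough sketch of this step might look like the following (the path and scene name are placeholders; the actual tracking and solving is still done per clip in the Movie Clip Editor):

```python
import bpy

# the second shot gets its own scene and its own movie clip
shot2 = bpy.data.scenes.new("Shot2")
clip = bpy.data.movieclips.load("//shot2/frame_0001.png")  # first frame of the png sequence
shot2.active_clip = clip   # tracking constraints and the tracking scene use this clip

# tracking / solving happens per clip in the Movie Clip Editor;
# the solve operator is bpy.ops.clip.solve_camera(), which needs a Clip Editor context.
```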

To view your track in realtime, press N in the 3D view, and under Background Images add a new image and set that to the video you tracked (this doesn’t influence the final render like I thought). Note that when switching between scenes and viewing each clip, you will need to toggle between these manually, since like I said they’re a global setting and not scene-dependent.
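If you switch scenes a lot, the viewport background can also be set from Python; a sketch, assuming your build exposes background_images.new() and that the scene’s active clip is the one you tracked:

```python
import bpy

scene = bpy.context.scene
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        space = area.spaces.active
        bg = space.background_images.new()   # may not exist on older 2.6x builds
        bg.source = 'MOVIE_CLIP'
        bg.clip = scene.active_clip          # the clip tracked in this scene
        space.show_background_images = True
```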

Finally, to render the whole thing, create another scene but don’t add anything to it. In this scene, go to the Video Sequence Editor and add each of the other scenes in the correct order (Add -> Scene -> MySceneName). Set the length of the timeline to the combined length of the other scenes and you can now render the entire animation at once.
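The same VSE assembly can be scripted too; a minimal sketch, assuming the tracked scenes are called "Shot1" and "Shot2":

```python
import bpy

edit = bpy.data.scenes.new("Edit")
edit.sequence_editor_create()

frame = 1
for name in ("Shot1", "Shot2"):          # your tracked scenes, in order (names assumed)
    src = bpy.data.scenes[name]
    edit.sequence_editor.sequences.new_scene(name, src, 1, frame)   # channel 1, back to back
    frame += src.frame_end - src.frame_start + 1

edit.frame_start = 1
edit.frame_end = frame - 1               # timeline length = sum of the scene lengths
```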

So in case anyone else has the same issue and finds this, that’s how to motion track multiple videos in one blend file. The nicest thing is that by using scenes, you can reuse existing content (eg: share mesh data), which makes things a lot easier if you animate the same mesh from different angles / locations via multiple videos you filmed in real life.
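For example, linking an object that already lives in one shot’s scene into another scene (so both shots share the same mesh) is a single call in the 2.6-era API; the names here are assumed:

```python
import bpy

obj = bpy.data.objects["TrackedMonkey"]     # object created in the first shot's scene (name assumed)
bpy.data.scenes["Shot2"].objects.link(obj)  # both scenes now share the same object and mesh data
```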