I know that the match-move data from a video can be brought into a 3D scene to use as camera motion, but is the opposite possible? Can you somehow use the movement data of a keyframed camera object to guide the motion of a video or image in the Compositor?
It seems like this would be a good tool for motion graphics compositing in general (at least when the goal is to combine 3D with video), because it would avoid having to re-track the motion in Blender or in another program post-render.
Hey! So I’m trying to match-move a flat video into a Blender-rendered 3D scene in the Compositor, after the 3D scene has already been rendered. Normally, to match-move one video into another, you would track points in the original video and use the resulting position, scale, and rotation data to make the second video follow the motion of the scene. What I’m asking is: when the original video is a 3D scene that you created yourself, is it possible to take the motion data from the active camera and use it to drive the motion of a flat video that you add in the Compositor after the fact? Does that make sense?
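In principle this should work, because the camera's keyframed motion already determines where any 3D anchor point lands on screen each frame. Here is a minimal, hedged sketch of that mapping (plain Python, no `bpy`, assuming an unrotated camera looking down −Z and a hypothetical anchor point): it projects the anchor through the camera per frame, giving the 2D offsets you could keyframe onto a Transform node's X/Y in the Compositor.

```python
# Sketch: turn keyframed camera positions into per-frame 2D screen offsets.
# Assumptions (not from the thread): camera looks down -Z with no rotation,
# 'focal_px' is the focal length expressed in pixels, and 'anchor' is a
# hypothetical 3D point where the flat video should appear pinned.

def project_point(cam_pos, anchor, focal_px):
    """Project a world-space point into 2D camera pixel coordinates."""
    # Transform the anchor into camera space (translation only here).
    x = anchor[0] - cam_pos[0]
    y = anchor[1] - cam_pos[1]
    z = anchor[2] - cam_pos[2]
    # Perspective divide; z is negative in front of the camera.
    return (focal_px * x / -z, focal_px * y / -z)

def offsets_for_camera_path(cam_path, anchor, focal_px):
    """One (x, y) screen offset per frame of the camera's motion."""
    return [project_point(pos, anchor, focal_px) for pos in cam_path]

# Example: camera slides 2 units to the right over 3 frames; the anchor,
# 5 units in front of it, drifts left on screen accordingly.
path = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(offsets_for_camera_path(path, (0.0, 0.0, -5.0), 100.0))
```

Inside Blender you would instead read the evaluated camera matrix each frame (e.g. via `bpy_extras.object_utils.world_to_camera_view`) and insert the resulting values as keyframes on a Transform node, but the projection idea is the same.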
I think you can track the video, and then, instead of using the newly tracked camera, use the original camera from the 3D scene to render. I don’t know for sure whether it works, but it’s worth a try.