Compositing into live video

I've got most of the basics down, and I think I might be ready for my first test. I want this one to be simple, but I want the 3D object to move behind a real person or object. Could someone please make a full video tutorial showing how to add 3D objects into real video with no greenscreen, just composite nodes? THANK YOU! Any help is really appreciated.

I want this one to be simple, but I want the 3D object to move behind a real person or object.

This sounds like you need to use render layers: set up a 3D object, animated to follow the person/object in the foreground, to act as a mask; another layer for the z-transparent ground plane and buildings (shadow catchers); and another for the actual inserted, animated object.
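Roughly, the node side of that would look something like the sketch below. This is just an illustration against the Python API, not a tested setup; the file path, frame count, and layer arrangement are placeholders you'd swap for your own.

```python
import bpy

# Sketch only: footage path and frame count are placeholders.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Background: the real-world footage.
footage = tree.nodes.new("CompositorNodeImage")
footage.image = bpy.data.images.load("//footage/frame_0001.png")  # placeholder path
footage.image.source = 'SEQUENCE'
footage.frame_duration = 250  # length of the clip

# Foreground: the 3D elements (inserted object, mask object, shadow catchers),
# each of which can go on its own render layer / Render Layers node.
rlayer = tree.nodes.new("CompositorNodeRLayers")

# Put the rendered layer over the footage using its alpha channel.
alpha_over = tree.nodes.new("CompositorNodeAlphaOver")
composite = tree.nodes.new("CompositorNodeComposite")

tree.links.new(footage.outputs["Image"], alpha_over.inputs[1])  # background
tree.links.new(rlayer.outputs["Image"], alpha_over.inputs[2])   # foreground
tree.links.new(alpha_over.outputs["Image"], composite.inputs["Image"])
```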
Rambobaby’s threads would be a very good start. Also, look for anything about rotoscoping with curves - that might help.
I don’t know enough to guide, I’m afraid.

Did you solve your problem? I'm facing the same one…

Okay, what I think you guys might want is AR (augmented reality) Blender. This distribution would let you feed live camera data onto a texture plane in the background, and would output a live composite from the combined data (3D/live action). For the most part it worked, but I lacked the knowledge to fine-tune it at the time.
Now it seems it is no longer available. The link on the Blender site no longer contains the ARBlender info (it was being developed by a university). I still have a copy; if you'd like to PM me, I'll send it by way of YouSendIt or whatnot (it's a fairly big file).
You will need to plan your live shots accordingly, as you need to place tracker devices live, and you may still have to resort to some degree of trickery to get your person in the foreground occluding the 3D objects behind them. It is great for live set extensions, though…

This sounds like it'd be really difficult to coordinate, mixing a live feed with rendered images :eek:.

Currently I'm working on my first composite using Voodoo camera/motion tracking. I simply output my live video as an image sequence using VirtualDub (Blender can do the same, I believe), then set it up in the Compositor with an Image/Sequence node. My 3D objects are put on Render Layers and placed in a perspective-matched mock-up of the live scene, then comped using an Alpha Over node and the RL's alpha channel. One 3D object is a masking element replacing an object in the live scene's foreground (sort of a "hanging miniature" effect), and I'll be adding a 3D figure as well.
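On the "Blender can do the same" point: one way is to drop the clip into the Video Sequencer and render it back out as still frames. A rough, untested sketch; the paths, strip name, and frame numbers are placeholders.

```python
import bpy

# Sketch only: load a movie into the Video Sequencer and render it out
# as a PNG image sequence. Paths and frame numbers are placeholders.
scene = bpy.context.scene
se = scene.sequence_editor_create()

strip = se.sequences.new_movie(name="footage",
                               filepath="//shoot/handheld.avi",  # placeholder
                               channel=1, frame_start=1)

scene.frame_start = 1
scene.frame_end = strip.frame_final_end - 1

scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "//frames/frame_"   # frames are numbered automatically
bpy.ops.render.render(animation=True)       # writes frame_0001.png, frame_0002.png, ...
```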

Motion-matching has been difficult since it's a low-res, rather noisy, fully hand-held live shot (I wanted to try a "worst-case" situation), but with some careful object placement, and using the Voodoo data to help keep objects synced with the camera jitters, it's starting to work out really well. The Compositor has been very helpful for positioning the "replacement" object, since you can switch back and forth between Viewer nodes with and without the composite.

The Compositor accepts "Movie" image inputs as well. I was using an image sequence because I'd done some processing on the raw video with a few color-adjustment nodes, and rendered that composite out of Blender as individual frames so I wouldn't add another generation of video processing.
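For reference, both input options look roughly like this in a script. The node names are from the Python API and the paths are placeholders, so treat it as a sketch rather than a recipe.

```python
import bpy

tree = bpy.context.scene.node_tree  # assumes "Use Nodes" is already enabled

# Option 1: a movie file fed straight into an Image node.
movie_in = tree.nodes.new("CompositorNodeImage")
movie_in.image = bpy.data.images.load("//shoot/handheld.avi")  # placeholder
movie_in.image.source = 'MOVIE'
movie_in.frame_duration = 250  # clip length in frames

# Option 2: pre-processed frames brought back in as an image sequence.
seq_in = tree.nodes.new("CompositorNodeImage")
seq_in.image = bpy.data.images.load("//frames/frame_0001.png")  # placeholder
seq_in.image.source = 'SEQUENCE'
seq_in.frame_duration = 250
seq_in.use_auto_refresh = True  # re-read the file on each frame change
```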

btw, I’m using “live” to mean “real world” recorded video, not “happening at this very moment” :wink:

Is Voodoo working well for matchmoving? I'm thinking about using that instead of Icarus.

So far I'd give a qualified "yes." Qualified in that the camera motions are well plotted within their scope, but I've found the focal-length estimates to be significantly inaccurate. It's far better to record the lens specs at the shoot and feed them to Voodoo.

I’ve also had some extra work getting a good perspective match in Blender. Not sure if this is common, but Voodoo seems to reconstruct a very different space in its point cloud than that of the actual scene. In the end I had to keyframe some additional camera moves to fully match the pan & tilt I did in the video.

I also used my BLenses script to set the focal length of Blender's camera, which helped match the zoom I put in the shot.
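The same idea without the script, just as an illustration: this isn't BLenses itself, and the object name, sensor size, focal lengths, and frame numbers are all placeholders.

```python
import bpy

# Match and keyframe the real lens on Blender's camera. Values are placeholders.
cam = bpy.data.objects["Camera"].data

cam.sensor_width = 36.0   # the real camera's sensor / film-back width, in mm
cam.lens = 28.0           # focal length recorded at the start of the shot, in mm
cam.keyframe_insert(data_path="lens", frame=1)

# Keyframe the zoom that happens during the shot.
cam.lens = 50.0
cam.keyframe_insert(data_path="lens", frame=120)
```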

Since this is only my first trial of Voodoo, I can't say whether the issues I ran into are typical, but they were all surmountable with a bit of extra work.