compositing with live footage

Alright, I need to be able to composite 3D animation with live footage. If I set the world background to the video I'm compositing with, it works just fine, but the video quality is greatly decreased, even with all the settings as high as they go. If I use the sequence editor instead, there is an outline around the mesh. So if I keep using the video as the world, is there another render approach that would get me better quality? Or is there a way to render the world with alpha so that when the footage is imported into Vegas I won't have to chroma key? (When I chroma key, parts of the object sometimes get keyed out too.) I really need some answers, please.

Try the compositor / node editor - you can get better compositing results there.

I tried AlphaOver (Add > Color > AlphaOver) and the Mix/Screen overlays and got some okay results. You might want to think about putting cutouts of your scene in there, maybe with camera mapping like Colin Levy's stuff; it will let you interact with your scene more. Shadows are important too (see the camera mapping tutorial).
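For anyone wondering what the AlphaOver effect actually computes, the straight-alpha version of the operation can be sketched in a few lines of plain Python (the function and pixel representation here are illustrative, not Blender's API):

```python
def alpha_over(fg, bg, alpha):
    """Composite a straight-alpha foreground pixel over a background pixel.

    fg, bg: (r, g, b) tuples with channels in 0.0-1.0; alpha: foreground coverage.
    """
    return tuple(f * alpha + b * (1.0 - alpha) for f, b in zip(fg, bg))

# A fully opaque foreground pixel replaces the background entirely...
print(alpha_over((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0))  # (1.0, 0.0, 0.0)
# ...while a half-covered edge pixel blends the two, giving smooth borders.
print(alpha_over((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5))  # (0.5, 0.0, 0.5)
```

The partial-alpha case is exactly why this beats chroma keying: edge pixels blend by coverage instead of being kept or thrown away wholesale.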


Here's something that might work for you: in the sequence editor, add your live video clip to the timeline, then add a Scene strip (the scene your animation is in). Select both of them, then choose Add > Effect > Alpha Over. This makes your scene's background transparent (also make sure your world color is black) and superimposes your 3D object, with no borders, over the live video.

Did I explain that in a sensible way? I never know…

Thanks, guys, for all the help. I found another way by searching a little deeper: render the animation as a QuickTime with the codec set to PNG and the depth set to Millions of Colors+, then import it into Vegas over the original video, right-click the animation, open the Media tab, and under Alpha Channel choose Straight (Unmatted). The video and animation are then joined together.

I explained all that in case someone else has this problem.

Rendering like that is a really BAD, BAD, BAD idea. AndyD tried to tell me as much on this forum several years ago, but I didn't listen because QT was such a convenient format. About a month later, more than 14 hours into a render, lightning struck, the power flashed, and the QuickTime movie container was destroyed, or at least left in an incomplete state. I don't know how to repair such a file programmatically, and I'm not even sure it can be done, so all that render time was lost.

Another drawback to QT is that it has traditionally only supported premultiplied alpha channels, which means you get a halo around everything if you don't export to software that has the ability to remove color matting. (The Straight (Unmatted) option you found suggests Vegas may support it now; render from Blender with the "Key" option enabled and you'll know for sure, because you'll get ugly aliased edges when overlaying the footage if it doesn't.) PNG format is great too, but Blender only renders PNG at 8 bits per channel even though the format itself supports 16. What this means for you is that any levels adjustments made to the image after rendering from Blender will result in some fairly hideous banding.
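To see why a premultiplied file produces that halo when software treats it as straight alpha, here's a rough single-channel numeric sketch in Python (the names and values are illustrative only):

```python
def straight_over(fg, a, bg):
    # Straight-alpha over: foreground scaled by alpha, background by (1 - alpha).
    return fg * a + bg * (1.0 - a)

# A half-covered white edge pixel, stored premultiplied: color 1.0 * alpha 0.5 = 0.5.
premult_value, alpha = 0.5, 0.5

# Treating the premultiplied color as if it were straight multiplies by alpha
# twice, darkening the edge against a white background: the visible halo.
wrong = straight_over(premult_value, alpha, 1.0)   # 0.75, should be 1.0

# "Unmatting" first (dividing the color by alpha) recovers the straight value,
# and the composite comes out clean.
unmatted = premult_value / alpha                   # 1.0
right = straight_over(unmatted, alpha, 1.0)        # 1.0
```

That double multiplication only bites on partially transparent pixels, which is why the artifact shows up as a fringe along edges rather than over the whole object.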

Your best bet is to render as an OpenEXR sequence with the Half data type option enabled. If your chosen software doesn't support OpenEXR, or at least have a plugin to support it, then you may want to consider another package.
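A quick Python sketch of why the 8-bit limit matters once you start pushing levels (the gradient and gain here are made-up numbers, purely for illustration):

```python
# A dark gradient stored in 8 bits has only 64 distinct codes in its bottom
# quarter. Stretching those shadows to fill the full range (a typical levels
# adjustment) still leaves just 64 output values, so smooth ramps band.
gain = 4.0  # stretch the darkest quarter of the range up to full brightness

shadow_levels = [i / 255 for i in range(64)]            # 8-bit codes 0..63
stretched = {min(v * gain, 1.0) for v in shadow_levels}

print(len(stretched))  # 64 distinct shades across the whole 0-1 range
# A half-float OpenEXR frame keeps 10 significand bits per exponent step, so
# the same adjustment has thousands of distinct shadow values to draw on.
```

The float data survives grading because precision is distributed across the exposure range instead of being fixed to 256 steps, which is the whole argument for EXR as an intermediate format.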

Well, I came home and made a video tutorial on the process I mentioned above.

I just use Blender's Backbuf option in the render settings panel, which lets you select an image to render as the background. This is different from creating a world texture and works better because you don't get any camera distortion.

Renderer10 has a good point :slight_smile: