This is from BlenderArtists user sundialsvc. Hope it’s what you need; I’ve never tried it:
The “Sequence Editor” is really a video editor. Its purpose is to take individual “strips of film” that have been separately rendered and to “string them together” into a complete film. Another very important purpose of this tool is to “composite,” or combine, two or more strips, each containing a separate layer of material, into one.
(Note: In this context, I use or abuse the word “layer” to refer to “strips of film,” not the twenty-odd buttons on the edit screen.)
Let us consider for example some nasty space-fight sequence between two characters, call 'em “Luke” and “Darth,” who are fighting on some narrow bridge beside a yawning reactor-shaft next to an improbable window with starships floating outside. (Pick any scene from Star Wars…) How would one do that, and more to the point, do it efficiently?
How would one do that in a way that accommodates the inevitable changes, and that preserves the considerable investment in render-time without demanding that everything be generated all over again? How, indeed…
Well, the process of creating the movements for Luke and Darth is the subject of a separate tutorial, on the Action Editor and the NLA Editor. So let’s pretend that we’ve already choreographed those sequences, lit the scene, and placed the cameras. We’ve done all that, and the question is what to do next.
What we obviously cannot do is to set everything up, push the “Render” button, and check back in a week after our poor computer has laboriously ground out every single frame. (Well, I say “obviously,” but quite plainly it isn’t obvious, because that is exactly what most beginning animators do!) Any change, no matter how slight, would force the poor computer to do everything all over again, and vast amounts of the work are repetitive.
What do we do instead? We break the job up into pieces. Each piece contains only what it must contain. Then we re-use those pieces. Let me explain …
We’d probably start by shooting just one frame of our “abyss” from each camera position, because the cameras (let us say) do not move. Since the background does not change, we need only one solitary frame.
There is nothing at all in the camera’s view except the abyss. And there is a “hole,” perhaps a flat plane with Alpha=0.0, where the window for the starships will be.
We output these images in a file format that supports “Alpha,” such as .TGA or .PNG, and we have the Premul button pressed. Also, we press the RGBA button on the render page so that our output data includes Alpha information. Every layer must include it.
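(The Premul and RGBA buttons describe the old 2.4x interface. In recent Blender releases the equivalent settings are available from Python; this is a sketch only — the property names follow the modern bpy API, and the output path is made up:)

```python
import bpy  # only available inside a running Blender session

scene = bpy.context.scene

# Render the world background as transparent, so empty areas of each
# layer carry Alpha=0.0 -- the modern counterpart of "Premul" sky.
scene.render.film_transparent = True

# Pick a format that can store alpha, and ask for RGBA channels.
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'

scene.render.filepath = "//layers/abyss_cam1_"  # hypothetical output path
```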
(As an aside: “Alpha” refers to transparency information. An opaque object has Alpha=1.0; a completely transparent one, 0.0; a ghostly one is around 0.5. This is the source of the “masking” information which allows layers of image data to be combined.)
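That masking is just per-pixel arithmetic. Here’s a minimal sketch of the standard “over” operator for premultiplied-alpha pixels (plain Python, no Blender required; the pixel values are made up for illustration):

```python
def alpha_over(fg, bg):
    """Composite a premultiplied-alpha foreground pixel over a background pixel.

    Each pixel is (r, g, b, a) with color channels premultiplied by alpha:
        result = fg + (1 - fg.alpha) * bg
    """
    fr, fg_g, fb, fa = fg
    br, bg_g, bb, ba = bg
    k = 1.0 - fa
    return (fr + k * br, fg_g + k * bg_g, fb + k * bb, fa + k * ba)

# An opaque red actor pixel (Alpha=1.0) completely hides the blue background:
print(alpha_over((1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)))  # (1.0, 0.0, 0.0, 1.0)

# A fully transparent "hole" (Alpha=0.0) lets the background show through:
print(alpha_over((0.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0)))  # (0.0, 0.0, 1.0, 1.0)
```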
(Note: use the “Search” button on this site, say on the keyword “premul,” to find lots of prior threads! And, yeah, I’m still workin’ on a tutorial.)
Now we shoot the scene of the starships, from whatever camera position works. (They won’t change significantly from either camera we use.) For simplicity let’s say that just one frame is needed; the starships ain’t moving. Keep it simple = one frame. We only care about the small region which will actually be visible through the window.
Now we shoot Darth from Luke’s camera, and we separately shoot Luke from Darth’s camera, and maybe we shoot from a third camera that covers both. Each actor performs his entire routine, and there’s nothing at all on-camera except those actors. Once again we shoot to PNG or TGA’s, and we have Premul set.
Each actor performs on a plane which has OnlyShadows set in its materials, so that the plane isn’t there (it’ll be part of the background), but the shadows that the actors cast upon that plane are there.
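(OnlyShadows is likewise a 2.4x material toggle; in recent Cycles versions the equivalent is the object-level “Shadow Catcher” flag. A sketch, assuming a hypothetical plane object name and the Blender 3.x property name — older Cycles releases exposed it as `obj.cycles.is_shadow_catcher` instead:)

```python
import bpy  # only available inside a running Blender session

# Hypothetical floor plane the actors perform on.
floor = bpy.data.objects["BridgeFloorPlane"]

# The plane itself vanishes from the render, but the shadows the
# actors cast onto it survive (with alpha) -- the modern "OnlyShadows".
floor.is_shadow_catcher = True
```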
Now, for each strip, we composite the actors so that they appear in front of the background (using the AlphaOver filter), and the background in front of the starships, which will be visible “through” the window. This is very fast because it involves no separate, repetitive rendering of the background or of the ships. The shadows neatly appear to be cast “upon” the highly detailed background. But what we are doing is a very straightforward, two-dimensional process applied to the pixel matrices that make up each frame of our finished picture. Very efficient, very fast.
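Stacking the strips is just that same “over” operation repeated bottom-to-top. A toy sketch of one pixel going through the stack just described (plain Python; the pixel values are invented):

```python
from functools import reduce

def alpha_over(fg, bg):
    """'Over' operator for premultiplied (r, g, b, a) pixels."""
    k = 1.0 - fg[3]
    return tuple(f + k * b for f, b in zip(fg, bg))

def composite(stack):
    """Composite a bottom-to-top list of layers into one output pixel."""
    return reduce(lambda below, above: alpha_over(above, below), stack)

starships  = (0.2, 0.2, 0.2, 1.0)  # one frame, rendered once
background = (0.0, 0.0, 0.0, 0.0)  # the "hole" where the window is: Alpha=0.0
actor      = (0.0, 0.0, 0.0, 0.0)  # the actor isn't over this pixel either

# Through the window we see the starships, untouched by the upper layers:
print(composite([starships, background, actor]))  # (0.2, 0.2, 0.2, 1.0)

# Where an opaque actor pixel sits on top, the actor wins:
print(composite([starships, background, (1.0, 0.8, 0.6, 1.0)]))  # (1.0, 0.8, 0.6, 1.0)
```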
(And that massively detailed background? Yes, it’s one frame, rendered once, and repeatedly used in every output-frame involving a camera-shot taken from that particular camera.)
At this point, the three strips of film (one for each camera) are now “complete.” They show the performance of the actors, in their entirety, from each of three camera angles. Each frame includes material that changes from frame to frame (the actors), and material that doesn’t change but was simply copied (backgrounds, starships) having been rendered only one time.
Next, we put on our cinematographer’s hat. We cut together the three strips of film, one from each camera, determining the sequence of camera cuts that the viewer will actually see when watching the “fight to the death.” Sir John Williams furnishes us the soundtrack as a .WAV file and we put that in. We use the Sequence Editor to produce this final strip. And thus we produce our finished film.
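Conceptually, that final edit is nothing more than a cut list mapping output frames to spans of the source strips. A toy model of the idea (plain Python; the strip names, frame counts, and cut points are all made up):

```python
# Three finished strips, one per camera; each "frame" here is just a label.
strips = {
    "luke_cam":  [f"luke_cam:{i}"  for i in range(1, 101)],
    "darth_cam": [f"darth_cam:{i}" for i in range(1, 101)],
    "wide_cam":  [f"wide_cam:{i}"  for i in range(1, 101)],
}

# The cinematographer's cut list: (strip, first_frame, last_frame), inclusive.
cuts = [
    ("wide_cam",   1,  30),
    ("luke_cam",  31,  60),
    ("darth_cam", 61,  80),
    ("wide_cam",  81, 100),
]

def assemble(strips, cuts):
    """Concatenate the chosen spans into the final film's frame order."""
    final = []
    for name, first, last in cuts:
        final.extend(strips[name][first - 1:last])
    return final

film = assemble(strips, cuts)
print(len(film))    # 100
print(film[0])      # wide_cam:1
print(film[30])     # luke_cam:31
```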
Oops! Panic! Now the customer wants a little robot-dog (let’s call 'im “Lucas”) to zip into the scene! Or he doesn’t like the details of the handrail on our bridge. What do we do?!?! :o
Very little. 8) The customer is simply asking for another layer, and we generate that layer (“just the dog, ma’am, just the dog…”, or maybe a single frame with nothing but the changed handrail), and re-composite the thing and we’re done. No time-consuming re-rendering. Shots in which the dog and/or the objectionable handrail do not appear, do not even get changed at all. You could do it in fifteen minutes once you’ve generated the new material.
Terror! Two days before the film is to be proudly shown at SIGGRAPH, you discover a flaw in the background … the one that took you two weeks to render! Ahh, but it’s only a small area, so you fix it: render only the area containing the defect (from each camera POV), then composite the “patch” right over the affected area in the (single…) background frames. Re-composite the whole thing and nobody is the wiser, because the net result is that the problem truly is gone, even though it only took you a couple of hours (after a lot of panic and beer) to fix it, and you fixed it perfectly.
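That “patch” is simple two-dimensional surgery on the single background frame, not a re-render of the whole scene. A toy sketch on a tiny grid of pixel labels (plain Python; the frame and patch contents are invented):

```python
def paste_patch(frame, patch, top, left):
    """Overwrite a rectangular region of a frame with a re-rendered patch."""
    for dy, row in enumerate(patch):
        for dx, pixel in enumerate(row):
            frame[top + dy][left + dx] = pixel
    return frame

# A 4x4 "background" frame with a flaw at rows 1-2, columns 1-2.
background = [["ok"] * 4 for _ in range(4)]
background[1][1] = background[1][2] = "flaw"
background[2][1] = background[2][2] = "flaw"

# Re-render only the 2x2 defective area, then paste it in place.
patch = [["fixed"] * 2 for _ in range(2)]
paste_patch(background, patch, top=1, left=1)

print(background[1][1], background[2][2])         # fixed fixed
print(any("flaw" in row for row in background))   # False
```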
All movies are made this way: separately generated “strips of film,” cut together in the final edit to make the scenes you actually see. Scenes in sci-fi films are produced from many layers of material, combined seamlessly and digitally into one finished image.