Howto: Saving render-time with compositing and NLA

I’m doing several scenes which take place in an outdoor shed with some waterwheel-driven machinery. Characters move here and there. There’s a bucolic scene in the background, a flowing stream… And hours of render-time. %|

I was able to drastically reduce that time by using compositing. It’s like the “blue screen” technique they use in movies. After building the original blend-file, I made copies of it so that I could generate these layers, each in a separate .TGA file directory:

- The background, which, since it doesn’t move, is just one frame.
- The moving stream, which animates in about 26 shots.
- The machinery and wheel, which animate in 45 shots for one rotation, with motion-blur.
- The people.

Each is shot to a .TGA file (it could also be .PNG, but not Jpeg or AVI-Jpeg) with the RGBA button turned on.

In each case, only the necessary layers are turned on. The rest is world-color, blue-screen. Significantly, the Alpha value for the action is 1.0; for the blue background it is zero.

In one case, the machinery is sandwiched behind a foreground object. In that shot (of the machinery), the materials of the foreground object are altered to have Alpha = 0, but its layers are left turned on. The object itself then forms a mask for the stuff behind it. Only that shot is rendered with motion-blur.

The pieces are composited together in the Sequence Editor, using the AlphaOver and/or AlphaUnder filters.
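The AlphaOver filter performs the standard “over” composite: where the foreground’s alpha is 1.0 you see the action, and where it is zero the background shows through. This is not Blender code, just a minimal numpy sketch of that blend (the straight-alpha convention and float [0, 1] channels are my assumptions):

```python
import numpy as np

def alpha_over(fg, bg):
    """Composite an RGBA foreground onto an RGBA background ("over").

    Both images are float arrays in [0, 1] with straight (un-premultiplied)
    alpha in the last channel, shaped (height, width, 4).
    """
    fa = fg[..., 3:4]  # foreground alpha: 1.0 for the action, 0.0 for blue-screen
    ba = bg[..., 3:4]
    out_a = fa + ba * (1.0 - fa)
    # Guard against division by zero where the result is fully transparent.
    safe_a = np.where(out_a == 0.0, 1.0, out_a)
    out_rgb = (fg[..., :3] * fa + bg[..., :3] * ba * (1.0 - fa)) / safe_a
    return np.concatenate([out_rgb, out_a], axis=-1)
```

Wherever the action layer is opaque it completely replaces the background, which is exactly why the “blue” world-color never bleeds into the final shot as long as its alpha really is zero.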

Note well: you must use a file format that supports the “Alpha” information, such as .TGA or .PNG. And you must request that it be generated by pressing the RGBA button.

If you output to .TGA or .PNG, these are separate files, one for each frame. Add them to the Sequence Editor as Images, not as a Movie, and specify (say) “*.tga” as the filename in the requestor to select all the files. Blender will understand…
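That wildcard selection amounts to gathering the numbered frame files in order. A minimal Python sketch of the same idea, outside Blender (the directory path and pattern are placeholders):

```python
import glob
import os

def frame_sequence(directory, pattern="*.tga"):
    """Collect a rendered frame sequence in frame order.

    Frame files named with zero-padded numbers (0001.tga, 0002.tga, ...)
    sort correctly with a plain lexicographic sort.
    """
    return sorted(glob.glob(os.path.join(directory, pattern)))
```

This also shows why zero-padded frame numbers matter: without the padding, `10.tga` would sort before `2.tga`.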

I’m still tweaking the movement of our people in this shot, but when I do so the only thing I have to re-render is the people.

By:

- rendering only what I need (one frame of background, 26 frames, 45, and so on), and repeating those strips as needed;
- generating only the machinery-in-motion layer with motion-blur;
- avoiding repetitive re-rendering of things that don’t change…

I have been able to slash the render-time for this and other shots, and become much more productive.
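The arithmetic behind that saving is easy to sketch. Using the cycle lengths above (1 frame of background, 26 of stream, 45 of machinery) and a hypothetical 200-frame shot where the people must be rendered for every frame:

```python
def frames_to_render(shot_length, layer_cycle_lengths):
    """Compare frames rendered with looping strips vs. rendering everything.

    Each looping layer is rendered once for its cycle and its strip repeated
    in the Sequence Editor; a non-looping layer (the people) has a cycle
    length equal to the shot, so it is rendered in full either way.
    """
    with_loops = sum(min(c, shot_length) for c in layer_cycle_lengths)
    naive = shot_length * len(layer_cycle_lengths)
    return with_loops, naive

# 1 background + 26 stream + 45 machinery + 200 people = 272 frames,
# versus 4 layers x 200 frames = 800 if everything were re-rendered.
looped, naive = frames_to_render(200, [1, 26, 45, 200])
```

And when only the people change, just that one layer (200 frames here) needs re-rendering, not all four.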

I can see applications for this technique even in static shots. The specular “gleam” on a surface could be rendered using a particle-system … in a layer with nothing else. Hard-to-render foreground objects could be rendered by themselves, and sandwiched against simpler backgrounds. You can render “Only Shadows” and paste the shadows or the highlights (“Just the shadows, ma’am”) onto an object that was otherwise rendered using cheap, fast, shadow-free area lights.

Any technique that delivers good results in less time is just fine with me, and this certainly appears to be one.

Another way to make your blend more efficient is to segment it into scenes, and way over on the left in the render buttons click ‘link to this scene as a set’, or something like that. I don’t know why, but it seems like Blender can digest large scenes more easily that way.

sundialsvc4: I’d love to see a static composite of your shot! This workflow is very common in filmmaking these days. Layers include painted background mattes, live action on bluescreen, and CG set extensions, to name a few. Just read Cinefex magazine. It’s all there.

One thing that would be nice to see in a future Blender would be pass rendering: shadows, speculars, diffuse color, etc. rendered in separate layers or files to be combined in AfterEffects or Photoshop, or even the Blender Sequencer. The power of this is, as you have found, the flexibility in making adjustments without re-rendering. So if the shadow is too dark, change its opacity and you’re done!

Actually, I had really bad experiences with scenes, not because of anything Blender did but because of my own ham-fisting at times. I wound up getting into situations where I could no longer quickly and easily reproduce a shot that I had rendered before. I wound up doing a demo with a flawed shot because I couldn’t reproduce it to fix it.

Compositing layers together, and working with lots and lots of separate .blend files (one for each layer), seems to work best. I won’t say that it’s an ideal solution because it isn’t. There are certainly workflow issues.

I definitely agree. I’ve already discovered “shadow only” materials, which I use for materials that won’t be in the shot but which need to mask off what will and won’t be visible (and which will receive shadows). This allows the overlay to carry the shadow-information that will be applied on top of the background material during compositing.

But true pass-rendering would be a big help. As would support for hardware rendering via video-cards! I’d gladly pay $300 for a honkin-powerful (supported…) video card if it would help me render faster.