Baked shadows and moving shadows

I need help with this one:

To reduce render time on a movie, I want to bake all the soft shadows and so on onto the backgrounds, and only render the characters themselves frame by frame, probably using a compositing process to combine the background and the characters.

This gives me two problems:

  1. Can I still have the characters throw shadows on the background/floor? If so (I suspect by compositing again), how? The baked version has to be set to shadeless, so it doesn’t take shadows directly.

  2. If I need the lighting intensity to fluctuate as part of the scene (not the location, just the intensity, as it is a flickering lamplight), is this something I can do by varying the textures in sync with the light levels?

My current thinking is to:

  a) render the background in detail;

  b) render the characters as throwing shadows only, onto a simple version of the set, tracking a small spot onto each character so that only that part of the scene is lit (reducing render time), using a ‘shadow’ light;

  c) render the characters alone, with no set, to get the lighting on them, again lighting only the characters rather than large areas to reduce render time;

then composite the three things together. Is there an easier way?

Using 2.49 by the way, as I really don’t think 2.5 is ready for production work yet, especially as it crashes regularly on my machine.

Matt

You are definitely on the right track with regard to compositing. For a thorough and complete discussion of that topic, I’d suggest that you pick up a copy of Foundation Blender Compositing, by Roger Wickes. (You owe me $5, Roger… :yes:)

I suggest that you plan on breaking the problem down completely. Using a linear workflow throughout, set up each scene with basic illumination that does not cast shadows, such as my personal favorite, the “hemi” light. (Turn off all shadowing here; when you do use shadows, use buffered shadows.) That becomes your source of both “diffuse” and “specular” lighting information, which you draw off as separate channels.
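
If it helps to see what a “linear workflow” means in terms of the numbers, here is a minimal NumPy sketch of the standard sRGB conversions; the point is simply that pass math should happen on the linear values, and you convert back only at the very end (the example values are just illustrative):

```python
import numpy as np

def srgb_to_linear(c):
    """Standard sRGB decode: work in linear light before mixing or adding passes."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Encode back to sRGB only at the very end, for display or output."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1.0 / 2.4) - 0.055)

# Adding two equal lights, done correctly in linear light:
a, b = srgb_to_linear(0.5), srgb_to_linear(0.5)
print(linear_to_srgb(a + b))  # noticeably brighter than 0.5, as real light would be
```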

Next, use shadow-only lights and buffered shadows to obtain a shadow-pass, overriding all of the materials with a shadow-only material. Now you have “pure shadows.” This channel describes the location and intensity of the shadow (from this particular group of lights, on this particular set of object-layers, possibly as occluded by these layers), and nothing else.
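
To make the “pure shadows” idea concrete, here is a rough per-pixel sketch of one way such a pass might be applied later in the composite; the array names and the multiply-style combine are my own assumptions, not the only way to wire it up:

```python
import numpy as np

# Placeholder passes, all in linear colour space; shapes (H, W, 3) and (H, W):
diffuse  = np.random.rand(4, 4, 3)   # "diffuse" channel from the shadowless lights
specular = np.random.rand(4, 4, 3)   # "specular" channel from the same lights
shadow   = np.random.rand(4, 4)      # shadow-only pass: 1.0 = fully shadowed

shadow_strength = 0.8                # the "knob" you keep adjustable in the composite

# Darken the lighting wherever the shadow pass says a shadow falls.
occlusion = 1.0 - shadow_strength * shadow[..., None]
lit_background = (diffuse + specular) * occlusion
```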

Let’s say now that you want some modulated light. You can either modulate the two lighting channels that you’ve already got, or capture separate tracks of modulated light (and, if need be, shadow). You probably want to do the latter, because you’ll need to be able to apply the same modulation to your actors’ performances.
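
For flickering lamplight, the essential point is that the same per-frame factor has to scale every track the lamp contributes to, background and actors alike. A toy sketch of that idea (the flicker curve itself is invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def flicker(frame):
    """Per-frame intensity of the lamp: a slow sine with a little noise on top."""
    return 0.8 + 0.15 * np.sin(frame * 0.7) + 0.05 * rng.random()

for frame in range(1, 5):
    k = flicker(frame)
    # The same factor scales the set lighting AND the actors' lighting,
    # so the background and the performances stay in sync, e.g.:
    #   lit_background_frame = lit_background * k
    #   lit_actor_frame      = actor_lighting * k
    print(frame, round(k, 3))
```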

Moving on to the actors now, you set up separate layer-specific lights that illuminate only them. (Notice how you can set up the RenderLayer so that some layers provide only masking and shadow-receiving but do not appear in the color output. This is why.)

When you’re done, you’ve got a whole bunch of distinct, multilayer “tracks” of data which you now composite. You can adjust each component of the shot, and discrete characteristics of each component of the shot, all without re-rendering. Because the entire color-space is uniformly linear, lighting and compositing operations work the way they’re supposed to.
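
When the tracks finally do come together, the character plate typically sits over the background with a standard “over” operation. A minimal sketch with premultiplied alpha (the plate names are placeholders):

```python
import numpy as np

# Placeholder plates, premultiplied alpha, linear colour space:
bg_rgb    = np.random.rand(4, 4, 3)        # composited background (set + shadows)
actor_rgb = np.random.rand(4, 4, 3) * 0.5  # character plate, premultiplied by alpha
actor_a   = np.random.rand(4, 4)           # character alpha (1.0 where the actor is)

# Standard "over": foreground plus background attenuated by the foreground's coverage.
out = actor_rgb + bg_rgb * (1.0 - actor_a[..., None])
```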

Notice, please, that you are not “baking” anything into anything else. The one and only time when you are actually combining these many tracks of isolated information is during the final composite process. The result is very much like “baking,” except it is completely adjustable during the compositing step.

The rule-of-thumb during a composite is that you’re isolating information based on why it’s there, and what’s causing it, not just where it is or what it looks like. (And, through object-ID channels, you’re able to objectively determine that.) Pretty much anything that you might want to “attach a knob to” later, you isolate and identify.
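
In the same toy terms, an object-index pass lets you build a hard mask for “the pixels that are there because of this object,” which you can then attach any knob to; the index value here is invented:

```python
import numpy as np

index_pass = np.array([[0, 0, 7],
                       [0, 7, 7],
                       [0, 0, 0]])            # per-pixel object index from the renderer

actor_mask = (index_pass == 7).astype(float)  # 1.0 wherever object #7 is visible

# e.g. brighten only that object in the final composite:
#   out = out * (1.0 + 0.2 * actor_mask[..., None])
```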

Thanks for the reply. Not sure if I get what you are saying. If I don’t bake any of this, then I have to re-render every layer and every light for every frame. Now, I don’t like shadow-buffer shadows, because unless the scene is small with consistently sized objects, the shadows come out either jagged or lacking definition. Ray-traced shadows with multiple samples give a much better result. Yes, it is possible to bump the size of the shadow buffers up to get the same result, but it soon reaches the same render time as a ray shadow pass.

Any of these take a lot of render time, though, and I wanted to separate the pass that textures and shadows the set, which remains essentially constant, from the characters. However, that gives me the problem of the characters’ shadows falling ONTO the set.

Perhaps you can explain how to save render time on this process.

Thanks,

Matt