A challenge for compositing gurus

I really don’t know if this is possible to do, so let me try to find out.

Currently I am using a lot of compositing/render layers while animating a character. Due to long render times, I am doing the following:

  1. I first render a still backplate image.

  2. My actual scene consists of only the character and any props, all on one layer. This is animated for whatever the scene calls for.

  3. In order to get the ground to accept a shadow from the character, I have a shadow plane… this is not visible in the final render, but it catches the shadows.

  4. The rendered character and shadows are then alpha-overed onto the background plate, all within Blender and all at the same time. I first load my background plate into the node tree, then hit render animation: it renders the character, then the shadows, and the final step alpha-overs the result onto the background plate (a script sketch of this tree follows below).
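If it helps to see that concretely, here is a minimal script sketch of the tree; the backplate path is a placeholder, and it assumes the character layer is the active render layer:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Render Layers node: the character + shadow-catcher layer.
rl = tree.nodes.new('CompositorNodeRLayers')

# Image node holding the pre-rendered still backplate
# (placeholder path -- point it at your own plate).
plate = tree.nodes.new('CompositorNodeImage')
plate.image = bpy.data.images.load("//plates/backplate_01.png")

# Alpha Over: backplate underneath, rendered character on top.
over = tree.nodes.new('CompositorNodeAlphaOver')
tree.links.new(plate.outputs['Image'], over.inputs[1])  # background
tree.links.new(rl.outputs['Image'], over.inputs[2])     # foreground

comp = tree.nodes.new('CompositorNodeComposite')
tree.links.new(over.outputs['Image'], comp.inputs['Image'])
```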

So that all works well and good, provided I am only using one camera, and that camera has to be “locked down” to match the background plate.

The challenge…

What I want to do is use 2 cameras and 2 backplates… let’s say we have 200 frames and the character is animated that whole time.

-I would set up camera 1 in the timeline, add a camera marker at frame 1, then Ctrl+B to bind that camera for frames 1-100

-Camera 2 would then be set up via a marker to take over for frames 101-200
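For reference, the same marker binding can be done by script; a minimal sketch, assuming the two cameras are named “Camera.001” and “Camera.002” (placeholder names):

```python
import bpy

scene = bpy.context.scene

# marker.camera is the scripted equivalent of Ctrl+B
# "Bind Camera to Markers"; camera names are placeholders.
m1 = scene.timeline_markers.new("CAM1", frame=1)
m1.camera = bpy.data.objects["Camera.001"]

m2 = scene.timeline_markers.new("CAM2", frame=101)
m2.camera = bpy.data.objects["Camera.002"]
```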

My problem is that I somehow need a way to tell the compositor to “switch” at exactly frame 101 and begin alpha-overing background plate 2 instead of background plate 1.

Is it possible to do? If so, tell me what I would need to do differently in the compositor.

Run the two images into a color Mix node and keyframe the Fac: 0 on frames 1-100 and 1 on frames 101-200.
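A minimal script sketch of that, assuming the two plates already sit in Image nodes named “Plate1” and “Plate2” (placeholder names); setting the keys to constant interpolation makes the switch a hard cut instead of a one-frame blend:

```python
import bpy

tree = bpy.context.scene.node_tree

# Mix node: Fac = 0 shows the first input, Fac = 1 the second.
mix = tree.nodes.new('CompositorNodeMixRGB')
tree.links.new(tree.nodes["Plate1"].outputs['Image'], mix.inputs[1])
tree.links.new(tree.nodes["Plate2"].outputs['Image'], mix.inputs[2])

# Keyframe the Fac input: plate 1 through frame 100, plate 2 from 101.
fac = mix.inputs['Fac']
fac.default_value = 0.0
fac.keyframe_insert('default_value', frame=100)
fac.default_value = 1.0
fac.keyframe_insert('default_value', frame=101)

# Force constant interpolation so there is no blend between the keys.
for fc in tree.animation_data.action.fcurves:
    for kp in fc.keyframe_points:
        kp.interpolation = 'CONSTANT'
```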

Damn… that simple… here I thought I was really putting out a challenging problem :slight_smile: Thanks Rich

I would suggest that you set up all your camera positions, as separately-named cameras in a master blend-file, and that you “shoot from each one, separately,” without the described use of “camera markers.” Don’t limit yourself in that way. Shoot the identical action from each viewpoint, creating two “strips of film.”

Now, edit, whether you use the VSE, Final Cut, or some other editing software of your choice. You might well decide to trim the shots a little differently when you get to that point.

Also: I suggest that you start by shooting fast “OpenGL Renders” of everything, using the Stamp feature to include file-name, scene-name, camera-name, time and frame-numbers at the bottom. Now, use these for the editing-process thus described. When you are satisfied with the “final cut,” transcribe the shot-list. This tells you exactly what frames need to be rendered. (Add, say, a couple of seconds to either side, if you can afford the time.) You can then start dropping-in the high-res shots straight into the edit … they will, of course, match exactly.
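If you prefer to script that step, here is a minimal sketch of the stamp settings plus the OpenGL render; the output path is a placeholder:

```python
import bpy

scene = bpy.context.scene
rd = scene.render

# Burn file-name, scene, camera, time and frame numbers into each frame.
rd.use_stamp = True
rd.use_stamp_filename = True
rd.use_stamp_scene = True
rd.use_stamp_camera = True
rd.use_stamp_time = True
rd.use_stamp_frame = True

rd.filepath = "//ogl_preview/"  # placeholder output directory

# Fast viewport ("OpenGL") render of the whole animation.
bpy.ops.render.opengl(animation=True)
```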

“Edit first, then shoot.”

If you’re at all like me, you’ll find that your notions of what’s working and what’s not will change considerably “when you see it.” So, get something in front of your eyes as soon as possible.

Finally: don’t throw-away anything. If you want to tweak something, keep both versions. As long as you’re doing the OpenGL route, “film is basically free.”

Get to know Blender’s linking/library system very well, so that you can be certain that every shot is using the same source-data (animations and so-forth), and that if you have to revise anything you only have to revise it once, knowing that the revision will take effect everywhere. Keep a careful daily written log … I use a #2 pencil and a loose-leaf notebook. Yeah, a pencil. And paper. With the messed-up pages “X’d through” but kept.
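As a minimal sketch of that linking idea (the file name and group name are placeholders, and this uses the 2.7x API, where shared assets live in “groups”):

```python
import bpy

# Placeholder path and group name -- substitute your own.
# link=True (rather than append) keeps a single editable copy of the
# source data, so a revision in master.blend shows up in every shot file.
master = "//master.blend"
with bpy.data.libraries.load(master, link=True) as (data_from, data_to):
    data_to.groups = [g for g in data_from.groups if g == "CharacterRig"]
```

After linking, add an instance to the scene (Add > Group Instance); every shot file then points back at the one master copy.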

I would suggest that you set up all your camera positions, as separately-named cameras in a master blend-file, and that you “shoot from each one, separately,” without the described use of “camera markers.” Don’t limit yourself in that way. Shoot the identical action from each viewpoint, creating two “strips of film.”

That would definitely be a better way of doing it, except for one very important thing… render times. Taking this direction would essentially double or triple them (depending on how many camera angles). Currently I am fighting incredibly long render times. If the animated object is far away from the camera, I can sometimes skate by with 2 min per frame, but if the animated items fill up 1/2 or more of the frame I’m looking at 5-13 min per frame. And that’s using a backplate and only rendering moving objects. Then I have to dick around with all the shadow-plane BS, which definitely contributes to the minutes-per-frame noted above.

Your point is definitely correct, and it would give me more flexibility in the editing process for sure. But for this project it just isn’t feasible, so I guess I’m just doing it on the fly as I’m creating the animations.

start by shooting fast “OpenGL Renders” of everything

That is something I really need to explore. Thanks for the recommendations, sundial.

Yah, I think that “render times” are our Universal Bugaboo. (Everyone’s got it in a different flavor, but everybody’s got it, one way or another.) And, of course, I don’t know what might or might not be contributing to your headache. :slight_smile: But it won’t really make a difference to those render-times whether you shoot it “my way” or “yours.” (So to speak …)

The “OpenGL render” suggestion (“edit, then shoot”) will, I think, wind up saving you a lot of time. Especially if you are looking at those times-per-frame (and this for only one layer of the frame …), it’s imperative that you know that a frame will be needed, and exactly where it will wind up, before you produce it. You need to be able to make creative decisions about exactly how the piece will go-together … using footage that is “scrupulously accurate, yet cheap.” OGL can do that.

An OGL render will coincide e-x-a-c-t-l-y with a higher-resolution shot that is produced by any other means from the same setup. But you can produce such a strip in seconds or minutes. You can then “cut it” like you would with conventional film, exploring your options and “beats” without feeling that you had to sacrifice your firstborn to get it. (You’ll sacrifice him later… heh.)

(Furthermore… don’t overlook OpenGL’s power to produce product. Or at least, portions/layers thereof. It’s getting better all the time.)

This is very, very good advice, and thank you for sharing it, sundialsvc4. I think I will pass this on to davida as a possible check before committing to the render farm.

I just put the OpenGL render into action, and I really will be using this a lot now… you can see it on the casino WIP I noted above. My walk cycle is still far from good, but man is it handy for getting timing and flow down in a hurry.

Is there any way to view the output of the OpenGL render sequence in the viewport and see it in real time? For the one I just did, I brought it into After Effects to view it.

You can view it in the Movie Clip Editor.
Remember to set the RAM limit high enough for your frames, then open the sequence, press ‘P’ to “prefetch” (create the RAM preview), and Alt-A to play.
Maximize the Movie Clip Editor window before playback, so no other panels will update.
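Those first two steps can also be scripted; a minimal sketch using the 2.7x names (newer builds renamed user_preferences to preferences), with a placeholder path:

```python
import bpy

# Raise the clip cache so the whole preview fits in RAM (value in MB).
bpy.context.user_preferences.system.memory_cache_limit = 4096

# Load the rendered sequence as a movie clip (placeholder path --
# point it at the first frame of your OpenGL render).
clip = bpy.data.movieclips.load("//ogl_preview/0001.png")
```

With the clip open in the Movie Clip Editor, ‘P’ and Alt-A then work as described above.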

Thanks Bartek, hope you’re doing well. Maybe I could ask you, and some of the compositing gurus, a question… Take a look at this node setup:


Right now I’m doing the following…

  1. Render the backplate at high samples from my “master file”.

  2. The animation I am doing (with a way to catch shadows) is then composited over the backplate in Blender. The shadow-catcher thing is working pretty well, and for the most part I’m catching the shadows and color from the items being animated.

  3. But this setup forces me to actually render the foreground stuff over the backplate to be able to get the shadows.

What I would really rather do is composite the animated stuff with shadows in Blender (output as a PNG sequence with alpha) and then lay that over the backplate in After Effects. I can just do things much faster and more efficiently there, and I also have a lot more filters and plugins at my discretion. While I know many compositing things can be done in Blender, I would just like the flexibility to tweak the background separately from the animated stuff. But the only way to get the shadows in this setup seems to be laying it over the background in Blender.
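For the PNG-with-alpha handoff itself, a File Output node is one way to write the frames; a minimal sketch with a placeholder path (it writes whatever the render layer produces, so it does not by itself solve the shadow question below):

```python
import bpy

tree = bpy.context.scene.node_tree

# The character + shadow-catcher render layer.
rl = tree.nodes.new('CompositorNodeRLayers')

# File Output node writing RGBA PNGs for the After Effects comp
# (placeholder directory).
out = tree.nodes.new('CompositorNodeOutputFile')
out.base_path = "//to_ae/"
out.format.file_format = 'PNG'
out.format.color_mode = 'RGBA'
tree.links.new(rl.outputs['Image'], out.inputs[0])
```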

The shadows, I think, will only look right if they are in multiply transfer mode over the background plate in AE… but then that would not be correct for the other foreground things that are not a shadow.

Is there a different way that perhaps I should explore? Due to render times I am forced to do this backplate setup, and the lack of some kind of shadow catcher in Cycles further complicates things.

What I’m not positive about yet is whether this all works with Cycles…

Indeed, it does. Consider this: in Blender, you have four different, separate ways to go from “an idea” to “a picture”:

  • OpenGL / “Game” Render
  • Blender Internal (BI)
  • Cycles
  • << some external renderer of your choice >>

All of these will be presented with identical geometry and camera-positions, so their outputs ought to be “visually compatible.”

Meanwhile, in any project, there are at least three different “stages of sheer-panic”:

  • “You don’t know, yet, what you are doing.” That is to say, you haven’t decided. At this point, “you want options,” and you want them to be “cheap!” (Cylinders stand-in for figures, cubes stand-in for props, and there’s a lot of hand-waving … but … “everything’s to-scale.”) We need it now.
  • The general strategy has been decided-upon, and now, “you want to zero-in on the particulars.” (The models are finished, the camera positions are decided-upon, we’ve done the final-cut edit, and we’re finalizing the shot-list.) It must coincide exactly with the final shot … but meanwhile … we still need it now.
  • You know exactly what you want … you’ve got the results of step #2 … and “now it’s a matter of grinding through the hours,” while making sure that all of the shots are visually-compatible when they are all eventually viewed together. We’re ready to pay our duesZzzz…, provided(!) that we must only pay them once.

In the first and second stages of that “panic,” OpenGL “preview renders” are hugely helpful, because they can quickly give you accurate renditions of “what the camera will see,” without taking hours or minutes per-frame to do so. The math is identical … the inputs are the models, the camera-positions … only the level-of-detail is reduced.

OGL outputs are “comparatively cheap-enough” that you can afford to “leave film on the cutting-room floor” while doing even a very-detailed “final cut” of the presentation. (You can “shoot a few seconds either side” and not scream in pain…) Thus, “when it finally comes to that,” you can grind through the computationally-grueling third stage of “OMG cpu-smoking renders,” more-or-less confident that you aren’t doing any more work than you absolutely must. You’ve already done your homework. You know where every single one of those frames will go, and that you will use them.

During stages 1 and 2, human imagination is a great asset. During the first stage, we see “enough,” soon “enough,” to keep the creative juices flowing. During the second, when things have gotten down to brass-tacks, we can actually make enough detailed decisions, based just on what we see, to give us confidence that … when we toss those “okay, now we gotta wait 18 hours per-frame and hope for the best” dice … we’re gonna get those “lucky sevens.”