Combining renders in the compositor

I’d like to combine two renders in the compositor. Is that possible? My goal is to render certain layers within my scene, then render some additional layers, and then combine them in the compositor. I’ll be rendering from a script, if that makes any difference.

Anything approximating the described workflow might work.

If I can’t do it in the compositor then my backup plan is to do it outside of Blender. I can get a z-map out of Blender, right?

First of all, yes, the RenderLayer node can be pointed to any layer in any scene within the current .blend. Also, if you need something else, or you just don’t want to re-render when updating the compositor, the Image node can be used to ingest pretty much anything from disk.
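For the script-driven case, here is a minimal sketch of wiring those two nodes together (Blender 2.7x Python API; this only runs inside Blender, and the scene name, layer name, and file path are placeholders):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True          # enable the compositor for this scene
tree = scene.node_tree
tree.nodes.clear()

# Render Layers node, pointed at any scene/layer in the current .blend
rl = tree.nodes.new('CompositorNodeRLayers')
rl.scene = bpy.data.scenes['Scene']   # placeholder scene name
rl.layer = 'RenderLayer'              # placeholder render layer name

# Image node, ingesting a previously saved render from disk
img = tree.nodes.new('CompositorNodeImage')
img.image = bpy.data.images.load('/tmp/background.exr')  # placeholder path

# Alpha Over to combine the two, feeding the Composite output
mix = tree.nodes.new('CompositorNodeAlphaOver')
comp = tree.nodes.new('CompositorNodeComposite')
tree.links.new(img.outputs['Image'], mix.inputs[1])  # background
tree.links.new(rl.outputs['Image'], mix.inputs[2])   # foreground
tree.links.new(mix.outputs['Image'], comp.inputs['Image'])
```

The same node graph can of course be built by hand in the Node Editor; the script is just the equivalent of dragging those three nodes out and connecting them.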

I would recommend rendering, then saving the result as a MultiLayer EXR; that way, if Blender crashes, or something else happens that causes you to lose the renders, you can simply re-open the file.

Watch this tutorial:

It looks like monster spaghetti at first, but it's not that hard, and it gives you multiple possibilities once you know how to use it.
I really recommend learning it.

Three answers, each helpful. Thanks!

I learned a few things from the CGCookie tutorial, including some which weren’t the focus of the lesson. However, near the end, when he created the second scene with which to perform the renders, I got confused. Can anyone explain that to me? I’m thinking that was an important step.

And he obviously knows the compositor better than I do, so I’m guessing there must be a reason he spent so much time arranging his nodes as he did, rather than using node groups, but I can’t understand why. I found all of the switches used only for labelling purposes quite distracting. And when he connected two identical, single-input switches in series…

I’ve defined two render layers, and have set up the compositor to combine the two. If I select one render layer, render it, and then select the other render layer and render it, the result is the composite of the two. That’s great. However, since I’m rendering from a Python script, I can’t sequentially select each render layer. I thought that the second-scene technique shown in the CGCookie tutorial might work for me, but it doesn’t, and I’m still confused by that whole thing. At 49:04 he creates a second scene, removes everything from it, and then presses F12, and the first scene renders. I just don’t understand that, and when I tried it myself it did exactly what I would have expected it to do: nothing. If F12 renders the active scene, and ‘COMP’ is his active scene, then why does the first scene render?
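For the script side of this, one workaround is to toggle each render layer's `use` flag in turn and write each pass to its own file, then let the compositor's Image nodes pick those files up. A sketch against the Blender 2.7x Python API (run it headless with `blender -b file.blend -P script.py`; the output path is a placeholder):

```python
import bpy

scene = bpy.context.scene

# Render each render layer on its own by enabling only that layer,
# saving the result to a separate file per layer.
for target in scene.render.layers:
    for layer in scene.render.layers:
        layer.use = (layer == target)   # enable only the current layer
    scene.render.filepath = '/tmp/%s' % target.name  # placeholder path
    bpy.ops.render.render(write_still=True)
```

Alternatively, leaving all layers enabled and calling `bpy.ops.render.render()` once should render every enabled layer and run the compositor over them in a single pass, which may be all you need.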

I do this a lot… so here goes…

Set up your project, first of all, by planning the whole thing out ahead of time using the “OpenGL render” features of the 3D window. Decide exactly what you want to do, what your camera angles and so-on are going to be, using these fast and cheap methods. Then, do a “shot breakdown.”

Fair warning: there are no hard-and-fast rules for this. Quite literally, you are now starting to engage in a bit of “computer programming.” You’re figuring out, if I may cautiously advance this notion, what streams of data need to blend with one another in order to produce the “combination of beautifully colored pixels” that you have in mind … and what is the cheapest, fastest way to get them.

You need to set up your “OpenGL renders” quite precisely, with separate cameras for each shot and a “scene” for each shot. You should generate these renders (with the “stamp” tool enabled and all the goodies turned on), and I suggest that you actually do a rough cut of your show using them. From this point forward, you do not want anyone (yourself, your spouse, your kid, or your client) to say, “well, what if we did this instead?”

You’ll get really good at record-keeping. You break down each shot based on whether it will move or not; whether you may need to adjust it; and whether it’s expensive to produce by itself. You’ll build either a complete blend-file or a scene for each one. Use the RenderLayer feature to isolate each set of variables (I told you this was programming!): “combined,” “z,” “alpha,” “velocity,” whatever you have determined you will need.

Output each piece to MultiLayer OpenEXR files. This is a high-dynamic-range digital file format, originally devised by Industrial Light & Magic and publicly extended by the Blender(!) Foundation, designed expressly for this purpose. None of the digital data will be lost; any compression applied is lossless. It will all be there, identified by its RenderLayer name, and the Image node in Blender knows how to use it.
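If you're driving this from a script, the MultiLayer output settings can be set along these lines (a sketch against the Blender 2.7x Python API; it only runs inside Blender, and the output path is a placeholder):

```python
import bpy

rd = bpy.context.scene.render
rd.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
rd.image_settings.color_depth = '32'   # full-float, nothing clipped
rd.image_settings.exr_codec = 'ZIP'    # lossless compression
rd.filepath = '/tmp/shot010_'          # placeholder output path
bpy.ops.render.render(animation=False, write_still=True)
```

Every enabled pass on every enabled RenderLayer lands in that one file, keyed by layer and pass name, ready for the Image node to consume later.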

Bottom line: in exchange for a bit of planning and strategizing, and a heck of a lot of record-keeping, you (a) save days of rendering time, and (b) have real choices for “finessing your shot” in the compositing and editing stages.

And here’s why: in the real world of deadlines, customers, etc., we have to treat “render time” as an investment. We’ve got to “get it right the first time” … and in order to do that, we’ve got to make “getting it right the first time” a multi-stage process. Exactly the same concept is used in multi-track music recording: the raw material is first planned, then captured (“recorded,” “rendered”), then assembled into a product … doing so in such a way that the people who are sitting there in the mixing-room doing that assembly step have the maximum number of realistic options at their disposal without “calling in the talent ($$)” to do yet another ($$) take.

Plan. Accordingly.

Heck… even for “your own, this-is-supposed-to-be-fun” projects, you want to be efficient! Yes, this level of planning pays off. “Work smarter.”