Compositing and Layers (UGH!)

I’m trying to wrap my head around how to use the compositing system in Blender. My scene is rather simple: it has a planet and a skydome. In the compositor, the planet has several adjustment nodes to increase blurriness, adjust RGB, etc. I’d like the background/skydome to use different adjustments, but it’s all getting fed into the same Render Layer as the planet.

–How do I separate the background to make adjustments to it, but still have it feed into the final render? I thought putting it into its own render layer would help, but I’m not sure what’s happening…

–When I place the skydome on a separate layer, even though I have all the layers selected, only the planet will render. I can only get the skydome to render with the planet if it resides in the same layer, but then it gets all the same adjustments, so…

–How can I separate my background from the other elements so it doesn’t get affected by all the nodes that are affecting the planet (Blur, RGB Curves, etc.), but still make it into the final render?

Thanks much,
~Chaz

Got it. I have discovered that creating separate ‘Render Layers’ and combining them with an ‘Alpha Over’ node… cue <angelic chorus>… does the trick.
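
For anyone searching later, here’s roughly that setup as a script — just a sketch, assuming a 2.6x-era Python API and two render layers named ‘Planet’ and ‘Skydome’ (placeholder names) already created in the Render Layers panel:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True              # enable the compositor node tree
tree = scene.node_tree
tree.nodes.clear()

# One Render Layers node per render layer
rl_planet = tree.nodes.new(type='CompositorNodeRLayers')
rl_planet.layer = 'Planet'
rl_sky = tree.nodes.new(type='CompositorNodeRLayers')
rl_sky.layer = 'Skydome'

# Planet-only adjustments go here (a Blur node, as an example),
# leaving the skydome untouched
blur = tree.nodes.new(type='CompositorNodeBlur')
tree.links.new(rl_planet.outputs['Image'], blur.inputs['Image'])

# Alpha Over: skydome in the first image socket (bottom),
# adjusted planet in the second (top)
over = tree.nodes.new(type='CompositorNodeAlphaOver')
tree.links.new(rl_sky.outputs['Image'], over.inputs[1])
tree.links.new(blur.outputs['Image'], over.inputs[2])

comp = tree.nodes.new(type='CompositorNodeComposite')
tree.links.new(over.outputs['Image'], comp.inputs['Image'])
```

The planet layer needs an alpha channel for Alpha Over to have any effect, which is exactly why the skydome has to be excluded from the planet’s render layer.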

Compositing is always working in layers, though a node-based compositor like Blender’s visualizes it differently than a layer-based compositor like After Effects… But they do the same thing; it’s always layers upon layers in compositing.

So what you need to do is go through it time and time again until your brain gets how the nodes represent layers. And then, when your brain is on the right track, it’s just about doing it. And I believe the easiest way to teach your brain how it works is tutorials; most of the later ones at BlenderGuru and the like use the compositor towards the end of the tut, so just watch it & then watch it again until you get it.

And I’m a layer kinda guy, I came from Photoshop through After Effects to Nuke. Even though I had some experience working with nodes in Maya, my first project in Nuke was hell, just wrapping my head around the visualization, mentally converting my layer-based problem-solving skills to thinking in nodes… And, to be honest, I still instinctively understand layers better than nodes, but you can do more complex stuff with nodes, no doubt about it…

Edit: I had to take a call, so slooow response and you got started already, hehe… But yes. It’s Alpha Over and the Mix nodes for blending stuff on top of each other, and from there it’s just a matter of tackling each problem as you run into it… :smiley:

Thanks Farmfield, I actually don’t have a problem working with nodes since I’ve used Maya and Houdini for quite some time. My problem was breaking things apart so I could affect single objects. As per my previous post, I poked around for a couple of hours and posted what I thought were good results… not so much. It works, but instead of using render layers I created an extra scene inside my file… so now I’m trying to figure out how to get the objects of the separate scene into just one.

If you know how, I’d love to hear it, since I have yet to be successful at importing a model from another blend file into this one.
Thanks again.
~Chaz

Houdini. Nodegroups with nodes containing nodegroups. It’s like the onion discussion in the first Shrek movie, hehe - I don’t do that. ;D

And if your problem is rendering out scenes for use in the compositor, I think the easiest way is to render them out to images and then use Input > Image nodes in the compositor rather than doing it through render layers… That’s IMO a better workflow anyway, doing modelling/rendering and then compositing as if they were two different apps; this all-in-one thing is nice until Blender crashes… :wink:
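
Something like this, scripted — a rough sketch with invented scene names and output paths:

```python
import bpy

# Step 1: render the background scene out to an image on disk
bg = bpy.data.scenes['Skydome Scene']        # assumed scene name
bg.render.filepath = '/tmp/skydome.png'      # hypothetical path
bpy.ops.render.render(write_still=True, scene=bg.name)

# Step 2: in the main scene's compositor, bring it back in
# through an Input > Image node
main = bpy.data.scenes['Main Scene']
main.use_nodes = True
img = main.node_tree.nodes.new(type='CompositorNodeImage')
img.image = bpy.data.images.load('/tmp/skydome.png')
```

For animations you’d render an image sequence (or a MultiLayer EXR per frame) instead of a single still, but the principle is the same.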

But otherwise, you can choose a scene as well as a layer in the Render Layers node, but I think you gotta render the scenes separately - and that’s where I propose rendering to images and importing them. Rendering the scenes separately feels a bit too dodgy for me - I feel there is potential for losing work/time, and I don’t like that feeling. :slight_smile:

Agreed, which is why I’m trying to fix it. Plus, I’d really like to get my head wrapped around the way Blender is designed to function… just so I know what’s supposed to be happening in case there’s a need to isolate an issue. Obviously it’s harder to reverse engineer or troubleshoot a workaround.

My wife also happens to be a game developer and sat with me this morning to figure out why the skydome was interfering with the lighting of the planet in my scene. In a test scene we successfully got it to work by disabling ‘Traceable’ in the shader options. Hopefully this works in my final scene… we’ll see.
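
(In case anyone wants to script that toggle: as far as I can tell, ‘Traceable’ in the BI material Options panel maps to Material.use_raytrace in the Python API, so the same fix would be roughly:

```python
import bpy

# 'Skydome' is a placeholder material name
bpy.data.materials['Skydome'].use_raytrace = False
```

…which keeps the skydome out of raytracing calculations entirely.)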

Nice chatting with you, Farmfield, and I’ll post the results should it work.

I love Blender, but it wasn’t designed in one go; it’s more of a mish-mash of functions, and even though it’s pretty complete, there’s seldom an obvious workflow… But that’s OK for me. When I started out, years ago, like in Photoshop before there were layers, it was all workarounds, problem solving. I like that, it keeps it interesting. ;D

Well, good luck and hope you get it all worked out. :slight_smile:

If you create a Link between an object in one Scene and a different Scene, that object can then be made a local object, or simply remain a linked object. See the sub-menu items under the Object menu in the 3D View, in Object Mode.
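
If you prefer to do it from the console, here’s a sketch of the same thing (scene names are placeholders; both operators act on the selected objects):

```python
import bpy

# Link the selected objects into another Scene
# (same as Ctrl+L > Objects to Scene in the 3D View)
bpy.ops.object.make_links_scene(scene='Main Scene')

# Then, after switching to that Scene, optionally make them local
bpy.context.screen.scene = bpy.data.scenes['Main Scene']
bpy.ops.object.make_local(type='SELECT_OBJECT')
```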

In the Compositor, a Render Layers node can reference another Scene and its Layers just as the current Scene can be accessed. Just above the drop-down list where you specify a Render Layer for the node is a field for selecting the Scene to be accessed.
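
In script form, that Scene field is just a property on the node — a minimal sketch with assumed scene/layer names:

```python
import bpy

main = bpy.data.scenes['Main Scene']
main.use_nodes = True
rl = main.node_tree.nodes.new(type='CompositorNodeRLayers')
rl.scene = bpy.data.scenes['Aux Scene']   # the Scene selector field
rl.layer = 'RenderLayer'                  # one of Aux Scene's render layers
```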

But how does that work when rendering? You need to render the scenes one by one, right? That’s what I don’t like; I’d rather render them separately as images and then re-import them into the compositor, thus not needing to re-render the first scene if something goes wrong when rendering scene 2…

Also, I personally think it’s a more logical workflow: modelling/lighting/rendering as a separate first step and then compositing as a second step. Even though it’s easy to do it in one step in Blender, in all other cases/other software you do this as two steps - and/or even with different people doing them, kinda depending on how you work…

This basically is the workflow that I use … although, yes, Blender does not (yet) have an all-in-one, soup to nuts “big picture view” of that. You have to do a lot of things “by hand,” but the general strategy is this:

  • Decide what the whole thing’s going to look like, doing quick “preview” renders to make, in essence, animatics.
  • Break down each shot into its separately-handled components. Develop a library of common objects: props, materials, color swatches, sets, cameras, and so on. These will all be linked assets.
  • Make blend-files, perhaps having related Scenes (although I tend not to do that anymore), to produce each piece. Use the MultiLayer file format for everything (the render step is sketched just after this list).
  • Create separate blend-files which use those MultiLayer files as inputs to produce other MultiLayer files as outputs.
  • I actually employ Unix makefiles to express the inter-dependencies within the project. I would love to have a “master project view” tool that would show me the big-picture and let me centrally manage the overall work and data flows.
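
As a rough illustration of the render step those makefiles drive — a sketch only, with file and path names invented — each rule boils down to a background render that writes MultiLayer output, something like blender -b shot.blend -P batch_render.py with:

```python
# batch_render.py
import bpy

for scene in bpy.data.scenes:
    scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
    scene.render.filepath = '//renders/%s_frame_' % scene.name  # blend-relative path
    bpy.ops.render.render(animation=True, scene=scene.name)
```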

@ Farmfield – yeah, I also find it best to find ways to composite in post-rendering, but the OP’s question (and the answer) was how to get objects from one Scene into another, so the answer was limited to that.

That being said, there are also instances where interactivity between Scene elements demands that all compositing take place “in camera” so to speak, something nearly impossible to do with film technology but well within digital capability.

To illustrate: reading sundials’ recipe (which is a very good one!), I often wish I could work that way in all instances, but very often I compose “en scène,” with only a few major landmarks in a piece pre-envisioned. So I’m constantly modifying lighting setups, animation timing points, action details, even materials as I go from frame one to frame end. This wouldn’t work well for team projects or pieces of substantial length (greater than a few minutes), but it does let me be more spontaneous when developing shorter works.

@chipmasque

You did not answer my question, though. When using multiple scenes, do you render them separately, or can you set it up to render in one go? I don’t have the time to set it up & test just now; it would be great to hear how you do that, workflow-wise, compositing layers from separate scenes without saving the output. :slight_smile:

About doing comp as a post-render thing, that’s kinda what Blender does worst. The compositor is designed to be used internally more than with external files; you see that in not having a proper scene/output setup except the bottom plate, and so on… I hope they ‘fix’ that during Mango, as this is more important when doing VFX than when you just do stills, arch viz and all that…
I really miss having a fixed workspace when compositing.

OK, here’s how I use separate Scenes in the Compositor. In the Main Scene (a label of convenience), I set up any Render Layers needed per usual. In a separate Scene, call it Aux Scene, I set up any Render Layers needed to achieve whatever effect I have going on there. Then in the Compositor for Main Scene, I use a Render Layer Node but set it to access the Aux Scene’s Render Layers, per the field I mentioned. Then it gets put into the noodle as needed for single-pass rendering.

One example benefit is that Aux Scene can use a completely different camera & lighting setup than Main scene. I used this to do the 2D mock-HUD/interface for portions of the Quantum Information Processing vids I did for Oxford a few years back. Because I was using a render farm to meet the deadline, I couldn’t afford the time to do separate renders and post-composites, so the overlay was given its own scene, graphics animation and an ortho camera, then compo’ed over the main scene, all in a single pass.
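
(For what it’s worth, the per-Scene camera is also just a property, so a rig like that can be set up in a couple of lines — the object name here is invented:

```python
import bpy

aux = bpy.data.scenes['Aux Scene']
aux.camera = bpy.data.objects['HUD_Cam']         # each Scene has its own active camera
bpy.data.objects['HUD_Cam'].data.type = 'ORTHO'  # orthographic for the flat overlay
```

Lighting works the same way: lamps only affect the Scene they’re linked into, so the two setups never interfere.)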

Another use I’ve put dual-Scene setups to is mock rostrum-style multiplane background animations. Again using an ortho camera, background plates processed to move differentially to create mock parallax were used as the BG for perspective model work in the main Scene. Works well, looks cool.

As far as Blender’s post-render compo work, it all depends on what you want to do with it. I’ve managed to find ways to do just about anything I needed, including creating oversized workspaces for creating split-screen FX, etc. I won’t say the possibilities are endless but I find it a very versatile tool in many scenarios, both single-pass and post.

So, when you render, Blender will render all scenes in one pass? For me it’s about duplicating scenes, rendering the same scene with BI and Cycles, and then compositing it all. But if I understand it right, if I re-render, I re-render both scenes then; I cannot choose to render the BI scene only?

And a big thanks, it’s nice just not needing to start from scratch, experimenting blindly… :slight_smile:

You can’t use two rendering pipelines simultaneously, methinks that’s asking a bit much from any 3D package. While different cameras can be used in different Scenes, they both have to feed into the same rendering engine in a single pass. At least, that’s how I understand it. I really haven’t done much with Cycles since it’s still immature in terms of the kind of capabilities BI offers, plus most of my work so far isn’t going to benefit hugely from a “photorealistic” approach to the camera. I prefer a more stylized look to even what some might call my “realistic” work. That can change any time, of course, so I’ll start working with Cycles soon in order to get up to speed for when it’s matured fully.

So then it’s render to image and re-import, like I thought, hehe… But I should have clarified that I was intending to split between Cycles and BI. Sorry.

It’s the opposite for me. I do 90% arch viz, and Cycles is perfect from a lighting standpoint, but as Cycles is buggy with reflections and such, I’m thinking of a workflow combining the best of both worlds. I do this today using the Maya scanline renderer and V-Ray, compositing in AE or PS depending on the output.

But what I probably need to do is take a project and do it in Blender fully. I’m not in a panic, though; for me the question is whether I’m to switch from Maya to Blender during this year - and there are still a few months left of it. :wink:

I’d be interested in the results of comp’ing BI + Cycles plates. The cameras seem so disparate, I wonder if there will be any differences in field of view and image-size-per-magnification issues, given identical camera specs. A lot depends on the camera algorithms of the two engines. Back in 2.49 and earlier, the default camera implementation was not very reliable for matching real-world camera shots, but I don’t know how much that changed for BI in 2.5x and above, and in particular how it might have changed for Cycles, which seems to attempt a much closer sim of a physical camera system.