Realistic Renders only with Compositing/Post Processing?

I’m mostly a 3D modeler and renderer, but I have little experience with compositing. What exactly can rendering alone (in Cycles) never achieve that only compositing/post-processing can deliver for a realistic output?

I’m aware that Bloom is (maybe?) one of them, any others?

There are no nevers; it is just not efficient to render everything. Some things are a lot easier to do in comp than in render, due to either the amount of work it takes to set everything up in the render or the time it takes to render.

Blooms, glares, lens flares, vignettes, you name it, can all be rendered, at least in theory. But in practice it will be difficult and very time-consuming. The dividing line is usually easy to draw when you think along the lines of “will making a change in aspect X take more time in re-rendering than doing it in comp?” Usually it all comes down to how fast you can make changes, because changes there will be.

Just off the top of my head, some things that are more efficiently done in comp:

  • color changes, both light, objects, overall tonality, you name it;
  • changing different elements. Want to swap the character animation without re-rendering the whole scene?
  • atmospherics and optical effects: blooms, flares, vignettes, light wraps, fog;
  • blurs, all kinds of blurs. Lens blur, motion blur etc. There are downsides to it though;
  • chromatic aberrations, lens dirt and other defects;

And the list goes on and on.
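To make the “bloom in comp” case concrete, here is a minimal sketch of what a bloom/glare pass does in post: isolate the pixels above a brightness threshold, blur them, and add the result back over the frame. The function name, threshold, strength, and radius values are illustrative assumptions, not anything from Blender’s actual Glare node; the box blur stands in for a proper Gaussian.

```python
import numpy as np

def bloom(img, threshold=0.8, strength=0.5, radius=2):
    """Naive bloom: isolate bright pixels, blur them, add back.

    `img` is an (H, W) float array in linear light. All parameter
    values here are illustrative, not canon.
    """
    bright = np.where(img > threshold, img - threshold, 0.0)
    # Separable box blur as a cheap stand-in for a Gaussian blur.
    blurred = bright.copy()
    for axis in (0, 1):
        acc = np.zeros_like(blurred)
        for shift in range(-radius, radius + 1):
            acc += np.roll(blurred, shift, axis=axis)
        blurred = acc / (2 * radius + 1)
    return img + strength * blurred

# A single hot pixel in a dark frame gains a soft halo around it:
frame = np.zeros((9, 9))
frame[4, 4] = 2.0
out = bloom(frame)
```

The point of the exercise: tweaking `threshold` or `strength` here is a sub-second operation, while changing the same look in-render means another full render.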


The great photographer Ansel Adams once famously said that “An image is captured in the camera, but made in the darkroom.”

Although the technology has now completely changed, the principle has not. Cycles in particular will behave as “a very literally accurate camera” in capturing an image, yet produce a bland and unattractive photograph … for much the same reason that a (say, non-film) camera would produce the same thing in a similar real-world situation.

(There’s really no such thing as “photo”-realistic … or, a “realistic” photo.)

The easiest way to turn that raw material into an image that is both “suitable and compelling” for any given delivery-target is to use post-processing. You can precisely and efficiently adjust the material, using very rapid processes that can be made aware of factors such as object-ID and Z-depth … something a darkroom jockey (like me) could never do.
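As one concrete example of a depth-aware adjustment: given a render’s Z-depth pass, you can blend in atmospheric fog after the fact. The sketch below uses classic exponential fog; the function name, `fog_color`, and `density` values are assumptions for illustration, not a specific Blender node.

```python
import numpy as np

def add_fog(color, z, fog_color=0.7, density=0.02):
    """Blend a fog colour over a render using its Z-depth pass.

    Exponential fog: the farther a pixel is from the camera, the more
    the fog colour dominates. Parameter values are illustrative.
    """
    f = 1.0 - np.exp(-density * z)   # 0 near the camera, approaching 1 far away
    return color * (1.0 - f) + fog_color * f

# Near pixels keep their colour; far pixels fade toward the fog:
color = np.array([1.0, 1.0, 1.0])       # a white surface at three depths
z = np.array([1.0, 50.0, 500.0])        # hypothetical Z-depth values
out = add_fog(color, z)
```

Because the fog is computed from a stored pass, re-grading its density is instant, whereas changing volumetrics in-render means re-rendering.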

Time also continues to be a big factor. A single render of a single frame might take dozens of hours – and if the entire frame didn’t turn out just right … what? … you just have to do it all over again? A more efficient process is needed: one that we can control in order to meet deadlines. One very important way to do that is to break down the scene into components, and to “layer” multiple effects-passes, each perhaps calculated separately or using parallel hardware. Instead of attempting to render “a scene” … (“Shazam!” Ooooh! Ahhh! Okay, it’s still not right, but we’ve run out of time.) … we render components of it for later digital “darkroom work.”
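The “layering” idea works because light is additive in linear space: the beauty image is (to a first approximation) just the sum of the separate light passes, so any one pass can be re-graded or swapped without touching the others. The sketch below is a simplified illustration with made-up pixel values; real Cycles passes also separate colour from direct/indirect light, but the additive principle is the same.

```python
import numpy as np

# Hypothetical per-pixel light passes for a 2x2 render, in linear light.
diffuse  = np.array([[0.4, 0.2], [0.1, 0.3]])
glossy   = np.array([[0.1, 0.5], [0.0, 0.2]])
emission = np.array([[0.0, 0.0], [0.8, 0.0]])

# Summing the passes reconstructs the combined image.
combined = diffuse + glossy + emission

# Boost only the glossy contribution -- no re-render needed:
regraded = diffuse + 1.5 * glossy + emission
```

This is exactly the kind of “digital darkroom work” described above: each pass may have been rendered separately, possibly on different hardware, and the final look is decided in comp.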

And then there’s the pragmatic observation that Blender’s various available rendering algorithms are different – very different – and yet complementary to each other. BI is a scanline renderer with ray-tracing features, while Cycles is a path-tracer that stochastically traces many light-paths per pixel from the camera into the scene. (And there are more options beyond these “big two.”) In Blender, you can use these technologies in tandem, then take it all the way to the finish-line without ever once leaving Blender.

Blender’s now thoroughly node-based architecture gives you all the control that you could possibly need, and does so at every turn. This, once again, can save time as well as offer much more artistic control over the final result.

I had to do a DVD bumper logo at my day job, which consisted of a spinning bar that reveals the logo, all in 3D. Compositing made the reveal easier than animating the mesh to appear in the render.

With regard to my previous post, I want to repeat that you don’t use multiple render-engines at the same time, nor do you typically render “the entire scene” or even plan to do any such thing.

You use a series of renders, putting all of them into MultiLayer OpenEXR files, and engineering each one to be numerically compatible with the rest.

In one of the very-first (conventional …) photo workshops that I ever attended, the great Jack Dykinga said, “look at the light.” He kept saying that, over and over, until I finally understood what he meant. :yes: Don’t look at the scene that you’re trying to photograph: look at the light, because that’s what you are actually going to (try to …) capture, and then eventually project or print.

I naturally gravitated towards “darkroom work,” and still have a spacious darkroom in my house where I love to go and play with exotic, dangerous chemicals. :wink: I visited the Center for Creative Photography (CCP) in Tucson, Arizona and got to hold (a copy of …) Ansel Adams’s original negative for Moonrise, Hernandez, and to compare a contact-print made from that negative to Ansel’s subsequent published work. The little lights kept turning on in my head, one by one by one.

I read everything that I could get my hands on by – or about – photographer O. Winston Link, who captured the “steam age” of railroading before it vanished. And who did so with thousands of flashbulbs. (“You only get one shot” at “a shot like this.”)

I assisted in (schlepped gear on …) a photo-shoot which captured the beautiful, sun-lit interior of a hotel … at two o’clock in the morning on a moonless night. I watched pros shoot photos of everything from coffee-cups to chrome. (Even today, there are specialists who shoot nothing but chrome, and automobiles. One of 'em quipped, “I specialize in visual sex.”)

When that happens to you enough times, words like “photo-realistic” fall by the wayside.

I rather suspect that many (Cycles …) renders of “real world scenes” seem drab and flat, and certainly don’t match the pictures in magazines, because Cycles is giving them perfect-truth (sic) of the setup that they used, while their reference photograph was not made that way. The renderer gave you precisely what you asked for – which is not what you were looking for.