I suggest you read the chapters on Advanced Nodes and on the Nonlinear Video Editor in Roger Wickes’ book, “Foundation Blender Compositing.”
Current versions of Blender provide all the tools you need to quantify your assessment of an image, e.g. the Vectorscope, Luma waveform, and Histogram, and to make appropriate adjustments based on those readings.
The key factors, I think, are quantification and the consistency that follows from it. You want to be able to measure those colors, those distributions of values and so on, then make changes and re-measure the results. You can establish a baseline, numerically, and then calibrate the various shots to it, especially during the compositing steps of the workflow. (That is to say, you can adjust “the pieces of the picture” before, during, and after “assembly.”)
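Just to make that “measure, adjust, re-measure” loop concrete, here’s a tiny sketch in Python. The pixel data and the particular statistic (mean Rec. 709 luma) are my own illustrative choices, not anything Blender hands you directly; in practice the scopes give you this kind of number visually.

```python
# Sketch of "quantify, then calibrate to a baseline."
# The pixel lists below are made-up stand-ins for real frame data.

def mean_luma(pixels):
    """Average Rec. 709 luma over a list of (r, g, b) tuples in 0..1."""
    return sum(0.2126 * r + 0.7152 * g + 0.0722 * b
               for r, g, b in pixels) / len(pixels)

baseline_shot = [(0.8, 0.7, 0.6), (0.5, 0.5, 0.5), (0.2, 0.3, 0.4)]
new_shot      = [(0.6, 0.5, 0.4), (0.4, 0.4, 0.4), (0.1, 0.2, 0.3)]

baseline = mean_luma(baseline_shot)
drift = mean_luma(new_shot) - baseline
# Negative drift = the new shot reads darker than the baseline,
# so you'd lift it in the compositor and measure again.
```

The point isn’t this particular statistic; it’s that a number, unlike an eyeball, gives you something you can match shot after shot.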
These tools give you the means to describe, mathematically, things that folks like Ansel Adams described more informally, i.e. his “Zone System.” (He knew all about densitometry curves of black and white photographic film, but described them in layman’s terms to great effect and fame.) This numerically-based point of view allows you to relate these numbers to, for example, the characteristics of a standard LCD display, or a Pantone color profile, or whatever it may be that you need to match.
“Eyeball it?” Sorry, not my eyeballs, which are going to mess up red-and-green somewhat. And the more you look at any image, the more your eyes get tired. (There’s a famous optical illusion based on that.) A photocell, or its digital equivalent, the computer, never gets tired or goes out of calibration.
Although Blender does not provide anything “automatic,” it could be argued that “automatic” isn’t really possible anyhow. It’s a “crutch,” at best. The controls are easy to use and there are only a few of them anyway. When you’re looking at the right data presented in the right way (as Blender does), it’s just a matter of a little practice.
Furthermore… it’s 100% applicable to every image-handling situation you might need to deal with: CG or not, video or Game Boy, printers, and even photographic film and paper. The principles and the physics are “always the same, only different.”
Working in a linear color space, i.e. with “color management” turned on in Blender 2.5x, is also fundamentally important. If you are, for example, dealing with a JPEG image from a digital camera, you need to remove the (mathematically known…) effect of the gamma curve that the camera applied to it, do all of your manipulations in a simple linear mathematical space, and then re-apply the desired gamma on output. All of which Blender 2.5x will do, more or less, for you. (You just have to understand what’s going on, and know what and where the relevant settings are, to make sure it’s all being done as you intended.)
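For the curious, that decode-work-reencode round trip looks something like this. This is just a sketch of the standard sRGB transfer functions (IEC 61966-2-1), not Blender’s internal code; Blender’s color management does the equivalent for you behind the scenes.

```python
# Sketch: remove the sRGB gamma curve, work in linear light, re-encode.

def srgb_to_linear(c):
    """Decode an sRGB-encoded channel value (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Re-apply the sRGB gamma curve to a linear value (0..1)."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Operations like "halve the brightness" only behave correctly on the
# linear value, not on the camera's gamma-encoded one:
encoded = 0.5
linear = srgb_to_linear(encoded)
half_brightness = linear_to_srgb(linear * 0.5)
```

Notice that `half_brightness` comes out well above 0.25: halving linear light does not halve the encoded value, which is exactly why doing math on un-linearized pixels gives wrong-looking results.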
(And if all that I’ve just spouted is making you say … :eek: “WTF?!” :eek: … relax. It’s actually easy and common-sense, albeit unfamiliar at first. And it’s all extremely well-documented on the Internet, and in Roger’s fine book.)
(Whew! How I do ramble-on sometimes …)