Match color correction for Blender?

Hello,

I’m looking for a tool that does this kind of correction; see the example image:
http://www.yana-studio.it/up2/intuitive_match.jpg
First image: the original video.
Second image: the image whose colors I’d like to copy.
Third image: the final video, converted to match the colors “captured”/“snapped” from the second image.

Sorry, I don’t know the correct name for this technique… :confused: Automatic correction?? Or match color?? Or color adjustment??? I don’t know… but I mean snapping the colors from an image or video and applying them to fix another video. I’m not good at explaining,
but have a look at the image at the top, so you can understand it?

Or is there an add-on for Blender that does this kind of correction?

Blender doesn’t have an automatic color-correction button, but you could probably do it manually in less than a minute.

Moved from “General Forums > Blender and CG Discussions” to “Support > Compositing and Post Processing”

I think I would remap the tonal range with a gradient, using the color picker to sample the black and white points from the reference. Then use the Mix node to alter the colors.

EDIT:
Tried all sorts of mix types and black/white remaps, but no dice. Sorry. Looks like eyeball color correction is the easiest way to go.

If you use the compositor, you can save the noodles and re-use them with minor tweaking. Minor tweaking will always be required.

If you’re using the sequencer, the quick fix is to use an adjustment layer to grade a clip and duplicate that to the rest of the clips. Again, it is highly likely that minor tweaking will be required.

I suggest you read the chapters on Advanced Nodes and on the Nonlinear Video Editor in Roger Wickes’ book, “Foundation Blender Compositing.”

Current versions of Blender provide all the tools that you need to quantify your assessment of the image, e.g. the Vectorscope, Luma Waveform, and Histogram, and to make appropriate adjustments to these various profiles.

The key factors, I think, are quantification leading to subsequent consistency. You want to be able to measure those colors, those distributions of values and so-on, and then to make changes and to re-measure those results. You can establish a baseline, numerically, and then calibrate the various shots to them, especially during the compositing steps of the workflow. (That is to say, you can adjust “the pieces of the picture” before, during, and after “assembly.”)
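To make that concrete, here is a rough sketch of what “establishing a baseline” can look like as numbers. This is plain Python/NumPy, nothing Blender-specific, and the frame arrays are random stand-ins for real footage:

```python
import numpy as np

def channel_stats(img):
    """Black point, median, and white point per channel of an RGB float array."""
    return {ch: np.percentile(img[..., i], [1, 50, 99]).round(3)
            for i, ch in enumerate("RGB")}

# Stand-ins for a reference frame and a shot to be matched to it.
rng = np.random.default_rng(0)
ref = rng.random((540, 960, 3))
shot = rng.random((540, 960, 3)) * 0.8 + 0.1   # narrower tonal range

# Comparing the two tells you, numerically, how far the shot's blacks,
# mids, and whites sit from the reference before you touch a single node.
print(channel_stats(ref))
print(channel_stats(shot))
```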

These tools give you the means to describe, mathematically, things that folks like Ansel Adams described more informally, i.e. his “Zone System.” (He knew all about densitometry curves of black and white photographic film, but described them in layman’s terms to great effect and fame.) This numerically-based point of view allows you to relate these numbers to, for example, the characteristics of a standard LCD display, or a Pantone color profile, or whatever it may be that you need to match.

“Eyeball it?” Sorry, not my eyeballs, which are going to mess up red-and-green somewhat. And, the more you look at any image, the more your eyes get tired. (There’s a famous optical illusion based on that.) A photocell, or its digital equivalent the computer, never gets tired or out of calibration.

Although Blender does not provide anything “automatic,” it could be argued that “automatic” isn’t really possible anyhow. It’s a “crutch,” at best. The controls are easy to use and there are only a few of them anyway. When you’re looking at the right data presented in the right way (as Blender does), it’s just a matter of a little practice.

Furthermore… it’s 100% applicable to every image-handling situation you might need to deal with: CG or not, video or Game Boy, printers, and even photographic film and paper. The principles and the physics are “always the same, only different.”

Working in a linear color space, i.e. with “color management” turned on in Blender 2.5x, is also fundamentally important. If you are, for example, dealing with a JPG image from a digital camera, you need to remove the (mathematically known…) effect of the Gamma curve that has been applied to it by the camera, do all of your manipulations in a simple linear mathematical space, and then re-apply the desired gamma on output. All of which Blender 2.5x will do, more or less, for you. (You just have to understand what’s going on, knowing what and where the relevant settings are, etc., to make sure it’s all being done as you intended.)
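To put numbers on that gamma round-trip, here is a minimal sketch of the sRGB decode/encode involved. (Plain NumPy; Blender’s actual color management is more involved than this, but the principle is the same.)

```python
import numpy as np

def srgb_to_linear(c):
    """Undo the sRGB transfer curve (values in 0..1)."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Re-apply the sRGB transfer curve on output."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

# Grade in linear space: halving a linear value actually halves the light.
pixel = srgb_to_linear(0.5)    # decode the camera-JPG value
pixel *= 0.5                   # an exposure-style adjustment
print(linear_to_srgb(pixel))   # re-encode for display on output
```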

(And if all that I’ve just spouted is making you say … :eek: “WTF?!” :eek: … relax. It’s actually easy and common-sense, albeit unfamiliar at first. And it’s all extremely well-documented on the Internet, and in Roger’s fine book.)


(Whew! How I do ramble-on sometimes …) :smiley:

By “match by eye,” I meant the eye aided by scopes; it’s just another way of saying it’s done by hand rather than by automation.

Anyway, I thought some more about it. You need to define the sample points of a source image. So (a scripted sketch of the same chain follows the list):

  1. Load the color source in one window and the image to be colored in another.
  2. Set up a ramp with three points: darks - mids - whites.
  3. Click each ramp point and sample the color you want from the color-source image.
  4. Route the image source through the ramp and mix it into itself using an Add node @ 2.2 (?).
  5. Send that through a Curve node to fix contrast if required.
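For anyone who’d rather script it, here is a minimal sketch of that chain using Blender’s Python API. The image name and the ramp colors are placeholders for what you’d actually eyedrop from your color source, so treat everything as a starting point:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()
nodes, links = tree.nodes, tree.links

# 1. The image to be colored (placeholder datablock name).
img = nodes.new("CompositorNodeImage")
img.image = bpy.data.images["image_to_color.png"]

# 2./3. Ramp with three stops; these colors stand in for values
#       eyedropped from the color-source image.
ramp = nodes.new("CompositorNodeValToRGB")
cr = ramp.color_ramp
cr.elements[0].color = (0.02, 0.02, 0.06, 1.0)   # sampled dark
cr.elements[1].color = (0.95, 0.92, 0.85, 1.0)   # sampled white
mid = cr.elements.new(0.5)                       # slidable mid point
mid.color = (0.45, 0.40, 0.30, 1.0)              # sampled mid

# 4. Route the image through the ramp via its luminance,
#    then mix the result back into the image with an Add.
bw = nodes.new("CompositorNodeRGBToBW")
mix = nodes.new("CompositorNodeMixRGB")
mix.blend_type = 'ADD'
mix.inputs['Fac'].default_value = 1.0            # adjust strength to taste

# 5. RGB Curves to fix contrast if required (shape the curve in the UI).
curves = nodes.new("CompositorNodeCurveRGB")
comp = nodes.new("CompositorNodeComposite")

links.new(img.outputs['Image'], bw.inputs['Image'])
links.new(bw.outputs['Val'], ramp.inputs['Fac'])
links.new(img.outputs['Image'], mix.inputs[1])   # base image
links.new(ramp.outputs['Image'], mix.inputs[2])  # ramp-remapped copy
links.new(mix.outputs['Image'], curves.inputs['Image'])
links.new(curves.outputs['Image'], comp.inputs['Image'])
```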


cc remap test.blend (1.5 MB)

@sundialsvc4, thanks for your “rambling”; it was quite insightful. Does the book you recommended cover those topics? If not, could you recommend something else?

3point, it’s really interesting to see this setup. I’ve never thought of using the ramp node with three control points and sampling blacks/mids/highlights for each. I always use the RGB Curves node for this, as it instantly provides black and white sockets to sample. The obvious thing to do then is to sample the blacks from one pic and use that as the black point for the other. But this setup is really clever.

Btw, this is how I would (ideally) grade in the node editor using your blend file as an example.


By all means man, don’t stop rambling! Pleeeaasse! Please? :slight_smile:

Even for topics I know well (or better said, I think I know well), I always find something new to think about in your posts, a different angle of looking at things etc. I really admire your style of providing truly simple explanations of concepts. Now, that’s a manifestation of genuine expertise!

Especially now that Yellow does not actively post much on BA anymore, your posts are amongst the few that I truly enjoy in a pure educational sense.

Well, Roger’s book covers many of these topics, just not in the chapters sundialsvc4 mentioned (I’ve just checked). It’s the definitive reference material for Blender nodes and I’d certainly recommend it.

Other books on the subject that are among the essential readings:
(a) Brinkmann’s all-time classic: The Art and Science of Digital Compositing (http://www.amazon.com/Science-Digital-Compositing-Second-Edition/dp/0123706386)
(b) Hullfish’s classic: The Art and Technique of Digital Color Correction (http://www.amazon.com/Technique-Digital-Correction-Second-Edition/dp/024081715X/ref=sr_1_sc_1?s=books&ie=UTF8&qid=1334269842&sr=1-1-spell)
(c) Van Hurkman’s Color Correction Handbook (http://www.amazon.com/Color-Correction-Handbook-Professional-Techniques/dp/0321713117/ref=sr_1_1?s=books&ie=UTF8&qid=1334269770&sr=1-1)

Thanks a lot. I’ll definitely check those out.

Thanks Blendercomp. It would be more useful if the ramp had some input sockets to set the colors, but I couldn’t crack that concept, sorry.

And looking at my ramp, I think I sampled the wrong white; I should have selected one a bit more blue. Using the ramp also lets you slide the mid point, which is very cool (you can even make multiple mid colors).

EDIT:

Bang, that was it. I sampled the wrong point, duh.


BTW, this is more of a second-pass grade effect. Color correction usually refers to equalizing your images so that they have the same starting point.

:o …!

Roger’s book has an excellent treatment of the subject, and there are plenty of web sites out there. For example, a Google search on “Vectorscope” produced http://www.kenstone.net/fcp_homepage/fcp_7_scopes_vectorscope_stone.html and that’s about as good as it gets, I think.

The whole idea is simply that you quantify what your eyes “see,” in case your eyes are like mine are getting to be. (Aging sucks… :mad: ) You let the computer tell you numerical facts about “that particularly large data set” that you call “a picture.” Your eyes and your emotions are always going to give you excellent impressions, and intuitions, about how “good” the picture is, but we also know (for example) that our impressions are influenced by the last picture we looked at. (The grass is greener, on either side of the fence, if you’ve been lately staring at the blue sky.)

When you look at a picture, you look at the picture … not at the light. (There is of course an old saw that presents a photo of a :eek: buxom :eek: lass and says, “Draw the approximate color response curve of this image.” Or something like that. Not gonna happen for a human male.) But “a radial scatter-plot of a computer data set,” which is actually what a vectorscope display is, is an objective thing that your mind will regard in a different way. Ditto a histogram. This point-of-view encourages your mind to regard what it is dealing with, not as “a picture,” but as a numeric data set.
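If you want to see just how literally true that is, a vectorscope is only a few lines of arithmetic: convert each pixel to a chroma pair and scatter-plot it. A rough sketch, using BT.601 chroma coefficients and NumPy + Matplotlib (Blender’s own scope will differ in presentation):

```python
import numpy as np
import matplotlib.pyplot as plt

def vectorscope(img):
    """Scatter-plot the Cb/Cr chroma of an (H, W, 3) RGB float image."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    cb = -0.169 * r - 0.331 * g + 0.500 * b   # blue-difference chroma
    cr =  0.500 * r - 0.419 * g - 0.081 * b   # red-difference chroma
    plt.scatter(cb.ravel(), cr.ravel(), s=1, alpha=0.05)
    plt.xlabel("Cb"); plt.ylabel("Cr")
    plt.gca().set_aspect("equal")
    plt.show()

# Any RGB float array works; here, random noise as a stand-in.
vectorscope(np.random.default_rng(0).random((270, 480, 3)))
```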

I’m not sure what you mean by this. I mean, it is possible to set control points and use different ranges of color with a ramp node. You can e.g. have three ranges, with say yellow for blacks, green for mids and orange for whites.

I agree. That’s why in my proposed setup I have included waveforms so that one could easily match blacks and whites first, then balance the colors and finally change the hues of blacks/mids/whites.

I just meant that it would be nice to route a color sample from a source into a ramp point. Instead of eyedropping from an image, I tried to create rounded min and max RGB values, but no luck, so the ramp was plan B.
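Scripting can get you partway to that, for what it’s worth. A hedged sketch, assuming the ramp node is named “Sample Ramp” and the reference image datablock is already loaded (both names are hypothetical):

```python
import bpy

src = bpy.data.images["color_source.jpg"]                # hypothetical datablock
ramp = bpy.context.scene.node_tree.nodes["Sample Ramp"]  # hypothetical node name

w, h = src.size
x, y = w // 2, h // 2                # the pixel to treat as the mid sample
i = 4 * (y * w + x)                  # .pixels is a flat RGBA float list
r, g, b = src.pixels[i:i + 3]
ramp.color_ramp.elements[1].color = (r, g, b, 1.0)  # push it into a ramp stop
```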

Also, I typically grade with the parade and vectorscope, but here I chose the sample line for an RGB histogram on the same part of the image. Easier comparison.

Considering that the ramp denotes a range, one way of doing this would be to decompose the image into RGB (or YCbCr), pass each channel through a ramp node, and then recombine the signal. Using min or max Math nodes should then probably work (I haven’t tested this, though).
However, I’m not convinced that this auto-levels tonal-spread approach would be optimal. I find it very hard to believe that no manual intervention would be called for. Maybe it could somehow work for automatic adjustments on shots which are edited at a very fast pace.
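As a sanity check on that idea, here is roughly what per-channel auto-levels comes out to as straight math (NumPy instead of nodes; percentiles rather than raw min/max so a few stray pixels don’t set the range):

```python
import numpy as np

def auto_levels(img, low_pct=0.5, high_pct=99.5):
    """Stretch each channel's tonal range to fill 0..1 independently."""
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        lo, hi = np.percentile(img[..., c], [low_pct, high_pct])
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out

# Exactly the worry above: this equalizes the tonal spread, but it says
# nothing about whether the result is pleasing, so manual tweaking remains.
```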

I’m also wondering how other apps handle this. I know that in most apps (e.g. AE) you can save certain looks and apply them, but isn’t manual tweaking always required?