Reversing color correction.

I work on a number of TV shows, most shot on the Alexa and a few shot on the RED. The footage comes to me in some form of gamma color space and I need to color correct the footage to linearize it. Basically this involves setting new white and black points and adjusting the gamma. This increases the contrast and makes it look similar to what the final color will be. What’s important is that I am not the colorist and can’t know the future–that is to say, what color choices the producers will land on. Therefore, I need to work with a neutral color correction of my own design (because one should never do color operations on gamma or log color space images), and then pass my finished, approved comp back to editorial, matching the exact color of the original footage.

In Nuke, there is a “reverse” checkbox in the Grade node which makes this simple. It is there because this is a common practice. Reversing a color correction doesn’t seem to exist in Blender’s compositor, and it looks like I will have to manufacture it myself with a custom node group using math (which I can do, but am not the best with), and I would welcome any help.
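
For what it’s worth, the math I’m describing is small. Here is a minimal sketch in Python (NumPy) of one common formulation: map the black point to 0 and the white point to 1, then apply a gamma, with an exact inverse. The function names and the sign-preserving power trick are my own illustration choices; this is a simplified stand-in for what a Grade-style node does, not Nuke or Blender code.

```python
import numpy as np

def linearize(x, black, white, gamma):
    """Forward grade: map the black point to 0 and the white point to 1,
    then apply 1/gamma. Nothing is clamped, so out-of-range values survive."""
    t = (x - black) / (white - black)
    # Preserve the sign so sub-black (negative) values round-trip intact.
    return np.sign(t) * np.abs(t) ** (1.0 / gamma)

def delinearize(y, black, white, gamma):
    """Exact inverse of linearize(): undo the gamma, then restore the
    original black and white points."""
    t = np.sign(y) * np.abs(y) ** gamma
    return t * (white - black) + black
```

A “reverse” checkbox is really just the second function: the same three numbers applied in the opposite order, with the gamma inverted.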

I was told by Ton himself that the nodes have an input color space to do color management, but after some investigation, this doesn’t work in common VFX compositing workflows. The LUTs used by the DIT on set do not travel with the ProRes 4444 QuickTime files I often get. In other words, the QuickTime was recorded in log color space but loads as sRGB. This cannot be avoided. If using a RED camera, Blender doesn’t even support .R3D files (AFAIK), which can embed LUTs, so the .R3D files must be converted to a format Blender can read, losing the color correction they were looking at on set (unless I convert them with the LUT applied in REDCINE-X as float .exr files). Not to mention, I want control over how the file looks when linearized. I don’t want Blender making that decision for me.

Here is the pipeline:

  1. Receive the footage from the client and import it into the comp.

  2. Linearize it. That is to say, apply a color correction taking it from milky log color to higher contrast with proper white and black levels.

  3. Execute the comp. This is where you would put in your CG elements, key your green screens, do color corrections to elements that are not your BG plate, etc.

  4. Reverse the color correction of your entire comp to match the original look of your BG footage in step 1.

  5. Return the comp, with the original milky gamma color, to the client to be cut into the edit and eventually colored in DaVinci or whatever grading software they are using.

Final note: It is critical that the entire color pipeline be executed in 32 bit float. Otherwise you will clip your whites and blacks at step 2 and never get back to the original.
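
To illustrate why, here is a quick, hypothetical round trip in Python (NumPy). The grade numbers and plate values are made up; the behavior is the point. Kept in float, the plate comes back exactly. Clamp to 0-1 anywhere in the middle, which is effectively what an 8-bit pipeline does, and the original highlights and shadows are gone for good.

```python
import numpy as np

black, white, gamma = 0.09, 0.72, 0.6        # made-up grade values for the demo
plate = np.array([0.02, 0.15, 0.40, 0.75, 0.95, 1.10],
                 dtype=np.float32)           # stand-in log plate: values below "black" and above "white"

def linearize(x):
    t = (x - black) / (white - black)
    return np.sign(t) * np.abs(t) ** (1.0 / gamma)

def delinearize(y):
    return np.sign(y) * np.abs(y) ** gamma * (white - black) + black

# Full float pipeline: step 2 then step 4 returns the original plate (float rounding noise only).
print(np.max(np.abs(delinearize(linearize(plate)) - plate)))

# Clamp after step 2, as an 8-bit pipeline effectively would, and the round trip fails:
clipped = np.clip(linearize(plate), 0.0, 1.0)
print(np.max(np.abs(delinearize(clipped) - plate)))   # large error: those whites and blacks never come back
```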

There are more knowledgeable users who can give better replies to the other questions, but I believe Blender does (or did) support R3D. You would need to compile with WITH_BF_REDCODE enabled as it is off by default.

Thanks for the reply Organic. Hopefully one of those more knowledgeable users will chime in here. As far as Direct R3D support goes, that is vaguely interesting but I’m really more interested in a better color pipeline in general with specific emphasis on all I wrote up at the top there.

I find that software handling R3D directly, other than REDCINE-X (especially REDCINE-X with a RED Rocket), doesn’t really do a very good job of debayering the .R3D. After Effects and Nuke both debayer differently, and both do it somewhat poorly compared to the results from the software published by the camera manufacturer. I would think Blender, even if it did directly support R3D, would be about the same (and I’m kind of okay with that). I think there are bigger Blender fish to fry…like my original post above. :wink:

Here’s my 2 cents about Alexa:
Those ProRes files are encoded using the so-called LogC curve. The best way to get the colors that were seen on set is to pass the footage through this LUT, and I’d recommend doing it in DaVinci Resolve Lite. This software is available for free.
If you want to convert it in Blender, there’s no easy way to do it, but it’s not impossible.
I came across the math behind the LogC curve at some point, but now I can’t remember the link to it.
If you find the formula, I’m sure it can be replicated using nodes, but I wouldn’t bother as there is a better solution that is as free as Blender :slight_smile:
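
For reference, ARRI does publish the LogC math, and it is only a few lines. Here is a rough Python sketch using the commonly quoted LogC (v3) constants for EI 800; treat the exact numbers as something to double-check against ARRI’s own white paper, since other exposure indices use different constants.

```python
import numpy as np

# ARRI LogC (v3) constants for EI 800, as published in ARRI's LogC white paper.
# Assumption: the footage was recorded at EI 800; other EIs use different constants.
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def logc_to_linear(t):
    """Decode normalized (0..1) LogC code values to scene-linear light."""
    t = np.asarray(t, dtype=np.float32)
    return np.where(t > E * CUT + F,
                    (10.0 ** ((t - D) / C) - B) / A,
                    (t - F) / E)

def linear_to_logc(x):
    """Encode scene-linear light (x >= 0) back to LogC; the exact reverse of the decode."""
    x = np.asarray(x, dtype=np.float32)
    return np.where(x > CUT,
                    C * np.log10(A * x + B) + D,
                    E * x + F)
```

In principle the same piecewise curve could be rebuilt from Math nodes, which is exactly the “replicated using nodes” idea above.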

If you happen to download DaVinci Resolve, you can also use it to convert RED footage. There is an option to use the camera metadata and apply it to your footage.
I work a lot with both Alexa and RED material and always use Resolve to encode it for edit. In most cases I don’t use the original files anyway, as they are color graded after editing and exported as DPX files.

Pre-color correct? Ouch! Bad idea for me, Bartek. (By the way, I’m a huge fan of your tutorials.) Anyway, I do not want to pre-color correct in DaVinci or anything else for that matter. (I have DaVinci, by the way, along with a lovely set of control surfaces sitting right in front of me as I type this message to you.) Pre-color correcting is a bad practice in the world of Hollywood VFX. Since there are a lot of collaborators, sometimes at different facilities, you really want to pass the color you got in your plates, shot by the very highly paid DP, back to editorial looking exactly the same as when you got them, so the colorist downstream, working with the show runner, producers and director, can make color decisions there. This requires a 32-bit floating point color pipeline from your (commonly) 10-bit (but sometimes 12-bit) plates, through the comp, back to your deliverable format, which is typically 10- or 12-bit logarithmically encoded color.

This is one of those Blender issues that impedes its adoption by VFX facilities working on big movies or TV shows, which is how I make my living and have made my living for 20 years–I am pretty new to Blender, but definitely not VFX. I could write papers on color spaces, color correction, and just about any part of compositing…just not in Blender. :wink:

I’m only telling you this so you know I am not some fresh faced, young kid with a lot of passion but not much experience. And I’m asking for a software feature that already exists in Nuke, Shake, AE, even Combustion…before Autodesk killed it to save their precious Flame systems. (Idiots. They could have owned desktop compositing. Adobe are also idiots. After Effects does not have a 32 bit end to end color pipeline. That’s a show stopper.)

What I, and the rest of post production in “Hollywood,” want is a color pipeline that gives a VFX artist or facility a plate or plates and gets back VFX representing exactly what the DP shot, in the format specified. After finishing a comp, when you do a difference matte between the original footage and the comp, you should really only see the effect that got comped in. You shouldn’t even see noise where the background plate is visible in the comp.

To do this requires only two things: 1) a color correction node with the ability to set the white point, the black point, and the gamma, all numerically. Even Photoshop can do this with the Levels tool (though the values are represented in 8 bit, even when your loaded image is a higher bit depth). And if you wanted to get fancy in a Blender implementation, there would be a soft-clip function to generate s-curves, but I’d settle for doing without. Then, 2) the ability to reverse that color correction, so that if you looked at the original footage and then at a viewer connected to the reversed color correction at the end of the node tree, they would look identical.

It would not matter at all what you did between those nodes, just so long as you didn’t clamp any values in float at 1 or 0. I hope that makes what I am asking for clearer.
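
To make the difference-matte point concrete, here is a toy version in Python (NumPy), reusing the same made-up grade numbers as the earlier sketch. After reversing the grade, the difference against the original plate is zero everywhere except where something was actually comped in.

```python
import numpy as np

black, white, gamma = 0.09, 0.72, 0.6          # same made-up grade as before
plate = np.array([0.12, 0.25, 0.33, 0.48, 0.55, 0.61, 0.68, 0.70],
                 dtype=np.float32)             # stand-in for the original log plate

def linearize(x):
    t = (x - black) / (white - black)
    return np.sign(t) * np.abs(t) ** (1.0 / gamma)

def delinearize(y):
    return np.sign(y) * np.abs(y) ** gamma * (white - black) + black

comp = linearize(plate)          # step 2: linearize
comp[2:4] = 0.5                  # step 3: pretend pixels 2..3 are covered by a CG element
out = delinearize(comp)          # step 4: reverse the grade

print(np.abs(out - plate))       # non-zero only at pixels 2..3; everywhere else, just float rounding noise
```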

And thanks again for taking a look at this Bartek. I have a lot of respect for you and really have learned a lot about how Blender’s compositor works from the tutorials you have posted. I’m trying to get a feel for just how interested the Blender community and developers are in getting Blender adopted by more professional film and TV production.

I often post issues like this only to be met with the attitude that all the VFX professionals and facilities I know should change their workflows (which are time tested) and Blender is just fine. Sometimes that gets frustrating, especially when I am actually using Blender right now on movies and TV shows and would like to see more Blender adoption.

I might add, Blender is fully capable of doing color correction like this. In the compositor, add an Image input node and select any old image. Then put in two Bright/Contrast nodes. Set the first to 100 and the second to -100.

Then view what is happening. First take a look at the input image. Then view the first Bright/Contrast node and watch your image turn pure white. Then look at the second node and note how it looks identical to the original.

Do that, but just with proper color correction nodes that can set the white point, black point and gamma. And then put a reverse button on it so I can copy the first color corrector and paste it at the end of my comp and reverse it.

Done.

P.S. I’d prefer that the values go from -1 to 1, and that all the values in the software regarding color just be floating point representations. If converting from 8 bit to floating point, 255 becomes 1. I have no idea what RGB value “100” represents. And if the values must be represented 1-100, why stop them at 100? It seems like an unnecessary limitation. I just did a gain operation in Nuke where I set the value to 1 million and reversed it, and the image still came back to its original color, though the color picker only shows accuracy to the hundred-thousandth. Still, there were no rounding errors.
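
For what it’s worth, here is a quick NumPy check of that extreme-gain round trip, assuming the working space really is 32-bit float end to end: the error is a few float32 ULPs, well below the five decimal places the picker displays.

```python
import numpy as np

x = np.random.rand(1_000_000).astype(np.float32)
roundtrip = (x * 1.0e6) / 1.0e6                          # gain of one million, then its reverse

rel_err = np.max(np.abs(roundtrip - x) / np.maximum(x, np.float32(1e-20)))
print(rel_err)                                           # around 1e-7: invisible at the picker's 1e-5 precision
```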

One more thing Bartek. If you need to convert R3D files, just use REDCINE-X. It too is free and debayers better than any 3rd party converter.

https://www.red.com/downloads/4fad7fd417ef027cd8000eb1

All these apps and more can load R3D files: DaVinci (which you mentioned), Nuke, After Effects, Final Cut Pro X, Premiere and many, many others. None look as good as the debayering from REDCINE-X, especially on a machine with a RED Rocket card. Truth be told, I almost never have to deal with RED footage. The RED camera is not so great as regards exposure latitude, and their solution of sampling the sensor twice to make a higher dynamic range image just looks bad.

I’ve never had a problem converting R3D files. My problem is only with Blender’s color management. The lack of a tool or button to reverse a color correction when dealing with log footage, which all cameras shoot these days, simply leaves Blender dead in the water for high-end VFX compositing.

I love Blender. I’m hoping I can ally myself with someone who both gets this issue and has an interest in fixing it.

I admit, I didn’t think I was talking to an experienced VFX guy. That’s why I simply said that it’s possible to convert footage to some other format, applying LUTs, using DaVinci.
I work on commercials, and the workflow that we use all the time is as follows:

  1. Convert the footage to something that can easily be edited. At this stage I want to apply the correct LUT, so that the editor and director see what they saw on set. In many cases, like when working with Alexa, I use DaVinci for that. Editing suites like Avid don’t have an option to apply a LUT.
  2. Edit. Here the converted footage is used.
  3. Color grade based on the EDL. At this stage the original footage is used.
  4. Add all the VFX and 3D stuff. At this stage we no longer work on the original footage, but on DPX files that come from the color grading software.

I don’t use REDCINE-X because it’s much slower than DaVinci. What I get from DaVinci (no CC, just the LUT) is in 99% of cases enough for editing.

I’ll try to dive into what you’re looking for; maybe I’ll find a solution. First I need to really understand what you mean. I’ll go through your posts again and get back here shortly.

Cheers

Hey again Bartek,

I too have done my share of commercials. In one year, I think 1996 or 1997, I visual effects supervised two Super Bowl TV spots. These days I avoid commercials like the plague. The money is good, but the capricious ad execs tend to make me crazy.

In episodic TV, where I currently do a lot of work, we typically apply a neutral “one light” grade for the editors and conform using the original ungraded material in VFX, and then also in color, using, of course, exported XML EDLs. Commercials, film & TV, and music videos all have different approaches as regards color. Music videos are the worst. Often and usually, they pre-color correct everything, putting in grads and vignettes and crazy color looks, and then pass those plates to VFX and ask, “Can you now add the horde of dinosaurs?” Stupid. This is why I avoid any call when somebody asks me to work on their music video, though I have acted as DP on a couple now.

Commercials are also a bit loosey goosey. Commercials and music videos have one thing in common in that they are both treated as a bit of a one-off. So you can just make up a mini-pipeline for the one project and make a new mini-pipeline for the next.

A TV show may run for years and you should have a methodology that is standardized and fairly future proof. A feature may take years to complete and has a lot of media to track. It too needs a methodical, rigid system.

To me, VFX is nothing more than a tool in a filmmaking toolbox, and that tool should work well with the other tools. Sadly, VFX software is typically not made by filmmakers but by computer programmers who happen to be interested in film. My dream is to turn those programmers and VFX artists into actual filmmakers first and foremost. I’d like to see the entirety of the VFX industry stop calling themselves VFX artists and start calling themselves filmmakers who do VFX.

If anyone here was looking for more Blender adoption in film and TV, the question they should be asking is, “How can we make Blender work better in preexisting film and TV pipelines?” Instead, the early adopters (like myself) are forced to shoehorn Blender’s inadequacies into preexisting, tried and true best practices, or just use other software, both of which I do daily.

I chose this issue of reversible color correction, because with at least that in place, I could start comping shots all in Blender’s compositor on big TV shows. And then I can start shouting from the rooftops, “Hey everybody, I did this whole show in Blender!”

So basically you don’t really want to ‘linearize’ the footage, but just color correct it before compositing and then simply revert the same grading before render output, like so:


AFAIK there’s no easy and mathematically correct way to do this in Blender, except with the Gamma node, which can be reversed, but there you can only adjust gamma. I’d love to see a feature like that implemented in the RGB Curves node, be it like in Nuke with a ‘reverse’ checkbox, or perhaps making the factor slider accept negative values, so a factor of -1 means reverse the grade (at the moment values outside 0-1 don’t do anything).

Blender also lacks methods to properly linearize footage, i.e. applying LUTs. The basics are there, but I don’t see a way to define or import a custom LUT into the Input Color Space menu.

Here is the panel in Nuke:


Technically, linearization is just a color correction (bringing down the mids and setting the white and black points, and sometimes with a soft clip), but yes. You hit the nail exactly on the head, Amatola. EXACTLY! Very nicely illustrated in your screen captures.

Meanwhile, as a historical note, a LUT isn’t even really a look-up table any more. In the olden days we would make tables so the computers, which were really slow, like my Amiga 2000 with a 33MHz processor and 16 MB of RAM, could convert from 10-bit log to 12-bit lin in a reasonable amount of time without having to do all that math. Those computers could just “look up” the intended output value based on the incoming value. The film scanners were all 12 bit, but hard drives were expensive, so as a bit of tricky data-size compression, the 12-bit data was reduced to 10 bits in log color space, putting the emphasis where most of the color data was kept, in accordance with where the human eye is most sensitive.

Thus your standard Kodak lin-to-log and log-to-lin LUT, which looked something like this:

10 bit log input - 16 bit lin output

0 - 0
1 - 0
2 - 0

95 - 0 (Blacks clipped at 95)
96 - 1
97 - 1
98 - 2
99 - 2
100 - 3
101 - 3
102 - 3

295 - 20,000 (a guess, but darker than the linear mid point of 32,766)

682 - 65533
683 - 65534 (note the quantization in the outgoing values)
684 - 65534
685 - 65535 (Whites clipped here)

1023 - 65535

(I’m putting in rough, but close numbers here. The inexactness of my 16 bit output values should not confuse anyone. The actual, real world values will vary from mine.)

That is a look-up table, or LUT. Now, in 2013, we just brute force it through the CPU as a math operation (color correction). The word LUT is a holdover from a bygone day, which is why you keep seeing me simply refer to it as a color correction. (Hearing the term “LUT” to me is like hearing someone ask me to give them a “ring”. Our phones haven’t had actual bells in them for decades and yet the phrase “ring me” is still in use.)

What’s best is that with the advent of floating point color, there need never be any clipping when setting new white and black points.
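
As a footnote to the history above, the conversion those Kodak tables baked in can be written directly as a couple of lines of math. This Python sketch uses the standard published Cineon numbers (black at code 95, white at code 685, 0.002 density per code value, 0.6 gamma); as I said, real facility LUTs varied, and unlike the old integer tables this float version never clips.

```python
import numpy as np

# Standard published Cineon numbers: code 95 = black, code 685 = nominal white,
# 0.002 density per code value, 0.6 display gamma. Real facility LUTs varied.
BLACK, WHITE, DENS, GAMMA = 95.0, 685.0, 0.002, 0.6

def cineon_to_linear(code):
    """10-bit Cineon log code values (0..1023) -> linear, with no clipping."""
    code = np.asarray(code, dtype=np.float32)
    offset = 10.0 ** ((BLACK - WHITE) * DENS / GAMMA)
    return (10.0 ** ((code - WHITE) * DENS / GAMMA) - offset) / (1.0 - offset)

def linear_to_cineon(lin):
    """Exact inverse: linear -> 10-bit Cineon log code values."""
    lin = np.asarray(lin, dtype=np.float32)
    offset = 10.0 ** ((BLACK - WHITE) * DENS / GAMMA)
    return WHITE + np.log10(lin * (1.0 - offset) + offset) * GAMMA / DENS

print(cineon_to_linear([95, 685, 1023]))   # roughly [0.0, 1.0, 13.5]
```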

Just so you know where I am coming from.

I may not be properly understanding the conversation as most of this is outside my experience, but I believe you can load custom LUTs.


I’m not sure where to obtain the kind of LUT you require, but it needs to be defined in the config.ocio file, in Blender/2.66/datafiles/colormanagement/.

More information here:- http://wiki.blender.org/index.php/User:Sobotka/Color_Management/Calibration_and_Profiling#Edit_.22config.ocio.22_to_Integrate_the_Display_LUT

Oh, and Amatola, is that Skitch I see you using over there? :wink:

Organic, you should just stop trying to use predefined LUTs like in your screen capture and ask the developers to give all that decision making back to the user in a node that allows anyone to make their own linearization. That node you want needs to set the white point, black point and gamma.

Blender does not have this critical tool.

And a note on display LUTs: don’t use them. EVER! Representing log data on your display as if it were linear lets you end up performing linear color correction math on non-linear footage; after all, a display LUT only affects the display, not the actual data. This is a bad, bad, bad idea.

Brightness is a linear operation, but if you push the RGB value of a pixel brighter (up) on a log plate, you will get results that are not easily predictable, because the color is encoded in the RGB values non-linearly (log). It is a bad practice.

Operations okay to do on log plates: Transforms and warps.

Operations not okay on log plates: Any color transformations, keying, roto, merging alphas, and many others.
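
A tiny, contrived example of why: a 50/50 mix (think of a soft matte edge or a blurred merge) computed on log code values lands in a very different place than the same mix computed on actual light. The decode here is a simplified Cineon-style curve, purely for illustration.

```python
import numpy as np

def log_to_lin(code):                      # simplified Cineon-style decode, illustration only
    return 10.0 ** ((code - 685.0) * 0.002 / 0.6)

def lin_to_log(lin):
    return 685.0 + np.log10(lin) * 0.6 / 0.002

dark, bright = 300.0, 680.0                # two 10-bit log code values

mixed_on_log = 0.5 * dark + 0.5 * bright   # blending the log codes directly: 490
mixed_on_lin = lin_to_log(0.5 * log_to_lin(dark) + 0.5 * log_to_lin(bright))

print(mixed_on_log, mixed_on_lin)          # ~490 vs ~597: the log-space blend is far too dark
```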

Blender works entirely in float, as far as I can tell. So just let me take the plate in and linearize it to my liking, do my comp, reverse my linearization to restore it to original log and we are done.

Just look at the properties window of Nuke’s grade node in Amatola’s screen captures above. THAT is what Blender needs.

It helps my work to use a calibrated, profiled monitor. For my purposes it makes Blender’s render output more predictable.

If your display LUT merely adjusts your Blender display, that is fine. However, for years and years I’ve seen Flame compositors take log footage, display it linearly using a log-to-lin display LUT without ever actually linearizing it, and then do their color corrections on the log footage. That is just wrong. Don’t do that.

The Flame, incidentally, has controls for a display LUT. It is a bit like Nuke’s Exposure and Gamma controls for the display. They don’t actually do anything mathematically to the footage in the comp.

Yes. The display LUT only mediates between Blender and the monitor; I never apply it to footage. I just wanted to point out it is possible to load LUTs that perform custom colour space transforms.

Having colour management at all is still a fairly recent innovation. I wonder if also taking the issue directly to the developers would get you more practical progress. Blender developers tend not to frequent the forums. #blendercoders might produce some useful feedback and results. Though if you do go that route, please also update progress here.

Apologies for chiming in here, but a little bird notified me of this thread and asked if I might be able to help out the original poster…

He does in fact want to linearize.

And the answer to your second statement follows.

The answer has already been offered above by Organic. He is 100% correct.

As noted, this is not easy if you use a codec, as the decoding will likely mangle the footage. Blender certainly will, and worse, it will only load 4:2:0 versions. So, assuming you are starting with a DPX, there are two things you need to assert:

  • That your working space’s internal assets are all in identical primaries.
  • That your DPXs have been exported using RedFilmLog which converts the magic sauce into Cineon format log.

If the two above statements are correct, you should be able to use the following patch to the DPX system and leverage the OCIO library against your DPXs.

http://projects.blender.org/tracker/?func=detail&atid=127&aid=34684&group_id=9

If you load the DPX, your footage should be in the secret-sauce primaries of R3D with the Cineon transfer curve applied. To linearize, it is as simple as using the default Cineon log inversion via OCIO.

Of course, you could also wrap your own group transforms into the config.ocio and generate your own transforms as necessary.

If you are in a mixed set of primaries, and assuming you are in a scene-referred model, then you will need to assert that your primaries are correctly transformed by the following steps (see the sketch after this list):

  • Normalize to bounded 0…1 range.
  • Convert the primaries through a matrix or 3D LUT and clamp the out of gamuts, if applicable.
  • Undo the normalization into scene referred linear.
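
If it helps, here is a rough Python (NumPy) sketch of those three steps. The 3x3 matrix is a placeholder, not a real camera-to-reference matrix; substitute the correct matrix or 3D LUT for your primaries.

```python
import numpy as np

# Placeholder matrix standing in for "source primaries -> working/reference primaries".
SRC_TO_REF = np.array([[ 1.02, -0.01, -0.01],
                       [-0.02,  1.03, -0.01],
                       [ 0.00, -0.02,  1.02]], dtype=np.float32)

def convert_primaries(rgb):
    """rgb: N x 3 scene-referred linear pixels, possibly with values well above 1.0."""
    peak = float(np.max(rgb)) or 1.0        # guard against an all-black frame
    norm = rgb / peak                       # 1. normalize to a bounded 0..1 range (needed if using a 3D LUT)
    out = norm @ SRC_TO_REF.T               # 2. run the primaries through the matrix...
    out = np.clip(out, 0.0, None)           #    ...and clamp the out-of-gamut (negative) results
    return out * peak                       # 3. undo the normalization back to scene-referred linear
```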

Obviously channel crosstalk will happen if someone has done the ridiculous thing and graded footage prior to VFX work etc., so hopefully this isn’t your case. If you need to see a color-accurate representation, this is quite easily tacked on, again as Organic stated above, via the display transforms. There is no shortage of misguided pipelines that mangle things by not putting grading at the tail end, long after VFX and post-production work has been completed.

I can easily step you through this if needed, but it is likely the subject of an email.

I wrote up a wiki page that, while focusing on the display transform for a profiled display, will hopefully give you the basics of manipulating the config.ocio file and the corresponding LUTs and matrices. Hopefully it isn’t too out of date…

http://wiki.blender.org/index.php?title=User:Sobotka/Color_Management/Calibration_and_Profiling#Hook_the_3D_LUT_into_Blender

Hope this helps,
TJS

PS: I entirely agree that a data agnostic OCIO node would be excellent, so hopefully more folks such as yourself will voice support for such a node.