Disabling filmic for background footage?

Hi there,
So basically what I want is to turn off the LUT (Filmic in this case) for the compositor and use it only for rendering. When I combine real footage with 3D, the LUT gets applied to both - but obviously I don’t want it on the footage, only on the 3D elements. Is there any way to do this?

greetings,
Stuntkoala

I wondered about that as well, thinking “Why not an option to have Filmic as a node, so you can turn it off in the colour management?” With Natron it’s possible; there you can load Filmic in some nodes. Maybe Nuke and Fusion can as well.

Blender isn’t able to do that yet.

I was afraid it isn’t. Well, then I guess I will stick to using linear space and converting it afterwards. It would be nice to have this in the future, though :)

You actually want to properly decode footage into the scene referred domain. Then it will merge seamlessly.

May I ask if this is the “usual” way to tackle this problem, or is it just a suggestion to make it work for now? I mean, is this “exclude-footage-from-colour-management” a feature Blender is lacking, like many others, or is there a workflow reason behind it?

Surely I can hack around the problem by using a different blend file for the compositing (decoding the footage is currently beyond my capabilities), but it looks way easier to tweak things (light, shaders, …) with the original footage behind.

Not quite sure what you are asking here, but I will say that you aren’t excluding footage from colour management. On the contrary, you are including it properly.

But perhaps I am misunderstanding your question?

Sorry for not being clear. Maybe I’m not seeing the whole picture; I’m approaching all of this for the first time.

I would like to know whether other compositing software usually has options to decode footage beforehand, or whether it simply bypasses that step and applies the colour management only to the rendered frames, leaving the footage “as it is”?

It isn’t you not being clear; we just have different contexts and that can make communication tricky at first. Terms are another big problem, which is why I try to stick to well defined terminology. That might be tricky for some as well, but in the end, we all end up better off when we use better terminology.

There is much here to unwrap. So I will try to tackle it point by point to prevent assumptions.

other compositing software

Just like file formats, not all software is equipped to handle things in a contemporary way. While scene referred workflows are possible in Nuke, Houdini, Fusion, and possibly others, the nature of the colour data means that it is up to the pixel crafter to make sure their work is correct. That is, the software that can do scene referred workflows doesn’t enforce them, because it is impossible to know what the data means and what the artist is trying to do. Photoshop, After Effects, and other software cannot properly handle scene referred workflows due to historical legacy issues, among other reasons.

usually has options to decode footage beforehand

Once you realize that all we are dealing with are numbers, you can see that we must always deal with the data beforehand. Some software might make certain assumptions about the data in secret (think about Blender and “default”, for example, or what “sRGB” means, etc.), but that ends up making for dumb software that is rigid and breaks. Better software permits informed pixel pushers to exert full control over their data inputs. In the above post, “decode” was used to describe colour management; it is a series of transforms that happen behind the scenes.
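To make “decode” concrete, here is a minimal sketch (my own illustration, not from the thread) of the most common case: undoing the piecewise sRGB transfer function on display referred footage to recover scene linear values that can sit in the same maths as render data.

```python
import numpy as np

def srgb_to_linear(encoded):
    """Invert the piecewise sRGB transfer function.

    `encoded` is display referred data in [0, 1]; the result is
    linearised data suitable for scene referred compositing maths.
    """
    encoded = np.asarray(encoded, dtype=np.float64)
    return np.where(
        encoded <= 0.04045,
        encoded / 12.92,
        ((encoded + 0.055) / 1.055) ** 2.4,
    )

# A mid-grey-ish sRGB code value of 0.5 decodes to roughly 0.214 linear.
print(srgb_to_linear(0.5))
```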

In the case of outputs, similar issues exist. Raytracing engines, or anything dealing with photographic / physically plausible scenes, need strict control over the creative and technical outputs. This ranges from everything you see on your screen to how the data ends up encoded in particular files. Compare, for example, Blender’s absolutely broken DPX output to the DPX output configuration in Fusion. All contemporary compositing software permits controlling the view for display output, as well as fine-grained file format output.

bypasses that step and applies the colour management only to the rendered frames, leaving the footage “as it is”?

It cannot be bypassed.

Even the most simple example of taking a set of data to your display is complex under the hood. Think about rendering for example, in the case of a raytracing engine.

In a raytracing engine, we aren’t shooting a single beam of light from the screen into the scene, but rather three. Each of the reddish, greenish, and blueish lights is a specific colour in colour science terms. What colours are they? Are the other data sets in our scene aligned to them?

Now think about the return trip out. What are the lights in the display? Are they the same lights of the reference scene? Further, did we map the light intensities in a way that is creatively acceptable?

Consider that every single Apple product since 2015 now has different coloured lights in their respective displays as compared with the assumptions made in your software. Were the lights transformed correctly so the colour of the three lights align so the creator / audience has a chance at WYSIWYG?
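To make that concrete, here is a minimal sketch (my own, using the commonly published D65 matrices) of the transform that aligns the three Rec.709 lights with the differently coloured lights of an Apple Display P3 panel:

```python
import numpy as np

# Commonly published D65 matrices: linear RGB -> CIE XYZ.
REC709_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])
P3_TO_XYZ = np.array([
    [0.4866, 0.2657, 0.1982],
    [0.2290, 0.6917, 0.0793],
    [0.0000, 0.0451, 1.0439],
])

# Rec.709 lights -> XYZ -> Display P3 lights, in a single 3x3 matrix.
REC709_TO_P3 = np.linalg.inv(P3_TO_XYZ) @ REC709_TO_XYZ

# A pure Rec.709 red needs a mix of all three P3 lights to match.
print(REC709_TO_P3 @ np.array([1.0, 0.0, 0.0]))
```

Skip this matrix and the audience sees a different colour than the creator did; that is the WYSIWYG question above in its simplest form.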

All of this ends up with the simple fact that smart software that tries to make assumptions for the pixel pushers is stupid software; it is rigid and breaks, crippling the pixel pushers.

When someone asks “Why bother?”, the answer is that proper process will:

  • Enable the pixel pusher to have control over their work
  • Save time
  • Save (in some cases) money
  • Elevate the work
  • Open up new creative options

TL;DR: Good software permits pixel pushers to control their inputs and outputs with granularity. It is up to the pixel pushers to embrace that and expand their knowledge to control the situation and avoid making rubbish.

I was also wondering about a partial color pipeline recently.

My final output is usually print, with white backgrounds and soft shadows grounding the objects. I usually just set the renderer to slightly overexpose the ground plane, so only the shadowed areas are below the visible threshold. I like the visual effect of the Filmic mode, but it does its best to avoid overexposure, which makes it impossible to get pure white.

No problem, I thought, I’ll just drop it into the compositor and adjust the brightness to make the background white. But since Filmic applies to the whole pipeline, there was no way to get to white without destroying the image.

Troy, do you have any advice for this situation? I’ve been working without Filmic up to this point, but it would be nice to add to my repertoire.

With print, be sure to proof against the proper ICC that represents the Filmic output. That is, Filmic would be proofed against a pure 2.2 power function output, with REC.709 primaries.

Elle Stone has provided a version 2 ICC profile with those attributes, as well as a version 4 ICC profile.

These would be the ICC for your rendered image. You would need your printer’s ICC for the ink / paper combination to complete the loop and make your colour management chain 100% tight.
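For what it’s worth, here is a minimal sketch (my own, not from the thread) of why the profile choice matters: a pure 2.2 power function and the piecewise sRGB curve that generic “sRGB” profiles describe diverge most in the shadows, so proofing Filmic output against the wrong one will skew the dark end of your prints.

```python
import numpy as np

def encode_pure_power(linear, exponent=2.2):
    """Pure 2.2 power function: the encoding the Filmic output assumes."""
    return np.asarray(linear, dtype=np.float64) ** (1.0 / exponent)

def encode_piecewise_srgb(linear):
    """The piecewise sRGB curve that generic sRGB ICC profiles describe."""
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(
        linear <= 0.0031308,
        linear * 12.92,
        1.055 * linear ** (1.0 / 2.4) - 0.055,
    )

# The two curves diverge noticeably in the shadows.
shadows = np.array([0.001, 0.01, 0.05])
print(encode_pure_power(shadows))     # ~[0.043, 0.123, 0.256]
print(encode_piecewise_srgb(shadows)) # ~[0.013, 0.100, 0.235]
```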

While I don’t know the context of your work, I would actually advise against this approach. Instead, I would recommend a typical photographic pipeline: “shoot” your shot with a near-final aesthetic, leaving a little bit of room regarding exposure and colour. Then grade the image to push the display referred white to your final value, as well as any brand identity colours as required.

For display referred white, it is rather easy to get a luminance matte that will mask off the highly luminous region and use it as a grade area. Sadly, the luminance matte in Blender is broken via wrong coefficients[1]. Instead, use the RGB to BW node, colour ramp / bend the values, and use the result as the factor for a CDL node[2]. While this video is for Nuke, the same principles apply.

To do a “fulcrum” CDL manipulation, that is, have the values hinge around a particular value such as middle grey (0.18 in the scene referred domain under Filmic) or skies (much higher), simply put a divide and a multiply node on either side of your CDL. Divide by the value you want to fulcrum / pivot around (0.18 for middle grey, or whatever the sky value is), roll through the CDL and adjust the power etc., then multiply by whatever value you divided by.
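In code form, the whole chain looks something like this minimal sketch (my own; the ramp thresholds and grade values are hypothetical, pick them per shot):

```python
import numpy as np

# Rec.709 luma weights: what a correct luminance matte would use.
REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def luminance_matte(rgb, low=1.0, high=4.0):
    """Ramp scene referred luminance into a 0-1 matte.

    `low` and `high` are hypothetical thresholds for the colour ramp;
    choose them so only the highly luminous region passes.
    """
    luma = np.asarray(rgb) @ REC709_LUMA
    return np.clip((luma - low) / (high - low), 0.0, 1.0)

def fulcrum_cdl(rgb, slope=1.0, offset=0.0, power=1.0, pivot=0.18):
    """ASC CDL (out = (in * slope + offset) ** power) hinged on `pivot`.

    Dividing by the pivot before the CDL and multiplying afterwards
    makes the power adjustment rotate around that value (0.18 for
    middle grey under Filmic).
    """
    normalised = np.asarray(rgb) / pivot
    graded = np.maximum(normalised * slope + offset, 0.0) ** power
    return graded * pivot

# Middle grey stays put; the bright pixel is pulled through the grade.
rgb = np.array([[0.18, 0.18, 0.18], [6.0, 5.5, 5.0]])
factor = luminance_matte(rgb)[:, np.newaxis]
result = rgb * (1.0 - factor) + fulcrum_cdl(rgb, power=1.3) * factor
print(result)
```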

As always, be very careful with your albedo values! An albedo that is too high relative to a physically plausible texture will throw back too much light and trigger photographic burn out with Filmic. Expose your shot for the important contexts, and grade to final.

If you need further assistance, don’t be afraid to offer up some specifics and I’ll do my best to find a solution.

Hopefully the above pipeline will help you out.

With respect,
TJS

[1] Without the support of pixel pushers such as yourself urging the developers to fix these things, they will go unfixed, despite the fact that the patch has existed for a long, long, long time. It isn’t even large, but instead we get caught in this goofy loop of stalling as they don’t see it as important without folks shouting.
[2] Blender is in dire need of advanced grading tools that are completely colour managed. The more folks experiment with grading, the more it will help to raise awareness of the shortcomings and force development attention on them. The solutions are quite straightforward, but again, without a cultural push from the keen pixel pushers, none of this will happen.

I’m gonna need some time to process everything you’ve just said, but I certainly appreciate you pointing me in the right direction. While some of the specifics are over my head at this point, it really helps me to understand the breadth of tools available for managing colors.

Most of my work is in commercial displays and trade show booths. I need to visually communicate the differences between white laminate, white paint, and white acrylic, all on a white background. So, as you can imagine, I have spent some time pulling my hair out trying to get the colors to render out in a clear way. I usually end up adjusting my materials per project to fine-tune the look, but figuring out more of the art of color grading seems like it would really help me produce more consistent renders. The idea of matching spot colors (like clients’ logos) is very appealing; making their logo look right makes clients happy.

Thanks again for the detailed answer!

Sterling.

Good news: https://developer.blender.org/rB0e59f2b256fbf81145fd26f9f37f46b07c9e54b4

You could just render the 3D elements with the Filmic colour management, then composite the images afterwards with colour management turned off.