Filmic is on EVERYTHING in the compositor (not just render layers)

Really, filmic was just implemented entirely the wrong way. It could have been a compositing node. That would have worked.

Also, I’ve just realised that something has been removed since 2.79… there was that long list of different cameras whose look you could impersonate… what happened to that?


Probably too many cameras on the market now.

These?


Cheers, tinker! That’s exactly what I was looking for!


Wait… what? I thought Filmic is for rendering final images, no further processing required. If you want to do further grading or compositing, you export EXR files. They are linear and don’t attract Filmic processing.
You only get Filmic on the non-linear file formats. Yes, it’s applied to the display as a display transfer function, so the operator can see the wide gamut of colours. I guess the issue then is that other software applies a different (wrong) transfer function?


Filmic is only good as a final image if you don’t need to do compositing in software outside Blender. And even in Blender, because it’s applied to everything, even imported images, that makes it completely useless.


I don’t want to export EXR sequences. I’m not made of hard drives.

I would like to be able to composite onto video without changing the colours of the video within Blender’s powerful compositor, in the same pass as the 3D.

I’d like to put CGI on top of white without having to do it in another program afterwards.

If filmic was a compositing node I could do all this.


But it’s a display correction; it won’t be applied to EXR exports. It only gets applied to file formats that are typically used for display, not post. Also, if you choose the Standard view transform and Look = None, then you send the uncorrected image straight to those formats. But files like JPEG are not designed to accommodate the image, and indeed a baked-in transform ruins the colours anyway.
I’m a bit confused by the problem. Shouldn’t you be using a compressed linear EXR to composite outside of Blender?
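To make the distinction concrete, here is a minimal Python sketch (using a plain sRGB-style transfer function purely for illustration, not Blender’s actual pipeline) of why baking a display transform into an 8-bit format destroys scene-referred data, while a float EXR simply keeps the values:

```python
def srgb_encode(x):
    """Approximate sRGB OETF (piecewise curve from IEC 61966-2-1)."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * (x ** (1 / 2.4)) - 0.055

def bake_to_8bit(x):
    """Clamp to display range, apply the transfer function, quantise to 8 bits."""
    display = srgb_encode(min(max(x, 0.0), 1.0))
    return round(display * 255)

# Two scene-referred highlights, two stops apart, collapse to the same
# 8-bit code value: the HDR ratio between them is gone for good.
print(bake_to_8bit(4.0), bake_to_8bit(16.0))   # both 255
# An EXR stores the floats directly, so 4.0 and 16.0 stay distinct.
```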

Yes, EXR will not save with the Filmic LUT baked in, as opposed to the other formats (if you turn on the Save As Render option). Now, if you want to do compositing in Blender and you import a plate, Blender will apply the Filmic look to your plate, which is a no-no. If you set your color management to Standard, your plate will be OK, but your EXR will not have the Filmic look, and you can’t apply the Filmic LUT only to the EXR. So you simply can’t comp in Blender if you use a plate or any images that are not 3D renders.

If you go to Nuke, Natron or Resolve, you can get the Filmic look, but because Filmic is a transform and not a color space like ACES, you will lose all the color range, because everything will be brought back from 0 to 1. I made a clip about it, and one on ACES.


No. This is false. You can change the tagged colour space in the Image Editor and other places, and Blender respected it the last time it was tested.

Also false. An additive RGB colourspace is defined by three facets:

  1. A set of primaries.
  2. A white point.
  3. Transfer function(s).

Filmic qualifies as a fully fledged colourspace given it uses BT.709 primaries, D65 white, and the normalized log2 encoding for the base log, with a set of contrast LUTs.
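For the curious, that base log is just a normalized log2 encoding. A rough Python sketch, using an illustrative range of -10 to +6.5 stops around middle grey 0.18 (the exact numbers live in the shipped OCIO configuration, so treat these as stand-ins):

```python
import math

MID_GREY = 0.18
MIN_EXP, MAX_EXP = -10.0, 6.5   # illustrative stop range, not the shipped values

def lin_to_log(x):
    """Map [MID_GREY * 2**MIN_EXP, MID_GREY * 2**MAX_EXP] onto [0, 1]."""
    x = max(x, MID_GREY * 2 ** MIN_EXP)          # avoid log of zero
    stops = math.log2(x / MID_GREY)              # exposure relative to mid grey
    return min(max((stops - MIN_EXP) / (MAX_EXP - MIN_EXP), 0.0), 1.0)

# Middle grey lands at a fixed fraction of the encoding range:
print(lin_to_log(0.18))   # 10 / 16.5, roughly 0.606
```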

This is getting old. ACES is literally nothing more than a curve and a set of primaries. It isn’t magic, and gosh does it have mountains of problems. It is a far, far, distant yell into a void given it does not qualify as a colour management system.

I know little about color management, so what is a good solution for this?
What would be the “correct” way to set something like this up?

Is the following “correct”:

  • Set color management in the Properties editor to “Filmic”
  • Place the plate (for example a PNG file) in the Compositor
  • Set the plate’s color space to sRGB in the plate’s Image node
  • Place the EXR sequence in the Compositor
  • Set the EXR sequence’s color space to “Linear”
  • Composite them together somehow
  • Use a “File Output” node and write the composite to a PNG sequence
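Conceptually, the decode-composite-encode chain that list describes looks like this minimal Python sketch (this is not Blender’s actual API; single channel, plain sRGB maths, purely for illustration):

```python
def srgb_to_linear(c):
    """sRGB decode (IEC 61966-2-1 piecewise curve)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Display encode, clamped to [0, 1]."""
    c = min(max(c, 0.0), 1.0)
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def comp_over(cg, cg_alpha, plate_srgb):
    """Decode the plate, alpha-over the linear CG, re-encode for output."""
    plate_lin = srgb_to_linear(plate_srgb)
    out_lin = cg + (1.0 - cg_alpha) * plate_lin   # premultiplied over
    return linear_to_srgb(out_lin)

# Fully transparent CG: the plate round-trips through linear untouched.
print(comp_over(0.0, 0.0, 0.5))   # approximately 0.5
```

The key point the sketch illustrates: the plate is decoded to linear before merging, and the view/output transform is applied once, at the very end.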

What if you just render with filmic (exr so that it doesn’t save any profiles) and import it into compositing software as if it were ACES? How would that look?

Please attach a blend file with a render in Filmic and a plate where the plate will not have Filmic applied to it. It doesn’t matter if you set your plate to sRGB; Blender will apply Filmic to it anyway.

Filmic qualifies as a fully fledged colourspace given it uses BT.709 primaries, D65 white, and the normalized log2 encoding for the base log, with a set of contrast LUTs.

I’m glad it has a wide gamut under the hood. Now, please provide Nuke, Resolve and Natron files with a render in Filmic that still has the full dynamic range and will not be brought back from 0 to 1.

This is getting old. ACES is literally nothing more than a curve and a set of primaries. It isn’t magic, and gosh does it have mountains of problems. It is a far, far, distant yell into a void given it does not qualify as a colour management system .

Maybe you’re right, but this is what we use in the VFX film industry and it works. It works in Blender, Nuke, Resolve, Flame etc. ACES is also the only way to get footage from professional cameras to look the same. We get footage from Alexa or Red or Sony and we convert it into ACES so that it’s all in the same colorspace. You get your footage, convert it into ACES, do your CG in ACEScg, comp in ACES (everybody is in the same happy family), and output in whatever we want (Rec 709, Rec 2020, etc.). This is not something we can do with Filmic. And that’s why I say that it’s time to say goodbye to Filmic, because it’s too limited. Filmic only works in Blender. Outside, it’s pretty much useless (unless you prove me wrong by sending me the requested files) :slight_smile:

Now please understand that I work in the high end film industry. If you do your personal projects or you do stuff that stays in Blender, happy filmic. But if you work on Hollywood films, like we do, you’re certainly not going to use Filmic.

What if you just render with filmic (exr so that it doesn’t save any profiles) and import it into compositing software as if it were ACES? How would that look?

EXRs don’t have the Filmic info in them, so you would have to use a PNG file, but then it will be reduced to sRGB and the values brought back from 0 to 1, which makes it pointless for compositing.

If the “this” is combining a plate that is a fully formed image with something that is essentially an open domain stimulus encoding like that coming out of a render, the bad news is that it’s impossible. With a hack / caveat.

If the “this” is not what is described above… skip this next pile of garbage I’m about to spew out!


It can be useful to think of the render as emulating light transport, even though it isn’t. It’s sort of a hack that leads to an open domain range of stimulus, from zero to infinity. The relationship in terms of intensities between the values can be considered uniform with respect to the energy principles built into the BxDFs etc., plus or minus all of the hacks in there!

Now consider a fully formed image. Someone typically has shifted things in extremely non-uniform ways with respect to the above “energy” ground truth. In math terms, we could say that the magic of forming an image from some stimulus is non-commutative, and by the end of such a process it’s likely irreversible. Creative negative film, unsurprisingly, had this “problem” too!
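A tiny Python sketch of that non-commutativity, using a generic sRGB-style encode purely as a stand-in for any image formation transform — adding light before forming the image is not the same as adding already-formed values:

```python
def srgb_encode(x):
    """Generic display-style transfer function (sRGB curve as a stand-in)."""
    x = min(max(x, 0.0), 1.0)
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

a, b = 0.2, 0.3
linear_first = srgb_encode(a + b)               # add stimulus, then form the image
display_add  = srgb_encode(a) + srgb_encode(b)  # add already-formed values
print(linear_first, display_add)                # the two disagree, badly
```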

So what is this hack I spoke of above?

One can “cheat” and “invert” an image such that the energy matches the output of an image formation chain. That is, if you see a fully formed image, you can say “This is sort of medium-high contrast” and invert the process of transforms to remap the image 1:1 in terms of energy. From there, it’s a simple composite and then run it back through the forward process.

There are probably a dozen folks here who have asked publicly about it, and assembling the stanza is pretty painless for those who would like to. Loosely it would be:

  1. The contrast LUT. Pick the appropriate contrast that loosely matches the formed image. This will create a “compressed” looking buffer. Log-like.
  2. Convert from Filmic Base Log to linear.
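As a rough Python sketch of step 2 (with illustrative log parameters, not the exact shipped ones), the decode is just the inverse of the normalized log2 encoding, so the round trip recovers open-domain values:

```python
import math

MID_GREY, MIN_EXP, MAX_EXP = 0.18, -10.0, 6.5   # illustrative range

def lin_to_log(x):
    """Forward: open-domain linear to normalized log2."""
    x = max(x, MID_GREY * 2 ** MIN_EXP)
    return (math.log2(x / MID_GREY) - MIN_EXP) / (MAX_EXP - MIN_EXP)

def log_to_lin(v):
    """Inverse: normalized log2 value back to open-domain linear."""
    stops = v * (MAX_EXP - MIN_EXP) + MIN_EXP
    return MID_GREY * 2 ** stops

# Round trip: a value pushed into the log domain can be pulled back to
# energy-linear, composited there, and run through the forward process again.
x = 2.5
print(log_to_lin(lin_to_log(x)))   # approximately 2.5
```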

If someone wanted to get fancy, they could try the OCIO V2 inversion of the chroma compression too.

The above requires a custom stanza that I chose not to include a decade ago, for a long list of reasons. A diligent person with a text editor can achieve success with not a tremendous amount of work editing the OCIO configuration, and even expand it to include their own looks and flourishes with some of the other valuable posts in this forum.
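As a purely hypothetical illustration of what such a stanza might look like (the colourspace name, the LUT filename, and the allocation values here are all placeholders; check everything against the configuration you are actually editing):

```yaml
colorspaces:
  - !<ColorSpace>
    name: Inverse Filmic Comp    # hypothetical name
    family: log
    bitdepth: 32f
    isdata: false
    from_reference: !<GroupTransform>
      children:
        # Invert the chosen contrast LUT (filename is a placeholder).
        - !<FileTransform> {src: your_contrast_lut.spi1d, interpolation: linear, direction: inverse}
        # Then the base log shaper over the working range (values are placeholders).
        - !<AllocationTransform> {allocation: lg2, vars: [-12.473931188, 12.526068812]}
```

Assigning such a colourspace to a loaded plate would run the transforms in the appropriate direction to pull the formed image back toward the energy domain.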

False — Blender encodes what you’ve chosen via the Display, View, and Look to all image encodings.

Blender, again due to design decisions that go back a long way, will always encode the chosen Display, View and Look. This extends way beyond Filmic, and is a pretty unfortunate decision. On the other hand, if the proper management for encoding were provided, given the general audience struggles dearly with some of these concepts, there would likely be cries of bloody murder.

That is, if you download my configuration, Blender would also bake the Display P3 transform in, or BT.1886. This is sort of a design problem in a larger scope.

The TL;DR is that when someone encodes to an image encoding, in the vast majority of cases they are wanting a fully formed image. In your case, you actually want the stimulus encoding.

Within the design constraints of Blender, there is one sole option: EXR. And even within that, it’s an unfortunate design. Further, EXR is also the only format that encodes alpha, so there’s that too!

I think you are confused.

A display is the limiting factor. There’s nothing you, or I, or anyone else can do about this. If you are encoding an image, the display will form all of the open domain stimulus into an image, whether you like it or not.

This is not a trivial problem to solve, and to date, arguably zero software has achieved this successfully. One can argue that Baselight manages to negotiate this landmine laden territory, but not a lot of folks have access to it.

Natron is pure garbage, so I can’t speak to how it handles things beyond “Don’t trust it.” Nuke can achieve things easily via an OpenColorIO node. Resolve is a problem because DCTLs are only supported in their “Studio” version, and using LUTs to convert from open domain stimulus to the closed domain that Resolve expects is less than ideal. Simply loading Filmic Base Log into Resolve has and does work successfully. This is exactly how quite a few projects including Agent, and several features, shorts, and commercials were achieved. The contrast LUTs coupled with a log-like encoding as an entry point is exactly how Resolve has been designed from the earliest film work.

It doesn’t. Not in the way you think. Don’t believe me? Go listen to the folks who run pipelines and the TDs who struggle to undo everything ACES and use it purely for encoding. No one uses the output because it simply doesn’t work. Never has.

The industry runs on Arri LogC and Arri Wide Gamut. I reckon the percentage of projects that don’t is so insignificant to be a statistical error.

I believe Chris Brejon has accumulated more than a few quotes from some of the heaviest hitters in post houses that support that claim.

It’s used, if at all, as an encoding intermediate stage, and everyone is saddled with working around the broken mess.

Just saying that what you believe ACES to be is largely smoke and mirrors. Ask any experienced pipeline tech or compositor with credits and you’ll find that the pipeline is not ACES per se, but selective encoding pieces. That’s it!

You realize Filmic ships with Renderman? Seems to me that’s one of three things it ships with, and I hate to break it to you, some feature films and short films in CGI anthologies, and commercials even used it. Some folks even insisted on it.

That’s only to say that your understanding of post production is perhaps a tad myopic. Again, how do you think the vast majority of productions orient around Arri LogC and Arri Wide Gamut if what you said were a priori fact?

I have no idea what Hollywood does though…

For someone who professes to be all pro, I’d suggest that you probably understand why log encodings dominate?

Try this… take your render and encode to 16 bit TIFF using Filmic Log Base Encoding. Now load it in another instance and set the encoding to FLEB. I wonder why it was included…
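A quick Python sketch of why that log encoding matters for integer formats: quantising to 16 bits in a log domain holds far more relative precision in the shadows than quantising linear values directly (illustrative log parameters, not the exact shipped ones):

```python
import math

MID_GREY, MIN_EXP, MAX_EXP = 0.18, -10.0, 6.5   # illustrative log range

def quant16(v):
    """Quantise a [0, 1] value to 16-bit integer codes and back."""
    return round(min(max(v, 0.0), 1.0) * 65535) / 65535

def lin_to_log(x):
    return (math.log2(max(x, 1e-12) / MID_GREY) - MIN_EXP) / (MAX_EXP - MIN_EXP)

def log_to_lin(v):
    return MID_GREY * 2 ** (v * (MAX_EXP - MIN_EXP) + MIN_EXP)

x = 0.18 * 2 ** -9    # a deep shadow, one stop above the encoding floor
err_linear = abs(quant16(x) - x) / x
err_log    = abs(log_to_lin(quant16(lin_to_log(x))) - x) / x
print(err_linear, err_log)   # the log-domain error is far smaller
```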


Perhaps I could chime in here and add a bit from the slightly different perspective of someone with color grading experience who has interacted with other professional colorists (and who, for better or worse, lives and works in Hollywood).

ACES as a final step in the color pipeline isn’t really used as widely as most CG and VFX folks seem to think. My good friend, one of the top colorists in the world, doesn’t use ACES. Unless the client specifically demands it, most of the higher-end colorists I have spoken with prefer not to grade in it. Also, ACES ODTs are not a one-click magical solution to convert a grade to anything and everything out there; care must still be taken to manually adjust the grade for various delivery formats, which negates one of ACES’ main selling points.

From a colorist’s perspective, ACES just feels weird, the wheels respond unnaturally, and overall it seems like it’s fighting you every step of the way. ACEScct is a slight improvement but still far from feeling like a natural extension of the controls.

Starting with Resolve 17, Blackmagic (perhaps sensing discontent with ACES) has come up with its own new internal color management called DaVinci YRGB Color Managed, which actually works quite well, and it’s become my personal favorite way of grading. The problem of course is that it’s internal and proprietary to Resolve. So unless Blackmagic plans to open up the format, or at the very least offer the transforms to the OCIO foundation, it’s not quite a replacement for ACES.

Anyhoo… just wanted to throw in my $.02 regarding ACES. I feel that there is a lot of misinformation about it, some of it unfortunately coming from professionals in the industry.

Please continue this thread, it’s one of the most fascinating topics that I’ve followed.


Actually, no… sorry, tinker… that list controls things like sensor size and view angle… what I was after was a list of different colour spaces, so Blender could impersonate different cameras’ colour.

Troy, you are very good at writing technical information, but you offer no solution. If you were working for me and I asked you for a Nuke file with an EXR that looks exactly like it did in Blender, with the full color range, and you gave me that answer, I would simply respond that I don’t care. I want results, not a Siggraph paper. If you can’t do it, for any technical reason, then you’ve proved my point that Filmic is useless.

Our pipeline TD set us up in ACES. (The guy is a genius, by the way. I’ve never worked with such a talented guy.) It’s awesome! We get Red Log footage, we linearize it in ACES, our renders are in ACES, Nuke is in ACES, and so is Houdini. Everything WORKS. What I see in Blender is exactly what I see in Nuke or Resolve. The client wanted Red Log back, so we gave him Red Log for the output, so that the colourist will have matching images between the non-VFX shots and the VFX shots. This is not theory; this is real-life application. We don’t have any issues with ACES.

The beauty of ACES is that it allows us to output to any format the client wants. So if the client’s colorist doesn’t want to work in ACES, we’ll give him whatever he needs. You want Rec 2020? No problem. ACES is the in-between format.

Show me the money! Prove to me that Filmic has its place in the industry. I want a Nuke file. Just that. A Nuke file and a Blender render in EXR. No more theory. Show me the money!


A token of appreciation for all the theory and experience provided in this here thread! I’m following with great interest. I used to have no knowledge of colour encoding; now I feel like I have a clue what is happening.