A few questions regarding the Filmic workflow

Thinking about what I said earlier about “pre-flight” in the printing process, in very many ways things like ACES are “pre-flight” for video.

You are mapping the render output (which, really, is a completely abstract thing …) to a specific hardware device or class of devices, to achieve a specific artistic and technical effect.

Aim for the moon, and who knows; you might reach outer space, or the clouds, or the ceiling of your room…

@Midphase @Voidium I also wouldn’t say we’re using it as a reference for what the OP is trying to achieve; it’s just a really good, well-known example of color management at its best. We’re highlighting how proper color management can still produce a stylized, “physically accurate” result.

That’s fine, except most people aren’t willing to put in the thousands of hours needed to truly achieve what they’re after. I’m not saying this is you, but I find that most people who post around here on these types of topics are looking for a 10-minute YouTube video that tells them everything they need to know, or, even better, a free add-on that does all the work for them.

My friend W. who is one of the top colorists in the world has been at it for a lifetime, and every day he’s still learning more about it and sometimes even realizing that some of the things that he was doing were wrong.

I’m not saying that your average Blender artist also needs to become a professional colorist in order to get working with ACES, but it’s a bumpy ride that will require many hours of reading, watching, and testing before a proper workflow can be achieved.

Personally I think ACES for CG is in a weird transitional period where something that should be truly transparent to the artist is not. Far too many apps are still unsure how to properly handle different color spaces in order to ensure that the final delivery has the maximum quality available. I can tell you that, as a Resolve user, ACES is not typically what I prefer to work in when I’m color grading. It has pros and cons, but for me, and for many other professional colorists, the cons of ACES aren’t worth it except in very specific scenarios. The above-mentioned friend of mine (and yes, you have seen the films he’s graded, I assure you) actually doesn’t use ACES at all unless it’s dictated by the studio, and even then he will put up a good fight.

1 Like

I’m the kind of person who likes to know the ins and outs of things in “general”: why should I push this button and not that one? Is there a second way to do the same thing? A third? A fourth?…

But once in a while I come across a topic where the more I try to learn about it, the more confused I become and the more questions pop up in every direction. In that case, I try to take a few steps back and look at it from another perspective with a different mindset: instead of trying to understand how everything works, I use what little I already know and compare it to some kind of “ideal” reference. If that gets me to roughly 50%, I take it and move on; if it’s less, it’s probably not meant for me to understand in the first place (left-brain vs. right-brain kind of logic).

What I know right now:

  • Use a format that stores high dynamic range to avoid clamping during the production phase (like .exr).
  • Use Filmic, ACES, or a similar view transform for better color/light interaction.
  • Only use sRGB for deliverables (.png, .mp4).

As for the full workflow:

Hit Render (Filmic) → apply compositor node effects (Filmic) → output .exr (16-bit) → load the image sequence into the VSE (Filmic Log? there is no “Filmic”) → apply color grading/correction/cut cut cut → output .mp4 → upload to YouTube.
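For what it’s worth, here is that Blender-side setup as a minimal Python (bpy) sketch; the output path is just a placeholder:

```python
import bpy

scene = bpy.context.scene

# View transform: Filmic (affects the display and display-referred outputs
# like PNG/MP4, not the scene-linear data written to EXR)
scene.view_settings.view_transform = 'Filmic'

# Output: half-float OpenEXR keeps the HDR values unclamped
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '16'
scene.render.image_settings.color_mode = 'RGB'
scene.render.filepath = '//renders/shot_'  # placeholder path
```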

Did I miss something? Any mistakes? Anything to improve in the workflow?

PS: btw, I’m on Linux, and most post-processing apps either aren’t supported, have lots of bugs, or don’t support the OCIO config, which is why I’m sticking to 100% Blender for everything.

1 Like

Let’s give OP the benefit of the doubt. They’ve shown they’re trying to understand, not just find out what buttons to press. Besides, you don’t get those thousands of hours in without starting somewhere.

I agree, every program seems to approach OCIO color management differently - some much better than others, I’d argue. I also agree that it might not be worth it if you’re doing straight color grading or straight CGI work; there are plenty of other ways to skin that cat. For me, ACES really shines when you’re doing complex compositing work. My studio does its fair share of indie and lower-budget feature-film VFX work, and it’s great to be able to pull in footage from different cameras and mix it with still photo elements, stock assets, and CG all within a consistent color pipeline.

1 Like

Jesus fuck no.

1 Like

Just to be clear, Filmic is a view transform; I don’t think it has an actual effect on the render itself, but I could be mistaken. I ran several tests a while back and my EXRs in Resolve didn’t look any different regardless of which look transform I was using in Blender.

Regarding high dynamic range and clamping, realize that the dynamic range afforded by delivery formats will restrict you, particularly since most delivery formats are 8-bit. Good lighting skills help here, so that you can balance the lighting in your shots and keep everything from being overly bright or overly dark. I do wish Blender and other render engines would implement some sort of scopes and histograms alongside the rendered image; they would be extremely useful for spotting problem areas.
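In the meantime you can roll a crude one yourself. Here’s a rough sketch using the OpenImageIO Python bindings (an assumption on my part that they’re installed; “render.exr” is a stand-in for your file) that prints a log-EV luminance histogram of an EXR:

```python
import numpy as np
import OpenImageIO as oiio  # assumption: OpenImageIO Python bindings available

inp = oiio.ImageInput.open("render.exr")  # stand-in for your rendered EXR
spec = inp.spec()
pixels = np.asarray(inp.read_image())     # scene-linear float data
inp.close()

rgb = pixels.reshape(spec.height, spec.width, spec.nchannels)[..., :3]
luma = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luminance weights

# Log2 (EV) bins suit unbounded scene-referred values
hist, edges = np.histogram(np.log2(np.clip(luma, 1e-6, None)), bins=16)
for count, left in zip(hist, edges):
    print(f"{left:+6.2f} EV | {'#' * int(40 * count / hist.max())}")
```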

Regarding sRGB for delivery, once again that is decided wherever you do your final output (which I suppose could be Blender). sRGB is fine for YouTube/web, but if you’re going to Blu-ray or TV streaming I would choose Rec.709.

1 Like

From what little info I’ve gathered, contrary to other file formats, .exr requires you to reapply the view transform in whichever host application you open it in; it does not “bake in” the look.

What I’ve also learned is that, when saving the final deliverable, the HDR values (0 to infinity) get mapped into the SDR (0 to 1) output through a transfer function (the river in the canyon from my very first post).
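To make that concrete, here is a tiny numpy sketch of the sRGB transfer function, showing the clipping you get if scene values go straight to SDR without a tone-mapping view transform first:

```python
import numpy as np

def linear_to_srgb(x):
    # Standard piecewise sRGB encoding; input must already be in [0, 1]
    x = np.clip(x, 0.0, 1.0)  # anything above 1.0 clips to pure white
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

scene = np.array([0.0, 0.18, 1.0, 4.0, 16.0])  # unbounded scene-referred values
print(linear_to_srgb(scene))
# -> roughly [0, 0.46, 1, 1, 1]: the 4.0 and 16.0 highlights are gone,
# which is exactly what a tone-mapping transform like Filmic is there to prevent
```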

What bothers me is that we don’t know how each app calculates things like overlays or conversions from one thing to another (on what basis?), or why making a gradient from two colors (x and y) and “expecting” color (z) in the middle gets you some other color instead.

Like I said before, the more I know, the more confusing it gets, because there are so many little details all over the place, and if you misunderstand one or two of them, the WHOLE process goes wrong.

1 Like

OK, let me be clearer. “Filmic” as a whole (not just the look itself) is a custom OCIO config, not an ACES config. That was really poor wording on my part, so I’m sorry.

1 Like

This is true - your EXRs will always be written with linear color. The Filmic transform is only applied when viewing the image in display space.

You could use the Filmic OCIO config in Resolve, I assume (I haven’t done any ACES work in Resolve, so forgive me if I’m wrong here), which would let you reapply the Filmic look in Resolve directly. We do this in Fusion when we’re using Filmic on a project. But you certainly don’t have to if you have another workflow you prefer.
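If anyone wants to do the same reapplication in their own scripts, here’s a hedged sketch with the OpenColorIO v2 Python bindings; the config path is wherever your Blender install keeps its bundled config, and the “Linear”/“sRGB”/“Filmic” names are the ones that config uses:

```python
import PyOpenColorIO as ocio  # assumption: OCIO v2 Python bindings installed

# Example path; point it at Blender's bundled config on your system
config = ocio.Config.CreateFromFile(
    "/path/to/blender/datafiles/colormanagement/config.ocio")

# Scene-linear -> the Filmic view under the sRGB display
dvt = ocio.DisplayViewTransform(src="Linear", display="sRGB", view="Filmic")
cpu = config.getProcessor(dvt).getDefaultCPUProcessor()

print(cpu.applyRGB([1.0, 0.5, 0.25]))  # one linear RGB triple, transformed
```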

I’d love to get a better insight into your pipeline. Would you mind sharing some details about how you guys deal with color management on your average project? I’m always curious to see how others are doing it.

Yeah, I’d be happy to. I know this is probably a surface level description, so feel free to ask specific questions if you have any.

We’re not doing anything overly special, to be honest. We use the default ACES 1.0.3 config and set the $OCIO environment variable on all of our systems so programs like Blender and Fusion point to the config by default. This is really helpful in Fusion because you have to set the config file by hand in every OCIO color space node if you don’t.
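As a quick sanity check on any machine, a few lines of Python (assuming the OCIO bindings are installed) will confirm which config is actually being picked up:

```python
import os
import PyOpenColorIO as ocio  # assumption: OCIO Python bindings installed

# e.g. in your shell profile: export OCIO=/shared/ocio/aces_1.0.3/config.ocio
print(os.environ.get("OCIO", "$OCIO not set"))

config = ocio.GetCurrentConfig()  # honors $OCIO when it's set
for cs in config.getColorSpaces():
    print(cs.getName())  # should list ACEScg etc. with the ACES config
```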

The general rule of thumb (for us) is that ACEScg is the working color space, so everything that touches the pipeline has to be converted to ACEScg. In Blender, color maps use the generic input texture color space, and data maps like normal or bump use lin_srgb unless otherwise specified. The nice thing is that things look really wrong in Blender if the color space isn’t right, so it’s pretty easy to tell when something isn’t configured properly.
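As an illustration (not our actual tooling), a bpy snippet along these lines can batch-assign those spaces; the name-matching heuristic is made up for the example, and the space names are the ones from the ACES 1.0.3 config:

```python
import bpy

COLOR = 'Utility - sRGB - Texture'  # color/albedo maps under ACES 1.0.3
DATA = 'Utility - Linear - sRGB'    # the lin_srgb space for data maps

for img in bpy.data.images:
    name = img.name.lower()
    # Hypothetical heuristic: treat anything that looks like a data map as lin_srgb
    if any(tag in name for tag in ('normal', 'bump', 'rough', 'disp')):
        img.colorspace_settings.name = DATA
    else:
        img.colorspace_settings.name = COLOR
```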

From there, everything gets rendered to 16-bit or 32-bit EXR files - depending on the output - and composited in Fusion before going to the video editor. Once we’re in Fusion, loaders that deal with anything that’s not a 3D render are immediately piped into an OCIO color space node and converted to ACEScg where we do all of our color manipulation. Sometimes this means converting footage from slog* or rec709, or converting still image assets from sRGB. At the end of the node tree we convert the final comp from ACEScg to sRGB, rec709, slog* or whatever the project requires.
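In OCIO terms, the whole Fusion node tree boils down to something like this (space names as they appear in the ACES 1.0.3 config):

```python
import PyOpenColorIO as ocio  # assumption: OCIO Python bindings installed

config = ocio.GetCurrentConfig()  # the ACES config, found via $OCIO

# Ingest: convert non-render sources into the ACEScg working space
ingest = config.getProcessor("Utility - sRGB - Texture",
                             "ACES - ACEScg").getDefaultCPUProcessor()

# Delivery: convert the finished comp out of ACEScg
deliver = config.getProcessor("ACES - ACEScg",
                              "Output - sRGB").getDefaultCPUProcessor()

pixel = deliver.applyRGB(ingest.applyRGB([0.5, 0.5, 0.5]))
```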

We usually use ProRes MOV or DPX when delivering to editorial - we do high-quality work but we’re not at a level where our clients have super-specific color requirements for projects. Usually, if it’s 12-bit or higher and can be read by Premiere, Final Cut, or Resolve, we’re good to go.

5 Likes