Using Filmic Blender with animations, image format?

I’m new to compositing in Blender, and I’m working on my first serious animation project. I’m also new to Filmic Blender. I have questions. Maybe simple, maybe not? I have no idea. I’d be thankful for any answers, or links to relevant tutorials, or general advice if you think there’s something I should know.

  1. I’m assuming it’s better to use logarithmic color encoding for compositing etc. Is this correct?
    Should I keep it in logarithmic encoding before color correction / compositing and then move to sRGB, or should I keep it in logarithmic encoding even in the sequencer, that is, until it’s output into a video file?

  2. A workflow question.
    I have separate Blender files for rendering the 3D objects, for compositing, and for video sequencing. It works, but feels a bit clunky at times. Is that what I should expect from working on animations, or should I somehow be able to combine e.g. the compositing and sequencing files together?

Here’s my current workflow:
I first render the 3D scene and output the result as an image sequence (logarithmic color). Then I use the compositor in a different .blend file to do color correction etc., and render the color-corrected images out as another image sequence (currently in sRGB color). Finally I use those in the video editor/sequencer to do cuts and transitions, and output the final video file.

  3. A format question
    How much color depth is required to make sure “Filmic Log Encoding Base” color space will not degrade, if I save the result out as an image or image sequence?
    In PNG I can choose 8 or 16 bit depth. 8 bits is only 256 values, which seems too small for a logarithmic scale. 16 bits gives 65 536 values; is that enough?
    In OpenEXR, I can choose between half and float. I think half goes up to about 65 000, but with reduced precision at extreme values. Is that enough?

  4. A question about Filmic/Color Management settings for rendering
    What settings and what format should I look into to render out an image sequence in logarithmic color encoding? I have figured out the following steps on my own, but I’m not sure if this is correct, and if I’m missing something.

In Color Management under the Render settings, I need to set “View” to ‘Filmic Log Encoding Base’, and “Look” to ‘None’ or one of the other options.
This affects both the 3D scene (that is, rendering 3D objects and the pixel data that is exported when a rendered image is saved), and the compositor (that is, combining render layers and the pixel data that is exported through the “File Output” node).

If I use external image textures, the chosen options also affect all those textures. Since I don’t want to screw up the roughness etc values, I’ve been setting all my textures’ Image Texture nodes to “Non-Color Data”. Everything seems to work and I don’t think it breaks anything. What about environment textures? HDRIs? Will “Non-color data” option break something somewhere?

If I want to read a “Filmic Log Encoding Base” image sequence into the video sequencer, I need to choose that as the Sequencer color space from the dropdown menu in Color Management. Then the “Look” option I’ve chosen under the “Render” panel also affects the sequencer and its output.
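In case it’s clearer, here’s my understanding of those settings expressed as a small Python sketch (the ‘Filmic Log Encoding Base’ names come from the Filmic config I installed, and the exact property and colorspace names may differ between Blender versions):

```python
import bpy

scene = bpy.context.scene

# View and Look under Render > Color Management; these are baked in
# only when saving to display referred formats.
scene.view_settings.view_transform = 'Filmic Log Encoding Base'
scene.view_settings.look = 'None'

# The "Sequencer" colour space dropdown in Color Management.
scene.sequencer_colorspace_settings.name = 'Filmic Log Encoding Base'

# Data textures (roughness, normals, ...) should stay out of the
# transform chain; in the UI this is the "Non-Color Data" option.
# bpy.data.images["roughness.png"].colorspace_settings.name = 'Non-Color'
```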

Did I miss anything?


TL;DR Filmic Blender seemed simple, then I tried rendering animations and now I have no idea what I should do for the image sequences between rendering, compositing and sequencer.

Thanks for your time!

[PART ONE OF TWO DUE TO BA LIMITS]

Some terrific questions here. Hopefully I can do them justice…

If you are using Blender, the actual encode is scene referred linear, the view just looks log. That is, the underlying data is still scene referred linear zero to infinity.

This means that it doesn’t technically matter what the view is set to as it will merely adjust the data it gets fed. The view is a non-destructive set of “magic glasses” that lets you view your work with aesthetic or creative tool views. The view only becomes “baked in” when you save to a display referred format such as TIFF.

Compositing is formulas, and formulas move around data. When it comes to visual energy data for compositing, you absolutely must always use linearized data or else things like blending, overs, blurs, and every other operation will be mangled with nonlinear brokenness.
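To make that concrete with a toy sketch (plain Python, nothing Blender-specific): a one stop exposure increase is a simple multiply by 2.0 on scene referred linear data; run the same multiply on display encoded values and you end up with a very different amount of light.

```python
# Toy illustration: "one stop up" means "double the light".
def srgb_encode(x):
    # sRGB transfer function (piecewise form)
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(y):
    return y / 12.92 if y <= 0.04045 else ((y + 0.055) / 1.055) ** 2.4

middle_grey = 0.18                            # scene referred linear

correct = srgb_encode(middle_grey * 2.0)      # ~0.63: exactly one stop brighter
broken = srgb_encode(middle_grey) * 2.0       # ~0.92 encoded...
print(srgb_decode(broken) / middle_grey)      # ...which is ~4.6x the light, not 2x
```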

So the TL;DR is that the view doesn’t really matter, and that you should always use scene referred linear data during compositing. If you are saving files during your pipeline, always save to EXR and you won’t have an issue. The view does of course matter aesthetically during lighting, grading, compositing, etc., as your lighting ratios will be determined likely from what you are being presented with in your View.

General principle is to only grade after everything is done. Don’t grade during compositing. Etc.

That is, grading should be the absolutely last thing you do, after you have all of your renders and everything in order and the piece is complete.

Within Blender, there’s plenty of nuance to be aware of. For example, the VSE is hacked to use “its own” reference space, and that reference space happens to be nonlinear by default. That means that all of your dissolves and overs will be broken as per the rule outlined above. You can set the VSE’s reference space to linear, but then the curves and time curves etc. are also linear, and that means you have to understand the implications of using a linear set of values across time. Phew… sound confused? Welcome to the nuances of colour.

The answer to your question regarding grading is that it will depend on your application. If you are using Blender, I’d strongly encourage you to master the ASC-CDL variant of the colour balance node. It’s the node that defaults to Lift, Gamma, Gain, but can be changed to the CDL’s slope, offset, and power. If you do your grading in the compositor, you won’t be degrading your image via the potential 8 bit code paths within the VSE.
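For reference, the CDL maths behind that node is refreshingly simple; a minimal per-channel sketch (the parameter values here are made up):

```python
def cdl(value, slope=1.0, offset=0.0, power=1.0):
    """ASC-CDL per-channel transform: out = (in * slope + offset) ** power."""
    out = value * slope + offset
    return max(out, 0.0) ** power   # clamp negatives before the power function

# Made-up example: a slight lift of the shadows and a touch more contrast.
print(cdl(0.18, slope=1.1, offset=0.02, power=1.05))
```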

If you are taking your work to the free version of Davinci Resolve or such, it might be wise to flip your final renders to the Filmic Log Encoding Base in a 16 bit TIFF for optimal quality. 8 bit is a wee bit too shallow to encode the data.

A bit clunky, but nicely firewalled I’d say. I’d add one for final grade too.

For a few projects I’ve taken to broadcast I’ve used a similar approach. The trick to keeping things manageable is to use one directory per shot, with subdirectories for the versions of the shots. Render out to those sub-subdirectories for passes, versions, etc.; just keep the exact same frame count and positions. This makes it easy to load up the sequences and test the assemblies from within the VSE, as you can simply slug in the new frames to your edit as you go.
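A hypothetical sketch of what I mean, with placeholder shot, version, and pass names:

```python
import os

# Hypothetical layout; shot, version and pass names are placeholders.
root = "/project/shots"
shot, version, render_pass = "sh010", "v003", "beauty"

out_dir = os.path.join(root, shot, version, render_pass)
frame_path = os.path.join(out_dir, f"{shot}_{version}_{render_pass}_####.exr")
print(frame_path)   # /project/shots/sh010/v003/beauty/sh010_v003_beauty_####.exr
```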

There’s more here if you are interested, but likely more well suited for an email as opposed to this forum.

If you stick to EXR all the way up to your final output, you are going to find all of this much easier. Even 16 bit (half float) EXR if disk storage is at a premium.
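If it helps, the relevant output settings via Python look roughly like this (a sketch; property names are from current Blender and the codec choice is just one sane default):

```python
import bpy

settings = bpy.context.scene.render.image_settings

settings.file_format = 'OPEN_EXR_MULTILAYER'   # or 'OPEN_EXR' for single layer files
settings.color_depth = '16'                    # half float; '32' for full float
settings.exr_codec = 'ZIP'                     # lossless compression

bpy.context.scene.render.filepath = "//renders/shot_"   # frames land beside the .blend
```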

16 bits per channel TIFFs should encode Filmic Log nearly perceptually losslessly if you need to go to a display referred output encoding. Again, within your pipeline, always stick to EXR.

In an ideal world, display referred imagery is encoded to a DPX as it is extremely efficient with log-like encodes, including control of bit depth down to the bit. The issue is that DPX encoding in Blender is absolutely broken, so it is a no-go sadly. Hopefully this can be fixed in the future.

<WARNING, INFLAMMATORY OPINION>
PNG is a worthless format. Never use it.

I believe all display referred encodings are stored with the View baked in. A 16 bit TIFF is probably the most optimal choice for highest quality display referred output.

32 bit float is essentially exactly what the internal Blender buffers use in the compositor and the renderer. You are at optimal quality with an EXR.

Depending on your need, if you must output to a display referred encoding and are feeding software that can handle 16 bit etc., I’d suggest TIFFs. The Log Encoding Base will contain the most data, but you would have to approximate the contrast curves on your own with a manual curve. It should be roughly a gentle S shape, running from lower left to upper right and covering the full range of data. If your software supports LUT transforms, you can use the actual Look LUT from the luts directory and get 1:1.

As above, when using a View based system, the View has no impact on the data itself internally. It is non-destructive. If you are saving to a display referred format such as TIFF or JPEG, Blender by default “bakes” the View and the Look into the data. If you save as EXR, it does not matter at all what your View or Look is set to, as Blender will not colour manage the EXR and dump the raw data buffer from the render to disk. This means if you load it again from disk, you must set the (godawful) “View as Render” toggle in the UV Image Editor. The colour space should be set to Linear by default, which is correct. By toggling the (godawful) “View as Render” toggle, you will see your disk loaded image displayed through the View and Look transforms selected.
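In Python terms, loading one of those saved EXRs back and viewing it through the transforms looks roughly like this (a sketch; the path is hypothetical and property names assume a reasonably current Blender):

```python
import bpy

# Load a saved EXR back and display it through the View/Look transforms.
img = bpy.data.images.load("//comp/shot_0001.exr")   # hypothetical path
img.colorspace_settings.name = 'Linear'              # EXR data is scene referred linear
img.use_view_as_render = True                        # the "View as Render" toggle
```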

Looks like you are well on your way to understanding the whole system. If you are storing data, make sure you set the data as “Non Colour Data” and again, save as an EXR. The EXR format is well suited for data storage of all sorts. The “Non Colour Data” means that the data will completely avoid the OCIO transform chain, as it should.

If you are trapped using some display referred encoded formats, again make sure they are set to “Non Colour Data” which will make sure they are treated as data and keep them out of the colour transform chain. There are plenty of nuances here, as I’m sure you have discovered already.

[CONTINUED IN PART II DUE TO BA LIMITS]

This is correct. If you must use the VSE, you’ll find that your blends and overs will be broken. To illustrate this, design a blurry star-like shape with full transparency behind it. The star should be a fully saturated and intense BT.709 / sRGB red colour. That is, intensity 1.0 pure red in the display referred encoded file. Use a blur to blur the star slightly so that you can still see the shape, but the blurry edges extend out well past the original form. Now make a constant background of BT.709 / sRGB cyan, or full intensity blue and green. Perform an over operation with the red star over the cyan background. See that nasty fringing? That’s the wonders of doing manipulations on a nonlinear reference.
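If you’d rather see the arithmetic than build the scene, here is a toy sketch of a single half-covered edge pixel from that star, composited once on the encoded values and once on linear light (a plain 2.2 gamma stands in for the sRGB transfer here):

```python
# Toy sketch: one half-covered edge pixel of the blurred red star,
# composited over the cyan background.
def decode(v):
    return v ** 2.2          # display encoded -> linear light (2.2 gamma stand-in)

def encode(v):
    return v ** (1 / 2.2)    # linear light -> display encoded

red_srgb = (1.0, 0.0, 0.0)   # fully saturated display referred red
cyan_srgb = (0.0, 1.0, 1.0)
alpha = 0.5                  # blurry edge, half coverage

# Broken: over performed directly on the nonlinear encoded values.
over_nonlinear = [f * alpha + b * (1 - alpha) for f, b in zip(red_srgb, cyan_srgb)]

# Correct: decode to linear, over, then re-encode for display.
over_linear = [encode(decode(f) * alpha + decode(b) * (1 - alpha))
               for f, b in zip(red_srgb, cyan_srgb)]

print(over_nonlinear)   # [0.5, 0.5, 0.5] -- the dark fringe
print(over_linear)      # roughly [0.73, 0.73, 0.73]
```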
There is a summary on the initial slides I did up for a local Meetup with Mike Pan a while back located here. Be sure to hit the “Present” button as there are animations and some of the text / animations won’t be clear if not animated. Use space or click to advance.

That said, assuming you have no manipulations and only hard cuts, with no dissolves etc., you can likely use the VSE with a minimum degree of degradation, as you have already discovered. If the degradation bothers you, you’ll have to design all your cuts and format your work into “reels” and manually composite them in the compositor, where a 32 bit scene referred linear reference space is always used.

Filmic was designed to not only be a useful tool, but also one that hopefully opens folks’ eyes to colour. Don’t blame Filmic! These issues you are facing were already there; you just possibly never saw them quite as clearly. With enough attention, hopefully the colour issues within Blender can be fixed. It’s tricky stuff to work with, let alone solve design issues in an application.

Please keep posting quality questions like this. Post the stupid ones too, as I’m sure they are helpful to not only yourself, but to many others.

With respect,
TJS


OpenEXR is the format that you should always use for all “intermediate files,” up to and including your “final cut.”

(Who cares how big the files are: [external] disk drives are cheap nowadays.)

From this, you then produce the “deliverables” in whatever format you require. Only then do you worry about “gamma,” or “encoding,” or “lossy vs. lossless,” “frame size,” and so on. In each case, you are always drawing from the same, unchanging OpenEXR master, and you are specifically addressing the needs of “that deliverable.” The same read-only master is used as the data-source of each and every Blender job that produces any particular deliverable.
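A sketch of what one such “deliverable” job might set in Blender, drawing from the EXR master and choosing the delivery encode only at this stage (the H.264/MP4 choice and the property names are merely an illustrative example):

```python
import bpy

scene = bpy.context.scene

# The EXR master on disk never changes; only the delivery encode does.
# Example deliverable: an H.264 MP4 for web review.
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'
scene.render.ffmpeg.codec = 'H264'
scene.render.ffmpeg.constant_rate_factor = 'HIGH'   # quality-based encode
scene.render.filepath = "//deliverables/web_review_"
```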

OpenEXR was originally developed by Industrial Light & Magic for its internal purposes, then donated to the industry. It is a high-resolution, linear, lossless, floating-point format that is (of course …) widely understood by industry-standard tools.

Then, the Blender Foundation(!) extended this to create MultiLayer OpenEXR, which embeds many different named “layers” of information into a single file. This was a very useful improvement that was also subsequently accepted and supported by the industry. (A happy circumstance where the Open Source community “gave back.”)


Thanks for the in-depth replies, troy_s!
And thanks sundialsvc4, yeah, I’m gonna stick with OpenEXR for this stuff now.

I don’t blame the tool at all, but I was surprised that I came across so many things I hadn’t considered before. Some of them were very, very simple stuff… Like, for a time there I saved my intermediary files not just as PNGs, but 8-bit PNGs… “Why is there banding in this?” Ugh! XD
I like learning, but this project has dragged on for two weeks longer than I thought, already.

I’ll give a short defense of PNGs and other heresies: I’m used to them because I mostly do game stuff, and I use PNG for textures. The engines pack it further in various ways, so I’ll keep using it for game stuff. And here’s even more heresy-speak: I’m rather familiar with the Photoshop-style layer mixes and the math behind those, so I regularly use e.g. Multiply, Add and other math stuff under different nodes like Power. I use them to composite the texture / source files into how I want them to be, and in game engine stuff in Unity, for things like glowing particles and so on. I’m not using them to combine rendered images or passes together!

Anyway, yeah, it seems using PNGs as an intermediary format for Blender compositing etc. was a bad idea. Not going to do that again. I had already moved to OpenEXR because I found that format useful for other compositing stuff. Happy accident there! I had also already started using a folder system similar to what you described, with each render and each pass in its own folder. My passes are super simple so far, just an object and a background behind it. I’m going to try compositing some smoke stuff over & around it next.

I plan to do several fades. My awesome student subscription to Creative Cloud (20 bucks for EVERYTHING Adobe… drool) is over as of today, so I can’t use Premiere / After Effects without coughing up the extra money. It’s 26 € a month to have either Premiere or After Effects, and I’ve used them enough to re-do the simple cuts there, but if the quality difference isn’t drastic, I think I’ll stick with Blender.

When you mentioned that to do fades properly I’d have to do “reels”, do you mean basically a different image sequence where I use the compositor to do a fade by using… I think an Alpha Over node to mix a black color and the source sequence over time? Since I already have the timings for the cuts, it shouldn’t be that hard, but it would have to come after color correction, right? In a fade-to-black, the black is black, and not color corrected black, right?

This is a lot to take in at once, so I’ll be digesting this slowly, and looking back into this and re-reading your answers. Thanks for now!

The largest gripe against PNG that makes it a showstopper is the fact that it uses unassociated alpha. Complete abortion there.
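To spell out what associated versus unassociated means, a toy sketch (plain Python, made-up values):

```python
# Toy sketch: a pixel that emits pure red light but only covers half
# the pixel area.
linear_red = (1.0, 0.0, 0.0)
alpha = 0.5

# Associated (premultiplied) alpha: RGB already carries coverage, and
# emission without coverage (glows, fire, flares) is representable.
associated = tuple(c * alpha for c in linear_red) + (alpha,)    # (0.5, 0.0, 0.0, 0.5)

# Unassociated (straight) alpha, as PNG stores it: RGB is the "full"
# colour and coverage lives only in alpha, so RGB is meaningless
# wherever alpha is zero and emission-only pixels cannot exist.
unassociated = linear_red + (alpha,)                            # (1.0, 0.0, 0.0, 0.5)

# The over operation on associated data is a single multiply-add per channel:
background = (0.0, 0.0, 1.0)
over = tuple(fg + bg * (1.0 - alpha)
             for fg, bg in zip(associated[:3], background))
print(over)   # (0.5, 0.0, 0.5)
```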

Once your edit is done, this is relatively trivial. Congratulations on following a solid pipe where you finish your editorial before concerning with these bits.

Way back when, when we old folks cut on film, we had to arrange our workprint into reels. This essentially meant dividing the workprint up so that shots involved in dissolves and other effects were sorted onto reels that didn’t overlap where those dissolves etc. had to be placed.

The same theory works rather fine for Blender. Take the cuts and organize them into as few layers as you can. For example, where shot A dissolves to B, which dissolves to C, you would need a minimum of two reels. The first reel would contain A and C, with black slug between the two to hold the timing. The second reel holds B, with black slug at the head and tail that positions it exactly where the dissolve lands. That is, A and C live on a single layer, with B on an isolated layer. Render out A and C’s reel to EXR, in full, with black slugs. Render out B’s reel with black slugs as well. Note your ins and outs for the dissolves using markers. These can then be exported or loaded into the compositor timeline. Both reels contain exactly the same number of frames. For extra points, drop a test pattern bloop single image frame with a test tone for a frame one or two seconds in of blackness, and one or two seconds in from the tail. You will thank yourself later when you are checking sync on codec audio muxings, or colour encodes in the various codecs.

Load up the two reels as image sequences. You can dissolve now simply by using the mix node and keyframes. Note that everything here is linear, including the curve. That essentially means that if you are using Filmic, your curve should spend 50% of the time getting to mix value of 0.18 (middle grey) and the remaining 50% getting to 1.0 factor.
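If you want to rough that factor curve in with keyframes or a driver, something like this toy power function gets you close (the 2.47 exponent is just a simple approximation that passes through 0.18 at the halfway mark; it is not the actual Filmic curve):

```python
# Toy approximation: shape the Mix node factor so a dissolve over
# linear data reads as roughly even in time.
def dissolve_factor(t, exponent=2.47):
    """t runs 0..1 across the dissolve; returns the Mix node Fac value."""
    t = min(max(t, 0.0), 1.0)
    return t ** exponent

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, round(dissolve_factor(t), 3))   # 0.5 -> ~0.18, 1.0 -> 1.0
```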

You could try to hack this via the VSE; however, it is also possible to animate your per-shot grades in the compositor and nail it all down in a conform / grading pass on your reels. The added benefit is a full 32 bit float code path and a top quality scene referred linear workflow.

I have never come across this thing before, but if I understood this correctly… again, this is used constantly in games. How else can you save Glossiness as the alpha channel of the normal map? The PNGs are used because they are just containers used to store four grayscale values. Sometimes these correspond to RGB or RGBA color values; sometimes not.

If I’ve understood it right, this is done because in some cases it saves the time it takes for a GPU to load a file, and in games that can be significant.

> Once your edit is done, this is relatively trivial. Congratulations on following a solid pipe where you finish your editorial before concerning with these bits.

> The same theory works rather fine for Blender. Take the cuts and organize them into as few layers as you can. For example, where shot A dissolves to B, which dissolves to C, you would need a minimum of two reels. […]

I think I got it. In my case, one of the fades would be to white, so on the A-and-C reel, it’d be something like A-black-C-white, while the other reel would be black-B-black|white-D. The black|white part on B-and-D reel would correspond to C on the A-and-C reel. It’d start black, to make the B-to-black-to-C fade work, and then be switched out into white for the C-to-white-to-D fade.

> Note that everything here is linear, including the curve. That essentially means that if you are using Filmic, your curve should spend 50% of the time getting to mix value of 0.18 (middle grey) and the remaining 50% getting to 1.0 factor.

… Right, the 0.18 thing. Yeah, I can see why linear space isn’t optimal for this sort of stuff. :G

Thanks again!

I was getting a weird problem and wasn’t having any luck googling for the answer (but it seems to have solved itself for now).

I have rendered out a scene into EXR files. I’m combining background and foreground pass. I’m saving the result out as another EXR file.

When I open this second EXR file in another Blender scene, the result is in grayscale.

I did SOMETHING in the source file and the problem no longer happens, but I don’t know what it was. I think I changed something in the color management options, but as far as I know, the only option there that could change the image to grayscale would be the grayscale option for Look, and I didn’t have that selected.

One of my Output nodes, which I’ll call ERR, saves grayscale images.
If I copy this node, the copy will not save grayscale images.
If I change the target folder of ERR, it will save new grayscale images to the new folder.
If I change the file format of ERR, it will save grayscale images in the new format.

The ERR node ignores color management settings, at least sometimes. Changes in exposure, gamma, and color space (Linear, sRGB, anything) didn’t seem to have any effect when I first messed with them, but later started working.

I’m going to assume this is an actual bug and not a case of me not finding which checkbox I mistakenly clicked.

Thanks for your help! I finished the project and posted it in the Finished Projects section. Even with the advice, I struggled a lot. I’m more or less happy with the end result, but there were a lot of problems, especially with technical stuff like video formats, final resolution, etc.