(leandrooliveira) #1

Hi everyone.

I noticed that Filmic is included in version 2.79. In compositing, what is the workflow when I work with an sRGB image? Sometimes I have to composite an EXR render into a real photo (an sRGB image). When I turn on Filmic, the EXR is OK, but the photo becomes washed out. Can I use a Gamma node to correct the photo? If yes, which value? Because 2.2 and 0.4545 don’t work…


(troy_s) #2

Welcome to a bit of a rabbit hole here…

Technically a photo begins life as a series of values that could be modelled under a scene referred system. When you take the photo and download it as a JPEG or such, it is labelled “sRGB” due to the colour of the primary lights the encoded values are referencing, but the actual intensity curve mapping is not sRGB; it is a secret sauce camera curve that, unless you are using SLog, VLog, or some other log, is unrecoverable.

So what is a poor pixel pusher to do?

The answer is a little tricky. If you simply want the display referred output from Filmic to match the “white” values in the original JPEG, you could cheat and identify the colour space of the image as “Filmic Log Encoding”. This won’t be terribly accurate, but it should result in your values ending up close-ish to what you saw in the JPEG.

There are a few other options to try and get a closer set of intensity values that could work better with Filmic, but they are all hacks and guesses; the information you need is lost forever. I can go into them if you are interested, but I will leave it at this for now.

(leandrooliveira) #3

Thanks, troy_s. I used a Gamma node after the photo. I tested some values and chose 1.2. For now it’s OK, but I hope this workflow will be better in the future. In Maya, I used to put a gamma node directly on the texture, doing the de-gamma with a value of 0.4545, the inverse of 2.2.

(troy_s) #4

This is broken logic, as power functions do not transfer values into the scene referred domain. It is a bit of a misunderstanding of the scene referred linear workflow.

There is no real way to improve a workflow that includes broken assets. The best we can do is massage them into something almost usable, where they must absolutely be used.
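To see why a single Gamma node can’t fix this, here is a small Python sketch (my own illustration, not anything from Blender): even for a genuinely sRGB-encoded image, the correct decode is the piecewise sRGB transfer function, not a pure 2.2 power, and a camera JPEG uses neither of those curves.

```python
# My own illustration (not Blender code): compare the official sRGB
# decode against a naive power function.

def srgb_decode(v):
    """Piecewise sRGB electro-optical decode (IEC 61966-2-1)."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def naive_power_decode(v, gamma=2.2):
    """Roughly what a Gamma node set to 2.2 does to a value."""
    return v ** gamma

for code in (0.02, 0.2, 0.5, 0.9):
    print(code, srgb_decode(code), naive_power_decode(code))
```

The two disagree everywhere, most visibly in the shadows; and neither one recovers the scene values behind a camera JPEG, because the camera’s curve was never sRGB to begin with.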

(leandrooliveira) #5

Hi guys and troy_s.

I’m bringing this thread back because I haven’t found a good workflow for working with Filmic in the Compositor. Besides the sRGB photo issue, I noticed that some passes are affected as well (AO and Mist, for example). The intensity, especially of the white values, looks dull when Filmic is enabled.

I’d like to hear how you guys deal with this.

Thanks in advance.

(troy_s) #6


Your idea about what “white” is needs a little exploration. There are plenty of threads here, on Blender Stack Exchange, and elsewhere on the net. I’d search for “scene referred” and “albedo” to start.

If you “didn’t find a good workflow to work with Filmic”, your workflow is busted and needs to be rethought in terms of how scenes and energy work. In the longer term, if you begin to understand how your current mental model is broken, you’ll see much better results.

Good luck…

A few points that you need to accept:

  1. “De-gammaing” means removing the transfer function on a nonlinear image. In 99% of cases, applying a power function on an image texture isn’t going to linearise it for use in a 3D application.
  2. Understanding what an albedo value is is vitally important.
  3. Understanding that a scene’s ratios of light energy go from zero to infinity is important.
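A tiny numeric sketch of that last point (the numbers are my own, purely illustrative): a diffuse surface reflects roughly albedo times incident light, and the incident light has no upper bound, so scene referred values routinely exceed the display’s [0, 1] range.

```python
# Illustrative numbers only: reflected light ~= albedo * incident light.
albedo_paper = 0.65       # plausible printer-paper reflectance (a guess)
dim_light = 0.5           # arbitrary scene-referred light intensities
bright_light = 40.0

reflected_dim = albedo_paper * dim_light        # lands inside [0, 1]
reflected_bright = albedo_paper * bright_light  # far above 1.0

# A display can only show [0, 1]; naive clipping throws away the scene's
# ratios, which is what a view transform such as Filmic exists to manage.
clipped = min(reflected_bright, 1.0)
print(reflected_dim, reflected_bright, clipped)
```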

(leandrooliveira) #7

Thanks, troy_s.
I’ll start my research. When I was a Maya user, the workflow was a bit different.

But thanks a lot for pointing the way for me.

(troy_s) #8

It will be easier to help you out if you do a small bit of research into both “scene referred” and “albedo” first.

The main thing to understand is that reflectance values are linear ratios from 0.0 to 100.0%. If you are designing physically plausible surfaces, those values will never be 0.0 (black hole) nor 1.0 (a physically impossible mirror).

Given that they are encoded linearly, and also given that most people are trying to use photographs to make their textures, it requires a bit of mental hopping to appreciate the nuance of the complexity.

If you have a single light source, and somehow manage to block out all non-diffuse light reflections in the photo via polarizing techniques, and expose properly, and use the unprocessed camera data, you have a hope of getting a decent albedo ratio out of a texture image. Sadly, online, people aren’t aiming for “data” in a majority of cases, and apply aesthetic twists to the photo.

The easiest way to think of these “aesthetic tweaks” is as adjusting the data away from the linear scene reflectance values to something more nonlinear and tweaked. This forms a sort of knot in the encoding that, without instructions, cannot be undone. If I tie a simple square knot in a piece of rope, for example, and then put more knots on top of it, I can’t say “just untie it as you would a square knot” and expect you to undo it. The same goes for the data; it will typically end up bent.

Even if the data isn’t mangled too badly, you can also see that the range of a photo used for a texture frequently covers a wide chunk of the code values in the image, including 0.0 and 1.0, which are those unlikely values. In addition, the “proper exposure” a texture is shot at is frequently the wrong albedo level for the actual surface material. A good example here would be objects that reflect very little light, like asphalt. How would we photograph it? It would need to be very low albedo, but if we expose it as such, the image would contain very little data. Likewise, if we overexpose it to capture the data, we would need to carefully scale the image back down to the proper albedo level for asphalt.
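That last step, scaling an over-exposed texture back down to a plausible albedo, can be sketched like this (the texture values and the 0.12 “dark asphalt” target are my own rough guesses, not measured data):

```python
# Hypothetical numbers: scale an over-exposed (but linear) texture so
# its mean matches a target albedo for the material.
tex = [0.45, 0.60, 0.75, 0.55]   # over-exposed linear texture values
target_albedo = 0.12             # rough "dark asphalt" guess

mean = sum(tex) / len(tex)
scaled = [v * (target_albedo / mean) for v in tex]

# The relative detail (ratios between texels) survives; only the overall
# level moves down to something physically plausible for the material.
print(scaled, sum(scaled) / len(scaled))
```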

Phew. Quite a rabbit hole, and that doesn’t even begin to cover the larger mental hurdle of understanding the difference between scene referred and display referred data ranges.

If you can, try to research your particular issue more closely, and report back with a bit more information and context, and I’m sure the folks around here can help you out.

(3rr0r) #9

The Filmic workflow is quite simple theoretically, but can be a bit tricky in practice because of the correct-albedo problem. The idea is that you need the correct combination of the albedos of all the materials in the scene and the light intensities in order to get correct light behaviour. That way you won’t get dull whites. In fact, almost all albedo textures you can think of should be darker than you imagine: about the brightest thing you can render is pure white snow, which has an albedo of about 0.85 and looks quite grey-ish as a texture. Printer paper, for example, is 0.6 to 0.7. Then crank up the light intensities. And you probably want to paint your textures in a paint program with the Filmic OCIO config enabled, I guess.
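The “keep albedos plausible and crank the lights” idea works because the view transform compresses a wide scene range into the display range. As a rough sketch of the principle in Python (a generic log2 shaper of my own; the stop range and middle-grey pivot are assumptions, not Blender’s actual Filmic numbers):

```python
import math

# A generic log2 shaper -- NOT Blender's exact Filmic curve, just the
# principle: map linear scene values across a wide range of stops around
# middle grey (0.18) into a 0..1 encoding that a contrast curve can then
# shape for the display. The stop range below is an assumption.
def log2_shaper(x, mid_grey=0.18, min_stops=-10.0, max_stops=6.5):
    stops = math.log2(max(x, 1e-10) / mid_grey)
    t = (stops - min_stops) / (max_stops - min_stops)
    return min(max(t, 0.0), 1.0)

print(log2_shaper(0.18))   # middle grey lands partway up the range
print(log2_shaper(16.0))   # a very hot scene value still fits below 1.0
```

Because bright scene values are compressed rather than clipped, a surface with a “correct” albedo lit by an intense light still renders with a convincing, non-dull white.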
Troy still isn’t tired of explaining the principles to people, though :slight_smile:
I’ve tried to explain it from a user perspective. Am I correct, Troy?