Technically a photo begins life as a series of values that could be modelled under a scene-referred system. When you take the photo and download it as a JPEG or such, it is labelled “sRGB” because of the colour of the primary lights the encoded values reference, but the actual intensity mapping is not the sRGB transfer function; it is a secret-sauce camera curve that, unless you are shooting S-Log, V-Log, or some other log encoding, is unrecoverable.
So what is a poor pixel pusher to do?
The answer is a little tricky. If you simply want the display referred output from Filmic to match the “white” values in the original JPEG, you could cheat and identify the colour space of the image as “Filmic Log Encoding”. This won’t be terribly accurate, but it should result in your values ending up close-ish to what you saw in the JPEG.
There are a few other options to try and get a closer set of intensity values that could work better with Filmic, but they are all hacks and guesses; the information you need is lost forever. I can go into them if you are interested, but I will leave it at this for now.
Thanks, troy_s. I used a Gamma node after the photo. I tested some values and I chose 1.2. For now it’s OK, but I hope this workflow gets better in the future. In Maya, I used a gamma node directly on the texture, doing the de-gamma with a value of 0.4545, the inverse of 2.2.
I’m bringing this thread back because I didn’t find a good workflow for working with Filmic in the Compositor. Besides the sRGB photo issue, I noticed that some passes are affected as well (AO and Mist, for example). The intensity, especially of the white values, is dull when Filmic is enabled.
I’d like to hear how you guys deal with this.
Your idea about what “white” is needs a little exploration. There are plenty of threads here, on Blender Stack Exchange, and elsewhere on the net. I’d search for “scene referred” and “albedo” to start.
If you “didn’t find a good workflow to work with Filmic”, your workflow is busted up and needs to be rethought in terms of how scenes and energy work. In the longer term, if you begin to understand how your current mental model is broken, you’ll see much better results.
A few points that you need to accept:
“De-gammaing” means removing the transfer function on a nonlinear image. In 99% of cases, applying a power function on an image texture isn’t going to linearise it for use in a 3D application.
Understanding what an albedo value is is vitally important.
Understanding that a scene’s ratios of light energy go from zero to infinity is important.
It will be easier to help you out if you do a small bit of research into both “scene referred” and “albedo” first.
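To make the first point concrete, here is a minimal Python sketch contrasting the actual piecewise sRGB decode (per IEC 61966-2-1) with the naive single power function a “gamma node” applies. The two diverge most in the shadows, and neither one will undo a camera’s secret-sauce curve:

```python
# Sketch: why a plain power ("de-gamma") isn't the same as undoing the
# sRGB transfer function. Illustrative only; constants are from the sRGB spec.

def srgb_to_linear(v):
    """Invert the piecewise sRGB encoding (IEC 61966-2-1)."""
    if v <= 0.04045:
        return v / 12.92                      # linear toe segment
    return ((v + 0.055) / 1.055) ** 2.4       # power segment with offset

def naive_degamma(v, gamma=2.2):
    """The common 'gamma node' hack: a single power function."""
    return v ** gamma

# Compare the two across a few code values; the gap is largest in the toe:
for code in (0.02, 0.2, 0.5, 0.9):
    print(code, srgb_to_linear(code), naive_degamma(code))
```

Near mid-grey the two are close, which is why the hack looks plausible, but the shadow values come out very differently.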
The main thing to understand is that reflectance values are linear ratios from 0.0 to 100.0%. If you are designing physically plausible surfaces, those values will never be exactly 0.0 (a black hole) nor 1.0 (a physically impossible perfect mirror).
Given that they are encoded linearly, and also given that most people are trying to use photographs to make their textures, it requires a bit of mental gymnastics to appreciate the nuance involved.
If you have a single light source, and somehow manage to block out all non-diffuse light reflections in the photo via polarizing techniques, and expose properly, and use the unprocessed camera data, you have a hope of getting a decent albedo ratio out of a texture image. Sadly, online, people aren’t aiming for “data” in a majority of cases, and apply aesthetic twists to the photo.
The easiest way to think of these “aesthetic tweaks” is as adjusting the data away from the linear scene reflectance values to something more nonlinear and tweaked. This forms a sort of knot in the encoding that, without instructions, cannot be undone. If I tie a simple square knot in a piece of rope, then pile more knots on top of it, I can’t say “just undo it as you would a square knot” and expect you to untie it. The same goes for the data; it will typically end up bent.
Even if the data isn’t mangled too badly, you can also see that the range of a photo for a texture frequently covers a wide chunk of the code values in the image, including 0.0 and 1.0, which are those unlikely values. In addition to this, the “proper exposure” that a texture is taken at frequently is the wrong albedo level for an actual surface material. A good example here would be objects that reflect very little light, like asphalt. How would we photograph it? It would need to be very low albedo, but if we expose it as such, the image would have very little data. Likewise, if we over expose it to capture the data, we would need to carefully scale that image back down to the proper albedo level for asphalt.
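The asphalt example can be sketched as a simple uniform rescale of linearised data; the pixel values and the 0.04 target albedo below are illustrative assumptions, not measured figures:

```python
# Sketch of the "expose to capture data, then scale back" idea for a
# low-albedo surface like asphalt. Assumes the texture has already been
# linearised and is represented as a list of reflectance ratios.

def scale_to_albedo(linear_pixels, target_mean_albedo):
    """Uniformly scale linear texture data so its mean matches the
    intended surface albedo."""
    mean = sum(linear_pixels) / len(linear_pixels)
    factor = target_mean_albedo / mean
    return [p * factor for p in linear_pixels]

# A well-exposed photo of asphalt might average around middle grey...
photo = [0.12, 0.18, 0.25, 0.17]
# ...but the actual surface reflects far less light, so scale it down
# to an illustrative asphalt-like albedo of 0.04:
asphalt = scale_to_albedo(photo, 0.04)
```

The point is that the *exposure* that captures good texture data and the *albedo* the surface should have are two different things, and the scale factor reconciles them.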
Phew. Quite a rabbit hole, and that doesn’t even begin to cover the larger mental hurdle of understanding the difference between scene referred and display referred data ranges.
If you can, try to research your particular issue more closely, and report back with a bit more information and context, and I’m sure the folks around here can help you out.
The Filmic workflow is quite simple theoretically, but can be a bit tricky practically, because of the correct-albedo problem. The idea is that you need a correct combination of the albedos of all the materials in the scene and the light intensity in order to get correct light behaviour. That way you won’t get dull whites. As a matter of fact, almost all albedo textures you can think of must be darker than you imagine, as almost the brightest thing you can render is pure white snow, which has an albedo of about 0.85 and looks quite grey-ish on the texture. Printer paper, for example, is 0.6 - 0.7. Then crank up the light intensities. And you actually want to paint your textures in a paint program with the Filmic OCIO config enabled, I guess.
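A rough sketch of the plausibility check implied above, using the approximate albedos mentioned (snow around 0.85, paper around 0.65; the asphalt figure is my own illustrative guess):

```python
# Rough reference albedos from the discussion above. A quick sanity
# check for authored albedo textures: plausible diffuse reflectance
# sits strictly between 0.0 and 1.0, and most real surfaces are far
# darker than people expect.

REFERENCE_ALBEDO = {
    "fresh snow": 0.85,     # about the brightest real-world diffuse surface
    "printer paper": 0.65,  # mid-point of the 0.6 - 0.7 range above
    "asphalt": 0.08,        # illustrative figure, not a measured value
}

def is_plausible_albedo(value):
    """Reject the impossible extremes: 0.0 (black hole) and 1.0 (perfect mirror)."""
    return 0.0 < value < 1.0

for name, albedo in REFERENCE_ALBEDO.items():
    print(name, albedo, is_plausible_albedo(albedo))
```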
Troy still isn’t tired of explaining the principles to people, though
I’ve tried to explain from a user perspective. Am I correct, Troy?
So, Filmic is used in “solid” mode in 2.80? It doesn’t bother me too much, but I noticed that normal+y matcap doesn’t look right in the viewport. Changing Color Management settings doesn’t affect the viewport in solid mode. Once you render it you can change it from “filmic” to “default” and then it will look right. So I wonder if Filmic in solid mode is necessary since I thought it’s meant for HDR?
Filmic is a view transform that maps the higher dynamic values in your scene down to the limited 0-1 output range. It does a better job of preserving the true nature of your renders than the default transform which just castrates your values outside of 0-1.
So basically, if you’re rendering out your scene, definitely use it. If you’re using the normals matcap for some kind of screen-space normals render, linear is probably the way to go (but I’m not sure how that matcap behaves).
Edit: Reading your post again, maybe I misunderstood your question… Or maybe not. Been a long day. You decide. Haha
That is indeed odd that Filmic would affect solid view mode. Maybe a consequence of the render modes being more tightly connected in 2.80 (shadows in solid view/overlays in render view), rather than completely separate modes.
But then, you say changing the CM settings has no effect in solid view, so a bug perhaps?
As far as I know, the white point for Filmic is somewhere around 16 for R, G and B. Low dynamic range images (PNG, JPG…) only reach 1, which corresponds to a light grey in Filmic; that explains why it looks washed out.
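For what it’s worth, a sketch of where that ~16 figure plausibly comes from; the 6.5-stops-above-middle-grey range is an assumption based on the published Filmic Blender log encoding, not an official constant:

```python
# Assumption: Filmic's log encoding spans roughly 6.5 stops above
# middle grey (0.18), so scene-referred "white" lands near
# 0.18 * 2**6.5. Treat the stop count as an estimate, not a spec value.

MIDDLE_GREY = 0.18
STOPS_ABOVE_GREY = 6.5

filmic_white = MIDDLE_GREY * 2 ** STOPS_ABOVE_GREY
print(round(filmic_white, 2))  # → 16.29
```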
The proper-ish way to do it is to render your scenes in Filmic, and then when you’re using the compositor or video editor you should revert back to sRGB.
If you need Filmic (e.g. mixing images with your current render), you could roughly use an RGB Curves node, find the white point value (like I said, it is somewhere around RGB [16.0, 16.0, 16.0]), and then use a Gamma node and maybe increase the saturation to reverse some of the Filmic effect
How nice of them to just assume filmic for everything and forget about the edge cases.
@Tvaroog, I wanted to see the problem for myself so I ran some tests. Maybe you already caught this, but if not make sure to change the color mode from the default [material] to [object] even if there is no material applied. The normals were coming out slightly darker than they should, and this was the fix. Of course, you still have to render and then change the view transform to default…
It would be nice to work out a way for people who want to composite footage in the same pass as the 3D render - especially with the speed of Eevee. Would it be a case of multiplying by 16 and then putting it through a very high Gamma?
I’ve been trying to explain the importance of an OCIO node since OCIO was integrated, however here, this isn’t the proper solution. Using proper assets is.
Quite the opposite.
If you do this in an NLE or compositor, you’ve blown your energy ratios and all subsequent manipulations will fall apart; blurs will look wrong, overs wrong, etc. See below.
This is a rather large subject, and worth understanding, but likely for another thread.
Hacking around a proper workflow will yield pretty awful results. Ideally your materials are physically plausible materials that would work in any identical reference space, subject to whatever camera rendering transforms and aesthetic looks are applied. Bending the values to the output is extremely problematic.
The prudent thing is to simply have a fully pixel-managed pipeline in Blender, but sadly it isn’t even on the map of current changes. I can explain this further, but perhaps reading the examples over at Cinematic Color would be a better solution, as it is also endorsed by the Visual Effects Society and offers good workflow practice. In particular, look at page 33 under Compositing.
Remember too that there was a time not too long ago when none of the developers were aware of pixel management. Sadly, a good number still ignore it, or collectively don’t feel it is a core issue, despite it being the very basis of the entire media pipeline.
Part of the onus is on the pixel pushers to understand what is at stake and take the totality of the context into consideration; there are no workarounds or easy solutions to complex problems. Simply pixel manage the entire pipe, and provide the UI interface elements as outlined in that other thread.
That comes at an added “cost” of “complexity”, as does sitting in the cockpit of a commercial jet versus a smaller plane, but it seems the community is ready to take that responsibility on and help educate the newer folks who might be confused by the dramatic differences.
It’s trying to fix something that has an impossible knot to untie. Start from proper assets properly transformed for use in the work. HDRIs, camera encoded “raw” linear files, log encoded footage, etc.
While this might seem depressing reading the issues, be thankful that all of you are now discussing it. This wasn’t the case about eighteen months ago.
It’s also up to everyone to understand that we are talking about rendering data of some form to a display. That means that no one-size-fits-all simple solution works, as we can never know the particular context. It can only be properly solved with a proper pixel management pipe. The other thread has more details for those interested.
While everyone is oohing and aahing at a procedural online viewport, it might be worth taking a step back: solving the alpha issues and the resultant file encoding problems, and insisting developers properly pixel manage the entire pipeline, are things to get firm before moving any further forwards.