Is "Filmic Blender" the newest buzzword ever since Andrew Price's new video?

Just curious because I’m starting to see a lot of finished renders with Filmic Blender in the name. Yet I’m pretty sure tone-mapping has been around for a while now (o.o)

Filmic Blender is essentially an entirely new approach to tonemapping in Blender (one that is supposed to make it far easier to get a photographic look with ultra-smooth color and brightness falloffs).

Though it makes your image look like something a camera would see, I agree it won't be desired by everyone, given the saturated blacks and desaturated highlights (though the latest revisions of the Filmic Blender repository also include what amounts to an improved sRGB-type profile that you can use instead).

Well, it just looks better, so why not use it? Filmic Blender has actually been used quite a lot for a few months now, even before the video was out. And yes, it is nothing new in general, but it is new for Blender. I would say that a lot of Blender users live in their Blender filter bubble, not knowing that other software had better tone-mapping. Look through this forum and count how often people said that Cycles looks fake compared to other render software. It was almost always the tonemapping, but most people couldn’t name it.

Brecht created film looks as nodes to tonemap Cycles renders through the compositor.
Then Color Management using OpenColorIO was added in Blender 2.64.
Brecht’s previous work was re-used and moved to the Color Management panel so it would work in the 3D View.

So Blender’s film looks were created years ago.
Troy Sobotka, who was in charge of Color Management, wanted to update the looks with a more in-depth use of OpenColorIO.
Probably to avoid unfair comparisons with former work made by Blender users in bug reports, he did it as a separate project.
But the Filmic Blender color profile is a target for 2.79.

“Filmic” has been a CG buzzword for a while now, I guess Blender is just late to the party…

Andrew actually mentions these looks in the video. They’re gone with Filmic Blender, as they’re supposedly not meant to be used with the data Blender actually outputs. I’ve always wondered where these curves come from; the exact same curves show up in Octane and Luxrender, too. Their results always looked way off to me, but then again I have no photography experience with any of these films.

Come to think of it, I would’ve thought that artists would prioritize the idea of the work looking good from a viewer perspective rather than casting it aside to make it ‘correct’ in terms of what goes on behind the scenes?

I know there’s mastering to TV displays and all that, but a lot of shows and movies tend to have heavy amounts of color grading compared to how things would look in real life (either from the camera or from operations done on the output).

Each film manufacturer used its own chemical formula to make films.
They had to provide data about their products to inform customers about the expected colors for each film.
They did it much like lamp manufacturers provide IES profiles.

So these curves probably come from a paper re-using manufacturers’ data.
In a world where the majority of the population never encounters physical film anymore, these references are a little bit obsolete, like the use of lens flares or the sound of a vinyl record.
But they may still mean something to people who did photography in the 90’s.

If you use a curve that literally says “Agfacolor Futura 100” and it cannot even approximate that film’s response correctly, what’s the point of having it? Might as well call it “random shitty instagram curve #27”; that would be way more descriptive of what it effectively is.

There may well be a set of curves that do approximate this film correctly; the curves in Cycles are just the wrong ones because they expect different input. At least that’s my understanding of it. I’m still curious where these curves come from and why exactly they are wrong in this context.

I know there’s mastering to TV displays and all that, but a lot of shows and movies tend to have heavy amounts of color grading compared to how things would look in real life (either from the camera or from operations done on the output).

That’s a different goal, though. If you need something to look like a normal photo that didn’t get the Hollywood treatment (or worse), then there’s nothing artistic about it and correctness is important.

The video claimed that pretty much none of the color mix operators work. If so, what would be the correct way of manipulating passes?

I’m not sure about that one, but if you use these operators on something that isn’t linear (i.e. tonemapped), the result is wrong. Mathematically. However, most major applications have always done this wrong, including Photoshop, which is where most of these operators come from. Therefore, if you want it to do what Photoshop does (i.e. the “wrong thing”), they are right.
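To make “mathematically wrong” concrete, here’s a minimal sketch in plain Python (the pixel values are made up, and a pure 2.2 power is only a stand-in for the real sRGB transfer curve): mixing two pixels before versus after display encoding gives visibly different results.

```python
# Minimal sketch: a 50% mix of two pixels done on linear data vs. on
# display-encoded (gamma) data gives different answers.
# The values and the pure 2.2 power (a stand-in for sRGB) are illustrative only.

def encode(x):
    """Crude display encoding: pure 2.2 gamma, ignoring sRGB's linear toe."""
    return x ** (1.0 / 2.2)

a, b = 0.1, 0.8   # two linear, scene-referred-ish pixel values

mix_linear  = encode((a + b) / 2)            # mix first, encode for display afterwards
mix_encoded = (encode(a) + encode(b)) / 2    # "Photoshop style": mix the encoded values

print(mix_linear, mix_encoded)   # ~0.70 vs ~0.63 -- same operator, different picture
```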

Whether it is or not, it’s still better than just slapping on the old film emulation setting and doing or thinking no more. I’ll take it.

That is only so if no one bothers to take the other options into account, such as gamma, curves, and the white-point value (tweaking them is required if you want the response curves to actually give a pleasing result). I know people might argue against changing the gamma value, but following the convention is sometimes not the best solution (case in point, some of the best Cycles materials I have seen on this forum are partly the result of breaking the rules).

On the opposite end, people think that Filmic curves (such as the work from Troy) are literally the only way you should do tonemapping. Not everyone wants their images to look like the heavily graded film you see in movies (due to things like super-saturated blacks and super-desaturated highlights). In a sense, Hollywood may have warped people’s sense of what is realistic, because nearly every shot is biased towards either teal or orange (and let’s not forget some of those TV shows where green objects show a blue tinge, among other things). That’s not to mention it leads to people glossing over the fact that Filmic isn’t the only type of tonemapping out there.

I don’t have a licence to test with, but I believe Nuke uses maths appropriate for scene-referred data, as long as the ‘Video colorspace’ checkbox is off:
http://help.thefoundry.co.uk/nuke/8.0/content/user_guide/merging/merge_operations.html

The ones that are simple math operations (add, multiply, subtract, divide) work fine, as does the “mix” (over) operator. The problem is things like screen or overlay that have a built-in assumption that the input data is 0-1. They do things like decide whether something is a highlight or a shadow by checking if it’s greater than 0.5, or invert data using 1-<input>. These aren’t safe assumptions with scene-referred data.

You’ll know when they fail you; it results in crap like this: http://blender.stackexchange.com/questions/56960/why-does-the-screen-node-turn-hotspots-pink-and-blue/
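A rough illustration of that failure mode in plain Python (the pixel values are invented; real render hotspots can go far higher): the screen formula bakes in the 0-1 assumption, so channels above 1.0 flip sign and a near-white highlight turns into a strange hue.

```python
# Rough illustration of why "screen" misbehaves on scene-referred pixels.
# screen(a, b) = 1 - (1 - a) * (1 - b), which assumes both inputs live in 0..1.
# The highlight values below are invented; real render hotspots can be far higher.

def screen(a, b):
    return tuple(1 - (1 - x) * (1 - y) for x, y in zip(a, b))

inside  = (0.8, 0.7, 0.5)   # display-range pixel: screen brightens it, as expected
hotspot = (6.0, 4.0, 1.5)   # scene-referred highlight, channels well above 1.0

print(screen(inside, inside))    # (0.96, 0.91, 0.75) -- sensible
print(screen(hotspot, hotspot))  # (-24.0, -8.0, 0.75) -- two channels go negative,
                                 # so a near-white highlight picks up a weird hue
```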


So if I use a proper PBR workflow, can I sidestep all this Blender-internal confusion, render out deep, and do my compositing/grading somewhere else without having to think about it?

From what I’ve read/seen, I got the impression that if you output to EXR you could, but I’m sure someone with more knowledge than me can clarify that.

@Romanji I think it’s something you have to think about with most render engines out there. I am very new to all this, but this pdf has helped me a lot. From my limited understanding, the difference in dynamic range between what a render engine can calculate and what your screen can output makes this all unavoidable.


There is no confusion.

Cycles has always been scene referred, as are all raytracers. The confusion comes when folks have grown up with a display-referred working process and come to the conclusion that all imaging is display referred.

Using a more optimal view transform simply reveals the problems in a much more clear way.

By and large, Blender is entirely scene referred, including most nodes, the exceptions above and some design issues aside.
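For anyone unsure what the distinction means in practice, here is a tiny sketch in plain Python (the values are made up, and the Reinhard-style curve is only a stand-in for a proper view transform like Filmic, not Blender’s actual code): scene-referred data routinely goes above 1.0, and the view transform’s job is to bring it onto the display without simply clipping it.

```python
# Sketch of scene-referred vs. display-referred values.
# Scene-referred: relative light intensities from the renderer, unbounded above.
# Display-referred: 0..1 code values produced by a view transform for a screen.
# The Reinhard-style curve below is an illustration, not Blender's Filmic transform.

scene_pixels = [0.02, 0.18, 1.0, 4.0, 16.0]   # radiance-like values; > 1.0 is normal

def naive_clip(x):
    """Just clamp: everything above 1.0 flattens to featureless white."""
    return min(max(x, 0.0), 1.0)

def simple_view_transform(x):
    """Compress the highlights instead of clipping them."""
    return x / (1.0 + x)

print([naive_clip(x) for x in scene_pixels])                        # [0.02, 0.18, 1.0, 1.0, 1.0]
print([round(simple_view_transform(x), 3) for x in scene_pixels])   # [0.02, 0.153, 0.5, 0.8, 0.941]
```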

What would be excellent is if someone began an issue on the developer site that outlines nodes that are an issue with scene referred workflows.

Other than that, read above regarding JTheNinja’s and Organic’s posts.


Personally I would highly prefer the approach where all operations are clearly documented and we have the tools to do gamut and transfer function transforms in comp when necessary. Like Nuke does it.

Screen, overlay etc. don’t have a built-in assumption; users have assumptions. If I know what the screen op does, I can either use it, prepare the input data for it, or use something else. What I don’t want is some contorted approach where nodes try to be “clever” and do some funky transform inside, which produces nothing but confusion. The screen op is a nice thing (I think of it as inverted multiply), but one can also think of it as a crude way to do an add in display-referred space.
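For what it’s worth, the “inverted multiply” reading can be written down directly (a small illustrative sketch in plain Python, not anyone’s actual node code); it also makes it obvious why the op only behaves when its inputs stay within 0-1, since the inversion step 1 - x assumes exactly that range.

```python
# Screen expressed as invert -> multiply -> invert, as a tiny check.
# Plain Python with illustrative values only.

def invert(x):
    return 1.0 - x        # only meaningful if x stays within 0..1

def multiply(a, b):
    return a * b

def screen(a, b):
    return 1.0 - (1.0 - a) * (1.0 - b)

def screen_as_inverted_multiply(a, b):
    return invert(multiply(invert(a), invert(b)))

a, b = 0.3, 0.6
print(screen(a, b), screen_as_inverted_multiply(a, b))   # both 0.72 -- same operator
```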

With all due respect, I disagree. There is lots of it in my head. :yes: Not trying to put the blame on anybody but me, but in my defence: I am an artist, not a physicist. I already know more about physics than I actually care about, it comes with the territory, but I just want to make pretty pictures…
I think I get the “problem”, but I am not so sure about the solution, which again is probably my lack of knowledge.
Anyway, thanks for the input.
I’ll cross that bridge when I get there.