Andrew Price did it again!

I’ve heard of Filmic Blender, but I never knew how to use it. Andrew has a great skill for explaining things. I just installed it and it’s a great thing to have in Blender.

That marketing crap has side effects. While most people will just hit the like button, some might be urged to do more (like sending useless mails to the devs). This also sheds a bad light on the Blender devs once again, even though it simply is not true.

He could have used a sentence like “If you want more fellow Blenderheads to know about this cool new thing, hit the thumbs-up button or share this video…”

It would be great if the original title and text were edited to reflect that this is not a discovery of Andrew’s but of Troy Sobotka’s.
I have nothing against Andrew’s video or style, but credit needs to go where credit is due. A quick glance at the title and the first post is very misleading.

I watched the video and found it informative and interesting, which is why I searched for Filmic here to see what people are saying. What disappoints me about this thread is that, while the video may have gotten some things wrong (I’m not qualified to judge that), it at least gave me some information on Filmic Blender, whereas this thread seems to have been more about ad hominem attacks on the video’s creator.

To my mind what matters isn’t anything to do with the details on colour spaces and so forth, it’s a simple one of: can it give the user a more realistic result than Cycles can? If it can then surely that’s a good thing, regardless of whether or not Andrew Price got the science correct or what people think of him personally? Perhaps answering that for the benefit of those developing their Blender skills would have been a better use of this thread?

Then go on youtube and blenderguru, tell him and fight this war against his army. I’ll be with you in spirit.:stuck_out_tongue:

I prefer to talk here about the educational quality of the video and filmic_blender rather than watch fanboys from both sides fight each other over something that does not concern them, or, far worse, the eternal stupid battle of community vs. BI/BF.
This is again the same old story of devs supposedly being constant victims of a bad community that wastes their precious time, which is not true.
I guess they can handle this shit without problems.
Actually, I would be more worried about some self-taught-in-marketing “dev” who reads you and gets an idea for the next podcast: whining as the victim of a plot, or deriding those few dumb people in the community who said stupid things like Cycles not being 10x faster than before… if you know what I mean ;)

Yes, it can give you a more realistic result than the default sRGB output. It can also give you a much less realistic result if you are using it wrong. The key is to understand the tool and the math and physics behind it. Nobody in this thread attacks AP personally; it is just that his explanation of what is behind it is sometimes misleading and wrong. Compare that to the claim in the original post and you will see why many people here had to react. Read this thread carefully again and you will find a lot of useful information on this topic.

Yes, he did it again: misunderstood something too technical for him, bashed Blender along the way, didn’t care about it, and fucked everyone in the ass to sell more products. Yay!

Huh? The 32-bit floats are used for calculating the light transport, but those quantities need to be mapped into whatever you can display (commonly 8-bit or 12-bit per channel). We’re looking for such a mapping that is also close to what a camera would “naturally” produce as a response to being exposed to those quantities.

In other words, you’re comparing apples and oranges. On the one hand you have the dynamic range of “nature”, i.e. black holes versus the brightest supernovae (or whatever the brightest thing possible could be). On the other hand, you have the dynamic range of a measuring device.

The comparison shown in the video does make sense, even if the amount of stops for Blender’s default mapping might be wrong.
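To make that mapping concrete, here is a toy Python sketch (my own illustration with my own function names, not Filmic’s actual transform) of the difference between naively clipping linear light at display white and compressing it with a simple tonemapping curve:

```python
def linear_clip(x):
    """Naive display mapping: clamp linear light to [0, 1]."""
    return max(0.0, min(1.0, x))

def reinhard(x):
    """Simple global tonemapping operator: compresses unbounded
    linear light into [0, 1) instead of clipping it."""
    return x / (1.0 + x)

# A highlight 4 stops above middle grey: linear value 0.18 * 2**4
hot = 0.18 * 2 ** 4
print(linear_clip(hot))  # 1.0 -> all highlight detail is clipped away
print(reinhard(hot))     # ~0.74 -> the highlight still has headroom
```

Any two highlights brighter than 1.0 become indistinguishable under the clip, while the compressing curve keeps them separated, which is the whole point of a “filmic” view transform.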

Even without Troy’s Filmic Blender you can tonemap your renders before you save them as 8-bit JPEGs. You can even do that directly in Blender’s compositor, or you can use any other tonemapping software. It has been available to Blender artists since forever, and it is definitely not some hidden secret.

It may not technically be a secret (it’s a figure of speech!), but just a few years back you’d see the majority of people not even applying basic gamma correction to the output render or to the input textures. Only since it has become a default do most people (usually unknowingly) apply a linear workflow.
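For what it’s worth, the “basic gamma correction” mentioned here is the standard sRGB transfer function applied when going from linear render values to display values. A minimal Python sketch of that standard formula:

```python
def linear_to_srgb(x):
    """Standard sRGB transfer function (linear light -> display value),
    with the linear toe segment near black."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

# Middle grey in linear light ends up much brighter as a display value:
print(round(linear_to_srgb(0.18), 3))  # ~0.461
```

This is exactly the transform people used to skip, which is why un-managed renders looked too dark in the midtones.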

I’d disagree with the claim that the default tonemapping is “wrong”, but it’s definitely not what you need if you want a photographic look. It’s quite common to see people claim that some other path tracer is “more realistic” than Cycles and it’s almost always simply a matter of “better” default tonemapping.

What the hell is wrong with you naysayers? OK, Andrew may not be to everyone’s taste, I get that, and he’s aimed more at the beginner than the experienced user who has already figured stuff out.

Nevertheless, he highlights stuff that others may not have found and, alongside BornCG, presents it in an accessible way, even for those of us who have been around the block a few times and may have missed something en route.

If all you can do is be (non-constructively) critical, then say nothing. If you have something constructive to add, then fine. For the record, I was unaware of filmic until Andrew’s video, and am experimenting now with it, and glad that I am. Better to find something and have the choice, than not to find it at all.

I’m not going to comment on the Andrew bashing, but I will say that while Filmic Blender does do a good job of presenting the scene as a camera would see it, I get the idea that (in some cases) it would be too photographic.

Think about it, the lighting of a place captured by a camera is often different from how the same place looks in real life. Also, the filmic tonemapping of movies can also throw you off as far as material references are concerned because of all of the grading and contrast that is present. The first part can also be seen in images from a lot of other render engines with their brighter areas being more grey and desaturated and dark areas sometimes having much higher saturation.

Sure, I don’t deny the usefulness that Filmic Blender can provide in the realism department (particularly the emulation of photography), but it’s maybe not the best choice if you’re after rich colors, bright highlights, and consistent saturation levels from light to dark.

Kudos to Andrew! This is a very tricky subject. I’m glad he explained it so thoroughly and referenced a solution for Blender.

No, I am not comparing apples to oranges. I am comparing developing RAW files from a camera to developing (tonemapping) raw renders from Blender. If you have a clean (noiseless) 32-bit OpenEXR render, you can tonemap it however you wish, because there are no clipped highlights, and you can bring the shadows up as much as you want, because those near-zero values are not affected by sensor noise. The full dynamic range of the scene is always captured in this file.
Capturing the full dynamic range with a digital camera is a completely different story, because you are limited by the dynamic range of the camera. You can either clip the highlights, underexpose the shot (your shadows will have no detail, just noise), or do HDRI.
So saying that Blender has a smaller dynamic range than a digital camera is nonsense.

But thanks to your explanation I now understand how AP meant this graph. It is true that the default tonemapping in Blender (does anybody actually use that??) has a limited DR (whether 8 stops or 11 does not matter much), but you can very easily edit the curve in the Color Management panel and bring up the shadows, thus extending the DR that’s displayed on your monitor to almost any value. And this has been in Blender for a while now.
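For anyone counting along: a “stop” is just a doubling of light, so the dynamic range figures being thrown around here are simple base-2 logarithms (my own arithmetic, not from the video):

```python
import math

def stops(max_value, min_value):
    """Dynamic range in photographic stops (doublings of light)
    between the brightest and darkest distinguishable values."""
    return math.log2(max_value / min_value)

# A display range from 1/256 of full brightness up to full brightness:
print(stops(1.0, 1.0 / 256))  # 8.0 stops
```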

Anyway, I am really looking forward to seeing Filmic Blender in 2.79. I am sure it will be much easier to use than the current options.

That’s not a valid comparison. Any camera has a “film” or “sensor” response to light stimuli and it’s not necessarily linear. The nature of this response defines (among other things) the dynamic range of the sensor or film. This already is a mapping (from the real world to sensor data), though in practice you need further mapping to display the image (demosaicing, monitor gamma…).

The raw output from Cycles is literally just the (approximate) amount of energy arriving at the pixel footprint and it is always linear. The range is, for all intents and purposes, unbounded (albeit not arbitrarily precise). To get that “camera response”, you use a suitable tonemapping operator. You couldn’t use the same operator on RAW camera data, it’s simply two different things. Noise doesn’t really have anything to do with it either, nor is sensor noise the same as the noise resulting from stochastic sampling (though I can see how you’d think they are related).
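One practical consequence of that linearity, sketched in Python (a toy example, not Cycles code): adjusting exposure on linear scene-referred data is just multiplication by a power of two, which is why it can be done freely on an EXR after rendering, with no clipping or recovery tricks needed:

```python
def expose(linear_value, stops):
    """Adjust exposure of a linear scene-referred value by N stops
    (positive = brighter, negative = darker)."""
    return linear_value * 2.0 ** stops

print(expose(0.18, 2))   # 0.72 -> two stops brighter
print(expose(0.18, -1))  # 0.09 -> one stop darker
```

The same multiplication applied to already-tonemapped (display-referred) pixels would not behave like an exposure change, because the nonlinear mapping has already been baked in.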

So saying that Blender has smaller dynamic range than a digital camera is nonsense.

Nobody said that, though. The default tonemapped output does have a small dynamic range. Changing this default is the whole point of the video.

I love Andrew Price’s tutorials. They’re clear, interesting, and he rehearses each one several times beforehand. Also great are CynicatPro and Alimayo Arango. I love watching their tutorials.

Are there others that anyone would recommend? I’m interested in all levels of usage, including highly advanced stuff that’s well beyond my own skill level.

I’m especially interested in people who are currently actively making new tutorials. Things go out of date so quickly when it comes to Blender. Any videos that are more than a year old are less interesting to me.

Absolutely loving this. I thought the “film response” look thingie was what was meant when people have talked about filmic. And I immediately fell in love with the false color/heat map.

I love his tutorials. Clear, rehearsed, and well laid out, and sometimes even remade if they contain significant errors. There are some factual errors from time to time, and they are not in-depth enough for my personal taste, but I couldn’t have matched him, and most tutorials I see for Blender are not up to par (although still valuable). But that is also true for pretty much everything I watch (factual errors and so on). I’ve seen “professional” (non-Blender) tutorials that are crap in comparison.

AP’s video explains the method, which is a standard in the film industry anyway (ACES), very well. All this information comes from Troy Sobotka, who is a VFX professional.

I can recommend to watch this presentation by Alex Fry from Animal Logic:

A lot of studios outside the film industry (broadcast or TV commercials) use a linear workflow but still apply the usual sRGB color transform in 3D, which makes 3D artists in the film industry roll their eyes. :wink:

But now we have Filmic Blender. This is a game changer for Blender people, with huge benefits. You can’t really do anything wrong here. You can crank lights up to a new level, and you have to save your renders as OpenEXR for compositing in Blender (with the limitations described in AP’s video, which have to be fixed in a future version). It requires some changes to the way you light a scene, but you will adapt to it very fast.

Be aware that you can’t use the OpenEXR renders in another compositing tool if it doesn’t support OpenColorIO.

Cheers and peace o/

Many Thanks Troy Sobotka. I just love this approach…

Nice Video Andrew

Going to test this with subsurface scattering tomorrow.:eyebrowlift:

Guys calm down.

I think there is always more than one tool to use or path to walk.
Andrew sometimes uses some cocky expressions; however, if you look past that, it is
a nice explanation of how to use this new color management option.

I work with Thea Render a lot, and it also uses CRFs; it works wonders.
I am sure you can do all the same with nodes in the compositor too.

Things also evolve. As Beer Baron pointed out, some years ago people did not even know
what a linear color space is and did not prepare textures correctly…

So why not be happy to have another option for working with color adjustments in Blender?

nevermind this … confused

I liked the video. That said, I was a bit surprised that this was posted as some revelation, given that I saw this video about ‘Filmic’ by CGCookie back in early January:

which covers a lot of what Andrew Price mentioned. Anyway, the huge thanks must go to Troy Sobotka for creating this and making it available for everyone.