Filmic and adhering to Brand Style Guide/Product Colors

What was the context? Was it a logo on a race car? Was it a chyron composited over the picture?

There’s a myriad of places where things can go wrong, and you likely found one.

No, I followed the brand guidelines and got paid, because that's what we do in the real world, whether it lines up with the theory or not. I'm done arguing about this; you're welcome to your theory, I prefer getting paid. I have nothing further to say, so I'm bowing out.

1 Like

OK, so blah blah and no context. Great. You are confused.

No additional context is needed. Matching brand colors is a thing, whether it's pixels, Pantone, or CMYK.

5 Likes

Telling our clients "sorry, I'm right, you're just confused" isn't a good way to keep the client. I can live with desaturated colors for 99% of what I do, and the built-in transforms work well enough; maybe just a bit of local tone adaptation is needed in post.

But I too have had to endure the discussion about why my rendered colors looked completely off from customer-supplied logos (with very pure and specific colors), so I switched to HDR output and did the grading/tone mapping elsewhere, which retained the colors better.

What's your solution then, when the client rejects the work because their expectations aren't met?

3 Likes

Did some tests using a simple unlit sRGB JPG with a gradient, in Blender 4.0.2.

From top to bottom:

  • All Standard: the render comes out identical, as expected.

  • Filmic with ‘Filmic sRGB’ transform on the input textures comes out correctly as well.

  • AgX: I was assuming 'AgX Base sRGB' would effectively be the inverse of the output transform, but I guess it's either a bug or it does something else?

(Sorry for the huge image, can only upload 1 image per post as a new user)

And as for the 'number chasing' thing… it's a bit more complex than that. I know companies that take their fabric samples to a neutrally lit room, use a calibrated monitor, and scan/photo/eyeball/whatever the color to match the color on screen to the actual fabric sample. Exact science? Probably not, but it's good. So the color on screen, in sRGB space, as it would be shown in a browser, is as close as it's ever going to be to the real thing. And yes, someone will eventually view this in a nightclub lit by lasers, on a cheap mobile phone screen set to night-comfort mode, etc., but that's not under our control.

Anyway, once our 3D artists get the RGB values, it's exactly what it needs to be; it's not some clueless office manager going nuts while color-picking the brand logo from a video showing the office building's logo in full sunlight. So the whole product-to-perception-to-screen-to-perception thing has already been done. Now, if we want to use this color for an item in a render in some photoreal 3D scene that is using ACES, Filmic, AgX, or whatever OCIO setup, then there should be a path to do so.

2 Likes

I've been reading along with the discussion and wonder if you've tried the new Neutral PBR tonemapper, since it was developed with e-commerce renderings in mind?

The Neutral PBR tonemapper keeps the values linear from 0 to 0.8, and the remaining 0.8 to 1.0 range has a highlight desaturation falloff.
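Roughly, the idea is something like the sketch below. This is only an illustration of the shape of the behaviour, not the actual tonemapper; the compression curve and desaturation amounts here are made up.

```python
# Rough sketch of a "linear below a threshold, compress and desaturate
# above it" tone curve. Illustrative only; not the real Neutral PBR code.

def neutral_like_tonemap(r, g, b, threshold=0.8):
    peak = max(r, g, b)
    if peak <= threshold:
        # Linear region: colours pass through exactly as supplied.
        return (r, g, b)

    # Compress the part of the peak above the threshold so it approaches 1.0.
    excess = peak - threshold
    new_peak = threshold + excess / (1.0 + excess / (1.0 - threshold))
    scale = new_peak / peak
    r, g, b = r * scale, g * scale, b * scale

    # Desaturate toward the (grey) peak level as highlights near 1.0.
    t = (new_peak - threshold) / (1.0 - threshold)
    return tuple(c + (new_peak - c) * 0.5 * t for c in (r, g, b))

print(neutral_like_tonemap(0.5, 0.2, 0.1))  # below 0.8: unchanged
print(neutral_like_tonemap(2.0, 0.2, 0.1))  # above 0.8: compressed and desaturated
```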

1 Like

Thanks, that looks really really interesting!

Colour cannot be matched via stimuli. It’s that simple.

False.

If the stimuli are being displayed in a “flat field” and there’s no “pictorial” sort of rendering, then matching the tristimuli will seem reasonable.

But the moment we are talking about a pictorial depiction, this is going to go sideways. Physical phenomena such as occlusion along the gain dimensions, or offset dimension interactions such as diffuse scattering and any number of other things will interact here.

Our cognition is what is responsible for calculating the colour, and that is entirely visual field dependent.

If we insist on going tristimuli, then the brand identity colour will indeed be lost. For example, if our brand identity marketing peep says “The brand identity is R=G=B”, the actual computed cognized colour will vary depending on the field it is embedded in.

It’s just a fact of life that colour isn’t stimuli, and chasing numbers will always yield the wrong computed colour cognitively.

If the “client” insists on brand identity tristimuli, the easiest way to educate them that they are on precarious ground would be to pull a Cryptomatte ID or EXRID, and swap in the brand identity value for whatever medium you are on. Composite in the precise numerical value, and see how it goes.

It will look completely pooched, but that will keep the number fornicators at bay, and put food in your mouth.
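For illustration, the swap could be as simple as the sketch below, assuming a numpy image in whatever state you composite in, a 0 to 1 matte pulled from a Cryptomatte coverage pass, and a made-up brand value.

```python
import numpy as np

# Hypothetical inputs: a formed picture and a coverage matte for the logo
# (e.g. extracted from a Cryptomatte pass). Shapes and values are examples.
picture = np.random.rand(1080, 1920, 3).astype(np.float32)
logo_matte = np.zeros((1080, 1920, 1), dtype=np.float32)

# The exact "brand identity" tristimulus value, forced in verbatim.
brand_rgb = np.array([0.89, 0.05, 0.09], dtype=np.float32)  # example value only

# Swap the precise numerical value into the picture through the matte.
forced = picture * (1.0 - logo_matte) + brand_rgb * logo_matte
```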

You should tell that to the United Way. It’ll make for a great company in-joke around the water cooler

Ah yes, a typical client-service provider relationship. The service provider tells the client everything they’re wrong and uneducated about and then the client fires the service provider. I for one love when my hair cutter tells me I’m wrong about how I want my hair and gives me a mohawk instead, and my web clients have always been grateful and appreciative when I tell them that they’re wrong about what looks good in a logo

2 Likes

I get where you're coming from on an academic level, and we all know color perception is context-sensitive, our visual cortex is very bad at taking absolute measurements, etc.

But a brand color for digital media is basically defined as an RGB value on a screen, so visual context is irrelevant. There is no different RGB value for a logo on a white, red, or green background. There are usually restrictions on what background colors a logo can be used on, though. It might look different, feel different, but the RGB values are simply leading here, and that's a good thing because it ends the discussion. No designer wants to go back and forth because Jenny from accounting thinks the color is a bit off. This is all not really for rendering; it's more for a Photoshop context where a logo is a graphic in a design.

With rendering, as with video recordings or photography, the color will be part of a multitude of material parameters that determine the interaction with light, and that will change its exact values on screen; that's all fine and accepted.

But here is the problem… when I make a virtual material that doesn't interact with light, like the unlit or self-emission version, I do expect it to come out exactly as it should be. If there already is a hue shift at this point, the final render will definitely be off target. Why would I want this? Just as a test to confirm the base color isn't messed up. When its input/output is mathematically correct, we'll use the normal shaders etc. and have light interaction have its way with it. And when lit with neutral, non-colored light, tone mapped etc., the hue should be fine.

So we render a product with some neutral lights, a proper material shader that's using the color, and in the same render we have a small unlit verification patch. All this on a white background. When the patch matches the color values from the calibrated photograph/scan/etc. from a few posts back, we're good. If it's off, then we're wrong.
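For completeness, the only math involved in feeding that patch is the standard sRGB decode before the value goes into the shader or emission color. A minimal sketch (the brand value below is just an example):

```python
# Decode an 8-bit sRGB brand colour to the linear value that goes into
# the shader / unlit verification patch (standard sRGB transfer function).

def srgb_to_linear(c8bit):
    c = c8bit / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

brand_srgb = (46, 204, 113)  # example brand green, 8-bit sRGB
brand_linear = tuple(srgb_to_linear(c) for c in brand_srgb)
print(brand_linear)
```

With a view transform that leaves that range untouched, the patch should read back as the original 8-bit value in the delivered sRGB image.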

I don’t see any other way to have some sort of ground-truth here to work with. And this workflow simply delivers renders that make the items in the webshop look the same as the reference photograph.

4 Likes

Well, you’ve said in the past that no one has figured out how the brain works to determine what color is.

And as I can’t evaluate it psychically, I’ll just keep using my eyes… and on occasion, get a second opinion from someone else with functioning vision.

Sometimes it’s a matter of what values show on the color picker. Sometimes it also involves a scope to see the spread and check for crushing. In any case, there’s no button that reproduces what I imagine in my head, so at some point in the process one has to pull the trigger.

3 Likes

By the way, the version of AgX that was integrated into Blender does not use the invertible transformation hack that was used during its development; that is why using it as an input transform does not give the expected results.

(focus on “to_scene_reference”)
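If you want to check this yourself, you can round-trip a value through the bundled config with PyOpenColorIO. A quick sketch; the config path is an example and the colour space names are the ones in Blender 4.x's config, so adjust for your install:

```python
import PyOpenColorIO as OCIO

# Point OCIO at Blender's bundled config (example path; differs per install).
config = OCIO.Config.CreateFromFile(
    "/path/to/blender/datafiles/colormanagement/config.ocio")

# Scene linear -> AgX Base sRGB, and the declared reverse direction.
fwd = config.getProcessor("Linear Rec.709", "AgX Base sRGB").getDefaultCPUProcessor()
inv = config.getProcessor("AgX Base sRGB", "Linear Rec.709").getDefaultCPUProcessor()

rgb = [0.18, 0.18, 0.18]
encoded = fwd.applyRGB(rgb)
roundtrip = inv.applyRGB(list(encoded))

# If the to_scene_reference path is not a true mathematical inverse,
# these two values will not match.
print(rgb, roundtrip)
```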

1 Like

This isn’t quite the case is it? If it is an “icon” or “business card”, we can certainly insist that the spectral composition of a given ink or value is of a given composition. That’s not the totality of the issue here.

When we take that “business card” and embed it in a field in front of us, the actual stimuli reaching our retina that is getting transformed into an increment vs decrement optical signal is not what we think we are looking at. That “colour” is computed internally.

Now if we insist that the values match the brand identity colour, we are in fact countering the mathematics of the rendering engine, and we are most certainly leading to values that we do not want relative to the formed picture.

As an example, consider the door here.

Everyone with “average” visual assemblies will agree that it’s a “blue door”. But what we aren’t realizing is that if we interrogate the stimuli outside of the context of the embedded fields, the door contains no such stimuli that we would all agree is a satisfactory “match” to the colour we thought we “saw”.

How can this be?

It’s likely a complex neurophysiological normalization-like approach that is transforming the embedded field energy into a unique state that is processed into what we call “colour”. The reality is that it’s not in the stimuli.

So what would happen if we took out the “beauty” pass from the “diffuse” and other things and forced in the “colour” we thought we “wanted”? The answer is worth an experiment or two, to understand for oneself.

100%. But this is the key case I was referring to above. In this case, say for example the idea that we are taking some documented brand identity colour for Coca-Cola and making a logo for a chyron, we have an already formed picture in the stimuli of the logo. It does not make sense to roll that through an algorithm that is designed to form a picture again! In this case, the colourimetric linear values are “already finished”, and it makes sense to simply encode those directly for display. In that case, it means applying the display medium’s colourimetric transform and inverse EOTF and done.
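In code terms, that last step is nothing more than the colourimetric fit (if needed) plus the inverse EOTF, with no picture formation in between. A sketch, assuming the chyron values are already linear with sRGB/Rec.709 primaries:

```python
# Encode already-formed, display-linear values straight to the sRGB signal,
# skipping any view transform / picture formation.

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * (c ** (1.0 / 2.4)) - 0.055

chyron_linear = (0.76, 0.02, 0.04)  # example brand value, already linear sRGB primaries
chyron_signal = tuple(linear_to_srgb(c) for c in chyron_linear)
print(chyron_signal)
```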

But what about the case where we are compositing a chyron over a formed picture? This is a little bit trickier, as we want to properly form the picture, and leave the embedded fields of the pictorial depiction as is, but we also want the stimuli of the formed picture of the chyron logo to also be of some specific relative value. In this case, the most reasonable approach is to composite the formed pictures together, as previously formed pictures. To use the AgX analogy, we would have two closed domain linear colourimetric assemblies:

  1. The logo in the picture being formed by way of something like AgX. The output of this is a linear colourimetric “picture”, fully formed.
  2. The logo in the chyron which was formed some other method. The output of this is also a linear colourimetric “picture”, fully formed.

We would want to composite 2 over 1 in some way, using the closed domain linear composite, and then encode both for display. And even then, the computed colour of the logo will drift in the embedded field of the composite.
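A minimal sketch of that composite, assuming both formed pictures are already closed-domain display-linear numpy arrays and the chyron carries a straight alpha (the array names and shapes are made up):

```python
import numpy as np

# Hypothetical, already-formed pictures in closed-domain display-linear.
scene_picture = np.zeros((1080, 1920, 3), dtype=np.float32)  # e.g. the AgX-formed picture
chyron_rgb = np.zeros((1080, 1920, 3), dtype=np.float32)     # the chyron, formed elsewhere
chyron_alpha = np.zeros((1080, 1920, 1), dtype=np.float32)

# Closed-domain linear "over": picture 2 over picture 1.
composite = chyron_rgb * chyron_alpha + scene_picture * (1.0 - chyron_alpha)

# Only after this composite is the result encoded for the display (inverse EOTF).
```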

The “base colour” here is likely a “linear” texture used as albedo, and you’d view that under the inverse display EOTF encoding directly. If it is properly encoded though, it will require a “lifted” floor and a “slightly compressed” ceiling, as most albedos for this sort of application have neither “zero energy” reflectance nor “100% energy” reflectance.
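For example, something along these lines; the floor and ceiling numbers below are only indicative, not authoritative:

```python
# Keep an albedo value inside a plausible reflectance range: real surfaces
# reflect neither 0% nor 100% of incident energy. Bounds are illustrative.

def clamp_albedo(c, floor=0.02, ceiling=0.9):
    return min(max(c, floor), ceiling)

albedo = tuple(clamp_albedo(c) for c in (0.0, 0.5, 1.0))
print(albedo)  # (0.02, 0.5, 0.9)
```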

One likely does not want to roll that through a picture formation chain, as the analysis will result in that “embedding” assumption above.

Any “reference photograph” will have been rolled through a picture formation chain. There’s no escaping this aspect.

Agree 1000% with this, and trusting what we are examining, with an acceptance that we modulate our thought process, is indeed about as good as it gets. What I was issuing caution over is to chase numbers, as this will always lead to the wrong effect. I have a story about someone who was designing logo type on a fully blue loading screen. Sure enough, their producer was losing their minds over the fact that the text was “yellow”, which is in fact 100% accurate. If the same producer were to insist that the logo text be R=G=B, well… you can imagine the impossible gauntlet that was established.

We all win when more and more people come around to working away from the two hundred year old fiction, and become aware that stimuli is not colour.

1 Like

Look, nobody is disputing what you're saying; it's just a case of adapting the software to the world or trying to adjust the world to the software.

There was this person who implemented the EXR standard in Photoshop and insisted that the alpha channel had nothing to do with opacity, even though everybody in the industry knows that when you're talking about an alpha channel, it's always about the opacity. In theory he's right, because the alpha channel is just an aux channel that can hold anything. But the EXR standard only mentioned an alpha channel, so he was not going to hook that into the opacity. Even the original authors of the EXR spec jumped in and said alpha = opacity, but still no… and to this day EXR support in Photoshop is messed up.

5 Likes

OK, ran into this discussion…

After all this, there’s no real solution given other than a theoretical lecture on how we perceive color?

I feel it’s a legit question that would help a lot of other users as well, because Blender doesn’t really ‘link’ to standards ‘out there’.
Nothing against AgX or Filmic, but it's not my cup of tea as a color treatment, tbh…

By the way, how are you going about using the invertible transformation to bypass the view transform?

It’s not a legitimate question.

None of the historical picture formations are invertible, and now, because a few folks have fully bought into a broken conceptual framework designed by Kodak, folks expect invertibility. This is nonsense.

It is a fool's errand to try to derive the energy from a formed picture, as the problem is ill-defined.

If someone seeks such nonsense, the net result is a picture formation that is broken fundamentally.

The two are incompatible ideas.

Maybe you'll understand the workflow issue better with an actual example. Image A is what the client actually needs: it resembles the look and feel of the product and matches the reference photograph (which was made in a similar white studio setup). So this bright green it is.

So at some point we have the models and are supplied with RGB values; one of those RGB colors is what I've put in the color swatch. Situation A: everybody happy. Situation B: obviously not so much.

And it's not a matter of a simple color correction, because if you look at the bottom left of the comparison grid, some colors are actually OK-ish while others are completely off the rails, not to mention multi-color products. And since we're talking about automated processing with a lot of models and color combinations, there is no room for manual fixes here.

The neutral tonemapper is actually working perfectly for us: predictable colors up to 0.8 intensity, above which highlight compression and desaturation are applied, but that's fine.

So I hope this makes clear that “chasing numbers” is perfectly legit and even required in this case.

8 Likes