Blender 2.8 wrong colors

Hey there, has anyone experienced this with Blender 2.8?
[Screenshot: Blender window, 2019-01-11 13:18]
As you can see in the screenshot, the color that should be pure white clearly isn't, so the colors are all darker than they should be. The same applies to all colors in the viewport.
I thought it might be related to my OS color preferences or my Color Management settings, but I don't see what'd be wrong there.


It happens in today's build and also in the other one I got from Jan 5th.
Blender 2.79 is fine.
I'd appreciate it if someone could point me in the right direction here.


I have noticed the same thing in 2.8, though only on white myself. But I guess it could very well apply to all colors. More than likely it's just a bug.


Yes, it is a bug that apparently affects the whole color palette. I copied a hex code from the web and the colors displayed incorrectly inside the render (emission shader at strength 1.0, which should look exactly the same as on the web)… It works fine under 2.79… Is there a bug report for this one already open?

Here is the solution:

Render tab --> Color Management --> Set view transform to “Default”

Blender 2.8 decided to go with "Filmic" for the sake of realistic renders, which clashes with hex color accuracy across different software.
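If you'd rather script it, here is a minimal sketch using the standard bpy property path (note that later 2.8 builds renamed this option from "Default" to "Standard", as the end of this thread points out):

```python
import bpy

# A minimal sketch: set the view transform from Python instead of the UI.
# In the builds discussed here the option is called "Default"; later 2.8
# builds renamed it to "Standard".
bpy.context.scene.view_settings.view_transform = 'Default'
```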


What you did was copy a code meant for 2D applications, but in Blender we are dealing with albedo values in a scene referred domain, which is different. Even though it seems to work when you set the view transform, I wouldn't per se call it correct. @troy_s could explain this with the correct terminology. I hope the discussion about the film curve was not too disappointing, and that he still has the energy to shine a light on it (again and again).


That didn't change anything for me, and even if it had, turning Filmic off wouldn't be a viable option anyway.

I'm aware of the filmic curve. It totally makes sense to display the colors in a scene with this tonemap applied. But I'm sure that @troy_s will agree with me that it should never be used to display UI elements like the color picker. There you want to see the actual color that you select.
IMO it's just a bug. On Friday I'll search for a bug report, or create one if it doesn't exist.


You did just fine. I’ll try to flesh out what you started a bit…

Part of the problem is that while a good number of folks have come around to understanding the transfer function (aka "tone map") between the scene referred domain and the display referred one, they aren't quite seeing the whole picture.

Without digging too deep into the details, it’s wise to think of a View as a Camera Rendering Transform in much the same way that a camera takes spectral values outside the lens and bakes them down in a unique fashion. Specifically, it means that whatever is outside the lens, in the CGI reference space in this case, isn’t assured to be captured as some sort of “ground truth reality.” Same applies to camera rendering transforms, especially Filmic.

So a few points:

  1. Blender needs full pixel management. Anyone who has been following the Twitter flame eruption that resulted from my post over at the developer reports will see that I am rather adamant about this.
  2. Being fully managed means that you can never know what a pixel pusher is doing. Are they painting emissions? They would want the camera rendering transform applied to the picker so they can pick the emission roughly. Perhaps an emission without a look, so you can pick the value "beneath" the look? That would be a similar transform to the above, yet different. Normals? That's data, so linear data according to some standard that may or may not be Blender's. Albedo? That's a measurement of reflectance, normally from 0.0 to 1.0, and the representation is different again. How about some predefined range for a bump or depth map? That too is linear data, but using a different set of bounds. The proper path to solving this is not to try to be smart and guess what the pixel pusher is doing, but to provide the transform selection for this purpose (see the sketch after this list).
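To make that last point concrete, here is a purely hypothetical sketch of what "provide the transform selection" could look like; none of these names exist in Blender, they only illustrate that each picking context would declare its own transform:

```python
# Hypothetical sketch only -- none of these names are Blender API.
# Each picking context declares which transform the picker should use.
PICKER_TRANSFORMS = {
    "emission":         "camera_rendering",      # view the value through the camera transform
    "emission_no_look": "camera_without_look",   # similar transform, with the look removed
    "normal":           "raw_linear",            # data: no colour transform at all
    "albedo":           "reflectance_0_to_1",    # a measurement of reflectance
    "depth":            "linear_custom_bounds",  # linear data, different bounds
}

def picker_transform(context):
    # Don't try to be smart and guess: unknown contexts fall back to the
    # camera rendering transform, and the artist can override the choice.
    return PICKER_TRANSFORMS.get(context, "camera_rendering")
```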

There is already a report in the tracker, and folks have become more and more aware of the issues but haven’t quite thought through all of the complexity. The solution is to have Blender fully pixel managed. It isn’t a priority, and if the last report I made is any indication, it won’t be until more pixel pushers round out their knowledge and understanding to make a case for it in the software.

TL;DR:

  1. Never expect an albedo value to match what you see in a display referred application from an sRGB value; a minimal decode sketch follows this list. This is a longer discussion I'm happy to explain if anyone actually cares. "Hex codes" really are anachronistic and hide some of the essential concepts behind pixel management. It's prudent to fully understand them.
  2. Don't assume that a picker is dead simple to get right; different contexts require different transforms.
  3. Don’t expect Blender to fix the code until more people have these sorts of important discussions.
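To make item 1 a little more tangible, here is the standard piecewise sRGB decode (straight from the sRGB specification, not Blender code). A hex code is display referred; Blender works with the decoded linear value, which is why the two never match directly:

```python
def srgb_to_linear(c):
    """Decode a display referred sRGB channel value (0.0-1.0) to linear.

    This is the inverse of the sRGB OETF. For example, the hex value
    #808080 (about 0.502 per channel) decodes to roughly 0.216 linear,
    nowhere near the 0.502 an albedo picker might naively assume.
    """
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0x80 / 0xFF))  # ~0.2158
```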

Good on you all. Keep it up. It’s damn refreshing to see these subjects come up on BA.

If anything I’ve said is confusing, please ask and I’ll do my best to try and explain myself further.

False! Again, I’ll happily explain this in another thread if someone cares. Make no mistake though, this is a bit of a misunderstanding as to the differences between what is happening in a scene referred versus a display referred DCC application, as well as entirely subject to the reference space chosen for the internal work.


Thank you for your insight! Since I now know that color pickers use Filmic as the view transform, I played around with it a little longer. Apparently only colors that will actually be rendered are affected by it (not, for example, the theme editor). Maybe I was too quick to judge; having the color picker represent the transformed colors actually isn't too bad, since it for example allows you to see the correct desaturation of colors with values > 1.0 realistically. It's just confusing the first time you use it.

Still there are two things that are wrong to me:

  1. Color pickers (at least for me) still use Filmic even when another view transform is selected in Color Management.

  2. The Workbench engine also seems to use Filmic. I was always wondering why the studio lights view looks so washed out; this is the reason. It makes it unnecessarily hard to spot details in the mesh. Also, matcaps should already be color managed(?), so they look wrong too when using Filmic.


[Screenshot: the viewport as seen in Solid mode]


[Screenshot: how it should look IMO (using Workbench with sRGB as the render engine)]

Kinda off-topic: since Filmic also affects emission shaders, it might be necessary to implement a "Constant" shader for Cycles that displays textures with their own (sRGB) view transform. Otherwise it's impossible to add, for example, background photos that have already been color managed by the camera. I imagine that's not an easy thing with how color management currently works in Blender, since view transforms are usually just slapped onto the whole image. Maybe it could be implemented in Cycles itself instead? Correct me if I'm wrong; I'm not familiar with Blender's source.

All UI requires a transform selection so that the proper picker can be used based on what is needed. This isn't the View transform on a render, but one per UI control. An albedo is different from a normal is different from an emission is different from a depth is different from etc…

One would always want a proper rendering of the camera rendering transform, but of course there are times where it might require an aesthetic tweak to augment the output to meet a particular need. Something like an extreme contrast setup, for example, could be useful.

Again, this requires proper pixel management to implement. Hard coding isn't the greatest idea, and most certainly the sRGB OETF is not a solution here: you are viewing the byproduct of a path tracer, and the sRGB OETF is simply awful as a camera rendering transform. Multiple views of the same material, such as with varying looks, would be closer to a useful UI option, enabling the same material to be viewed under a series of different looks.

Samples accumulate in a path tracing system, and the idea of using a transform meant for display referred emulation is simply wrong here. Should other camera rendering transforms be permitted here? Absolutely; again, one can never know the precise needs or context of the person looking. However, the sRGB OETF is a worse choice.

Background photos aren't suitable for intermixing with CGI if they are already aesthetically baked. Properly integrating photographic sources requires being able to recover their emissions, such as via log encoded camera files, linear camera raw encodings, etc.; otherwise the energy levels are completely wrong in relation to the path tracing engine's output. There will always be a disconnect there, as the idea that you can merge an aesthetically rendered photo with the energy values is a naive one.

Folks frequently attempt to massage imagery from photos into path tracing / CGI work, and the results always leap out of the image as broken for this very reason, despite tremendous additional effort.


@troy_s, is this the correct representation of how the default view cuts off some values:
[Animated GIF: 01_DefaultViewCut]

Currently, LookDev and Solid modes are hardcoded to use Filmic. I think this should not be the case - at the very least for LookDev, but even Solid mode should IMO just follow the CM setting.

It’s as simple as understanding that the sRGB OETF was designed to characterize a typical CRT display. “This is a standard that describes roughly how a CRT display responds in relation to a display linear signal.”

The key part being display linear, which means the signal has already been compressed down to fit within a minimum and maximum value.
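For reference, that transfer function as a small Python sketch (the constants come from the sRGB standard itself, nothing Blender-specific):

```python
def srgb_oetf(display_linear):
    # The sRGB OETF: maps a display linear value in [0.0, 1.0] to the
    # nonlinear code value that roughly matches a CRT's response.
    if display_linear <= 0.0031308:
        return display_linear * 12.92
    return 1.055 * display_linear ** (1.0 / 2.4) - 0.055
```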

While our perceptual system always demands linear light to see correctly, the output from a conventional SDR display is always display linear light, stuck between the minimum and maximum light level of the display. No matter how hard we try, the display cannot be coaxed to push out more or less display linear light than the code signal says. "Hey display, please show your maximum emission of display linear light" == 1.0, while "Hey display, please show your minimum emission of display linear light" == 0.0.

If you look out your window, however, the emissions being projected into your eyes extend from some very small value to some tremendously large value. Where your perceptual system focuses, and the context around it, will determine what range your perceptual system will see, but the scene is always a volume of light emissions spanning some ridiculously small value to some potentially tremendously large one. Our perceptual system does the dynamic psychophysical mapping into what we think we are seeing.

We can easily create a representation of that scene in an internal computer model, but there's a problem: how do we cram that infinite range of scene linear light values into the display linear range? If we simply scale the values down, we are essentially lowering the exposure depending on the highest value. We can ignore some values above and below a certain range, but if we are too aggressive, the result will look awful.
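A toy sketch of those two naive approaches, just to show the trade-off:

```python
# Toy illustration of the two naive approaches described above.
scene = [0.01, 0.5, 1.0, 4.0, 16.0]  # scene linear emissions

# 1. Scale by the peak: everything fits in [0, 1], but the whole
#    image is now four stops darker (16 == 2**4).
scaled = [v / max(scene) for v in scene]

# 2. Clip at 1.0: exposure is preserved, but every bright value
#    is flattened to the same white.
clipped = [min(v, 1.0) for v in scene]

print(scaled)   # [0.000625, 0.03125, 0.0625, 0.25, 1.0]
print(clipped)  # [0.01, 0.5, 1.0, 1.0, 1.0]
```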

In the end, this “magical cramming step” is our camera rendering transform, and there really is a magic to it. If you aren’t careful in how you massage and cram the values into the display linear range, you can end up with pretty nasty looking output. If anyone doesn’t believe me, go grab a DCC application for processing camera files and muck about with the various RGB curves; the imagery can easily look alien as hell even though certain technical attributes are achieved.

As such, camera rendering transforms are largely a creative endeavour in much the same way picking a film stock used to be for photographers.

TL;DR:

  1. Displays always output linear light, but it is always limited to the display linear range.
  2. Scene linear light ratios will always be tricky to cram into the display linear range.
  3. An aesthetic photo is pre-canned already to the display linear range. Recovering scene linear values from a display referred encoding is impossible without additional information.
  4. In a DCC, the transform of other data or colours to the display linear output requires thought and isn’t nearly as simple as some folks think. What is an albedo? What is a false colour or false data rendering? What is a depth? What is a normal? What is an emission? Most importantly, how should each of those unique aspects be displayed for someone in a DCC application, and what optional renderings would be required for each? We are always needing to consider the data rendering of the internal model to the display.

Once one realizes that folks are using a DCC application that mixes many different contexts, it becomes clearer that a single hard coded render transform is about the worst idea ever. Each UI requires its own discrete transform, and not simply the global transform on the image viewer output.

That’s pixel management.

I'm not so sure about that. Solid Mode is for modeling, not for LookDev. Shouldn't it just look consistent no matter your render settings?
In this case, if it uses Filmic because the sRGB OETF is a bad option (like Troy said), it probably should use some contrast setup.

However, in any case the problem is that the contrast of the default shading is just way too low - so whatever view transform is chosen, maybe the studio lights should be adjusted in some way.

Thanks for the extensive explanation.
How can lowering the exposure based on the highest value cause the saturation artifacts you have described frequently elsewhere? Is it as shown in the gif in my previous post (cutting off one of the R, G, or B values), or should I represent it differently? If I can handle it, I want to make a short video explanation, "Filmic for Noobs - Introduction", with more visuals than text. I believe there is enough text available, but visuals will tremendously help make this matter easier to understand. At the very least I want to make a representation so that we all understand that we are dealing with a scene referred domain in Blender, and that it would be a good idea for the rest of Blender to work with that data, so that we have a few more tools than the ASC CDL node in the compositor. Once users realize that, there will be more requests for such tools, and hopefully the Blender developers will go more in that direction rather than implementing more tools that break. (Mind that English is my second language; I wish I could express myself a bit better.)


Lowering exposure doesn’t skew the colour representation, although many camera rendering transforms will globally skew them in subtle ways. The chopping of high values does indeed cause individual colours to skew, so a gradation of intensity of colour X, after getting clipped, will result in a different colour. This is pretty common looking at cell phone photography and even quite a few DSLRs; oranges turn yellow, greens turn yellow, deep blues turn cyan, skin tones skew yellow, etc.
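A tiny sketch of that per-channel skew, using made-up values:

```python
# Per-channel clipping skews hue: a made-up bright orange as illustration.
orange = (2.0, 1.0, 0.2)                      # scene linear, R twice G
clipped = tuple(min(c, 1.0) for c in orange)  # naive clip to the display range
print(clipped)  # (1.0, 1.0, 0.2) -- R:G is now 1:1, i.e. yellow, not orange
```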

Filmic too "skews" colours, however at a global level, which makes it more acceptable on the whole. Trying not to skew colours, while plausible, is surprisingly not a simple thing for a camera rendering transform to do.

The option has changed from “Default” to “Standard”, but this was the solution I needed.