Using OCIO for color management

I found this interesting discussion and tried it out.

Blender recognizes the ACES colorspaces and everything sort of works, but I have stumbled upon two problems.

First: choosing an image's colorspace is difficult, because the list is so long that half of it is hidden off-screen, with no way to scroll to the desired entry.

Second: nodes using color sliders get messed up.

Does anyone have any experience with this?

It looks like Blender doesn’t support the “family” tag within config.ocio. By using this key, Blender could sort the different color spaces.

Alternatively you could edit your config.ocio and remove any colorspace you don’t want to use.
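If you'd rather not edit the file by hand, the pruning can be scripted. This is a sketch, not anything shipped with Blender or ACES: it textually splits the config on the `- !<ColorSpace>` markers rather than parsing real YAML (good enough for the stock ACES configs), and the allowlist names are just examples. Roles and displays that reference removed colorspaces still need pruning by hand.

```python
# Trim an ACES config.ocio down to an allowlist of colorspaces so Blender's
# colorspace menus stay manageable. Purely textual: splits on the
# "- !<ColorSpace>" block markers used by the stock ACES configs.

KEEP = {
    "ACES - ACES2065-1",
    "ACES - ACEScg",
    "Utility - Raw",
    "Utility - sRGB - Texture",
    "Output - sRGB",
}

def trim_config(text, keep=KEEP):
    head, sep, rest = text.partition("colorspaces:")
    blocks = rest.split("- !<ColorSpace>")
    kept = [blocks[0]]                      # whatever precedes the first block
    for block in blocks[1:]:
        name = ""
        for line in block.splitlines():
            if line.strip().startswith("name:"):
                name = line.split("name:", 1)[1].strip()
                break
        if name in keep:
            kept.append(block)
    return head + sep + "- !<ColorSpace>".join(kept)
```

Run it over your config.ocio and write the result to a copy next to the original, then point Blender at the copy.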


@William, has this been assigned to any development tasks before the 2.81 landing?

Thank you for sharing, @Unizaga. I was just wondering how color management had been progressing in 2.81.

Did some testing with Blender and ACES; it seems to work okay.

I also attach a modified OCIO config, which has only the basic colorspaces, to keep the menus from being gigantic:
config.ocio (9.0 KB)

It seems to work but there are some things to keep in mind when using the ACES OCIO:

  • Every image texture needs to be set to the correct color space (gamma as well as gamut) for correct input conversion
  • The HSV/HSL inputs of the color picker (e.g. on the RGB input node), as well as the color wheel and sliders, don’t work correctly: colors come out completely wrong because they are not converted properly.
  • RGB inputs do work, but they expect the values entered to be in the ACES gamut, as stated in the Blender docs:
    " color_picking
    Defines the distribution of colors in color pickers. It is expected to be approximately perceptually linear, have the same gamut as the scene_linear color space, map 0…1 values to 0…1 values in the scene linear color space for predictable editing of material albedos."
So, for scenes set up previously with the standard Blender OCIO configuration, you need to manually convert every RGB value entered in shader nodes for it to work correctly with the ACES configuration.
It would be nice to be able to set the colorspace the color picker expects its input values to be in, like you can with image nodes. (Maybe this is also possible by specifying the right conversion for the color_picking role in the ACES config, but I only got that to change the gamma, not the gamut.)
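For reference, the relevant part of a config.ocio looks roughly like this (a sketch; the colorspace names follow the ACES 1.x configs, and whether Blender actually converts the picker's gamut through this role is exactly the open question):

```yaml
roles:
  scene_linear: ACES - ACEScg
  # per the Blender docs quoted above, this role should share the
  # scene_linear gamut; posters below report experimenting with both
  # sRGB and ACEScg here
  color_picking: ACES - ACEScg
```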

Also I remember @troy_s mentioning some nodes you should avoid, as they introduce some unwanted color conversion due to color space stuff being hardcoded into their source. I don’t recall whether this also applies for any of the nodes available in the shader editor.

The color wheel actually feels more natural with OCIO to me. There is a bit more to keep in mind while working, but I like the results I get, so I don’t mind.

From 2017:

If you rebuild your ACES configuration via Python, you can help this out a little bit by putting the proper transforms into separate displays. The default ACES configuration chose the meatheaded way that existing software did things (wrong / poorly) as opposed to how OCIO was designed. Also, software needs to support the OCIO idea of “Families” to make it more manageable.
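As an illustration, that separation looks something like this in a config (a hedged sketch; the view and colorspace names follow the ACES 1.x configs):

```yaml
displays:
  sRGB:
    - !<View> {name: ACES, colorspace: Output - sRGB}
    - !<View> {name: Raw, colorspace: Utility - Raw}
  Rec.709:
    - !<View> {name: ACES, colorspace: Output - Rec.709}
    - !<View> {name: Raw, colorspace: Utility - Raw}

active_displays: [sRGB, Rec.709]
active_views: [ACES, Raw]
```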

That’s due to broken logic that was coded in, despite warnings from some raving idiot. If you search the tracker, you will see a never ending list of entries where I suggest that the UI needs to be managed, and based on the audience selected transforms. There’s a long bit of reasoning there, but I won’t bore anyone here with it unless it is of interest.

See above. Le sigh.

Tied to above. ACES has a poor selection of roles, and it was discussed recently. The main issue is that the UI needs to be managed.

OCIO has some API functionality to try and match colour transforms to filenames. I wrote a patch as an example years ago, but no one cares. There are plenty of folks that truly believe that Blender’s colour handling doesn’t have crippling problems. So here we are.

The issue is that it is up to you folks, the culture, to understand what you are doing enough to make sure the software and developers pay attention and listen. We aren’t quite there yet, but we sure are a million times closer in recent years thanks to you folks doing the heavy lifting of learning and helping others.

The TL;DR is that Blender is still hard wired with completely incompatible pixel math from hack software like Photoshop. That is, the Mix node uses strictly hack display referred math. For scene referred emissions, the formulas are complete garbage and have no place in Blender by default. They simply do not work. You can look at the blend math in the Adobe PDF specification on or around page 320 if memory serves. Also, with regard to the colour transforms that are hard coded, they include:

  • Sky texture
  • Wavelength
  • Blackbody

Those need to be fixed, and @lukasstockner97 did some work towards that. Not sure if they have been fixed yet. I’d doubt it, and even if they were, they need testing.

So by default, plenty of nodes are fundamentally broken. But it is even more nuanced than this, in that the context of what everyone is trying to do dictates the appropriate math.

Some examples:

  • If you are operating on a rendered scene, with scene emissions, the value range is some extremely low value to some potentially infinitely high value.
  • If you are operating on albedos, the colour ratios represent a percentage of reflection from 0.0% to 100.0%, plus physically implausible values that absorb or return more energy than input.
  • If you are working on data, the data isn’t colour information at all. An alpha channel is a good example here, where it represents a percentage of occlusion. But how should it appear visually, despite being data? 50% alpha will not have a 50% perceptual appearance. Same goes for strictly non-baked-into-data visualizations of other data forms!
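The alpha point can be made concrete with the sRGB transfer function (a toy sketch, hand-rolled rather than taken from any Blender source): encoding linear 0.5 for display lands around 0.735, i.e. 50% in the data does not look like 50% grey.

```python
# sRGB opto-electronic transfer function (IEC 61966-2-1 piecewise encoding),
# used here only as a rough proxy for how a value appears on a display.
def srgb_encode(x):
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

half_alpha = srgb_encode(0.5)  # roughly 0.735, not 0.5
```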

It’s a deep rabbit hole. Fixing the software isn’t complex. It requires pressure and willpower and most importantly, an informed and educated culture to properly demand and rigorously test it.


I don’t know if it is the “right” solution or not, but (if I remember correctly) changing the color_picking role to output sRGB rather than Rec.709 makes the RGB sliders work more predictably.
But later I changed the color_picking role again, to ACES - ACEScg (though I forget why), and that is the config I use nowadays.

For the time being you could use sRGB values on the inputs and convert them to ACES2065-1 gamut with this node matrix:

Take the values from this website:
set to e.g.:
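The same conversion can be sketched directly in Python. The matrix below is the commonly published linear-sRGB/Rec.709 (D65) to ACES2065-1 (AP0, D60, Bradford adaptation) matrix; verify it against your own pipeline before relying on it, and note the input must already be linear (decode the sRGB transfer function first).

```python
# Linear Rec.709/sRGB primaries -> ACES2065-1 (AP0) gamut conversion.
SRGB_TO_AP0 = (
    (0.4397010, 0.3829780, 0.1773350),
    (0.0897923, 0.8134230, 0.0967616),
    (0.0175440, 0.1115440, 0.8707040),
)

def srgb_linear_to_aces2065(rgb):
    # plain 3x3 matrix multiply; rows sum to ~1.0, so white stays white
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in SRGB_TO_AP0)
```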


I wonder, for textures (basic sRGB JPEGs, etc.), is it good enough to switch the color space of the texture node to ACEScg, or does it require some more work (outside of Blender)?

Also, the list of broken, ACES-incompatible nodes and whatnot in Blender does make it seem somewhat safer to stick with Blender's current Filmic defaults :confused: …

It is quite easy to make a custom list of the color profiles you need. I added a modified version to this thread that you can try; just replace the config file in the ACES folder.

For Cycles it is enough to change the colorspace in the Image Texture node. For other things it is safer to convert. You can also use Blender to convert images to another colorspace.


One problem I encountered, though, is that setting the OCIO environment variable on Windows also applies it to other software. It would be nice if Blender had its own setting for the OCIO config path; that way, modifying the config to suit Blender wouldn't mess things up for other software.
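One workaround, rather than a machine-wide OCIO variable, is a small launcher that sets OCIO only for the Blender process (a sketch; the paths and binary name are examples, not anything Blender itself provides):

```python
# Launch Blender with OCIO set for the child process only, so the ACES
# config does not leak into other OCIO-aware applications.
import os
import subprocess

def launch_blender_aces(*args, blender="blender",
                        config="~/aces_1.2/config.ocio", **popen_kw):
    """Start Blender with OCIO pointing at the given config, leaving the
    parent environment untouched."""
    env = dict(os.environ, OCIO=os.path.expanduser(config))
    return subprocess.Popen([blender, *args], env=env, **popen_kw)
```

The same idea works as a .bat file on Windows (`set OCIO=...` followed by the Blender command), since `set` inside a batch file only affects that console session.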

What bothers me is this bug, or whatever it is, that causes weird color shifts with exposure.

overexposed emission colors vs non overexposed

This is a basic test replicated from koola on Unreal Engine 4 (UE has since fixed this behavior somehow).

This is with OCIO. Without it, blue doesn’t become pink, but red also doesn’t become orange-ish, and the colors are all just kind of wonky.

These weird color shifts, coupled with the pretty shitty sun and sky system in Blender, make it super difficult to do a bright sunny-day exterior, imho. At least I haven’t really seen much good exterior archviz done using Cycles.

Or am I wrong, is this somewhat correct behavior, am I doing something wrong? The whole topic of color spaces and tonemapping can get super confusing at times…

With renderers like fstorm, unreal engine or octane things tend to just work as expected when it comes to colors but with cycles it’s always somewhat weird.

Correct! It requires using wider gamut sources the entire way to maximize the gamut.


Neither ACES nor the Hable Filmic applies anything beyond the most basic gamut transform via a matrix. It’s a glaring error of judgement, and leads to completely unbalanced imagery. Filmic Blender has a very rudimentary bit of gamut mapping.

With the others, there is no gamut mapping along volume, so you get skew and shift at every turn. The colours shouldn’t skew to cyan nor pink, but rather map to the closest chromaticity within gamut. Again, none of them do that because they are all essentially basic transfer functions, without any gamut mapping operation. The magenta skew in this case was a direct problem with OCIO, but the “working” version also has skew, which is equally hideous.
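The skew being described can be shown numerically in a few lines (a toy illustration, nothing like a production gamut mapper, which has to work along the gamut volume):

```python
# Toy illustration: per-channel clipping skews hue, while a
# ratio-preserving mapping keeps the chromaticity intact.

def clip(rgb):
    # naive per-channel clamp: a hot orange collapses toward yellow
    return tuple(min(max(c, 0.0), 1.0) for c in rgb)

def preserve_chromaticity(rgb):
    # scale the whole triplet so the largest channel hits 1.0,
    # keeping the channel ratios (the chromaticity) unchanged
    peak = max(rgb)
    return rgb if peak <= 1.0 else tuple(c / peak for c in rgb)

hot_orange = (4.0, 1.5, 0.1)   # an over-range scene-referred emission
```

Clipping turns the example into (1.0, 1.0, 0.1), a yellow; the ratio-preserving version keeps it orange at (1.0, 0.375, 0.025), at the cost of luminance.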

So is there a correct way of handling color/material exposure or do you need some sort of custom color matrix to correct colors at each “step” of exposure?

Afaik UE4 does its own workaround (other prominent renderers probably do as well), using some custom color matrix or whatever. I mean, fstorm, octane, and UE4 are all clearly doing their own thing, but is this a customized interpretation, a correction, or is there a correct way to handle colors all the way through (no matter the input or process)? Is this a post-process (tone mapping) issue as well?

The biggest deal for me is that no matter what I throw as an artist at UE4, fstorm, or octane (for example), it just looks and behaves roughly as expected in different lighting situations. With Blender I feel every material needs adjustment depending on the lighting situation (colors get overexposed, cast weird color bleed in GI, or look dull in underexposed situations, etc.).

It’s not on Unreal's, ACES', nor any of those other projects' roadmaps. No one seems to think it’s a problem. Lack of gamut mapping is a huge problem in my mind. I’ve brought it up with a number of ACES people, but alas.

I don’t think so? Blues turning cyan, greens skewing yellow, and deep reds turning magenta are all part and parcel of folks that ignore gamut mapping.

I’m confident it happens in all of them. Filmic Blender does try to gamut map, but there are better things that can be done, if I ever get the next project finished.

Gamut mapping, in the end, is a creative decision.

First, this is an excellent question and is really a sort of “next level” question, given a majority of people are just now getting to terms with scene referred emissions versus display referred, etc.

Second, it’s an open ended creative question. What do you think should happen? At some point a colour exceeds the destination gamut. What should a pure BT.709 blue do when it exceeds the destination gamut? Should it skew cyan? Remain blue at full intensity? Those are the real problems present in both ACES and Unreal.

Tone mapping is a bit of a horrible term. In the end, it’s just another transfer function from the scene referred to display referred. It takes an input value and spits out an output. Input 2341.0, 0, 0 and getting out 1.0, 0, 0 doesn’t solve the gamut problem. It also doesn’t solve the gamut problem if the source is a wider gamut rendering working space and the destination is much smaller; that too leads to broken and over saturated / posterized imagery.
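A minimal numeric sketch of that point, using Reinhard x/(1+x) as a stand-in curve (the curve choice is an assumption for illustration, not what any of these renderers actually ships):

```python
# "Tone mapping" as a plain scene-referred -> display-referred transfer
# function. Applying any such curve per channel changes the ratios between
# channels, which reads as a hue/saturation skew.

def reinhard(x):
    return x / (1.0 + x)

scene = (10.0, 2.0, 0.2)                          # a bright orange emission
per_channel = tuple(reinhard(c) for c in scene)   # ratios drift toward grey/yellow

# ratio-preserving variant: drive the curve with the largest channel only
scale = reinhard(max(scene)) / max(scene)
ratio_preserving = tuple(c * scale for c in scene)
```

Per-channel, the green/red ratio drifts from 0.2 up toward 0.73, skewing the orange toward yellow; the ratio-preserving variant keeps 0.2 exactly. Neither answers the gamut question, which is the point.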

Look closer with the above information. It also looks like ass. The software needs to do better. Also note most of those aren’t even using wider gamut rendering spaces.

But again, I encourage anyone reading this to think about what they think should happen with out of gamut values, assuming they understand the issue.

Well, I don’t think I need to tell you this, but most people using “the software” will work with whatever they get, and if they’re not happy they'll move to another one, because it’s just hard to even understand what’s going wrong behind the rendering.

If I see something I like in another renderer, I try to emulate it. And I think many people do.

And what’s worse, I think many people trying to implement “color science” in their renderers do the same: emulating the ranges camera manufacturers use and how they interpret them in their raw data processing (much of which is just hardware limitation). On the other hand, that’s probably your best bet: show data the way people are used to seeing it, and accept it.

Blender, being open source and having input from all over the industry, could in theory adopt, or even pioneer, some “proper” workflow. But really, what would that be?

As an individual artist I can only say that predictable results would match what I’m used to from DSLRs or similar, but I have no way of knowing what’s really correct. It’s just kind of sad that there isn’t a proper way to go about this yet, I think.

The best entry point is to evaluate a simple DSLR or cellular phone photo.

I would copy paste the image here, but it isn’t mine. Have a look at this extremely common photograph result.

What do you see?

Well, poor white balance and small dynamic range :smiley: