Color Management

Wow, I didn’t understand just how wrong I was. Thank you.

Just experimented with the glowing-ball scene, rendering out as Display Device: sRGB + View Transform: Raw and saving it as a 32-bit EXR. It looks rubbish when unprocessed, but doing a Camera Raw Filter pass in Photoshop shows that the color information is there, and I get the desired ‘accurate’ white glow on the center mass of the sphere with the gradient red color spilling onto the walls.

If I toggle to the SDR preview in Camera Raw Filter, it looks exactly like my AgX renders. So as you say, previewing the viewport using AgX is the way to go before saving a Raw 32-bit EXR. It also means, ironically, that when I’ve been saving out 16-bit TIFFs with the AgX view transform for my own work, I’ve been getting the same SDR result with fewer steps, by mistake haha. Not an ideal ‘professional’ workflow I know, but results are results.

Also @claus I’m happy for any and all discussion on the topic, go for it :slight_smile: I have previously looked at that Render Raw add-on; the ‘accurate’ mode for film grain looks the most interesting to me out of everything it offers. I’ve wanted to emulate that part on its own for some time.

They also only cook with water… no magic involved… except maybe: they simply do not restrict themselves to one single format/color management and then get problems when a single person “screws it up” by using the wrong setting… or… they all know what workflow is/should be used.

It seems there are different opinions among different colleagues… and I’m puzzled. Shouldn’t some “executive production manager” (?) be responsible for deciding what is used, after consultation with some “technical expert” (?)… or whatever the actual title may be?? :question:

I think some “word battles” (exaggerated) between employees don’t really help to establish a proper workflow.

( Meanwhile: the example about PNG and EXR seems to show some… “incompleteness” about the actual facts… so again: there should be some technical expert in the company… and someone who decides what the crew is using… see above “magician” :mage: at Pixar… in the person of Edwin Catmull :wink: )

I don’t know if there is a list.

The problem is that some grading nodes are developed for sRGB images in a 0–1 range.
If you have an open-domain 32-bit EXR, which can store values of roughly ±3.4 × 10^38 in floating point, you can test for yourself that some nodes which calculate with 0–1 data produce instant clipping artifacts etc.
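For illustration, here’s a minimal sketch (hypothetical values, NumPy for convenience) of why a node that assumes 0–1 data destroys open-domain values:

```python
import numpy as np

# Open-domain linear render data: EXR floats can go far beyond 1.0.
pixels = np.array([0.2, 0.9, 2.5, 14.0], dtype=np.float32)

# A grading operation written for 0-1 sRGB data often clamps first
# (hypothetical node behaviour, for illustration):
clamped = np.clip(pixels, 0.0, 1.0)   # 2.5 and 14.0 both become 1.0

# A later exposure pull-down can no longer separate those highlights:
print(clamped * 0.5)   # -> ~[0.1, 0.45, 0.5, 0.5], highlight detail is gone
print(pixels * 0.5)    # -> ~[0.1, 0.45, 1.25, 7.0], still distinct in float
```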

Another way, instead of using the AgX Log, would be to normalize the EXR input values (which we do with our custom compositing for the tonemapping curves, but those are then the final output, btw). Then the data would be in a 0–1 range, but you lose the HDR floating-point range.

If you normalize, then you cannot decode to AgX sRGB, because you have to decode from the AgX Log values back to open domain before decoding to AgX sRGB (which is what the AgX Log to AgX sRGB CST does).

This means that if you normalize your data, you have kind of tonemapped the image already, which would be the job of the final sRGB AgX CST.
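To make that concrete, here’s a tiny sketch with made-up values:

```python
import numpy as np

# Hypothetical open-domain render values (linear, can exceed 1.0).
linear = np.array([0.05, 0.75, 1.5, 3.0], dtype=np.float32)

# Dividing by the frame maximum squeezes everything into 0-1...
normalized = linear / linear.max()   # [0.0167, 0.25, 0.5, 1.0]

# ...but that is already a crude tone map: absolute intensities are gone,
# and the AgX Log -> AgX sRGB CST would mis-decode this data, because it
# expects AgX Log-encoded values, not "divided by the max".
print(normalized)
```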

Hope that makes sense.

Edit: I have not looked into which conversion happens in the AgX Log. However, a log space is typically in a range of 0–1; it can be logarithm-based or even normalized. Since the exact log encoding and decoding are coded in the OCIO config, you get the exact decoding from it in the CST.

No problem, and there must be some interesting conversations going on where you work at the moment :stuck_out_tongue:

And frankly, it should be this. A technical manager or director of visual effects or whatever, someone in charge, really should have all this already set in stone, and everyone just gets told to ‘do it this way’.

Now of course it’s possible you are at a fairly new and/or small company, where so far just ‘winging it’ has been fine. Maybe everyone largely just worked on their own projects and that was it. Or any multi-person team was still just one person per role, like someone modeled, someone rigged, someone animated (none of those people care about colour management) and only one person did textures/lighting/rendering.

Hence the ‘growing pains’.

3 Likes

That’s pretty much exactly the reason. First, you have to have a camera that can shoot it. The bandwidth and storage required to shoot raw video is not minor, and then comes the matter of having to grade all the footage to match.

Using S-Log2, C-Log, etc. gives you a foundation of “OK, I know this is where my range is going to be when I start editing.” What you see in the viewfinder is roughly what you’re going to see in editing, so you can also light with a bit of comfort. You can still grade it (within margins), but you’re not starting from nothing. And your file size also isn’t so massive.

But in terms of 3D - yeah, I don’t see the point of baking in a look when you’re working in a team environment (using different software), and trying to match your render look to … something. Keep it as raw as possible, and kick the color to the next part of the pipeline.

1 Like

So while I can’t go into details for a variety of reasons, you’ve nailed it really. I joined a team that has rapidly expanded, with a mix of people from different backgrounds, specialties and experience. There is a team heading up the ‘do it this way’ documentation, which I’ve been asked to join as one of the few Blender representatives on the team.

Following on from my previous experiments, I hit a weird wall today: if I open a 32-bit EXR in Photoshop, I can use Camera Raw Filter to edit the colors nicely, and even get an SDR preview (the one I previously mentioned that looks exactly like AgX in the Blender viewport). However, when I import that same image into After Effects, set the project up for 32-bit with all the correct color settings, and use Lumetri Color to apply the same exposure, contrast, etc. settings as Camera Raw Filter in PS, it yields very different results. If I use very different values, I can make it look like the SDR preview/AgX imagery, but not the HDR version. So unless anyone here has some pointers, I need to go away and figure out an HDR workflow in After Effects.

All the while keeping in mind whether or not this is relevant to my workplace, ha.

1 Like

As I said… a real beast… from hell :stuck_out_tongue_closed_eyes:

1 Like

Sorry, can’t help there, as I don’t use Adobe software.

One thing I can say, and this fits the ‘it’s a real beast’ theme, is that settings/LUTs etc. that have the same name in different software may not actually be exactly the same.

This is why what can happen is that a LUT or colour space setting is picked from just ONE piece of software that everyone likes, and it’s then copied to everything else.

Ideally this is where OpenColorIO support and its config files come in. If all the software supports that standard, then you should be able to take the setting (or even your own modified config) from one and apply it to all other software.

Done correctly, any raw file from any software could be opened in any other software on any other computer with the same LUT applied and it will still all look the same (assuming much the same monitor specs/calibration, etc, etc).
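As a sketch of what that looks like with the OpenColorIO Python bindings (the config path and color space names below are placeholders; the real names depend entirely on your config):

```python
import PyOpenColorIO as OCIO

# One shared config for every app in the pipeline (placeholder path):
config = OCIO.Config.CreateFromFile("/shared/pipeline/studio_config.ocio")

# Build a processor from scene-linear to the agreed display transform
# (placeholder color space names, they must exist in your config):
processor = config.getProcessor("Linear Rec.709", "AgX Base sRGB")
cpu = processor.getDefaultCPUProcessor()

# Apply it to a single RGB pixel: open-domain linear in, display out.
print(cpu.applyRGB([1.5, 0.2, 0.1]))
```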

2 Likes

So, I have helped to write the standards documentation at my workplace, and we have come to a working solution, hurray!

Firstly, very grateful for everyone’s help on this. Here are some important things I learnt:

  • I didn’t understand how the AgX view transform was taking the data in Blender and giving me the ‘correct’ appearance of the bright light in my example imagery. Now I understand it is using the floating-point data within the Blender scene, applying a curve to create that appearance. This is why you can add a curve/color correction to an otherwise ugly-looking EXR file with raw/linear color data and get the same look, as EXR uses floating-point data.

  • Full 32-bit float or 16-bit half-float doesn’t really matter for my purposes, especially as I don’t intend to work in HDR. Edit: I am following on from my previous point about full/half-float EXRs.

  • If you save a 16-bit TIFF, it won’t work the same way, because TIFFs use integer data. So you can’t correct a 16-bit TIFF and get that bright white core of the sphere, as you would hope to see.

  • I had previously been working with 16-bit TIFFs with the AgX (or Filmic, pre-4.0) view transform baked in. So the end result of my work probably ended up looking roughly the same, with ‘correct’ behavior of light and dark, saturated bright lights etc. This gave me a false sense of correctness: yes, my work ‘looked correct’, but maybe I wasn’t going about it in the best way.

  • This led to the confusion when sRGB Raw/Linear color spaces were mentioned at my workplace, but no mention of EXR and floating point data.

  • Going forward, I will be working in the viewport with the AgX view transform, as I think it gives a very good representation of the correct light behavior and is a good example of what the EXR file will look like once the initial color correction has been applied. The Blender output file will, however, be a half-float EXR, which has the view transform stripped out by default (see the small scripting sketch after this list).
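Here is that last point as a small Blender Python (bpy) sketch; the property names are from recent Blender releases, but treat it as a sketch rather than gospel:

```python
import bpy

scene = bpy.context.scene

# Preview with the AgX view transform in the viewport / render view...
scene.view_settings.view_transform = 'AgX'

# ...but write out scene-referred half-float EXRs; the view transform
# is not baked into EXR output by default:
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '16'   # half-float
```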

This conclusion is by no means final, happy to hear more thoughts on the topic :slight_smile:

3 Likes

you mean .exr I hope…

EXR will give you the floating-point values to get the maximum out of your comp.
Anything else is a waste of rendertime imho. :wink:

2 Likes

Yes, that point is following on from the previous bullet point but that is maybe not very clear, so I have added an edit: ‘Edit: I am following on from my previous point about full/half-float EXRs.’

1 Like

If you’re gonna use AgX for pre-viz in Blender, I recommend the rest of the pipeline also uses AgX, as it’s been known to skew reds toward pink, which other transforms might not do. It depends on how color-sensitive your work and colleagues are. You can do that with OCIO.

Also, I fail to see why you couldn’t use a 16-bit TIFF (though my preference goes to EXR regardless). Could you explain a bit more?

1 Like

So I have put these examples together that hopefully demonstrate the issue.

Some notes:

  • These have been converted to 8-bit JPGs for this website, but they are still representative of the output.
  • @claus you are correct in saying that the AgX view transform example does give a slight pinkish hue to the red, thanks for that input :slight_smile:
  • This red glowing sphere in a white room is representative of a worst case scenario for color management/color data storage.

First up, we have a representation of what an unprocessed half-float/16-bit EXR file looks like. In this state, it looks unusable.

However, the color information is stored in the floating-point data, so a quick Camera Raw Filter pass in Photoshop (auto settings, with HDR turned off) shows this:


We get a realistic interpretation of what the bright red light should be doing: A white core with a red bleed/halo, like a lightsaber or neon light.

Then we get to the 16-bit TIFFs @claus asked about:


Here we see an almost identical image to the unprocessed EXR file. So what happens when we try to run the same Camera Raw Filter (exposure, contrast, highlights, curves etc.)?


So now we see that this image, which uses 16-bit integer data, cannot be rescued; the color information just isn’t there to bring back the glowing sphere.

Just to really make sure, this is what happens when we crank up the exposure some more:


The integer data cannot hold as much information as the floating-point data of the EXR, even though they are technically both ‘16-bit’.

There is a lot of literature out there for integer vs floating-point data, but it essentially boils down to:

Integer: whole numbers only, in this instance RGB values of 0 to 255.
Floating point: decimal places, in this instance values stored as 0.00000 to 1.00000 (and beyond). A higher fidelity of detail in the data.

Something important to note, which I am quoting from this reference: in integer, exceeding values are clipped, so 75% grey + 75% grey = white (255). Lowering the exposure won’t bring back any values over white. Integer formats cannot store values above white. (This is effectively what is happening in our examples above.)

In floating point, exceeding values are not clipped: 75% grey + 75% grey = white (1.5). Lowering the exposure can bring back all values over white.
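To make those two quotes concrete, here’s a small numeric sketch (using a 16-bit integer maximum of 65535 rather than the 8-bit 255 from the quote):

```python
import numpy as np

grey = 0.75

# Floating point (EXR-style): the sum is stored as-is, above "white"...
f = np.float32(grey) + np.float32(grey)   # 1.5
# ...so pulling the exposure back down recovers the original value:
print(f * 0.5)                            # 0.75

# 16-bit integer (TIFF/PNG-style): nothing above the max is storable.
i = min(int(grey * 65535) + int(grey * 65535), 65535)   # clipped to 65535
# Pulling the exposure down now just gives mid-grey; the overshoot is gone:
print(i / 65535 * 0.5)                    # 0.5
```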

PNGs exhibit the exact same behavior as TIFFs, as they also work on integer data:



AgX, Filmic and other view transforms give us a way of working that now seems unusual after learning all of this. I had previously not understood how AgX was giving me the desired behavior of the bright light, yet could be saved as 8- or 16-bit integer image types without the loss of information. It is because Blender (or any 3D program using a view transform) takes the floating-point data, applies a curve (AgX in the example below), and then bakes that appearance into the image when you save it:


Note the hue change Claus mentioned.
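As a numeric aside, here is a minimal sketch of that ‘curve the float data, then bake to integer’ idea. It uses a simple Reinhard-style curve as a stand-in; the real AgX transform is far more involved (log encoding, per-channel curves, hue handling):

```python
import numpy as np

# Open-domain linear render data (hypothetical pixel values).
linear = np.array([0.05, 0.75, 1.5, 6.0], dtype=np.float32)

# Stand-in display curve: compresses 0..infinity into 0-1.
display = linear / (1.0 + linear)

# Quantize to 8-bit integer. The "look" survives because the curve has
# already compressed everything into the storable range:
baked = np.round(display * 255).astype(np.uint8)
print(baked)   # [ 12 109 153 219]
```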

And I think that concludes it really, I hope that makes sense!

1 Like

I see. I hadn’t heard of 16-bit integer data before. I always assumed 16-bit was half-float. I learned something today.

I know it’s definitely technically possible to output to a half-float TIFF; it’s just that Blender can’t (yet), apparently. I assume because it would be superfluous, since EXR is more optimized and preferred for this specific goal.

ps the hue shift is not necessarily a bad thing as it might aid with color perception in formed images.

1 Like

the hue shift is not necessarily a bad thing

Yes, as long as you are aware of it and account for it; plus I quite like the desaturated soft look it tends to give.

I know it’s definitely technically possible to output to a half-float TIFF

Correct, see this Wikipedia entry. I learnt today that Adobe owns the copyright to the TIFF specification, so I’m wondering if the fancy 32-bit SGI LogLuv TIFF (what my colleague tells me it might be called) can’t be accessed unless you pay Adobe a fee.

And yes, superfluous when EXR is more of an industry standard.

Reading that, it seems that TIFF is pretty much a container itself and not a singular format, since it has the ability to store vectors, layers and JPEGs. Might be why at the old job we had an interesting roulette wheel of outcomes when dealing with TIFF for prepress, as we never knew what we were getting.

1 Like

As some users mentioned, use EXR.

I made a few tests.

I stored an EXR directly as a render to 16-bit TIFF, ‘follow scene’ and override (Raw). It seems to lose some highlight values if I reload the TIFF into Blender with AgX tonemapping active.

Vs. if I store the EXR file directly as 32-bit OpenEXR, Raw, ‘follow scene’ or override, and load the stored files back into Blender with AgX active, they look the same.

I repeated the same procedure and stored the EXR image as 16-bit OpenEXR, ‘follow scene’ and as an override Raw file (both options Raw, to be sure no tonemapping is applied), with DWAA compression and override color space Rec.709, same as the original EXR.
They look identical to the original file after loading and switching between the original EXR and the stored OpenEXR files.

Not sure why the TIFF files lose the highlights. I guess, if no values are lost during storing, it could be the automatic float conversion Blender uses. For example, if the original scene values go over 1.0 in floating point, like 3 or more, and Blender converts them to 0–1, you get a lower dynamic range displayed. Maybe it’s a bug then. Just guessing.

About the TIFF file: there is no need to normalize the image, Blender converts the image to float automatically (above 8-bit).


https://docs.blender.org/manual/en/latest/files/media/image_formats.html

I stored an EXR directly as a render to 16-bit TIFF, ‘follow scene’ and override (Raw)

Are you saying you saved an EXR image from Blender, then reloaded it and re-saved it as a TIFF? If so, yes, it will convert the data from floating point to integer. Pretty much the same as saving directly to a TIFF the first time, as Blender will be using floating-point data when generating the render in the first place.

I repeated the same procedure and stored the EXR image as 16-bit OpenEXR, ‘follow scene’ and as an override Raw file (both options Raw, to be sure no tonemapping is applied), with DWAA compression and override color space Rec.709, same as the original EXR.
They look identical to the original file after loading and switching between the original EXR and the stored OpenEXR files.

Apologies, I don’t 100% follow, but when saving an EXR, Blender’s default colorspace is (linear) Rec.709, so it will look the same as with no override.

Not sure why the TIFF files lose the highlights. I guess, if no values are lost during storing, it could be the automatic float conversion Blender uses. For example, if the original scene values go over 1.0 in floating point, like 3 or more, and Blender converts them to 0–1, you get a lower dynamic range displayed. Maybe it’s a bug then. Just guessing.

Not a bug; you cannot store values over 1 (for, say, the brightness value of a pixel) with integer file formats like TIFF or PNG:


Source for this reference.
All the values that go over 1 in this over-exposed image lose their detail when saved as an integer TIFF. Anything that goes over 1 is just recorded as 1 (like clamping data when working with nodes), while floating point can store values over 1, so when the exposure is corrected, all the data for the different light levels is still there.

About the TIFF file: there is no need to normalize the image, Blender converts the image to float automatically (above 8-bit).

In practice I think what this means is that when you import an integer image, say into the Shader editor, then run it through a color ramp, math node or a curves node etc., Blender needs to represent the values in that image as floating-point decimals. Most likely it just divides each integer value by the maximum (65535 for 16-bit) to map it into the 0–1 range (I’m guessing a bit there). But you won’t magically gain data from nothing.
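A tiny sketch of what I assume that load-time conversion amounts to (my assumption, not confirmed Blender internals):

```python
import numpy as np

# 16-bit integer pixels as stored in the TIFF:
tiff_pixels = np.array([0, 32768, 65535], dtype=np.uint16)

# Convert to float by dividing by the integer maximum:
as_float = tiff_pixels.astype(np.float32) / 65535.0
print(as_float)   # ~[0.0, 0.5, 1.0] -- still capped at 1.0, no new data
```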

No, I loaded an EXR file for the test.
Then I stored this EXR as a 16-bit TIFF as described.
Then I loaded the stored TIFF back into Blender and compared the original EXR with the TIFF file.

You have an option to choose your color space if you save with an override. Since this test was to check whether they look the same as the original EXR, I selected Rec.709.

Yes, this was my observation.

This is the question: whether this auto-conversion on load normalizes the integer to 0–1, for example, or whether this conversion is the right behavior for TIFF files.

In theory you could multiply the float image back up by the integer max value and convert it to the float you need.
Maybe a direct multiply by 3, or whatever the max scene value was, should work as well (in linear space of course; it would be the same as increasing gain in Resolve).

If something was clamped, on the other hand, then data is lost and cannot be recalculated.
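A quick sketch of that, with made-up values (the scale-back only works if you know the max and nothing was clamped):

```python
import numpy as np

scene_max = 3.0   # hypothetical known maximum before normalization

normalized = np.array([0.1, 0.5, 1.0], dtype=np.float32)
restored = normalized * scene_max   # [0.3, 1.5, 3.0] -> back to open domain
print(restored)

# But if values were *clamped* at 1.0 instead of scaled, everything above
# the clamp collapsed to the same number -- no multiply separates it again.
```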

An interesting, if lengthy, explanation of color management in Blender from BCON 24.

A point I found quite funny is around 25mins and again at 34mins 30s, where he uses DaVinci Resolve to properly set the input and output color space and gamma. Trying to do the same thing in Adobe applications is a pain in the backside!