Filmic is on EVERYTHING in the compositor (not just render layers)

If you use Filmic it affects EVERYTHING in the compositor - not just the render layers. This means if you want to work on top of some live action footage it affects that too, greying out any whites. It also means it's really hard to get a pure white background - with Filmic you have to set it to something like 100,100,100 where 1,1,1 SHOULD be white. This means any anti-aliasing around the edge of the white background is lost - any small black text, for instance. Is there a way round this?


Mixing scene referred linear with compressed sRGB is the most likely source of the problem. The best way round it is to use scene referred linear footage and make good use of your CDL node.

…I didn't understand that. Maybe post an image of the node tree I'd need…?

Welcome to color management! Everything sucks.

I’m sure there are inaccuracies here; I’d appreciate if anyone actually reading it (lol) points them out to me so I can correct my understanding :slight_smile:

Filmic is not, in my understanding, designed to be integrated with live-action footage.

Here’s what is approximately happening pipeline-wise to screw up your scene (it’s complicated):

  1. Cycles spits out meaningless 0…inf values, which are given meaning only because you've designed your scene whilst viewing it through the lens of a transformation (Filmic) that picks out some range 0…n, which goes through a "transfer function" to become (almost) viewable intensity values between 0…1. Filmic uses a huge range of values, which is close to the range of light intensities we can see with our human eyes.
  2. These values are of course RGB values, which are given meaning only because you’ve designed your scene whilst viewing it through the lens of a transformation (Filmic, again) that defines “white”, “red”, “green”, and “blue” as very particular objective colors, as defined by experiments done in 1931. Filmic uses the same four definitions as the very common sRGB standard.
  3. These meaningless 0…inf values now enter the compositor after rendering. They are still just data, created under the assumption that they would later be transformed (i.e. given meaning) by Filmic.
  4. Your camera spits out meaningless RGB values too, which are given meaning by the camera manufacturer, which we can recover by examining the “transfer function”, “white point”, and primaries (objective color values) of the color space that the footage is stored in (sometimes found in the metadata of a video file).
  5. You now input your footage. Blender takes these meaningless values, determines their meaning (color space), and should ideally use this knowledge to transform the data into the Filmic color space. But there is a problem: one can only go FROM Filmic TO other spaces like sRGB, not the other way around. So Blender does the best it can; it transforms your file into linear sRGB, which happens to have the same primaries as Filmic but a different transfer function (mapping from 0…n to 0…1). There's a sketch of that decode just after this list.
  6. When you layer your live-action footage together with your render, you’re mixing a render (data that takes on the intended meaning when transformed with Filmic) with your footage (data that takes on the intended meaning when transformed with sRGB). Obviously this is gonna end badly.
  7. Finally, the combination gets processed by Filmic. Your render looks fine, as its assumptions were respected, and thus it looks as intended. However, your footage looks wrong, as it was never transformed to the Filmic space, but has now been transformed from the Filmic space - for example, a white original “1” from your live-action footage is interpreted by Filmic as “oh, that’s a really dim part of the scene!”, and given an output value that matches that interpretation, when it really should have been interpreted by sRGB as “oh, that’s as bright as bright can get!”.
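
To make the "transfer function" idea above concrete, here's a minimal numpy sketch (my own illustration, not Blender's actual code) of decoding display-referred sRGB values back to linear ones using the standard sRGB inverse EOTF. This is roughly the conversion Blender performs when footage tagged as sRGB enters the compositor:

```python
import numpy as np

def srgb_to_linear(v):
    """Invert the sRGB transfer function (the piecewise IEC 61966-2-1 curve).

    v: display-referred sRGB code values in 0..1.
    Returns display-linear values. Note: display linear, NOT scene
    radiometry - whatever the camera did to the image is still baked in.
    """
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

# A "white" pixel from sRGB footage decodes to 1.0 linear, which Filmic then
# treats as a fairly dim scene value rather than "as bright as bright can get".
print(srgb_to_linear([0.0, 0.5, 1.0]))  # -> [0.0, ~0.214, 1.0]
```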

Problems:

  • We can't transform live-action footage into the Filmic color space; we can only go from the Filmic color space to other spaces (like sRGB) using specially designed LUTs. I don't know if inverting Filmic is even possible, as the "crosstalk" element of Filmic (which is great, don't get me wrong) feels like something non-invertible. Being able to do it would be ideal, but alas.
    • You’d kind of need to do this on a per-camera basis, which is… Well, technical and hard. Maybe it’s possible. I’m not gonna do it.
  • There is as of now no node to transform a rendered result into a different colorspace. If Blender were more flexible, we might transform our render from Filmic to sRGB, then composite happily knowing everything was sRGB, before outputting to an sRGB file (performing no color processing on output).

So how do you fix it? It’s not very elegant, and not really correct, but it will work and (probably) produce pleasing enough images, as Filmic is designed to (in the end) integrate well as an sRGB image.

  1. Render your result in a scene where the Color Management is set to Filmic.
  2. Save this result to a file, WITHOUT applying any compositing, ideally with the “Base Contrast” profile (which mostly matches sRGB). This will apply the Filmic color transform, creating LDR values that mostly match sRGB (good enough!).
  3. In a new scene, where the Color Management is set to sRGB (or, well, whatever, just not Filmic), input your file into the compositor, and make sure it’s input as sRGB. Composite away, all is good now :slight_smile: .
  4. NOTE: If you save to EXR, the original scene-referred data is saved, and it'll seem like the problem isn't fixed. This is still desirable for archival / comp purposes - you're gonna need to either also save your render in a different format like PNG or TIFF, or manually invoke OpenImageIO with the Blender ocio.conf to create an EXR file that's been processed by Filmic (kind of gross to do manually; see the sketch just after this list).
  5. NOTE: When not doing the manual OCIO transform on an EXR file, you lose all the lovely HDR data that Filmic didn’t squish into your file.
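
For step 4, here's a rough sketch of the "manual OCIO" route, using the PyOpenColorIO bindings instead of oiiotool. I'm assuming the OCIO v2 Python API here, and the config path and colorspace names ("Linear", "Filmic sRGB") are assumptions too; list the real names with config.getColorSpaceNames() against the config.ocio shipped with your Blender install.

```python
import PyOpenColorIO as OCIO

# Path to the OCIO config that ships with Blender (location varies by
# install and version - this path is a placeholder, adjust to your system).
config = OCIO.Config.CreateFromFile(
    "/path/to/blender/datafiles/colormanagement/config.ocio")

# Colorspace names are assumptions - verify with config.getColorSpaceNames().
processor = config.getProcessor("Linear", "Filmic sRGB")
cpu = processor.getDefaultCPUProcessor()

# A single middle-grey pixel as a demo; for a whole EXR you'd run your
# image reader of choice and push the pixel buffer through the processor.
print(cpu.applyRGB([0.18, 0.18, 0.18]))
```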

What should one do, in an ideal world? In my opinion:

  • One would transform the live-action footage into a color space that’s well-supported by real world cameras.
  • This color space would handle a high dynamic range, to take advantage of a large range of values.
  • This color space would be highly wide-gamut (representing close to all colors perceivable by humans), so that the scene we build, the footage we shoot, and the final product we produce have all been created while taking into account close to as many colors as humans can see (even though our displays can't display very many of them yet).
    • This is more of a future-proofing thing - though, on the other hand, modern iPads can display quite a few colors. A little color management makes your product look that much more vivid on an iPad :slight_smile: .
    • For more serious work, doing color management properly is what lets you remaster your work later, when display technology inevitably improves. There's a reason we keep re-scanning film negatives at higher resolution and higher bit depth, with better noise reduction and color science; one reason is that a 35mm piece of film has, in actuality, a very large gamut. Digital color work done in sRGB will forever look bland on displays of the future (aside from the "vividness" cheating common in modern wide-gamut consumer televisions, ew), while digital work done in a wide gamut will look not only great but as it was rendered/shot on the displays of the future.
    • Too wide-gamut isn’t good either - we don’t want to start producing imaginary colors, for example…
  • This color space would have standardized ways of mapping its 0…n values to various real-world 0…1 outputs, assuming real world capabilities of monitors to display colors at a real world max brightness, so that we can actually work with and reap the benefits of HDR (as Filmic does) and wide-gamut using our deficient monitors of the present!

This (almost) exists! This is the ACES colorspace pipeline, which is a complex beast that isn’t perfect, but which gets the job done.

  • Tons of cameras have "IDT"s, for getting footage into ACES. These are provided directly by the manufacturer, in most cases; else, cooking a decent one up manually isn’t the craziest thing in the world.
  • ACES, supporting HDR data, covers all colors perceivable by humans, and some not. It’s a great archive format, and is super easy to transform to/from ACEScg and ACEScc.
  • ACEScg is a linear (2x intensity = 2x data value) space covering fewer colors, but which is easier to work with and still wide-gamut. It's made to be a color space we can assume a renderer is rendering to when building a scene, for easy integration with any other ACES images.
  • ACEScc/ACEScct is a non-linear space covering the same colors as ACEScg, but which is easier to color-correct with (in general, a doubling of the data value does not double the perceived brightness; ACEScc/ACEScct is one of many spaces to correct this quirk by making a linear change in the data reflect a linear change in perceived brightness).
  • ACES LMTs let you apply “looks” to ACES images.
  • The ACES RRT + ODT (controversial, as it contains what could be considered artistic choices) transforms data from ACES to… Anything! sRGB, DCI-P3, etc. even incorporating various maximum-brightness assumptions made of the display :slight_smile: .

I personally use ACES in Blender for my work already, using a custom OCIO configuration that passes the difference check with the corresponding DaVinci Resolve pipeline. I do hope it’ll be implemented one day more properly, however; it’s really nice, and Blender deserves to have a proper, widely-supported wide-gamut interface.

Caveats? I don't think things like crosstalk could be done as well as Filmic under a canonical ACES pipeline (as I understand it you'd have to, ahem, cheat by modifying the RRT+ODT, as the emulated effect in film is very dependent on the output device) - and right now, all sorts of knobs in Cycles (like the blackbody node) and in Blender generally have to be avoided, due to lots of baked-in sRGB assumptions of varying quality.


It’s actually fine for such a thing.

The issue with combining live action footage has very little to do with a specific camera rendering transform. It has everything to do with how the footage is encoded.

If you have a deep enough encoding and know the transfer function, you can indeed invert back to scene values. The problem is that most consumer cameras don’t provide this sort of a transfer function to get back to scene reflectance.

Fuji’s X-T3, Panasonic’s G series, some of Sony’s A series (albeit at 8 bit), and now Nikons can all output a video encoding that is based on a scene referred log encoding. This is crucial to be able to go from the normalized camera referred encoding back to scene reflectance and emission. If you have a generic DSLR that plops out a “ready to view” video encoding, you won’t be going back to scene emission / reflectance easily, although it is theoretically possible to come up with a reasonable approximation.
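
For the curious, these camera log encodings all share the same general shape: a logarithmic segment over most of the range, usually with a linear toe spliced in near black. Here's a sketch of that shape with made-up placeholder constants; the real constants come from the vendor's whitepaper for the specific encoding (LogC, S-Log3, V-Log, and so on).

```python
import numpy as np

# Placeholder constants, purely for illustration - NOT a real camera curve.
A, B, C = 0.25, 0.09, 0.45   # hypothetical slope, black offset, code offset

def generic_log_to_linear(code):
    """Shape of a typical camera log decode: linear = 10**((code - C)/A) - B.
    Real curves also splice in a linear segment near black, omitted here."""
    code = np.asarray(code, dtype=np.float64)
    return 10.0 ** ((code - C) / A) - B

# Once footage is back in scene-linear ratios like this, it can be layered
# with render passes and the whole composite viewed through one transform.
print(generic_log_to_linear([0.2, 0.4, 0.6, 0.9]))
```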

Backwards! You want to take the camera footage into a set of reflectance / emission ratios in your CGI.

The IDTs in ACES do exactly what I suggested above.

The main showstopper for ACES is that there is no gamut mapping, and as a result the imagery it produces is rather nasty. Even basic Filmic has a gamut mapping.

Not always from the vendor. The IDT however, is exactly how you take the camera data to your rendering working space. Note that you won’t find any consumer DSLRs in that list for the above reasons.

Except you don’t work in ACES AP0. You work in AP1, which is more or less BT.2020. It’s not magic.

All rendering working spaces must work in a radiometrically-like linear working space. Of course, RGB is inherently limited, but even Cycles works under an RGB radiometrically-like working rendering space.

All nonlinear encodings, when using values as-is, will result in nasty looking problems if you perform any sort of manipulations under them. This is why radiometrically linear is an absolute must for all manipulations. This is a UI problem, not an encoding problem. Sadly, folks carry on doing manipulations using the nonlinear encodings before decoding them back to radiometrically linear for manipulation.

Resolve still is totally broken for ACES use, as it uses ACEScc as the working reference space, which is nonlinear and problematic.

But again, the number one problem with ACES is the complete disregard for gamut mapping, and it yields all sorts of broken imagery, especially when rendering.

Hope this helps. It’s sure nice seeing these sorts of posts in the forum.


Thanks for all technical details.
But, if I read the first post, it seems to me that @yogyog meant something different.

In the compositor, after adjusting scene-referred values (for example with an ASC CDL), we come to a point where we want to pixel-push and say thank you for the job Filmic did for us.

I wish there was a node like "Pixel Push after here" or "Enter display referred domain". After that node, we could then boost the white until it's clipping, without further interference from Filmic. Otherwise, we export our render to a pixel-push editor and edit the levels so that white is white (if we want).


This is the sentence I believe both the previous post and my post were directed at, @anon72338821.

It is sadly not common knowledge how to properly composite motion picture footage with CGI within the capabilities of software like Blender. While "linear workflow" is probably as overloaded, overworked, and nightmarish a term as "gamma", the nuances of which linear is meant are still confusing for plenty of folks. Those specific camera encodings are a crucial part of sorting out the mess.

The display rendering transform is crucial, but it is really an interpretation of scene radiometric-like quantities.

The compositing and work should all happen long before applying the display rendering transform.

When the proper camera decoding is applied, it is essentially virtually identical to the same idea and usage as an HDRI, and you want to roll all of the values out the door together.


I feel some colour-space conversion nodes would be able to solve this.


I found this…

(image: a curve fit of the sRGB-to-Filmic mapping, with the fitted coefficients shown in a box)

I feel that if we knew the actual formula for converting between sRGB and Filmic we could set up a node group that would do the conversion, and we could un-Filmic images and video in the compositor. Or use a node to turn the render layer Filmic. Anyone know where we could find such a thing?


Re-creating the sRGB to Filmic Formula using the data in the box…

y = 0.819494 + (0.1784551 - 0.819494)/(1 + (x/0.1397711)^0.9491565)

Let’s simplify…

y = 0.819494 - 0.6410389/(1 + (x/0.1397711)^0.9491565)

It's a little ugly, but I think we have a working method here… I just have to convert that into maths nodes… :confounded: :thinking:
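
If it helps, that fit drops straight into a few lines of Python for sanity-checking before wiring up the maths nodes (with the obvious caveat that it's a curve fit read off a plot, not the real Filmic transform):

```python
def srgb_to_filmic_fit(x):
    """Fitted approximation of the sRGB -> Filmic display mapping,
    using the coefficients above. Per-channel, expects x in 0..1."""
    return 0.819494 - 0.6410389 / (1.0 + (x / 0.1397711) ** 0.9491565)

# Quick check against the unsimplified four-parameter form:
# y = 0.819494 + (0.1784551 - 0.819494) / (1 + (x/0.1397711)**0.9491565)
for x in (0.0, 0.18, 0.5, 1.0):
    print(x, round(srgb_to_filmic_fit(x), 4))
```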

Isn’t it actually a bug?

Filmic is a "view transform", not a "render transform" or anything else. I would expect it to only be applied for viewing (at the UI level) and when saving files with "as render" on.

I’m pretty sure I’ve seen somewhere in manual that compositor works in internal linear space.


I think it’s a poorly implemented feature. I think if Filmic had come out at the time the BI was working on Tears Of Steel this would not have been an issue.

And here’s the inverse formula from the same website…

y = 382683.8 + (0.1621314 - 382683.8)/(1 + (x/2.569263)^10.40694)

Read my last post again.

ToS would have used it as a view transform, as has already been demonstrated on Twitter with the footage.

You are thinking about this issue completely backwards.

In order to composite correctly, all values must be radiometric or the results will be broken.

The radiometric-like RGB output from a render enters the compositor, which is a good thing. Even if saved as an EXR, the radiometric-like ratios are preserved from a render. This is a good thing.

The problem is sRGB encoded imagery is not a radiometric-like transform, and cannot recover scene radiometry from the “this looks nice!” encoded state. It’s rather impossible without knowing what was done to the system. The same applies for most consumer DSLRs or footage that is “ready to be viewed.”

So again, your entire thought process on this is backwards. Compositing always wants radiometric-like values to composite correctly. Display linear is not sufficient.

When compositing scene radiometry, it therefore becomes mandatory that the work is viewed through an appropriate display rendering transform. The sRGB EOTF when used directly is inappropriate for this.

The problems you are facing are:

  • Inappropriate mental model for compositing
  • Inappropriate sources for compositing against radiometric path tracing results.
  • Inappropriate interchange file formats. Always use EXR.

The compositor uses the assumptions that come with path tracing and manipulation of light ratios, which is scene linear, radiometric-like output within the limits of an RGB encoding system. Any other encoding in a pixel manipulation application will yield broken images.
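
A small numerical illustration of why the encoding matters: the same 50/50 mix of a bright and a dark pixel gives different answers depending on whether the math is done on scene-linear values or on display-encoded code values. A plain 2.2 power stands in for a display encode here, and the pixel values are arbitrary.

```python
bright, dark, alpha = 1.0, 0.02, 0.5            # arbitrary scene-linear values

encode = lambda v: v ** (1 / 2.2)               # stand-in display encode
decode = lambda v: v ** 2.2

# Mixing light ratios directly - the physically meaningful result.
mixed_linear = alpha * bright + (1 - alpha) * dark

# Mixing the nonlinear code values instead, then decoding the result.
mixed_encoded = decode(alpha * encode(bright) + (1 - alpha) * encode(dark))

print(mixed_linear, mixed_encoded)              # 0.51 vs. roughly 0.31
```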

The TL;DR is that not all cameras provide a means to get back to scene reflectance, and are completely unsuitable for compositing with path traced work. It would be an utter hack to do so.

I am willing to help anyone who finds the right questions to ask. It requires a bit of effort on the other side to appreciate what I am trying to explain. The links and explanations are provided to help guide someone interested in doing so. If integrated correctly, the results are an order of a magnitude more compelling with less effort.


Hope this helps. It’s sure nice seeing these sorts of posts in the forum.

It’s even nicer seeing a knowledgeable person such as yourself taking the time to correct a novice like myself :slight_smile:

Thank you!

It’s actually fine for such a thing.

So what you’re saying is, that if we can “simply” retrieve radiometrically honest values from our footage, then comp that with the already honest Cycles values, then we can apply Filmic to the final composite of Cycles + live-action footage?

That’s pretty cool :slight_smile: I’ll have to try it! I’d love me some nice highlight rolloffs.

Resolve still is totally broken for ACES use, as it uses ACEScc as the working reference space, which is nonlinear and problematic.

It makes sense to me why they did this; for all its flaws, it seems that plenty of veterans like grading in a non-linear space. Maybe they’re used to working in more limited environments with log-encoded DPX?

I did enjoy the feel of, for example, Linear RGB curves in Blender under an ACEScc WS. Of course, the correct solution is to make a better UI for that particular tool, instead of bastardizing your entire working space and breaking all the math so that your curves tool feels better.

The “neon-blue issues” sprinkled around the ACES forums are more unforgivable in my eyes. My working space should never be restricting entire classes of visual effects, and that goes for Rec2020 too.

Sadly, folks carry on doing manipulations using the nonlinear encodings before decoding them back to radiometrically linear for manipulation.

Maybe we should just do what Darktable does and stick to LAB :slight_smile: . Agh.

I have thoughts on what to do in practice.

A “Bastardized ACES” Idea

AP0 is unworkable, AP1 has problems. I wonder whether performing grading and compositing in something that has fewer problems, like an SG3 gamut, whilst otherwise keeping the entire pipeline interchangeable with ACES, could be a good in-house way to optimize?

Locked Working Space:
WS = Linear Sony SG3

Ingestion:
Footage --> IDT --> ACES2065-1 --> Color Matrix --> WS

CG Rendering:
Renderer --> WS

View Rendering:
Composite --> Color Matrix --> ACES2065-1 --> ACES VT --> Display

DI / ACES Interchange:
Composite --> Color Matrix --> ACES2065-1 --> ACES-compatible DI

One can still perform further manipulations on this DI in ACES - but something like a CDL would need to itself be baked to a pure-ACES CDL to be transferrable.
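
As an aside, the "Color Matrix" steps above are just 3x3 matrices derived from each space's primaries and white point via XYZ. Here's a numpy sketch of that derivation using the public sRGB/BT.709 chromaticities as a stand-in; swap in the chromaticities of whatever working space you pick (chromatic adaptation is ignored here, which only holds when the white points match):

```python
import numpy as np

def rgb_to_xyz_matrix(prim_xy, white_xy):
    """Build a 3x3 RGB->XYZ matrix from chromaticities.

    prim_xy:  [[xr, yr], [xg, yg], [xb, yb]]
    white_xy: [xw, yw]
    Standard derivation: scale each primary's XYZ column so that
    RGB = (1, 1, 1) lands exactly on the white point.
    """
    def xy_to_XYZ(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])

    P = np.stack([xy_to_XYZ(x, y) for x, y in prim_xy], axis=1)  # columns R,G,B
    W = xy_to_XYZ(*white_xy)
    S = np.linalg.solve(P, W)   # per-primary scale factors
    return P * S                # scale the columns

# BT.709 / sRGB primaries with D65 white (public chromaticities).
M = rgb_to_xyz_matrix([[0.64, 0.33], [0.30, 0.60], [0.15, 0.06]],
                      [0.3127, 0.3290])
print(M)  # close to the familiar sRGB -> XYZ matrix
# A matrix between two RGB spaces is then inv(M_dst) @ M_src.
```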

A Filmic Idea

Filmic is awesome, I wonder if we can keep it without losing the wide-gamut stuff? :slight_smile:

Locked Working Space:
WS = Linear Sony SG3

Ingestion (we steal ACES IDTs for our own purposes):
Footage --> IDT --> ACES2065-1 --> Color Matrix --> WS

CG Rendering:
Renderer --> WS

View Rendering:
Composite --> Matrix to ACES2065-1 --> ACES2065-1 --> Perceptual Gamut Map to sRGB --> Linear sRGB --> Filmic View Transform --> Display

DI / ACES Interchange:
Composite --> Matrix to ACES2065-1 --> ACES2065-1 --> “Filmic DI”

A Blender Idea

Just spitballing, what if we spiced up Blender with a custom wide-gamut workflow that built on Filmic and the strengths of ACES?

  • BlendGamut: A wide gamut where useful colors like neon blue aren’t out of gamut (but which isn’t so wide as to be painful to render in).
  • Blender _ Map: A custom gamut mapping from XYZ to linear _, chosen for aesthetics and to match with other Blender _ Maps.
  • Filmic _ VT: The Filmic view transform mapping, from linear _ to a display-referred _.

Locked Working Space:
WS = BlendGamut

Ingestion (we “steal” ACES IDTs for our own purposes - can we do this? Why not :slight_smile: ?):
Footage --> IDT --> ACES2065-1 --> Color Matrix --> WS

Imagine a list of cameras in the Input node, which allows picking from pro cameras like the F35 and Alexa, performing a transform to BlendGamut by hijacking ACES IDTs.

CG Rendering:
Renderer --> WS

View Rendering:
Composite --> Matrix to XYZ --> XYZ --> Blender sRGB Map --> Linear sRGB --> Filmic sRGB VT --> Display (sRGB)

For other displays than sRGB, the Map and VT would have to be redesigned.

Doing the mapping from XYZ in two steps allows exchanging the Filmic _ VT for a hypothetically different artistic style of VT. Ex. not everyone wants crosstalk.

Interchange:
Composite --> Matrix to XYZ --> XYZ --> “BlendFilmic DI”

BlendFilmic DIs would be interchangeable with anybody using a Blender _ Map and the Filmic _ VT.

Hypothetical "Blend DI"s would be interchangeable with anybody using a "_ VT".

Just a thought :slight_smile:

Exactly!

And that is a critical reason why some cameras provide log encoded footage!

What plenty of folks miss is the rather subtly "invisible" relationship they have with their displays. That is, code values don't carry any meaning until we give them meaning. So, for example, 0.783G has literally no meaning until we say "this is an sRGB value for the greenish BT.709 light".

And where does that actual meaning come from? A theoretically ideal display called sRGB and outlined in the specification. That is a very nuanced way to realize that 0.783G is literally referring to a display, or more commonly, display referred; the meaning of the code value is derived from what it refers to.

Why focus on this? Because rendering under BT.709 on a path tracer and applying a 2.2 power function, or the proper sRGB EOTF^-1 to the radiometric-like values is fundamentally wrong! The design context of such has no relationship to a display. That’s a creative choice, just like pointing a camera at something versus not. What range of scene radiometry meets the creative goal? Are there other contexts and design needs at work?

Knee jerk slapping the sRGB transfer function on radiometric-like emissions, as people are slowly realizing, is a creative choice. How those values are transformed for the sRGB display is important. Yes, the scene and the display contain radiometrically linear ratios. No they are not equivalent, as the scene needs a transform to gamut map it to the display, and also creative “this is content for consumption” tweaks.

Now we can reverse this and realize that a series of extremely complex things can go into a final “this is a file that contains code values prepared for an sRGB display”. If we invert the well known sRGB transfer function, or the more inexpensive sRGB-sorta transfer function via a pure power transform, we get back to… display linear. But that has all sorts of the previous mangling and warping in it still!

We cannot go back to the scene’s radiometry without careful consideration of our input.

Phew.

Again, it's sort of mind blowing reading these sorts of awesome comments here! It shows a real elevation of the general high-water mark of the concepts.

In Resolve’s case, they have some serious issues on their hands. In ACES, some other issues.

In particular, Resolve makes 30000 USD consoles with little knobs that say “RGBY” etc. on them. That means they need the software to work with those consoles and paradigms, which means no one figured out how to make the ACES pipeline work. The quickest and most direct route seems to be breaking the advised ACES protocol, and making the working reference space nonlinear ACEScc. Go figure.

While the problem makes sense in that it seems it is challenging to operate a curve on scene linear data, the problem is actually relatively simple to solve.

Bringing it back to Blender, the curve widget is equally problematic. The scene linear data goes into it, and you can expect challenges controlling the mapping of the data.

The solution is to set the "space" that the curve operates in. That is, selecting the transfer function for an emission, such as Filmic Log, then inverting it (transfer functions are perfectly invertible), solves the UI problems. Working on normals or depth? No problem, simply set it to an appropriate data transform such as "Non Color Data" for transfer.
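
In code terms the pattern is: encode the scene-linear values with a well-behaved shaper, run the artist-facing curve on the encoded values, then invert the shaper to get back to scene linear. A small sketch, with a plain log2 shaper standing in for something like Filmic Log (the shaper and its assumed 10-stop range are placeholders, not Blender's actual numbers):

```python
import numpy as np

GREY = 0.18
LO_STOPS, HI_STOPS = -7.0, 3.0   # assumed shaper range, in stops around grey

def shaper_encode(linear):
    stops = np.log2(np.maximum(linear, 1e-6) / GREY)
    # Values outside the assumed range simply clip in this toy shaper.
    return np.clip((stops - LO_STOPS) / (HI_STOPS - LO_STOPS), 0.0, 1.0)

def shaper_decode(encoded):
    return GREY * 2.0 ** (encoded * (HI_STOPS - LO_STOPS) + LO_STOPS)

def curve_in_log_space(linear, curve):
    """Apply a UI curve to shaper-encoded values, then return to scene linear."""
    return shaper_decode(curve(shaper_encode(linear)))

# Example: a gentle contrast curve operating on the 0..1 encoded data.
contrast = lambda t: np.clip(0.5 + (t - 0.5) * 1.2, 0.0, 1.0)
print(curve_in_log_space(np.array([0.01, 0.18, 1.0, 4.0]), contrast))
```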

It’s all gamut mapping. ACES has the “brute force” matrix approach, and it is fundamentally problematic. For starters, cameras use bogus 3x3s to get to artificial primaries. So right out of the gate the approach yields bogus non colours, which posterize at the spectral locus. Then a wide gamut posterizes worse against the smaller gamuts. It’s a huge problem that I’ve been banging a drum on since ACES 0.3 or so, but alas, here we are.

That means every single image for sRGB out of ACES is mangled up three fold, creating an over-saturated and broken output.

Once the pixel pushers spend time building up the knowledge, it becomes less and less acceptable.

Completely unironically, a primary developer is working on removing all of that Lab garbage as I type this. See rule one way up: Deviate from what RGB means, and you deviate from radiometry, and every single manipulation is broken.

As for the other points, you’ll just have to wait for something to have a decent RGB set of primaries and full gamut mapping. I wonder if something is being worked on… :wink:

ACES too might have a little solitary bird working on the complex issues of gamut mapping too. The problem is larger though, because cameras are the first point of failure via that 3x3. Why?

Because RGB is fundamentally broken. The next era of image manipulation will indeed be spectral.


I suppose I have a question. It relies on my understanding of sensors, though, which may be faulty:

  • Camera sensors are just a bunch of “photosites” with a CFA usually filtering incoming light into an RGGB Bayer pattern. The CFA defines the gamut of the camera.
    • Side note, I’m a huge proponent of actually calibrating any camera + lens combo with a color chart, as opposed to relying on the “one size fits all” 3x3 matrix provided by nice manufacturers in metadata. Essentially deriving the true gamut of our actual camera.
  • Each photosite is capable of sensing an amount of photons, captured over the time determined by the shutter speed, within a range determined by the true ISO.
    • Firstly, the CFA will (probabilistically) filter some % of photons, depending on their wavelength.
    • These photons essentially excite electrons within the photosite, producing a small voltage. Noise can appear due to the electrical nature of this step; but as an astrophotographer will tell you, it can be filtered out to an extent using dark frames and (for photography) stacking.
    • The 0…1 code values spit out in the raw Bayer image are given by: (photons_sensed - PHOTONS_MIN) / (PHOTONS_MAX - PHOTONS_MIN) (see the sketch just after this list).
    • This means, some channels may clip, while others will not. Last-ditch highlight reconstruction can use this fact to help out later debayering by supplementing channels missing data using data from other channels (which looks best, if not quite accurate, when you also desaturate the highlights).
    • Side note, many sensors will trigger the shutter process on a per-line basis from top to bottom, creating rolling shutter artifacts depending on how fast the photosites are triggered.
    • There exists a theoretical “noise floor”, where random noise and actual “signal” become indistinguishable, which lies above the actual lowest amount of detected photons.
    • So the actual "dynamic range" is in practice lower than the expected log_2(PHOTONS_MAX - PHOTONS_MIN).
    • The true resolution, after reconstructing an RGB image by demosaicing the BW Bayer image, probably lies around 70% of the advertised resolution of the BW image produced by the sensor. This is why I'm very interested in demosaicing that actually reduces the resolution, as doing so produces measurably far better image quality than doing it in two steps (ex. AMAZE --> Lanczos3).
  • Side note: Monochrome, black and white sensors are generally capable of higher true resolution (you don’t lose 30% to debayering), as well as better low-light performance (you’re not throwing away photons in the CFA).
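
A tiny sketch of the normalization and dynamic-range arithmetic from the list above, with made-up photon counts purely for illustration (the noise-floor figure is just one common way to express the practical range):

```python
import math

# Hypothetical sensor figures, purely illustrative.
PHOTONS_MIN = 100        # read-out floor
PHOTONS_MAX = 60_000     # full-well / clipping point
NOISE_FLOOR = 400        # where signal drowns in random noise

def code_value(photons_sensed):
    """Normalized 0..1 raw code value for one photosite; channels clip
    independently once they hit the top of the range."""
    v = (photons_sensed - PHOTONS_MIN) / (PHOTONS_MAX - PHOTONS_MIN)
    return min(max(v, 0.0), 1.0)

theoretical_stops = math.log2(PHOTONS_MAX - PHOTONS_MIN)
practical_stops = math.log2(PHOTONS_MAX / NOISE_FLOOR)   # lower, as noted above
print(code_value(12_000), theoretical_stops, practical_stops)
```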

The cool thing to me is that ray-tracers like Cycles essentially try to copy a lot of the quirks of physical sensors (like rolling shutter) to better integrate into real footage.

So, assuming all that, here’s my question: How does the “scene radiometry”, which seems to have a very specifically chosen metric (in the mathematical sense) tied to a color space, map to the physical concept of “this many photons hit my image sensor while exposing”?

And isn’t such a metric required, as well as accurate gamut mapping from a camera, if one should hope to match footage from different cameras, as well as computer-generated physical simulations?

It’s interesting - for my recent rendering course, we were told to do exactly this on our path traced results, “for simplicity”. And, to be fair, it looked decent enough that it was hard to argue that it wasn’t useful for that particular situation: Getting radiometric values to look decent on a monitor, quick and dirty.

Browsing the ACES IDTs, I see ex. the ISO of the camera being considered in many cases like the Alexa, when transforming encoded values to scene radiometry. Perhaps that’s the start of any “careful consideration”?

In particular, Resolve makes 30000 USD consoles with little knobs that say “RGBY” etc. on them.

Exposing luminance really seems like a holdover from the age where some people had BW televisions and others didn't ("if you can't do color, throw away the UV part of YUV!"). Beyond that, Y seems much less useful outside certain color grading operations…

Regardless. I still don't know what "YRGB Color Science" is, and it gives me stress when using Resolve…

The quickest and most direct route seems to be breaking the advised ACES protocol.

More than anything, this kind of statement seems like an opportunity for Blender. Resolve knowingly did it wrong because of industry politics. Blender doesn't need to care about industry politics - in open source, we legitimately have the opportunity to do it correctly!

Hadn’t seen the link - thanks for a stimulating read!

For starters, cameras use bogus 3x3s to get to artificial primaries.

Does manual camera calibration fix this? Seems to me like a good ol’ 3D LUT, or even a 3x3 matrix fit of that cube LUT, would be much more precise at transforming camera gamut --> ACES gamut.

Yes please :slight_smile:

Well well :slight_smile:

Intriguing. Of course, at some point one has to integrate something, unless we want to keep track of the wavelengths of individual photons - so any code representation couldn’t be truly spectral (per my understanding, not unless somebody has some crazy probability theory up their sleeve).

  • A crazy idea: A CFA, distributed with blue noise, where each photosite’s wavelength filter is centered at a random point on the visual spectrum and covers something reasonable (tunable? also random? :rofl:) like 1/8 of the visual spectrum at a range of 1 standard deviation.
    • Perhaps useful for rendering, though? Instead of rendering 11 times for 11 primaries, for an inaccurate (and aliased) approximation, you can just use a new CFA of this type for each sample, which should eventually converge to a perfect spectral render, while your normal render is converging :slight_smile: (making spectral rendering as fast as RGB rendering, if not faster due to pure monochrome operations on paths. The tradeoff is, of course, memory. I’m sure a TMTO could exist).

But - what if one could construct a color model that integrated photons over >3 “wavelength snippets”? Everything would be more accurate, and we could reduce the prevalence of imaginary primaries.

Perhaps one could even rewrite common manipulations to work with an n-band approximation of the visual spectrum.

One could probably sleep at night advertising an 8- or 16- (or n?)-primary color model as “spectral” - and with modern vector operations, it could probably be optimized pretty well too.
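
Here's a toy sketch of that n-band idea: hold spectra as n samples across the visible range, do the per-band multiplies a renderer would do, and only collapse to a tristimulus result at the very end. The band count and the example spectra are arbitrary, and the final integration against real colour matching functions is left out since that needs real CMF data rather than made-up curves.

```python
import numpy as np

N_BANDS = 16
wavelengths = np.linspace(380.0, 730.0, N_BANDS)   # band centres, nm

# Arbitrary example spectra sampled at the band centres.
illuminant  = np.ones(N_BANDS)                              # flat "white" light
reflectance = np.exp(-((wavelengths - 620.0) / 60.0) ** 2)  # reddish surface

# Per-band light transport: a multiply per band, just like RGB but with
# 16 "primaries" instead of 3. Indirect bounces stay in band space too.
radiance = illuminant * reflectance

# Only at display time would you integrate against observer sensitivities
# (the CIE colour matching functions) to get back to XYZ / RGB.
print(radiance.round(3))
```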

Spectral ray tracers already do this, by rendering several times, each in a “narrow band”.

I wonder what demosaicing a 16-color CFA (maybe with overlapping wavelength filters to help it recover signals better?) would be like…

Hm.

For now at least, I’d argue that too many operations are too fast and too convenient in RGB with too few downsides.

It heads down into the problematic rabbit hole of RGB.

In a path tracing system, the three lights are treated as radiometric quantities of light. This is more or less a reasonable interpretation of the post-low-level-engineering camera sensor values. The only issue is to “align” the two results.

As radiometric values, RGB systems work "well enough" up to the point you begin to do things such as indirect calculations, at which point they fall apart completely, because the three RGB lights are not the "true" radiometric components, which are spectral. They are sort of hacked cheats. As a result, the energy goes sideways badly.

It works well enough if you try it, and doubly so for cameras with a proper transfer function / gamut encoding. If you only have access to a generic DSLR without the log and gamut descriptor, Paul’s site is a wealth of goodness.

It absolutely would require a gamut mapping to get from the camera to the rendering space in an ideal sense.

Except it's not. It's just an arbitrary cutting at some arbitrary low value 0.0 and some arbitrary higher value 1.0. The part they missed is that you were already making some aesthetic judgements, such as aligning the exposure to the display's output, which is part of gamut mapping if you think about it; you are aligning the scene radiometry to the output context. It's just that you happened to skip the other two, and whoever was teaching it was assuming that the scene to display should be 1:1, complete with the clipping errors and all.

On an Alexa the encoded data is disconnected from the EI; the camera simply slots all of the data into the correct slot. There is no gain applied to the sensor to stretch values, it’s always the most optimized encoding.

It tells you that they make certain assumptions about their working internal space in order to always know the Y value.

The moment you hit RGB the math is broken via the 3x3. If we consider a gamut akin to the RGB ISO definition, a camera does not have one, as it is a spectral gathering machine and frequently sees all of the spectra in varying amounts. Baking that down to a standard “triangle at base three primary model” is where the breakage happens.

A 3D LUT could in fact solve this, but deriving such a LUT is no easy feat and it certainly is subject to more errors over a mathematical fit. That’s hard to process when you see how broken “cameras as three primaries” transforms are.
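
For anyone unfamiliar with the mechanics, applying a 3D LUT is just a trilinear lookup into a table; deriving a good table is the hard part being discussed here. A sketch of the application side (the identity table at the end is only a smoke test):

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Trilinear lookup into a 3D LUT.

    rgb: (..., 3) values in 0..1.
    lut: (N, N, N, 3) table; lut[r, g, b] is the output for input (r, g, b)/(N-1).
    """
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=np.float64), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                       # fractional position inside the cell

    def corner(ir, ig, ib):
        return lut[np.where(ir, hi[..., 0], lo[..., 0]),
                   np.where(ig, hi[..., 1], lo[..., 1]),
                   np.where(ib, hi[..., 2], lo[..., 2])]

    out = np.zeros(np.shape(rgb), dtype=np.float64)
    for ir in (0, 1):
        for ig in (0, 1):
            for ib in (0, 1):
                w = ((f[..., 0] if ir else 1 - f[..., 0]) *
                     (f[..., 1] if ig else 1 - f[..., 1]) *
                     (f[..., 2] if ib else 1 - f[..., 2]))
                out += w[..., None] * corner(ir, ig, ib)
    return out

# Identity LUT as a smoke test: the output should match the input.
N = 17
grid = np.linspace(0.0, 1.0, N)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(apply_3d_lut([0.25, 0.6, 0.9], identity))
```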

Almost like you reached the conclusion simply by thinking through it. This is exactly how the functioning spectral renderers work in that they permit an arbitrary series of wavelengths; just enough to achieve the effect required for that particular surface.

The difference is they completely escape the psychophysiological artificial model of RGB / XYZ, and begin and end with more-close-to-physically-plausible spectral radiometric quantities.

Always rest assured “good ideas” have already been walked over.

http://www.gujinwei.org/research/camspec/

Three or more channels of spectral is identical, adding calculations where required. The part you might not be familiar with is the magnitude of the downside of RGB systems.

I wonder if anyone over at DevTalk might have tried to make Cycles a spectral renderer as a proof of principle…


I think it's time to say farewell to Filmic. Filmic is like Apple and their proprietary stuff: it works well inside the Apple environment but doesn't play well with the rest of the world. Nothing outside Blender works well with Filmic. It's total hell in Nuke. In Natron it's supposedly supported, but you have to use a grade node to compensate because the results are way off. In Resolve you can apply the Filmic LUT (Filmic is a transform, not a color space) but then you lose all the wide color range of the original image. In the high-end VFX world we use ACES. OK, it's not perfect, but that's what we use and, well, it does the job. We render in ACES in Blender, we comp and color grade in ACES. Compositing in Blender is pointless. As the main article of this thread mentions, you import a plate and Filmic is applied to it. That's a show stopper right there. Anyway, Blender is not powerful enough for serious compositing. It does about 5% of what Nuke can.


hey you might want to check this out perhaps… https://www.youtube.com/watch?v=-UjJqwwMJc8
