Strange coloring of blackbodies with new AGX color management

In my estimation, that Dress photo is exactly why pictures are unique. In that specific case, the reason there is a “debate” at all is that from the neurophysiological signal vantage, the spatiotemporal articulation of the values are approaching a “null”; an ambiguous marker threshold in the visual cognition signals.

There are countless examples of this sort of “ambiguity” mechanic, and The Dress carries a lesson about precisely what colour is. That is, we do not see colour. Colour is quite literally cognition; it’s the mechanics of us creating meaning. In this specific case, the picture values are resting near a threshold, and our cognition can modulate between possible meaning. Some folks are less inclined to modulate than others!

This picture from Peter Tse’s work1 is a solid demonstration of this modulation. Our cognition will modulate between which “square” is “darker”. It’s almost like the modulation between possibilities is part of our cognitive mechanics to help us create meaning.

I’ll leave folks with this passage from Bartleson’s research in 1962, while he was working at Kodak:

Apparently, it is necessary to consider colors and their reproductions only with respect to the frames of reference in which they are perceived in order to draw any useful inferences about reproduction qualities.2

He sadly says “colour” where he probably should have used the term “stimulus”, but I imagine it reads rather clearly.


1 Tse, Peter U. “Voluntary Attention Modulates the Brightness of Overlapping Transparent Surfaces.” Vision Research 45, no. 9 (April 2005): 1095–98. https://doi.org/10.1016/j.visres.2004.11.001.

2 Bartleson, C. J., and C. P. Bray. “On the Preferred Reproduction of Flesh, Blue-Sky, and Green-Grass Colors.” Photographic Science and Engineering 6, no. 1 (February 1962): 19–25.

6 Likes

Bring it into a compositor and play around with it. I see absolutely no reason to re-render the image again and again, trying to get it right in Blender, if you can adjust everything in post in realtime.
No CG fire or explosion you’ll ever see in a movie or TV show is a raw render; pyro stuff always needs some final touches in comp.

2 Likes

I’ve been looking at various fire / explosion examples recorded on film and I’m seeing a distinctly yellow appearance on some. But it’s certainly not the extraordinarily yellow fire/explosions we’ve gotten with digital cameras or in animated movies.

It’s the same story with some nuclear test footage; although other film examples, like your Return of the Jedi example, along with this, seem to take a duller appearance.

Having spent around a month messing around with explosions in AgX I’ve ended up with a somewhat decent node setup, but it needs the FFX Pyro addon to work correctly. Interestingly, I could get the yellows looking somewhat good, but it was the deep reds that I spent most of my time hunting down.

1 Like

Chemical creative colour film most certainly has this swing built into the mechanic.

But never forget: This is a digital scan. We can’t escape the distortions from the film to the digital encoding, and that will ultimately be a very different cognition to viewing a print of the original source film.

Can’t escape the scourge of digital.

2 Likes

I need to disagree here.

I guess that throwing out some visual paradoxes and talking about Gestalt and silver nitrate doesn’t solve the issue.

Why do people stick to shooting with a 5D Mark II from 2008 instead of using Sony’s colour science (much improved with CineTone), even though they have better cameras? It was for the yellowish and greenish skin tones. Or stick with ARRI instead of some more affordable RED? It was because of the smooth highlight roll-off.

The people behind these companies know that, and beyond technical limitations, they made their cameras give the best aesthetics they could. You don’t want to shoot a rock concert and have everything come out as dull as AgX makes it.

On top of that, there are real scenarios where AgX’s science won’t work, or won’t let you cheat the way we do in reality. For example, I haven’t been able to replicate RGB LED strips in AgX, nor some high-power coloured lasers, which keep the beam saturated to human vision. Or with explosions: cinema stunts might use gasoline instead of high-speed explosives; gasoline is much slower and burns at a lower temperature, but it gives that combination of strong oranges and dark smoke that AgX seems incapable of replicating.

Or, speaking about CG, why the big win for Corona? Because of the look you get without much further adjustment.

For me, AgX is failing, despite exhibiting great numbers in exposure latitude and having solved some scientific colour-management troubles (such as hue shifting and the six-colour problem). It is not easy to use and understand. I’m not here to teach everyone to love this dull-magenta fire with papers or perception experiments. I don’t want to learn Nuke or DaVinci to adjust every shot, and to learn to keep consistency between shots without destroying other primaries in the adjustment process.

For sure we should banish realism from the equation, but what about being visually pleasant? What about being user and artist friendly? And I don’t even want to talk about the lack of tools and performance in the Blender compositor…

AgX for me is a huge leap; at the same time I think it puts some noise in the pipeline (ACES was a dream come true) and really needs to be reworked in some areas with these topics in mind.

2 Likes

Read what I’ve written prior. You will see that many folks make the a priori assumption that a picture is a simulacrum of “standing there”. This appears to not be the case.

So if we set aside what we think we cognize when looking at stimulus for a moment, we arrive at a conjecture that I strongly adhere to based on evidence and research: The stimulus present in a picture deviates rather significantly from the stimulus present when standing there.

But let’s set that aside for a moment. For anyone with a shred of experience forming pictures, they realize quickly that if one holds on to too much purity, the formed pictures fall apart and break. In fact, this trend is so overtly glaring that casual audience members will spot it. So despite the desire to arbitrarily increase purity, there’s some other poorly understood mechanic at work. In addition to this, there are other mechanics that seem to force certain axial chromaticities to be presented in a picture as rather different to the “as measured” stimulus. Classic examples are fires, where folks will swear on their death bed that a fire is cognized as a certain “yellow”, despite the “as measured” values being cognized as “more reddish”. How or why or what is going on here remains poorly understood, poorly researched, and very poorly known.

I didn’t “make” the AgX in Blender, and if you read the threads, you’ll see why I suggested that the slew along “red” stimulus should be stronger toward “yellow”. This is the exact thing I outlined above. No one understands why this cognitive mechanic seems to be more or less consistent when reading pictures, but it cannot be overlooked.

I challenge anyone with a strongly held belief in how pictures should be formed to present their concept. It will break in a matter of moments.

However, and again I want to stress this: I agree with the idea that there should be a stronger swing of the “red” region of stimulus toward “yellow”. One has to remember that feedback from within the community, and the individuals responsible for implementation, also have a say in how the result manifests in Blender.

Balancing the degree of “swing to yellow” against “My running shoe product is ‘red’” is a Goldilocks attempt for a default. Folks must ultimately grade their work to their needs. The role of a default picture formation is to afford them the ability to at least get to an acceptable picture that will not break by default.

I would not call Corona a “big win”, and I’ve seen plenty of breakage across Corona formed pictures. This is not exactly a baseline.

3 Likes

I tend to make lots of explosions in Blender, and I’ve encountered this same issue
For now though, I’m just sticking with Filmic on my explosion renders- AgX on everything else though hehe

While I agree with the underlying analysis, I need to elevate the discussion to something more tangible. Elevate from roots to screens. We could keep arguing about why everyone would say trees are brown when they are grey, or even about whether brown itself is a colour (it isn’t primary or complementary); we might end up in some sort of endless debate about epistemology and black-hole colour theory.

As I said, I agree with that and could keep opening new corners for debate, but why not just stay with what you were talking about? Why not stop at perception? Why not limit ourselves, as the big camera brands I mentioned do, to delivering pleasant colours that are alive in our construction of reality? Said another way, why fight against it?

I’m thinking here with a competitive market mindset. Why does SideFX have Pyro_Shader? Why did Pixar launch Kaboom Box? Because it’s easy to use, understandable, and gives us good-looking explosions. I’m not lazy; I love my work, and for that reason I learned the value of getting good-looking renders in a short amount of time. And yes, that’s what small studios and freelancers want, and they make up the base of Blender users. I love the approach of giving us more room in post, but this should not work against having good images without retouching.

I guess I explained myself badly here. I was talking about my two main concerns about AgX: “what about being visually pleasant? what about being user and artist friendly?”
Sure, Corona has issues and can’t be taken as a baseline, but the reason I bring Corona up here is that it’s a highly successful new engine that based its strength on delivering a good-looking render out of the box (that’s why Chaos bought them). And that’s my point and my opinion about good colour science: it has to be, first of all, pleasant and easy to use.

3 Likes

To be clear, I started this whole spelunking expedition simply trying to make digital cameras and renders form pictures that didn’t stink. Selfish! I think we have a massive ways to go in terms of our understanding and research.

All that matters to me is that the pictures formed don’t stink. But to really make that tangible, we need to delve into “What stinks” versus “What stinks less”, and while that seems “simple”, it cascades into cognition and deeper subjects. I don’t think any vendor of digital cameras has “solved” this yet, as most folks who seem to grow interested in the field of picture forming would probably agree. I want to see more thought and research on the subject, not less! I want more folks to feel empowered to explore the subject, without a fear of the imposter syndrome. Doubly so for the craftsfolk who make pictures.

I 100000% agree with this sentiment. I don’t want folks getting belaboured into a quagmire of subjects. My personal desire is that the picture formation:

  1. Holds up in all cases, no exceptions, no side rules, no excuses. That is, that the picture isn’t “broken” by default - subject to the slippery definition of “broken”.
  2. Works in the shorter term as a bridge to the potentially incoming spectral rendering.

With that said, and with the general idea that Blender has a “default” picture formation, the default needs to walk a fine line between different audiences. A product shot versus a punchy CGI short film etc. all have different picture formation needs, and in the future, I’d love to see a world where there is a frictionless selection of options afforded to the authors. We aren’t there yet, and some protocols are in fact eroding this vision.

I can’t speak to the specific picture formation present in Blender, as again, I wasn’t the author nor collective responsible for implementation. What I can say is that the AgX mechanic experiments I’ve personally been around appear to have gained some traction if the adoption and interest is any sign.

I would suggest “Make thing - render” is about as audience friendly as it gets. I’m not really sure where the belief comes from that a “Set it and go” setup, as present in Blender, isn’t “friendly”, but those sorts of design terms infuriate me to no end, as they are frequently fictional perspectivist constructs.

So given the wealth of things I’ve seen rendered with Blender more recently, and the distinct lack of gripes (with the exception being some nuanced discussions about specific “hues”), I see a low degree of substance to the claim. Crawl YouTube etc., and you will see a disproportionate number of authors who seem rather intrigued and excited about the new picture formation?

And again, I take no credit for the picture formation in Blender, it was all the work of a group of folks including the person who did the scripts - Eary. It’s a challenging task to appease literally millions of people overnight, and as someone who has been on the battleground, it can be even harder trying to get folks to understand why things are done the way they are done to aid them.

I’ve seen the existing Blender work tested across literally thousands of pictures and new renders. I see little substance to this anecdotal claim.

Again, I’ve seen enough pictures formed from Corona to know it breaks badly. It would be utterly disingenuous to suggest they have discovered something that someone around picture formation isn’t familiar with.

2 Likes

It is likely an easy grade adjustment. I’d suggest that the discussion should be taken upstream however, as perhaps an optional choice for picture formation could be presented etc.

There are a sizeable number of folks who use Blender defaults for product shot renders, and in that case, having less of a swing is sometimes potentially desirable.

I would be confident saying that the AgX-like formation in Blender will likely deliver qualitatively more well formed pictures than Filmic in all cases. The issue of the creative choice for fires and explosions is worth investing a little time in for a grade. Should be trivial to slew the hue to yield more yellow explosions in formed pictures. I believe @pixelgrip outlined one possible option, and that should be trivial to can into a look for use within the OpenColorIO structure.

1 Like

You mean the simple hue rotation in compositing? Sure, you can do that.
About the explosion and fire colors: I think @kram10321 made some renderings with blackbody months ago. IIRC there was something unclear about the blackbody node and its intensity. Maybe a new node setup can be made for more precise blackbody rendering, to get render results for better tweaking of AgX and grading.
You can do it with real footage of fire too, of course. An accurate rendering of blackbody would be nice for everything that’s blackbody related, including fire and explosions.
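Something along these lines, maybe (a rough bpy sketch; the node names are Blender’s, but every temperature and strength value here is made up, and the T⁴ scaling is just the Stefan-Boltzmann idea applied per material):

```python
import bpy

def blackbody_fire_material(name="BlackbodyFire", temperature=1500.0,
                            t_ref=1500.0, base_strength=50.0):
    """Blackbody colour driving a volume emission, with strength scaled by T^4."""
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()

    out = nodes.new("ShaderNodeOutputMaterial")
    emit = nodes.new("ShaderNodeEmission")
    body = nodes.new("ShaderNodeBlackbody")

    body.inputs["Temperature"].default_value = temperature
    # Hotter presets also get brighter, roughly following Stefan-Boltzmann.
    emit.inputs["Strength"].default_value = base_strength * (temperature / t_ref) ** 4

    links.new(body.outputs["Color"], emit.inputs["Color"])
    links.new(emit.outputs["Emission"], out.inputs["Volume"])
    return mat

# Example: a cooler outer-flame material and a hotter core material.
outer = blackbody_fire_material("FlameOuter", temperature=1100.0)
core = blackbody_fire_material("FlameCore", temperature=1700.0)
```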

2 Likes

Yes but
A) I’m pretty bad at color grading lol
B) The blackbody node should, in theory, give the correct colors by itself

I think you are probably talking about this?

in this take I reworked the Mr Elephant test scene to work with Cycles (it’s designed for EEVEE), and AgX. Every single light source here is a blackbody emitter.

For the fire in particular, I used the actual blackbody laws, making the intensity increase with the fourth power of the temperature, and as temperature I chose a few values I found on the internet (they were allll over the place though. Like, with factor 10 differences between different sources). So I wouldn’t necessarily say these colors I ended up with represent a “real” fireplace fire. The fire in this scene is just a simple image texture plane where I took the green channel and remapped that to intensity and temperature accordingly. Very simplistic. But for what it is, I think it looks pretty decent.
(There is no bloom btw: It’s all actual volumetrics. took forever to render)
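If it helps, the remap boils down to something like this (the temperature range and base strength here are placeholders for illustration, not the values actually used in the scene):

```python
import numpy as np

T_MIN, T_MAX = 900.0, 1800.0   # assumed kelvin range for the flame; purely illustrative
T_REF = 1800.0                 # temperature at which the strength equals base_strength

def fire_drivers(green, base_strength=100.0):
    """Map a 0..1 green-channel value to (blackbody temperature, emission strength)."""
    g = np.clip(np.asarray(green, dtype=float), 0.0, 1.0)
    temperature = T_MIN + (T_MAX - T_MIN) * g              # linear remap to kelvin
    strength = base_strength * (temperature / T_REF) ** 4  # intensity grows with T^4
    return temperature, strength

print(fire_drivers(0.25))  # dim, cooler outer flame
print(fire_drivers(1.0))   # hottest, brightest core
```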

I think looking at it again in isolation now, it looks pretty red:

something like this might also work:

And for good measure, the actually used texture (note, this is several stops higher because the actual texture is SDR and would look really muddy and dark - here I just bypassed the node tree and put the texture directly into the surface shader output)

It honestly looks much worse to me than using the green channel of that texture as a proxy for overall temperature. Actual fire just simply does not look like this. Even if this is from a digital photo of a fire.

I mean, I guess it depends on how you use it and also for what specific kind of fire you use it. If you’re gonna do, say, various salt fires like strontium red or barium green or what have you (a few of the classic firework colors) then obviously blackbody isn’t gonna do the job for one. Most actual wood or carbon oil flames have a blue component as well, which is gonna be from hydrogen: (pay attention to the bottom of the flames here - also note the lack of proper yellow. You basically only have blue, orange, some very dark red, and white)

If you’re really going for a “realistic” flame (and I’m using “realistic” in a really broad, vague, arguably meaningless way here; see what Troy said on that term), that’s actually quite complex, because you are looking at a pretty violent and complicated series of chemical reactions that happen in different parts of the flame which also have different temperatures.
The classic orange glow, however, is specifically microscopic carbon particles (soot) that are superheated into the incandescent regime (i.e. very nearly emitting blackbody radiation, so in that sense, sure, it makes sense to use the blackbody node to get your flame colors)

The best way, tbh, is whatever works for you? It’s gonna be an approximation no matter what. especially since flames are at the same time fairly high saturation and yet rather wide, “flat”/smooth spectra. They are pretty ill-captured by 3-channel RGB renderers. (This only really matters for bounce light though. The direct light visuals from the emitter to the camera could in principle be reproduced fine)

With spectral rendering, the blackbody node certainly is going to matter much more for this sort of thing because then you actually get the right bounce light (so long as the thing you are looking at is indeed an incandescent light source)


FWIW anecdotally, the other day I drove past a chemical plant where they burnt off the excess methane in one of the chimneys (this is standard procedure, as methane is considered even worse than the CO2 that results from burning it) and the flame definitely was a very bright orange and nowhere at all yellow. That was in broad daylight. I don’t think I’ve ever seen a real incandescent flame irl that actually appeared yellow. And the reason is basically this:

by the time an incandescent color would be properly yellow, it basically starts curving down again, going through “white” in the sense of being pretty close to Illuminant E (which your screen, given it’s set to a 6500K whitepoint, would most likely render as a light, slightly pinkish orange) and after that towards blue. It simply never really reaches up into the yellows in the first place. The best you can do is a kind of gold.

image

No yellow in sight! (Note how it puts 6500K – 6227°C or 11240°F – as “pure white” - this graphic is calibrated for your typical screen. Also note that on the far right you basically get sky blue. This is only partially an accident: It just so happens the spectrum of the sky is produced by a scattering process that has the same sort of spectral profile as the high-temperature-limit of blackbody radiation. Completely different physical process, but same idealized shape)
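If you’d rather check that than trust the graphic, here’s a small sketch (assuming the colour-science Python package is available) that sweeps Planckian temperatures and prints the display sRGB each one lands on; none of them come out as a saturated yellow:

```python
import numpy as np
import colour

shape = colour.SpectralShape(380, 780, 5)
for T in (1500, 2500, 4000, 5000, 6500, 10000, 20000):
    sd = colour.sd_blackbody(T, shape)        # Planck's law as a spectral distribution
    XYZ = colour.sd_to_XYZ(sd)                # integrate against the CIE 1931 observer
    XYZ /= XYZ[1]                             # normalise luminance, keep only chromaticity
    rgb = colour.XYZ_to_sRGB(XYZ)             # includes the sRGB encoding curve
    rgb = np.clip(rgb / rgb.max(), 0.0, 1.0)  # crude exposure: brightest channel at 1.0
    print(f"{T:6d} K -> sRGB {np.round(rgb, 3)}")
```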

You may also find this weird depiction:

But that one’s just plain wrong. Every part of that is nonsense and I have no idea how they even generated this one. My guess is poorly eyeballing with gradients in photoshop or something.

I also rendered out this in the old spectral branch and put it through Blender’s AgX:

At the far left edge, you get illuminant E white, then by the end of the first grey bar from the left, you get “the actual*” blackbody color for a range of temperatures, and as you go even further to the right, it’s becoming more saturated simply by multiplying the resulting spectrum by itself. By the far right, that’s what it looks like when the spectrum of each black body temperature in this range gets multiplied with itself 10 times. It’s very nearly spectral by then. Essentially you can see the rainbow of spectral colors with whatever the dominant wavelength of a given color temperature is.

I didn’t save out an exr version of this but there also is this:

I believe that was made with Filmic. But in that version I had annotated a few color temperatures for orientation.

  • 6500K is what your screen deems “white”. It’s dominantly violet.
  • 5777K is roughly the surface temperature of the sun. When you hear the sun is supposedly a “green” star, this is what people mean by that: For 5777K, the dominant color is just about on the green side of the boundary towards the teals/turquoises. However, really, it’s nonsense to call the sun “green” because the spectrum simply appears white to us. There is no part of the blackbody spectrum that looks greenish.
  • 4000K was chosen simply as some incandescent representative.

Two things you can note on this diagram in terms of color:

  • in the yellow and green part of this, it takes significantly longer than anywhere else to turn from “white” to “saturated”. There clearly is a kink there, right? I think that may partially have to do with how yellow is a pretty “weak” color in a sense? I.e. it’s almost “all luminance” and almost “none chrominance”. Yellow and green appear bright to us whereas blue and violet typically are deemed dark.
    The other reason for this is, that in that region the blackbody spectrum just lines up particularly well with the spectral sensitivities of our eyes, giving a “more white impression” than the more colorful very cool or very hot blackbody colors. (Note: This is very wishy-washy. For one, when I phrase it like this, I’m clearly completely neglecting how our eyes adapt to whitepoints. I’m not sure how best to describe it “accurately” though.)

The other interesting phenomenon which I was wondering about for a good while is the clearly darker dip between green and blue. The “emerald” zone if you will. It’s already darker in Filmic but in AgX it’s pretty extreme. And in ARRI the same thing happens.
I think what’s happening there is the phenomenon that our Long and Middle (LM) cones (aka protan and deutan) are pretty close together in terms of their spectral sensitivities, whereas our Short (S, tritan) cone has its peak sensitivity quite a bit further towards shorter wavelengths. So I think that gap is what we see in this darker region. Effectively uneven spectral coverage. That said I haven’t yet been able to confirm this effect in actual spectra observed irl with my own eyes.
Should also note this is using the old and known-to-be-quite-inaccurate color matching functions from 1931 because, unfortunately, sRGB is defined in terms of those, and transferring the sRGB standard from those to a more modern take is always gonna be approximate with a wide variety of tradeoffs. I think that mostly affects the blue/violet side of things though. Not sure to what extent changing this would affect the two features I pointed out above.

(* no such thing)

But anyways, tl;dr: Authorial intent is ultimately king, or said differently,

The Dress is really funny and weird for me. These days, I can not for the life of me see it as anything other than black and blue. The illusion forever shattered by “knowing” “the” “truth”.
When it was new, though, I was in a minority group. I argued it wasn’t white and gold, nor (dark) blue and black, but rather (light) blue and gold.

It still looks like a much lighter blue to me than the eventually revealed actual dress ended up having when photographed in an actually sensible way. But the black parts I can no longer read as gold at all.

Seems to work just fine though?

That particular part is a difficult tradeoff imo. I think it landed in a fine spot in the end. Most complaints seem to be kinda the opposite even with the shift that’s in there. Most people who have complaints already complain it’s “not red enough” as is. Kinda can’t have it both ways to work “perfectly” for flames (i.e. stronger swing towards yellow) and more closely maintain RGB red (which is not really as valuable a goal imo, but either way, the degree to which this version of AgX isn’t doing that already gets so many complaints)

I’d really love it if it were possible to have a parametrized AgX where you can continuously adjust stuff like the Abney shifts and the knee and what not. As far as I’m concerned it’s really a whole family of possible view transforms, where we kinda arbitrarily had to pick just one when a pretty wide range of values might have been decent, and some might work better for some applications than others.

Like, anything that’s principled and based on color science should be built in accordingly. That’s basically the basic structure of AgX. The in- and outset and the curve, right?
And then the specifics – how large are the in- and outset? How much should each primary be rotated? How aggressive is the sigmoid? – ought to really be parameters. With sensible defaults or maybe a handful of nice presets. But adjustable to the artist’s needs.
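To make the “family of transforms” idea concrete, here is a toy with that same inset / log / sigmoid / outset skeleton. Emphatically not Blender’s actual AgX; every default value below is made up for illustration:

```python
import numpy as np

def inset_matrix(amount):
    """Blend each primary `amount` of the way toward equal-energy grey."""
    return (1.0 - amount) * np.eye(3) + amount * np.full((3, 3), 1.0 / 3.0)

def sigmoid(x, slope):
    """Simple S-curve on the 0..1 log-encoded values."""
    return 1.0 / (1.0 + np.exp(-slope * (x - 0.5)))

def toy_formation(rgb, inset=0.2, outset=0.15, slope=8.0, lo_ev=-10.0, hi_ev=6.5):
    rgb = np.maximum(np.asarray(rgb, dtype=float), 1e-10)
    rgb = rgb @ inset_matrix(inset).T                     # attenuate purity before the curve
    log2 = (np.log2(rgb) - lo_ev) / (hi_ev - lo_ev)       # normalised log2 encoding
    curved = sigmoid(np.clip(log2, 0.0, 1.0), slope)      # per-channel tone curve
    out = curved @ np.linalg.inv(inset_matrix(outset)).T  # partially restore purity
    return np.clip(out, 0.0, 1.0)

# Same pixel, two members of the "family": a stronger inset bleeds bright,
# pure emitters toward white sooner.
print(toy_formation([4.0, 0.2, 0.05]))
print(toy_formation([4.0, 0.2, 0.05], inset=0.35))
```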

Kinda wish we’d have gotten all this feedback when it was still being debated.

Sadly this simply is not possible at all within the OCIO framework. It’d probably also be too slow in practice, although I can imagine a variant where you could, like “bake an OCIO profile” from your parameters. Basically amounts to running the python script that was used to generate the 3D LUT for the current version in the first place to save out a complete config based on the settings you chose and then using that going forward.
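The baking idea could look roughly like this: sample whatever per-pixel formation you’ve parametrised onto a grid and write it out as a .cube 3D LUT that an OCIO FileTransform (and hence a Blender look) can point at. The transform below is only a stand-in:

```python
import numpy as np

def placeholder_transform(rgb):
    """Stand-in for a parametrised picture formation; here just a soft roll-off."""
    return rgb / (1.0 + rgb)

def bake_cube(path, transform, size=33, domain_max=16.0):
    """Sample `transform` over an RGB grid and write a .cube 3D LUT."""
    grid = np.linspace(0.0, domain_max, size)
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {size}\n")
        f.write("DOMAIN_MIN 0.0 0.0 0.0\n")
        f.write(f"DOMAIN_MAX {domain_max} {domain_max} {domain_max}\n")
        # .cube ordering: red varies fastest, then green, then blue.
        for b in grid:
            for g in grid:
                for r in grid:
                    out = transform(np.array([r, g, b]))
                    f.write("{:.6f} {:.6f} {:.6f}\n".format(*out))

bake_cube("my_custom_formation.cube", placeholder_transform)
```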

But oh well, future features perhaps…

6 Likes

I saw some blackbody equations for W/m² for a given area or ball surface. I think it’s worth a try; we would get the exact color and intensity then. For AgX and all tonemappers, the problem is at which intensity the curve for gamut compression was developed. That means that fire with a correct shader and intensities can appear too white, or the opposite.
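For reference, the W/m² part is just the Stefan-Boltzmann law; a quick sketch (the temperature and radius are arbitrary examples, not measured values):

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiant_exitance(T):
    """Watts per square metre emitted by an ideal blackbody surface at T kelvin."""
    return SIGMA * T ** 4

def sphere_power(T, radius):
    """Total watts emitted by a blackbody sphere ("ball surface") of radius in metres."""
    return radiant_exitance(T) * 4.0 * math.pi * radius ** 2

print(radiant_exitance(1800.0))    # roughly 6e5 W/m^2 for a sooty-flame-ish temperature
print(sphere_power(1800.0, 0.05))  # total watts for a 5 cm radius glowing ball
```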

Yes, but most users hadn’t loaded AgX before it came into master.

I wonder if you are able to make changes to Eary’s files and scripts to make some custom AgX versions for further testing and improvements?

With increasing kelvin the frequency increases at the same time; the intensity curve from red to blue wavelengths emits more of the whole visible spectrum at once, with the peak shifting towards the blue above 6500 K.

like in this 2nd graph

The law is already correct. The issue is that I found neither reliable W/m² measurements nor reliable temperature measurements of a bonfire-style fire. The values were all over the place. Like, allll over the place. Some sites claimed as low as mid-hundreds of kelvins for such flames, which I’m fairly sure can’t be true. That’s just waaaaay too red, if it’s even visible at all. Others claimed low 1000s. And a bunch were in between. The fact that I visited like five different sites and got five different answers suggests to me there is a ton of confusion, inaccuracy, and false information out there. I suspect it boils down to what, precisely, got measured: temperature of flames vs. air temperature vs. temperature of the wood where it’s already red hot or orange hot, fire temperature close to the wood, temperature above a flame, temperature within a flame…

in reality it’s a really complicated gradient in terms of actual temperatures, and also in terms of how much visual influence various elements have.
In particular, the super heated soot that we actually see basically is opaque (like, the base color of that is black – literally soot black – and then there is glow on top of that) but some of the surrounding air (especially above a flame) is often quite a bit hotter than that soot and yet still invisible / transparent (because there is no soot there).

You can test out that difference within Cycles too: If you use a volumetric emitter (and no absorption or scattering) it actually makes for a pretty poor light source even at fairly high intensities compared to a(n opaque) plane that acts as an emitter. The air presumably does glow in the visible range – anything hot enough ought to emit blackbody radiation, even if it’s transparent. Certainly superheated glass glows

but the effect is much weakened compared to when something opaque and, in reflectance terms, dark (i.e. soot) glows, where basically all you see is that glow, and nearly no additional light comes from anywhere else.
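If anyone wants to set up that comparison quickly, here is a rough bpy sketch (values arbitrary, object assignment left out): the same Emission shader routed once into the Volume socket and once into the Surface socket.

```python
import bpy

def emitter(name, strength=50.0, use_volume=False):
    """Same Emission shader, routed either into the Volume or the Surface socket."""
    mat = bpy.data.materials.new(name)
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()
    out = nodes.new("ShaderNodeOutputMaterial")
    emit = nodes.new("ShaderNodeEmission")
    emit.inputs["Strength"].default_value = strength
    links.new(emit.outputs["Emission"],
              out.inputs["Volume" if use_volume else "Surface"])
    return mat

glowing_air = emitter("GlowingAir", use_volume=True)     # transparent, weak light source
glowing_soot = emitter("GlowingSoot", use_volume=False)  # opaque, nearly all you see is glow
```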

Sure. Anybody can. You are welcome to try! The python file to generate it all is completely open source and available on Eary’s Github. Honestly, it was quite a struggle to get this far though. So so so much back and forth. Frankly exhausting. I’m quite happy with the product we ended up with.

Sure, and as you go towards infinity, all wavelengths are being emitted. But there is still a curve to it. The long-wavelength side of the blackbody curve is a power law (the Rayleigh-Jeans tail, going like 1/λ⁴) and the short-wavelength side is exponential (the Wien tail, decaying like an exponential on the high end).
In the limit of infinite temperature, the visible portion basically looks like 1/λ⁴ - which happens to also be the same falloff that Rayleigh scattering gives, which is the type of scattering that we see at noon on a clear, dry day at the equator in the sky. There is also Mie scattering, which is what gives clouds their white lack of color, and also general haze from high humidity and smog and what not.
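A quick numerical check of that limit, for anyone curious (standard physical constants, arbitrarily picked temperatures):

```python
import numpy as np

h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s
k = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, T):
    """Spectral radiance B_lambda(lam, T)."""
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

lam = np.linspace(380e-9, 780e-9, 5)
for T in (3000.0, 1e5, 1e8):
    shape = planck(lam, T) * lam**4             # divide out the 1/lambda^4 Rayleigh-Jeans shape
    print(T, np.round(shape / shape.max(), 3))  # flattens toward [1, 1, 1, 1, 1] as T grows
```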

Either way, the blackbody spectrum peaks:
image

The yellow curve here is the general shape of the blackbody curve. That’s the exact right shape, but it’d peak at a different wavelength (x-axis) and with a different intensity (y-axis) depending on the temperature. I normalized it here such that that peak happens at (1, 1).
All the curves below it are if you take that shape to a higher and higher power. You can see how the spectrum becomes narrower and narrower. More and more spectral. Eventually, only a single wavelength will be left over.

This is the same thing but this time normalized such that the area under the original (yellow) curve is 1

image

you can see how it gets darker each time.
In the spectral blackbody plot above, I compensated for this by also making the image exponentially brighter the further to the right it went, roughly counteracting this loss in brightness.
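The same trick in a few lines of numpy, for anyone who wants to play with it (temperature and powers chosen arbitrarily):

```python
import numpy as np

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, T):
    """Spectral radiance B_lambda(lam, T)."""
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

lam = np.linspace(380e-9, 780e-9, 401)
base = planck(lam, 4000.0)
base /= base.max()               # peak-normalised, like the yellow curve above

for power in (1, 2, 5, 10, 50):
    narrowed = base ** power     # repeated self-multiplication narrows the spectrum
    width_nm = np.count_nonzero(narrowed > 0.5) * (lam[1] - lam[0]) * 1e9
    print(f"power {power:3d}: width above half maximum ~ {width_nm:.0f} nm")
```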

And here is a “continuous” version where you can see this narrowing in action. At y=0 you have the spectrum of Illuminant E, at y=1 it’s the regular blackbody spectrum, and above that it gets narrowed ever more to the dominant wavelength.
image

At any rate, the bottom line is, it’s that wavelength right at the peak that eventually is left over, and it’s that wavelength that you end up seeing towards the right side of the image. Or rather, that’s what the data in the exr land on. What you see is, of course, filtered through Filmic and AgX.

1 Like

Yes, AgX is not bad, but it’s not perfect yet and there’s much room for improvement. Are the Python scripts run in Blender’s Python or in a standalone Python for building the files?

Btw, maybe the blackbody values of known heated metals are more consistent and measured somewhere.

Here are some furnace temps of molten metal:

I did not imagine this post would create such discussion, but I’m honored to have all these members of the Blender community discuss color theory and blackbody here!

As an update, I finished the render a few weeks ago and here’s a still from a near-final animation.

I think the main problem I was dealing with was simply adapting to AGX, as my workflow definitely favored Filmic. The original fire material was designed for the yellow-y fire that filmic tends to produce, and therefore looked horrible when I transferred over to AGX. As personal preference, I added a bit of that yellow back into the glow of the comp in After Effects.

I found that once I re-calibrated my mind to think and work in AGX, it made fire materials a lot easier to work with overall. I like the more monochromatic look that it gave me, though I understand this may not be the look everyone wants to go for. Thanks everyone for the help, and the degree in Color Science lol.

All jokes aside, it was really interesting to learn more about color science and the technical side that goes into making art like this possible in Blender.

10 Likes

There is nothing wrong with using yellow. As the blackbody temperature increases, it typically starts to be visible at the longer red wavelengths around 700 nm. With increasing frequency the wavelength peak shifts towards shorter wavelengths and has to pass 578 nm, which is what? Yellow.
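For what it’s worth, Wien’s displacement law puts a number on when the peak passes 578 nm (a quick sketch, nothing from the thread):

```python
WIEN_B = 2.897771955e-3   # Wien's displacement constant, m*K

peak_wavelength = 578e-9  # the "yellow" wavelength mentioned above, in metres
temperature = WIEN_B / peak_wavelength
print(f"The spectral peak sits at 578 nm at roughly {temperature:.0f} K")  # ~5000 K
```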

See above; the “colour” that folks expect to see in pictures is different to the “as measured” stimulus of fire. There are a whole slew of details like this in pictures, but sadly they are poorly understood.

Granted the Colourist toolset in Blender is less than acceptable, it is still worth practicing!

The stimulus isn’t the colour, because all of colour is a cognition.

The real issue here is that folks seem to think that an “ideal” picture is a perfect “replication” of whatever it is that we call “reality”. In truth, the visual cognitive experience modulates and shifts based on whatever tasks are before us.

Reading a picture is a very different act to some other tasks, leaning heavily on authorial intention, including overall creative choices and how the author is attempting to communicate something. Fire is one of those absolutely wacky things that seems to rest in the fulcrum of this departure from generic cognition tasks like propelling our body to the bathroom. It seems memory encodings have a tremendous sway here, as little as we understand about how we encode a memory of things.

The easiest way to fully appreciate the wildly radical departure from more mundane tasks of ecological perception is to take a basic laser pointer. Notice how no matter how “bright” nor how “pure” it is, in generic ecological perception, the purity never attenuates to white. Yet folks see Star Wars and never even blink an eye when the highly pure light sabers have a white core. In fact, quite the contrary; if purity doesn’t attenuate in a picture, the picture falls apart in a dozen different ways.

And that is merely the tip of a gargantuan iceberg.

I cannot stress enough that we simply don’t understand how pictures work to really begin to discuss what the “correct” general approaches are!

Fire is just yet another oddity in the picture domain, among hundreds.

4 Likes

Thanks to analog film we have at least a kind of solution for desaturating overexposed colors in digital images.

But yes, in theory in the real world there is no desaturation, except for scattering and absorption effects in a volume. This is interesting to think about, since analog film works as a subtractive absorption filter and the filtered colors get scattered/radiated onto the canvas from the projector bulb.