Cycles Spectral Rendering

Bzzt! It isn’t greyscale either. It’s merely intensities of spectral lights.

Blender will handle your data just fine if you understand a bit about encoding and know how display-referred encodings mangle the data in certain ways. Are there some goofy design choices? Oh yes. Can they be negotiated? Yes.

Stick to EXRs. By convention, they are a linear, floating-point encoding. HDRs are comparatively inefficient.

For a given material, set the FCurve value to zero, which may speed up / optimize the result. For more spectral / iridescent / dispersion effects, use more. You could combine textures in the same scene, using whatever increments are required.
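As a rough sketch of the frame-per-wavelength mechanics being discussed here (the material and node names below are hypothetical placeholders, not gospel), a handler can map the frame number to the wavelength being rendered:

```python
# A minimal bpy sketch of stepping one wavelength bin per frame, assuming a
# material "SpectralMaterial" with a Value node named "Wavelength" driving
# its spectral response. Both names are placeholders for your own setup.
import bpy

LAMBDA_MIN = 380.0   # nm, start of the sampled range
LAMBDA_STEP = 10.0   # nm per frame; use finer increments for dispersion-heavy materials

def set_wavelength(scene, *args):
    """Map the current frame to a wavelength in nanometres.

    *args absorbs the depsgraph argument newer Blender versions pass.
    """
    wavelength = LAMBDA_MIN + (scene.frame_current - scene.frame_start) * LAMBDA_STEP
    node_tree = bpy.data.materials["SpectralMaterial"].node_tree
    node_tree.nodes["Wavelength"].outputs[0].default_value = wavelength

bpy.app.handlers.frame_change_pre.append(set_wavelength)
```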

Anyway, all of this is moot until you verify your calculations. Metameric wheels or known results would be the best entry point. Cycles will more than likely have a few hard-coded pitfalls to negotiate around. Until you output to multilayer EXR and do the calculations within Blender itself, it is impossible to know how accurate your results are.

Good gosh, do you get lost. If you take an MP3, can you return it to lossless? Good. So we agree that upsampling is a hack? Good. Move along.


You are right. I was thinking back to when I was utilising Blender's Wavelength colour and multiplying each frame by the corresponding colour. That was nowhere near correct, hence the change in direction (this all happened before this thread).

I would love to be using EXRs. I am aware HDRs are inefficient, but currently I have no way to work with the EXRs. Once I decide on a new tool to do the spectrum-to-image step, I'll certainly switch back.

Yep.

The desire to create fake spectral data from RGB is simply a convenience. Ideally, I would hand-craft every material, defining exactly how it should interact with each wavelength, but in order to actually get things done, creating fake spectra from RGB allows me to use images for textures, allows me to use the colour picker, etc.

My goal isn’t to make a scientific tool which 1% of Blender users understand how to use, but to create a system where you can use Cycles mostly how it was intended to be used, and have the ability to then create materials which behave more accurately, and get nice results from it. Of course any sort of RGB => spectrum conversion is rubbish, but so is having to create colour ramp nodes and guess the spectrum of the material you’re trying to create. It is a tradeoff: suitable in some cases, and not in others.


So, should we only use “real” reflectance curves, or not even bother and just use RGB? Or something else?

Use Blender.

@troy_s I understand it’s a “hack”, but is there any middle ground involving “real” reflectances?
This paper describes using 1400 actual paint samples to interpolate new samples:
https://pdfs.semanticscholar.org/4152/24ba71ed172929d975600761fa6dbe60d72d.pdf

The authors suggest you might build many libraries for various materials (dyes, plastics, cosmetics, etc.). Wouldn’t the ideal rendering use this type of spectral data? I’m thinking the biggest argument against this type of thing is just that it is too much work, and time would be better spent on some other (massively) deficient aspect of the workflow or software… I admit that’s a hard argument to dismiss. Although, I might compare your criticisms to an adult berating children for playing in the sandbox instead of studying. That was supposed to be a metaphor… :slight_smile:

@smilebags are you going to post a render or what? :smiley:

I’m thinking machine learning could come up with an interesting solution to this. That’s a story for another day though.

I’ve been tight for time lately, but I’ll see if I can put something together!


My thinking is that it is a hack, and as such, yes, there will be some suboptimal results. I am willing to accept that if, say, the Hero Wavelength approach covers a large number of cases without my having to build 1400 paint samples. I’m also not super keen on focusing on real-world materials, as it would head us back into the limitations of gamuts.

I prefer to not spiral out of control and have a solution or two that are useful and practical that save time, while also delivering decent results without going too bonkers.


I think for the case of wanting to use colour data in a spectral render, the only option is to upsample the RGB data into some sort of ‘sensible’ spectrum. This could be solved multiple ways: a machine learning model could be trained to create believable spectra for a particular colour (this might be somewhat unpredictable), or you could use one of the approaches we’ve discussed above. While it isn’t perfect (none of them are), creating spectra for the sRGB primaries and adding the components together ensures that, under D65 illumination, the colour in the colour picker or image is the same colour as in the output.
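As a rough sketch, the primaries approach is just a weighted sum of three precomputed basis curves (the flat arrays below are placeholders; real curves would come from something like Smits-style precomputation or the Meng et al. method):

```python
# A minimal sketch of the additive-primaries upsampling described above.
# basis_r/g/b are reflectance basis spectra for the sRGB primaries; the
# flat placeholders here stand in for real precomputed curves.
import numpy as np

wavelengths = np.arange(380.0, 731.0, 10.0)   # nm, 36 bins
basis_r = np.full_like(wavelengths, 1.0)      # placeholder basis curves
basis_g = np.full_like(wavelengths, 1.0)
basis_b = np.full_like(wavelengths, 1.0)

def upsample_rgb(rgb):
    """Linear combination of primary basis spectra for a linear sRGB triplet.

    If each basis integrates back to its primary under D65, the sum
    round-trips to the original RGB value under D65 illumination.
    """
    r, g, b = rgb
    return r * basis_r + g * basis_g + b * basis_b

spectrum = upsample_rgb((0.8, 0.3, 0.1))
```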

Hero Wavelength is about how to sample the scene, rather than how to produce spectra from RGB data. It is something that would have to be implemented in Cycles if it were converted to a spectral renderer. Meng et al.’s method seems to be the slowest but most accurate approach to synthesising a spectrum from RGB, and it can deal with almost all colours, but I haven’t attempted to build it in Blender yet.
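For reference, the Hero Wavelength idea (Wilkie et al. 2014) is roughly this: draw one “hero” wavelength at random per path, then trace a few more at equal rotated offsets so the whole range gets covered. A minimal sketch:

```python
# A sketch of Hero Wavelength sampling: one hero wavelength is drawn
# uniformly, and the remaining C-1 samples are equally spaced rotations
# of it, wrapping around the sampled range.
import random

LAMBDA_MIN, LAMBDA_MAX = 380.0, 730.0
C = 4  # wavelengths traced together per path

def hero_wavelengths():
    span = LAMBDA_MAX - LAMBDA_MIN
    hero = random.uniform(LAMBDA_MIN, LAMBDA_MAX)
    return [LAMBDA_MIN + (hero - LAMBDA_MIN + j * span / C) % span
            for j in range(C)]
```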

My plan is to hide the generation of spectra behind node groups for now (and later native nodes, if I decide to develop this further) so that one can simply use colours as they do now. I would also like to create a ‘D Illuminant’ node which has a temperature and luminosity, since I believe there are functions out there which can approximate the D-illuminant spectrum at any given colour temperature. This would give the spectral equivalent of the current “Blackbody” node.
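Those functions are the CIE daylight-series formulas: the colour temperature gives a daylight chromaticity, which weights three published basis curves. A sketch (the S0/S1/S2 arrays below are placeholders; real use needs the tabulated CIE data):

```python
# A sketch of the CIE daylight-series model: the CCT gives a chromaticity
# (x_D, y_D), whose derived weights M1, M2 combine the CIE S0/S1/S2 basis
# functions into a spectral power distribution.
import numpy as np

wavelengths = np.arange(380.0, 731.0, 10.0)
S0 = np.full_like(wavelengths, 100.0)   # placeholder for the CIE S0 table
S1 = np.zeros_like(wavelengths)         # placeholder for the CIE S1 table
S2 = np.zeros_like(wavelengths)         # placeholder for the CIE S2 table

def d_illuminant(cct):
    """CIE D-series SPD for 4000 K <= cct <= 25000 K."""
    t = cct
    if t <= 7000.0:
        x = 0.244063 + 0.09911e3 / t + 2.9678e6 / t**2 - 4.6070e9 / t**3
    else:
        x = 0.237040 + 0.24748e3 / t + 1.9018e6 / t**2 - 2.0064e9 / t**3
    y = -3.000 * x**2 + 2.870 * x - 0.275
    denom = 0.0241 + 0.2562 * x - 0.7341 * y
    m1 = (-1.3515 - 1.7703 * x + 5.9114 * y) / denom
    m2 = (0.0300 - 31.4424 * x + 30.0717 * y) / denom
    return S0 + m1 * S1 + m2 * S2

d65 = d_illuminant(6504.0)  # approximately D65's correlated colour temperature
```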


What if some maniacs created a crowd-sourced spectral reflectance database? SpectralHub.com isn’t taken yet. Heck, this person collected 43+ million samples from seven public databases:
http://capbone.com/spectral-reflectance-database/

We just need someone else to package that all up into a library to use! Simple! /s

Even using the reflectances of the Rec. 709 primaries wouldn’t limit your gamut at all, since you can still shine any kind of light on the objects. Even so, if we could extrapolate from real reflectances, it should make it much easier to create “surreal” reflectances, I would think. To my mind, using plausible reflectances falls in the same category as BSDFs or anything else that makes things more believable.


Why D? E makes far more sense, no?

From what I’ve gathered, it really doesn’t matter. Maybe flat would be slightly more accurate? Well, so long as your illuminant has SOME amount of every wavelength, enough to measure a reflectance off an object. For synthesis, E makes more sense just for simplicity. But if you take a sample from “real life”, you could use any broad-spectrum light source, as long as you know its profile and keep it associated with the sample long enough to calculate the reflectance curve. After that you can “forget” the light source and just upload the reflectance curve to the spectralhub.
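In other words, recovering the reflectance is just a per-wavelength division, which is why the choice of illuminant drops out (the numbers below are hypothetical):

```python
# The measurement step described above: any broad-spectrum illuminant works
# as long as its SPD is known and nonzero at every sampled wavelength.
# measured_spd and light_spd are hypothetical, equally sampled arrays.
import numpy as np

light_spd = np.array([80.0, 95.0, 100.0, 90.0])     # illuminant SPD, placeholder
measured_spd = np.array([40.0, 60.0, 55.0, 18.0])   # light reflected off the sample

reflectance = measured_spd / light_spd  # illuminant-independent curve
```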

The reason for creating a D-illuminant node is for the effect: a dispersion spectrum will look slightly different under D and E light. I also think (though I’m not certain) that colours coming from RGB will only give the same final output if the spectrum illuminating them is the same as the one they were originally measured under, no?

E is also useful, but luckily that’s just a constant value, so there’s no need to create a special node for it.

[image: TestRender]
I really need a more powerful computer in order to do something nice with this, but here is a scene I built earlier, which I converted to spectral in 15 minutes and rendered in about 20 minutes: 36 bins of 32 samples each, equivalent to 1152 samples, though some bins contribute very little light to the scene. Optimising the number of samples in each bin would help reduce noise without increasing render time.
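For example, that optimisation could distribute a fixed sample budget in proportion to each bin’s expected contribution, something like illuminant power times luminous efficiency (the equal weights below are placeholders):

```python
# A sketch of per-bin sample allocation: split a fixed budget proportionally
# to each bin's expected contribution, with at least one sample per bin.
# Rounding may drift a sample or two from the exact budget.
import numpy as np

def allocate_samples(bin_weights, total_samples):
    w = np.asarray(bin_weights, dtype=float)
    w = w / w.sum()
    return np.maximum(1, np.round(w * total_samples).astype(int))

weights = np.ones(36)                    # placeholder: equal weights = 32 per bin
print(allocate_samples(weights, 36 * 32))
```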

[image: TestRender3]
It seems the colour management is working correctly, now I just need a computer that can work a bit harder than this one… The hue of the colours looks correct, but I haven’t yet verified that.


Looks promising, but I’m certainly not qualified to say whether that bubble interference pattern looks “correct”. It would be awesome if you could let that run for a whole day, or even produce a side-by-side comparison with and without spectral rendering.

It is pretty hard to actually see, but this was a glass ball rather than a bubble. I’m working on a thin-film shader now.



I think the images still have a blue tint, and the inside of the glossy component is acting up on the backfaces, but here’s my attempt at a bubble with a thickness between about 50 and 500 nm.
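For anyone curious, the curve a thin-film shader evaluates per wavelength is the classic single-layer (Airy) interference reflectance; a sketch at normal incidence, using 1.33 as an approximate soap-film index:

```python
# Single-layer thin-film interference at normal incidence (Airy reflectance).
# n_film = 1.33 approximates a soap bubble in air.
import numpy as np

def thin_film_reflectance(wavelength_nm, thickness_nm, n_film=1.33,
                          n_outside=1.0, n_inside=1.0):
    r1 = (n_outside - n_film) / (n_outside + n_film)  # top-interface amplitude
    r2 = (n_film - n_inside) / (n_film + n_inside)    # bottom-interface amplitude
    delta = 4.0 * np.pi * n_film * thickness_nm / wavelength_nm  # round-trip phase
    num = r1**2 + r2**2 + 2.0 * r1 * r2 * np.cos(delta)
    den = 1.0 + (r1 * r2)**2 + 2.0 * r1 * r2 * np.cos(delta)
    return num / den

wavelengths = np.arange(380.0, 731.0, 10.0)
curve = thin_film_reflectance(wavelengths, thickness_nm=300.0)
```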


That looks pretty cool! Maybe it’s odd to have a bubble without any environment around it? Now I am wondering: since “D65” is supposed to be like the light from the northern (blue) sky… it “should” be blue, right? It certainly looks like a bubble that might be outdoors, to me.

No, I feel my Photoshop ProPhoto D65 profile just isn’t working correctly, since it looks identical to the standard D50 one. I might try making a D50 light source and seeing if I get a neutral grey then.


Being able to identify what looks right is really 90% of the problem. @troy_s taught me that one, among many other things :slight_smile:

Yep! Though proper validation is still very important, since our eyes can easily fool us.