Cycles Spectral Rendering

Use Meng. It’s battle tested in production, which is important.

Yep. Not a great test scene, but the classic RGB over-darkening is clear. If you try the looped indirect colours referenced in the other PDF, you’ll see the skew on them more clearly.

In particular, @smilebags, try these REC.709 light values:

 (1.00, 0.95, 0.90)
 (0.96, 0.00, 0.72)
 (0.00, 0.81, 0.78)
 (1.00, 0.00, 0.74)
 (1.00, 0.89, 0.00)

I’ll take a look at that. That’s the next big thing to tackle since the materials mostly seem to be behaving now.

Some good scenes to test would be the benchmark scenes the BF uses, such as Fishy Cat and the Barcelona interior scene. You should also make sure to use the Filmic tonemapper so we can see the full potential in terms of realism.

As for the initial test, the current state looks to be too big of a performance penalty to be worth it, but that is not a typical production scene and doesn’t include any shading features that can really take advantage of the new mode.

I will definitely try some of the sample scenes to compare with more realistic materials and objects but I don’t expect to see much more of a difference than with this test scene.

Ignore that and try the values I listed above.

Good tests shouldn’t rely on scenes that weren’t designed with spectral in mind, just as the original canonical material test scene has unique facets, such as indirect angles, that specifically exercise the material.

Test using known values that expose the differences, so the renderer itself is what gets evaluated.

It’s also the fastest way to a reference-space-agnostic engine, given that it covers the spectral locus more or less completely. Some care needs to be taken as the edges of the locus are approached, since the solver can’t converge on those values.

I’m tentative about posting this because I don’t quite believe it, but I got almost exactly the same performance from spectral and RGB on the Mike Pan scene: 2h 34min spectral, 2h 33min RGB. I was using the computer lightly during the RGB render, which would have slowed it down, but at least the overhead seems to become less substantial with more complex scenes.

Unfortunately, light primitives aren’t quite working yet, so the colours are a bit off, but here are the frames for visual comparison. While I’m pleased they rendered in almost identical times, I don’t think this will represent other people’s performance.

RGB image


Spectral image. Note the pink tint due to light primitives not being correctly configured yet.

Differences are most visible in the self-reflection on the bumper.

200% crop RGB


200% crop spectral

This seems like quite good progress!! :star_struck:
I have to test your diff to see if it works with my ‘diffraction grating’ code… I never got the wavelength colors right. :blush:

There are no wavelength-dependent materials in this yet, so unless you write a custom diffraction grating BSDF (I’m planning on modifying a few to be able to do that), this likely won’t help with the colours :frowning:

Also, anything wavelength-dependent kills some optimisations, meaning it’ll automatically be much noisier for the same number of samples.

I finally thought of a possible way to use color to define materials. When a color is converted to a spectrum, those values are the reflectance at each wavelength. Using the Fresnel equations at normal incidence, we can then solve for a refractive index at each wavelength.
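Concretely, with the normal-incidence Fresnel reflectance R0 = ((n − 1)/(n + 1))², this inverts to n = (1 + √R0)/(1 − √R0) at each wavelength; for example, R0 = 0.04 gives the familiar n = 1.5.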

Although, I think there might be a few problems with this. The first would be unpredictable results when the solved refractive indices are used. I also don’t think we can solve for a complex refractive index without any extra information. And, like you mention, the optimizations would fall apart.

Could be really interesting to try out, though.

Interesting idea. If I understand correctly, you’re suggesting creating a 3-channel input for things like IOR to give it wavelength dependence.

It does seem like a workaround for a system that isn’t designed for it, so fundamentally my goal is to have a wavelength node for materials. Some things have to change under the hood, but it seems possible, and the only case where it would get noisier is where paths diverge based on wavelength. Thin film could happen with no significant penalty, as could spectral absorption, reflection, etc. The tough thing is working out an interface which allows both RGB values and spectra to be used.

That’s a nice way to generalize it. Yes, so in the case of IOR, the 3-channel input would be the color of the specular reflections at normal incidence, after the conversions and calculations are completed (rgb → spectral reflectance → IOR → Fresnel reflectance → rgb, or whatever ordering you think works best).
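A minimal sketch of that chain, with `rgb_to_spectrum` / `spectrum_to_rgb` as hypothetical stand-ins for whichever upsampling method ends up being used, and the standard dielectric Fresnel equations supplying the angle dependence:

```python
import numpy as np

def fresnel_dielectric(n, cos_theta):
    """Unpolarised Fresnel reflectance of a dielectric, per wavelength sample."""
    sin2_t = (1.0 - cos_theta ** 2) / n ** 2          # Snell's law, squared
    cos_t = np.sqrt(np.maximum(0.0, 1.0 - sin2_t))    # cosine of refracted angle
    r_s = (cos_theta - n * cos_t) / (cos_theta + n * cos_t)
    r_p = (n * cos_theta - cos_t) / (n * cos_theta + cos_t)
    return 0.5 * (r_s ** 2 + r_p ** 2)

def specular_rgb(rgb, cos_theta, rgb_to_spectrum, spectrum_to_rgb):
    """rgb -> spectral reflectance -> IOR -> Fresnel reflectance -> rgb."""
    r0 = np.clip(rgb_to_spectrum(rgb), 0.0, 0.999)    # reflectance at normal incidence
    n = (1.0 + np.sqrt(r0)) / (1.0 - np.sqrt(r0))     # per-wavelength real IOR
    return spectrum_to_rgb(fresnel_dielectric(n, cos_theta))
```

At cos_theta = 1 this reduces back to the input reflectance, which matches the “no difference at normal incidence” behaviour described below.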

I’ll play around soon and see if I can make some proof of concept.

Here’s the proof of concept:

It’s missing the actual RGB-to-spectrum conversion and vice versa, but it sounds like you’ll know what to do with that. In the second image, the new angle-dependent reflectance spectrum is compared to the original reflectance spectrum using a color difference node (I actually recommend you try using this node to better compare and illustrate your spectral vs. RGB tests!). You can see how, at normal incidence, there would be no difference. And of course, there would be a whole spectrum of resulting IOR values instead of 3.

Unfortunately, I realized there’s another problem with this as I was testing different spectrum inputs. Due to the asymptotic nature of the ‘reflectance to IOR’ equation I used, as reflectance approaches 1, the IOR values approach infinity. In those cases where reflectance at normal incidence is very high (maybe > 0.20), you’d probably be describing metals/conductors instead of dielectrics; you would want to find complex IORs instead (and by extension, use the complex Fresnel equations). Like I mentioned before, we can’t really solve for a complex IOR without more information.

In the example image, the IOR at the 450 nm wavelength would be calculated as ~12.7, a point at which it should probably be complex.
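(Working that backwards through the normal-incidence formula, n ≈ 12.7 corresponds to a reflectance of about ((12.7 − 1)/(12.7 + 1))² ≈ 0.73.)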

Anyway, here’s a .blend file: IOR Proof of Concept.blend (3.0 MB)

The problem with using Meng’s approach to recover reflectance is that it tends to be implemented with general-purpose optimization software, which can be extremely inefficient. Meng’s optimization statement is identical to my ILSS statement; to solve it, I take advantage of the special structure of the problem and need only solve a short sequence of small simultaneous linear equations. The difference is dramatic. I ran a time study comparing Meng (using the SQP general-purpose solver) against my ILSS code, running each thousands of times, and Meng’s approach took over 2000 times longer to run!
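For anyone curious what “a short sequence of small simultaneous linear equations” can look like, here is a minimal Python sketch of the least-slope-squared core as I understand it from the published description. The matrix `T`, which maps a sampled reflectance to RGB for a given illuminant and colour space, is assumed to be supplied, and the iterative clamping that makes it “ILSS” is only indicated in a comment:

```python
import numpy as np

def lss_reflectance(rgb, T):
    """Least-slope-squared reflectance recovery (sketch).

    Minimises the sum of squared differences between adjacent reflectance
    samples, subject to T @ rho == rgb, by solving one small linear system.
    T is the 3 x n matrix taking a sampled reflectance to RGB for a given
    illuminant and colour space. The full ILSS adds an outer loop that pins
    any samples that come out above 1 (or below 0) and re-solves.
    """
    n = T.shape[1]
    B = np.diff(np.eye(n), axis=0)         # first-difference operator
    D = B.T @ B                            # "slope squared" quadratic form
    K = np.block([[D, T.T],
                  [T, np.zeros((3, 3))]])  # KKT system of the equality-constrained QP
    rhs = np.concatenate([np.zeros(n), np.asarray(rgb, dtype=float)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                         # reflectance; sol[n:] are Lagrange multipliers
```

Each solve is a single dense (n + 3) × (n + 3) linear system, plus at most a few re-solves for clamping, which is why it is so much cheaper than running a general-purpose SQP solver per colour.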

Regarding “It’s battle tested in production, which is important,” this can be an impediment to progress. Back in high school, the slide rule I used was battle tested in production, but that didn’t stop me from switching to a pocket calculator once they became available (showing my age here!). Some people have tested my ILSS code with every possible integer sRGB triplet without a single failure.

The only problem is that you can’t hard-code a reference space; that violates the design of the colour management system.

Meng, by contrast, can generate primaries on the fly, which is more compatible with OCIO.

If you can pseudocode your variant in Python, it might be more viable, preferably taking an RGB-to-XYZ matrix as an input for the primaries.
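As a sketch of what “taking an RGB-to-XYZ matrix as an input” could look like (the CMF and illuminant samples are assumed to come from wherever the implementation already stores them; names are illustrative):

```python
import numpy as np

def reflectance_to_rgb_matrix(cmfs, illuminant, rgb_to_xyz):
    """Build the 3 x n matrix T so that T @ rho is the RGB of a reflectance rho.

    cmfs:        n x 3 array of CIE colour-matching functions sampled at the
                 same wavelengths as the reflectance
    illuminant:  length-n array of illuminant SPD samples
    rgb_to_xyz:  3 x 3 matrix of the working space (e.g. supplied by OCIO),
                 so no reference space is hard-coded
    """
    xyz_from_rho = (cmfs * illuminant[:, None]).T      # 3 x n, reflectance -> XYZ
    k = 1.0 / (cmfs[:, 1] @ illuminant)                # normalise so Y(white) = 1
    return k * (np.linalg.inv(rgb_to_xyz) @ xyz_from_rho)
```

The resulting `T` is exactly what the linear-solve sketch above consumes, so nothing about sRGB or any other specific space gets baked in.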

Entirely do-able. But that’s a task for someone who wants to implement it for a specific software product or in a specific language, which is beyond my immediate interest. I bet there are those out there who could do it in a heartbeat, though. If someone does, please send me a PM and I’ll link to your application-specific implementation on my site!

I’d happily port it to something less nightmarish than Matlab, but the syntactic sugar is lost on me.

With that said, the performance of Meng via Python’s minimize is excellent. It takes a very small amount of time on the upsampling.

Is there a generalized approach here that could be ported to Python’s performant solvers?
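For comparison, the Meng-style statement of the same problem written against SciPy’s general-purpose SLSQP solver might look roughly like this (the exact objective and constraints in the published method may differ slightly; `T` is the same matrix as above):

```python
import numpy as np
from scipy.optimize import minimize

def meng_style_upsample(rgb, T):
    """Spectral upsampling via a general-purpose solver, for comparison.

    Minimises the same slope-squared roughness, subject to reproducing the
    RGB triplet and keeping the reflectance within [0, 1].
    """
    n = T.shape[1]
    roughness = lambda rho: np.sum(np.diff(rho) ** 2)
    match = {"type": "eq", "fun": lambda rho: T @ rho - np.asarray(rgb, dtype=float)}
    res = minimize(roughness, np.full(n, 0.5), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=[match])
    return res.x
```

This is convenient for prototyping, but it runs a full nonlinear programme per colour, which is where the direct linear-solve approach wins on performance.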

Great, thanks!
It’s probably better we continue this by email so we don’t (further) derail this thread. :flushed:

I’ve got a build working which has the following issues:
- Volumes won’t be coloured
- Slightly slower than regular Cycles

If anyone wants to give it a try, I’ll see if I can put it online somewhere.

I’m working on getting a ‘wavelength’ node into the material editor, but I’ve still got a lot to learn to be able to do that.

Great news! I’m quite interested to see the overall difference in production scenes.

My expectation: barely noticeable

The reason being: without the engine having access to spectrally defined reflectances and emission, there is very little difference between the RGB and spectral results, because the generated spectra are very smooth. With access to spectral lights and reflectances, my guess is the result could be considerably different from the RGB one.