Will Cycles get dispersion effects?

While I agree that it’s a feature that isn’t super useful (I hardly ever use it at work with Arnold, which has pretty fast dispersion), is it really the case that a renderer “must” be spectral to do it? I didn’t think either RenderMan or Arnold was. Or does the transmission shader here interpret RGB rays as spectra? Or are these approaches not PBR?

Mental Ray was able to do dispersion 10 years ago (same with the original YafRay). There was even a patch once that would have allowed BI to do dispersion, but that by no means indicates those engines could do dispersion as accurately or with the same quality as a spectral renderer (for instance, RGB renderers might simulate a finite number of colors and combine them for dispersion effects, while engines like LuxRender can sample anywhere in the spectrum).

RGB renderers can simulate dispersion by having slightly different IORs for red, green, and blue. I believe that this is what Arnold is doing (along with their usual special sauce in the form of importance sampling wizardry). But to get true dispersion you need spectral sampling, and to get true dispersion in any reasonable amount of time you need MLT, bidirectional tracing, or some combination of the two, which instantly takes you out of the field of production-ready renderers.
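
To make the RGB approach concrete, here is a minimal, hypothetical sketch: one refracted direction per colour channel, each computed with its own IOR. The `refract` helper and the IOR values are illustrative assumptions, not any particular renderer’s internals.

```python
import math

def refract(d, n, eta):
    """Refract unit direction d through unit normal n via Snell's law.
    eta = ior_outside / ior_inside; returns None on total internal reflection."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return [eta * di + (eta * cos_i - cos_t) * ni for di, ni in zip(d, n)]

# A slightly different IOR per colour channel is the whole trick.
CHANNEL_IORS = {"R": 1.50, "G": 1.51, "B": 1.53}

view_dir = (0.0, -0.70710678, -0.70710678)  # ray hitting a flat surface at 45 degrees
normal = (0.0, 0.0, 1.0)
for channel, ior in CHANNEL_IORS.items():
    # Each channel bends by a slightly different amount; that spread is the dispersion.
    print(channel, refract(view_dir, normal, 1.0 / ior))
```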

RenderMan can do spectral rendering when using physical materials. For example, when defining a physically accurate metal, you specify the absorption color rather than the reflected color. So it is spectral, and Pixar’s documentation mentions this several times. Spectral rendering can derive the dispersion of the colors from the IOR and the wavelength of light, whereas RGB is something of a fake.
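
A rough illustration of the absorption idea: a conductor’s reflectance comes from its complex IOR n + ik, where k is the extinction (absorption) coefficient. The sketch below uses the normal-incidence Fresnel formula with approximate published values for gold; it illustrates the principle, not Pixar’s implementation.

```python
# Normal-incidence Fresnel reflectance for a conductor with complex IOR n + ik.
# A metal's colour emerges from how n and k vary with wavelength,
# rather than from a hand-painted "reflected colour".
def conductor_reflectance(n, k):
    return ((n - 1.0) ** 2 + k ** 2) / ((n + 1.0) ** 2 + k ** 2)

# Approximate values for gold at red/green/blue wavelengths.
for channel, (n, k) in zip("RGB", [(0.18, 3.07), (0.42, 2.35), (1.37, 1.77)]):
    print(channel, round(conductor_reflectance(n, k), 3))
# Prints roughly R 0.93, G 0.78, B 0.37: gold's yellow tint falls out of the physics.
```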

Might be relevant to the discussion:
https://twitter.com/Dade916/status/949006511182884866

You can achieve dispersion in a number of ways in Cycles. The first and easiest method is to add three glass shaders, each set to R, G or B, with a slightly different IOR for each. This method is quite slow because you are evaluating multiple glass shaders.
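
For illustration, that first setup could be scripted with bpy along these lines (a hedged sketch: the node identifiers are Blender’s standard ones, but the IOR offsets are made-up values):

```python
import bpy

mat = bpy.data.materials.new("RGB_Dispersion")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

output = nodes.new('ShaderNodeOutputMaterial')
add1 = nodes.new('ShaderNodeAddShader')
add2 = nodes.new('ShaderNodeAddShader')

# One glass BSDF per channel; each passes only its own colour and gets its own IOR.
channels = [((1, 0, 0, 1), 1.445), ((0, 1, 0, 1), 1.450), ((0, 0, 1, 1), 1.455)]
for i, (color, ior) in enumerate(channels):
    glass = nodes.new('ShaderNodeBsdfGlass')
    glass.inputs['Color'].default_value = color
    glass.inputs['IOR'].default_value = ior
    if i < 2:
        links.new(glass.outputs['BSDF'], add1.inputs[i])
    else:
        links.new(glass.outputs['BSDF'], add2.inputs[1])

# Chain the two Add Shader nodes so all three glass BSDFs are summed.
links.new(add1.outputs['Shader'], add2.inputs[0])
links.new(add2.outputs['Shader'], output.inputs['Surface'])
```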

The second method is to add a procedural texture and drive the RGB and IOR values through it. If you make the texture scale small enough, you’ll get the dispersion effect without actually seeing the texture. I also attempted to make it ‘kinda spectral’ by converting the texture’s greyscale values into wavelengths in order to get better colour continuity. It also means the IOR is directly correlated with wavelength.

Because you only use a single glass shader instead of three, it works out about 30-50% faster to render too.
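
Roughly speaking, the math nodes in that setup implement a remapping like the following sketch (the constants here are illustrative assumptions, not the exact values from the node tree):

```python
# Map a procedural greyscale value in [0, 1] to a visible wavelength,
# and derive a wavelength-dependent IOR from the same value.
LAMBDA_MIN, LAMBDA_MAX = 380.0, 700.0  # visible range, nanometres
BASE_IOR, DISPERSION = 1.45, 0.02      # total IOR spread across the spectrum

def greyscale_to_wavelength(g):
    return LAMBDA_MIN + g * (LAMBDA_MAX - LAMBDA_MIN)

def greyscale_to_ior(g):
    # g near 0 maps to violet, which refracts most strongly.
    return BASE_IOR + DISPERSION * (1.0 - g)

for g in (0.0, 0.5, 1.0):
    print(g, greyscale_to_wavelength(g), round(greyscale_to_ior(g), 3))
```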


That is bloody clever, haha. Thanks Moony!

Dispersion does not need a renderer to be a spectral one in order to work. Both V-Ray and Corona have dispersion, yet neither of them is a spectral renderer, so that’s a false assumption. The dispersion effect is actually quite simple: when you hit a surface, instead of one refraction ray you shoot three, a red, a green and a blue one, each with a very slightly different IOR derived from the Abbe number. It’s similar to doing glossy refraction, where you also shoot more than one refraction ray.
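
For reference, here’s a minimal sketch of that Abbe-number trick. The Abbe number is defined as V_d = (n_d − 1)/(n_F − n_C), with n_F, n_d and n_C measured at roughly 486, 588 and 656 nm, so it fixes the IOR spread between blue and red. Splitting that spread evenly around the base IOR is a common approximation; it isn’t necessarily V-Ray’s or Corona’s exact formula.

```python
def per_channel_iors(n_d, abbe):
    """Derive red/green/blue IORs from a base IOR n_d and Abbe number V_d.
    spread = n_F - n_C follows directly from V_d = (n_d - 1) / (n_F - n_C)."""
    spread = (n_d - 1.0) / abbe
    n_red = n_d - spread / 2.0   # red bends least
    n_blue = n_d + spread / 2.0  # blue bends most
    return n_red, n_d, n_blue

# Roughly BK7 crown glass: n_d = 1.517, V_d = 64.2.
print(per_channel_iors(1.517, 64.2))  # ~ (1.513, 1.517, 1.521)
```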

So no, there’s no need to fundamentally change how Cycles works. All it really takes is about a week of one skilled developer’s time. However, dispersion is such a small and niche feature that it probably doesn’t have much priority, and that’s why we don’t have it yet :wink:

It’s also very easy to replicate using nodes.

That’s not true dispersion, and what you described can already be done very easily via the node system. Real physically based dispersion does indeed require spectral sampling across far more wavelengths than simple RGB.

It’s still kinda doable. I converted a range of wavelengths into their respective RGB values using an online wavelength-to-RGB conversion tool and added these values to a colour ramp. I then use the brick texture to generate random greyscale values, which are modified using math nodes to convert them into wavelength numbers. These wavelength numbers are then ‘looked up’ on the colour ramp to drive the colour.

A similar thing happens with the IOR, but this time the math nodes control the base IOR and the dispersion amount.

By using a setup like this, you are essentially generating a continuum of colours corresponding to wavelength, and a specific IOR per wavelength too.

This IMO gives a much more pleasing and gradual dispersion effect than simply mixing raw RGB values with three IORs.

Note: I have chosen to go down the colour ramp route because the built-in Wavelength node in Blender doesn’t seem to handle violets very well.
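
Something like this piecewise-linear mapping (in the spirit of the classic wavelength-to-RGB approximations floating around online; the breakpoints are illustrative, not the exact tool used) can generate the colour ramp stops:

```python
def wavelength_to_rgb(wl):
    """Approximate RGB for a wavelength in nanometres (380-780 nm)."""
    if 380 <= wl < 440:
        return (440 - wl) / 60, 0.0, 1.0  # violet keeps a little red
    if 440 <= wl < 490:
        return 0.0, (wl - 440) / 50, 1.0
    if 490 <= wl < 510:
        return 0.0, 1.0, (510 - wl) / 20
    if 510 <= wl < 580:
        return (wl - 510) / 70, 1.0, 0.0
    if 580 <= wl < 645:
        return 1.0, (645 - wl) / 65, 0.0
    if 645 <= wl <= 780:
        return 1.0, 0.0, 0.0
    return 0.0, 0.0, 0.0  # outside the visible range

# Evenly spaced colour ramp stops across the visible spectrum:
for wl in range(380, 781, 50):
    print(wl, tuple(round(c, 3) for c in wavelength_to_rgb(wl)))
```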


Moony, could you share the .blend file?

How do you define true dispersion? What I described is what production renderers do internally too; it’s the principle of dispersed light transport. While spectral renderers may do it slightly more accurately, I’d bet you would not be able to visually tell which scene was rendered with a spectral renderer and which one with a non-spectral renderer that supports dispersion (like Corona or V-Ray). The images side by side would likely look slightly different, but by no means would one look significantly better than the other. What’s the point of creating a more complicated and more limiting renderer when the result is indistinguishable?

Have you ever noticed that there’s not a single spectral renderer widely used in production? :slight_smile:

My reason for contributing to this thread was simply to call out the false assumption that a renderer needs to be spectral in order to give users the ability to render refractive materials with dispersion at a quality level sufficient for production. It most definitely does not. It sounded more like an excuse. Believe it or not, even the renderers used by people who do high-end jewelry visualization for a living are, in the vast majority of cases, not spectral ones.

But we need to think about the future and be forward-thinking, as there will be vast changes in the world of computers in the next 5 years and beyond, hopefully in the form of AI, the end of silicon and the transition to many new paradigms of computing, e.g. neuromorphic, graphene or carbon-nanotube-based microchips, photonic chips, 5G cloud computing, quantum chips and tensor processing cores. These changes will hopefully (let’s all hope) land us with VASTLY more computing resources to throw at things like rendering engines, and when the devs know they have all this excess power, I think they will jump at the chance to make Cycles a full spectral, bidirectional or photon-mapped raytracer, probably in real time while editing is happening. :slight_smile: Because why not.

…if we had future technology that could render that much faster, we could just throw 10 or 100x more samples into a path tracer and brute force our way through it. Simpler code on more powerful hardware can be more efficient than extremely complex code.

Yes, but Cycles is not spectral, so no dispersion effects would show even with 10 billion samples.

Just be careful with the quantum-computing magic-bullet nonsense. By the time that technology is mature enough to be usable, Cycles will be hopelessly out of date and you will need a new renderer anyway. Let’s not sacrifice the usability of current technology for the faulty idea of future-proofing.

I do agree that we shouldn’t rely on hypothetical future technology as a driver for development, but there have been a few papers on importance sampling techniques for spectral effects that could be applied today (if Cycles had the ability to do the calculations).

One recent paper, for instance, showed a dramatic increase in the convergence rate for things like chromatic dispersion; such techniques should be applied as part of any initial implementation of such shading.

Shifting everything over to full spectral calculations would be quite the undertaking. It’s not just the renderer; all the materials would need to take complex IORs into account.

Spectral rendering, much like caustics, is a special effect that is rarely needed and, when it is, often easily faked. Cycles is intended to be a fast, semi-realistic animation rendering engine. Bolting on more and more features to make it more realistic will take away speed and lose focus on the intent of the engine.

If you want a super-realistic spectral rendering engine with caustics and bidirectional support, use Mitsuba. Or YafaRay. Or LuxRender. Or any other rendering engine built with that end in mind. Trying to make Cycles into something it isn’t doesn’t really help anyone.