Oren-Nayar in the Principled BSDF shader

Did they ever add oren-nayar to the principled bsdf shader?


No news since Alaska wrote about it in the PR two months ago :pensive:
It's still marked TODO there. Lukas is slowly adding features though (thin film was just added), so who knows…

Is it needed? It's in the Diffuse BSDF, which fits the purpose nicely, as "that kind" of Oren-Nayar appears to only be able to simulate very rough surfaces like powder. The shading model has since been refined, combating some of its problems. I think I'd rather have the old Disney Diffuse (used in Principled v1) back, which is also refined Oren-Nayar('ish) and more suitable for most purposes.
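For reference, "that kind" of Oren-Nayar is the qualitative single-term model from the 1994 paper, which can be sketched in a few lines (this is a generic textbook form, not Blender's actual implementation; all names are mine):

```python
import math

def oren_nayar(albedo, sigma, theta_i, theta_o, phi_i, phi_o):
    """Qualitative Oren-Nayar BRDF (1994 single-term form).

    sigma: surface roughness (std. dev. of microfacet slope, radians).
    theta/phi: polar/azimuthal angles of incoming/outgoing directions.
    Returns the BRDF value (not multiplied by cos(theta_i)).
    """
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_o)
    beta = min(theta_i, theta_o)
    return (albedo / math.pi) * (
        A + B * max(0.0, math.cos(phi_i - phi_o)) * math.sin(alpha) * math.tan(beta)
    )

# sigma = 0 reduces to plain Lambert (albedo / pi):
print(oren_nayar(0.8, 0.0, 0.5, 0.5, 0.0, 0.0))  # ≈ 0.8 / pi
```

At sigma = 0 the A term is 1 and B vanishes, so you get exact Lambert back; raising sigma flattens the head-on response and boosts retroreflection.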

There was no proper diffuse roughness in the old Principled Shader; the old "roughness" was a hack that added a velvet shader on top of the diffuse (with the parameter determining how bright the color is). Load up an old build and set the roughness to something higher than 1 to see what I mean.

Because of that, the Principled Shader alone could not produce convincing surfaces for materials like stone the way you could with the individual building blocks. Oren-Nayar really makes a difference in more cases than you'd think.


Oren-Nayar would be nice indeed. But fixing bump mapping would be even more important. Not sure if that is worked on already.


You sure? Do you have a diffuse+velvet setup that produces the exact same results as Principled v1? I still have 3.6 around (for import stuff). I was always under the impression that Principled - which was initially called Disney Principled - was based on Disney Diffuse, which also considers exit IOR. Of course, given they could never get the GTR distribution correct for the topcoat lobe (it now uses GGX), I wouldn't be surprised if weird shortcuts were taken.

I'm not saying Oren-Nayar doesn't have its uses; it's just not what you'd want for rough diffuse surfaces in most cases. Disney Diffuse is much closer to the MERL database. And the Oren-Nayar we have in Diffuse is kind of outdated; some of its problems have been fixed these days (e.g. Hanrahan-Krueger).

I think it would be wise to implement whatever other renderers have implemented to represent Disney Diffuse (rough diffuse) for the sake of some unity, rather than go with an outdated Oren-Nayar.


This ?

This seems to have more to do with SSS and the future Thin Wall function.

No. Oren-Nayar's main problem is related to the darkening of forward-scattered light. Think of the crescent of a backlit moon. I'll see if I can find an example paper someday, but right now it looks like search engines are having issues.

Disney's Principled Shader, though, prioritizes the artist's ability to direct the look of the scene over physical realism; it even says so in the original paper. There have been numerous threads about the issues people have had with the older implementation when they wanted to create a realistic scene instead of something akin to a Blender Open Movie cartoon.

In addition, I'm not sure the darkening produced by Oren-Nayar is really that much of an issue these days, with render engines no longer designed around simple RGB shading and lighting in a standard 0-1 sRGB VGA colorspace (even though early top-shelf engines like Mental Ray still wowed us in the hands of a skilled artist).


IIRC this is the exact same problem as with GGX: the original model assumes a single bounce.
There is an upgraded version of Oren-Nayar that's essentially the same type of upgrade as Multiscatter GGX, fixing all of that.
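The single-bounce energy loss is easy to demonstrate with a small white-furnace-style test: integrate the classic Oren-Nayar BRDF (the qualitative 1994 form again; this is a sketch, not Cycles code) times cosine over the outgoing hemisphere and see how far below 1 it lands. A multiscatter-style upgrade would put that missing energy back:

```python
import math, random

def oren_nayar(theta_i, theta_o, phi_diff, sigma, albedo=1.0):
    """Classic qualitative Oren-Nayar BRDF (1994 single-term form)."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha, beta = max(theta_i, theta_o), min(theta_i, theta_o)
    return (albedo / math.pi) * (
        A + B * max(0.0, math.cos(phi_diff)) * math.sin(alpha) * math.tan(beta))

def directional_albedo(theta_i, sigma, n=100_000, seed=1):
    """Monte Carlo estimate of the directional albedo: the fraction of
    incoming energy reflected, integrating f * cos over the hemisphere
    with cosine-weighted samples (pdf = cos(theta)/pi, so the estimator
    is simply mean(f) * pi)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        theta_o = math.asin(math.sqrt(rng.random()))  # cosine-weighted polar angle
        phi = 2.0 * math.pi * rng.random()
        total += oren_nayar(theta_i, theta_o, phi, sigma)
    return total / n * math.pi

# A white (albedo = 1) Lambert surface returns everything...
print(round(directional_albedo(0.3, sigma=0.0), 3))  # ~1.0
# ...while classic Oren-Nayar at high roughness loses energy:
print(round(directional_albedo(0.3, sigma=1.0), 3))  # noticeably below 1.0
```

That missing fraction is the multiply-scattered light the single-bounce model simply drops, analogous to what Multiscatter GGX restores for the specular case.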

IIRC that’s this:


Maybe it's related, I wouldn't know. But that paper is pretty recent (2021). The problem of Oren-Nayar not matching measured MERL data has been known since pretty much forever, and is what Disney attempted to fix way back when (2012).

Turns out Hanrahan-Krueger is indeed related to subsurface (I thought it wasn't), but it's apparently also applicable to diffuse (Oren-Nayar) on its own. See figure 8 on page 7.

Just my own test using 3.5 for Principled v1:

If Velvet is used, I think it's used differently than I'm using it here; I'm simply adding diffuse 0.8 with velvet 0.2. The point remains though: Oren-Nayar doesn't match measured data from the MERL database, and that needs to be addressed before it's added to Principled v2. I expect all other renderers to have some sort of fix for this.


Yeah, it’s quite recent. I’m hoping this shader gets supported.

It’s basically what happens if you go into the limit of infinitely many infinitely small spheres, each of which has a Lambert shader.
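That limit-of-Lambertian-spheres intuition can be checked against the classic phase curve of a single Lambertian sphere (the well-known Lambert phase law; a rough intuition only, not the BRDF from the paper). Unlike a flat Lambert patch, a mostly-backlit sphere still sends some light toward the viewer, which is exactly the crescent-moon case mentioned earlier:

```python
import math

def lambert_sphere_phase(alpha):
    """Relative brightness of a Lambertian sphere vs. phase angle alpha
    (0 = fully lit 'full moon', pi = fully backlit), normalized to 1 at
    alpha = 0. This is the classic Lambert phase law."""
    return (math.sin(alpha) + (math.pi - alpha) * math.cos(alpha)) / math.pi

def flat_lambert(alpha):
    """A flat Lambert surface facing the viewer, lit from angle alpha:
    brightness is just max(0, cos(alpha))."""
    return max(0.0, math.cos(alpha))

# Past 90 degrees the flat Lambert patch goes completely dark, while a
# Lambertian sphere (think crescent moon) still scatters light forward:
for deg in (0, 45, 90, 135, 170):
    a = math.radians(deg)
    print(deg, round(lambert_sphere_phase(a), 3), round(flat_lambert(a), 3))
```

A cloud of such spheres keeps that forward-scattered contribution, which is plausibly where the soft edge brightness of this shader comes from.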

There are two versions of it: an analytic "fully correct" one (top) and a faster one that's only an approximation (bottom).

And here it is (third row) compared to a few others: Lambert and Oren-Nayar on top, mirror sphere (I think that's equivalent to GGX at max roughness) on the bottom.

It automatically gets a slight brightening at the edges which, I think, the Lambert+Velvet approach attempts to approximate.

It's interesting how much flatter and dustier things look with this shader, though the mirror-sphere variant is fairly similar. Mostly the difference between the two seems to be that the Lambert-sphere BSDF has a very defined edge highlight.


What’s the point of this? They all look similar. Which one is fastest to render?


I mean, they are all diffuse/albedo. And Lambertian is the fastest, as it's the simplest. But it looks very different. Check out the close-up details of the dragon.



Lambert Sphere:

Lambertian has the clearest details but also has the lowest saturation.

And on the neck:


Mirror Sphere:

Lambert Sphere:

Mirror Sphere and Lambert Sphere have a much more defined rimlight. Lambert-Sphere also reaches higher saturation levels than Mirror Sphere.

Mirror Sphere and Lambert Sphere are effectively approximating the appearance of dust or porous materials. In the case of Mirror Sphere, the base material is mirror-reflective, as if you had a bunch of randomly piled microscopic beads, whereas the Lambert-sphere variant amounts to microscopic beads that are themselves Lambertian. Both are effectively equivalent to very, very shallow SSS with, I think, very strong back-scattering.

Certain types of rock or other materials will look much more like these shaders than a regular Lambertian.


I mean, OK, I see the difference: some lighter, some darker, some shifting between lighter and darker shades depending on viewing angle, more saturation, less saturation… but I would adjust those properties by looking at my reference, and as far as I know I can do that with any one of those shaders. So there is probably a difference in how it looks when illuminated from different angles, but the difference is so subtle. Doesn't seem to matter much, does it?


These are view-dependent effects that are literally unachievable with Lambertian directly.
As CarlG mentioned above, you can hack some of that through Velvet, but it’s not the same.

And yes, it is subtle, but if it's exactly what you need, then it's good to have. It looks very nice imo. Regular Lambert looks pretty odd at times.


Thanks, such tests help a lot. Reminds me of some Pixar RenderMan papers describing the effect of very diffuse surfaces.

Yes, but we have more than just the shaders alone in the Shader Editor. I wonder what exactly those unachievable effects are. View-dependent brightness or saturation variation is easily achievable and very easily controllable. Sure, it's nice to have different shader models, why not? But I just don't see such a big deal, and I also think there's plenty of less subtle stuff for the devs to work on as well. It would be interesting to explore whether this intuition of mine is right, or whether I'm mistaken and you really cannot replicate some of these effects any other way. At the moment, I don't see anything in the examples that strikes me as impossible to replicate with current shaders.

Sure, you can hack together anything at all with some random Fresnel or Light Path shenanigans. It's going to be difficult to get a close result that still achieves desirable properties such as energy conservation, though.

Shaders also operate on a level that you can't actually reach with light paths and the like.
For instance, you can approximately emulate "Fresnel with roughness" by hacking the normal you put into the Fresnel node to become more and more parallel to the incoming direction as roughness increases, and it will give you decent results.
But what you'd actually have to do is get the normal of the simulated microgeometry that GGX is built on top of, and that's data you simply don't have direct access to. No node you might pick can rely on it. The Principled Shader, being a holistic shader, does take this into account.
It is impossible to exactly rebuild Principled from the various shader components, light path nodes, and such that we currently have. Approximate? Yes. Match? No.
The Mix and Add shaders don't mix things with such underlying data in mind.
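The "Fresnel with roughness" hack can be sketched numerically. One common real-time-style approximation (not what Principled does internally; `f0` here is the assumed normal-incidence reflectance) simply caps the grazing reflectance as roughness grows:

```python
def schlick_fresnel(cos_theta, f0):
    """Classic Schlick approximation for Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def fresnel_with_roughness(cos_theta, f0, roughness):
    """Roughness-aware variant: the grazing reflectance is clamped to
    max(1 - roughness, f0), emulating how microfacet roughness softens
    the Fresnel edge. A hack, not a microfacet-correct result."""
    f90 = max(1.0 - roughness, f0)
    return f0 + (f90 - f0) * (1.0 - cos_theta) ** 5

# At roughness 0 it matches plain Schlick; at high roughness the
# grazing-angle boost is strongly damped:
print(fresnel_with_roughness(0.1, 0.04, 0.0))  # == schlick_fresnel(0.1, 0.04)
print(fresnel_with_roughness(0.1, 0.04, 0.9))  # much closer to f0
```

Like the bent-normal trick, this only mimics the average effect; a microfacet model evaluates Fresnel per micro-normal, which no node setup can reproduce exactly.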

Similarly, with a shader like this you are technically simulating light randomly bouncing through thin but complicated tunnels within your porous material.
A good Velvet shader actually does something related: it takes that kind of statistical limit for tiny but dense surface hair (i.e. velvet) rather than clumps of very fine dust. That's why you can use it as an approximation; it's a related effect, but it's just not the same.

And with some crazier, more involved light path / Fresnel / normals trickery, you can probably get close with some pretty complex setup that will likely have a bunch of ugly edge cases you'll have to account for.

Shaders are just so much easier and more robust from an end user perspective.


Match to what? Reality? No. Some new shader model, sure. :smiley: