Blender 4.x Cycles Photorealism Improvements...still not quite there for me

I spoke with Alaska about this change, and he is already working on submitting a pull request since it appears to be a straightforward code modification.

Cheers

15 Likes

Some physically based effects that the Principled BSDF has are tricky to implement in an uber shader, like the angular glossiness or the rough Fresnel. You can approximate both by following these videos:

1 Tip to INSTANTLY BETTER Materials - Blender 2.8 Tutorial (Eevee & Cycles) (youtube.com)

Blender Physically Based Shading: Fresnel (youtube.com)

but they are not as accurate as the versions implemented in the Principled BSDF.
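For reference, the kind of thing those videos approximate can also be written down compactly. This is only a sketch (my own summary, not the exact formulas the videos build node by node): Schlick's Fresnel approximation, plus a commonly used roughness-damped variant that keeps grazing reflections from blowing out on rough surfaces.

```latex
% Schlick's approximation of Fresnel reflectance
F(\theta) \approx F_0 + (1 - F_0)\,(1 - \cos\theta)^5

% A common roughness-aware variant (r = roughness); an assumption here,
% not necessarily what the linked videos implement
F_{\text{rough}}(\theta) \approx F_0 + \bigl(\max(1 - r,\ F_0) - F_0\bigr)\,(1 - \cos\theta)^5
```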

But in general I think the one important thing the Principled BSDF is still missing is the Oren-Nayar model, since the developers have already fixed and improved the problems the Principled BSDF had in older Blender versions.

A separate roughness input just for the diffuse component would be one way to add it while giving more artistic control.
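For anyone curious what the model actually is, this is the standard qualitative Oren-Nayar BRDF as usually quoted from the original paper (not necessarily the exact form Cycles implements):

```latex
f_r(\theta_i, \theta_r, \varphi_i - \varphi_r)
  = \frac{\rho}{\pi}\Bigl(A + B\,\max\bigl(0,\ \cos(\varphi_i - \varphi_r)\bigr)\,\sin\alpha\,\tan\beta\Bigr)

A = 1 - 0.5\,\frac{\sigma^2}{\sigma^2 + 0.33},\qquad
B = 0.45\,\frac{\sigma^2}{\sigma^2 + 0.09},\qquad
\alpha = \max(\theta_i, \theta_r),\qquad
\beta = \min(\theta_i, \theta_r)
```

With σ = 0 it collapses back to plain Lambert (ρ/π), so a separate diffuse roughness could default to zero without changing existing scenes.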

Here is another comparison of Lambertian and Oren-Nayar. I adjusted the exposure of the Oren-Nayar render because in this case it looked a bit darker.

Lambertian

Oren Nayar


This is the custom shader I made for testing Oren-Nayar in this scene:
Oren Nayar.blend (1.1 MB)
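For anyone who doesn't want to open the .blend: below is a rough Python sketch of the same general idea. This is an assumption about how such a test material could be built, not the node tree from the attached file. It relies on the fact that Cycles' Diffuse BSDF node already switches from Lambert to Oren-Nayar once its Roughness input is above zero, mixed here with a glossy layer through a Fresnel node.

```python
import bpy

# Hypothetical test material, not the setup from the attached .blend.
mat = bpy.data.materials.new("OrenNayarTest")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

output  = nodes.new('ShaderNodeOutputMaterial')
mix     = nodes.new('ShaderNodeMixShader')
fresnel = nodes.new('ShaderNodeFresnel')
diffuse = nodes.new('ShaderNodeBsdfDiffuse')   # Roughness > 0 -> Oren-Nayar in Cycles
glossy  = nodes.new('ShaderNodeBsdfGlossy')

diffuse.inputs['Color'].default_value     = (0.8, 0.8, 0.8, 1.0)
diffuse.inputs['Roughness'].default_value = 0.5   # the "diffuse roughness" discussed above
glossy.inputs['Roughness'].default_value  = 0.3
fresnel.inputs['IOR'].default_value       = 1.45

links.new(fresnel.outputs['Fac'], mix.inputs['Fac'])
links.new(diffuse.outputs['BSDF'], mix.inputs[1])   # Mix Shader: first shader slot
links.new(glossy.outputs['BSDF'], mix.inputs[2])    # Mix Shader: second shader slot
links.new(mix.outputs['Shader'], output.inputs['Surface'])
```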

17 Likes

Sometimes it’s a renderer issue and sometimes it’s a skill issue.

There are cases where it's indeed a renderer issue, but that only applies to either realtime renderers or poorly written renderers that get the basic rendering math wrong, such as the renderer integrated into Modo, for example.

Cycles is not that. Cycles is a PBR Path Tracer that does all the major things like light transport, shading and tone mapping right. Quality-wise, Cycles is on the exact same level as Corona, V-Ray, Mantra/Karma, Octane, and so on…

Yes, if you set up scenes identically in all these renderers and compare, you will see minor differences in shading and illumination, but those differences will not make your image more or less realistic. If you show A and B images to any random person, they will tell you they look slightly different, but not that one looks more or less realistic than the other.

What renderers like Octane and Fstorm do a bit better is that their default settings, especially the postprocessing/tonemapping settings, emulate a realistic camera response, including not only tone mapping but also secondary optical effects such as noise, glare, and slight blur/sharpen, instead of a neutral, pixel-perfect CG image output.

So if someone says for example Octane looks better than Cycles, it’s just a matter of defaults, not really one renderer being more realistic than the other.
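To make the "it's mostly defaults" point concrete, here is a minimal sketch of the kind of thing you'd change in Blender to move toward that camera-like response. The values are placeholders, not a recipe:

```python
import bpy

scene = bpy.context.scene

# Color management: a filmic/camera-like view transform instead of a neutral one.
# AgX is the default in Blender 4.x; Filmic is still available.
scene.view_settings.view_transform = 'AgX'
scene.view_settings.exposure = 0.0   # tweak per scene
scene.view_settings.gamma = 1.0

# A touch of glare in the compositor as a stand-in for the secondary optical
# effects mentioned above.
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl    = tree.nodes.new('CompositorNodeRLayers')
glare = tree.nodes.new('CompositorNodeGlare')
comp  = tree.nodes.new('CompositorNodeComposite')

glare.glare_type = 'FOG_GLOW'
glare.threshold = 1.0

tree.links.new(rl.outputs['Image'], glare.inputs['Image'])
tree.links.new(glare.outputs['Image'], comp.inputs['Image'])
```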

What's most frustrating is seeing people think that if they get some new BSDF model in the Principled shader, that's what will magically transform their images from looking unrealistic to realistic. No, it won't.

If you take a non-photorealistic scene which uses, let's say, some older Ashikhmin-Shirley BRDF, and replace it with a hypothetical new BSDF that models real-world material shading response with 100% accuracy, the resulting image would look only slightly different, probably with some slight brightness changes at grazing surface angles. But it won't magically replace years of accumulated knowledge in modeling, texturing, shading, lighting, postprocessing and scene dressing.

These images were done in 2006-2007:
https://denko.artstation.com/projects/vyLE
https://denko.artstation.com/projects/4dbq
https://denko.artstation.com/projects/rPRe
…17-18 years ago. Back then, renderers, and especially their shading models, were super primitive. V-Ray still used Blinn shading back then. Despite that, these images look quite realistic to this day. You can get to high realism with good skills in the aforementioned areas, but you can't get to high realism without those skills, even if you had a perfect, 100% accurate BSDF shading model.

TL;DR:
In 90% of cases it's a skill issue.
In 10% of cases it's a renderer issue, but realtime game-engine renderers aside, this only applies to truly terrible legacy renderers like Modo, Mental Ray, Brazil, Lightwave, FinalRender, etc… And most of those are dead anyway.

14 Likes

I don't understand what you mean when you write "before brute force GPU rendering".
I use just the CPU daily, and the settings simplification was done for the CPU in the same way… actually, GPU rendering in V-Ray was (and probably still is) a bit behind the CPU in terms of features.
The big change was hiding the Irradiance Map (I think it's only kept for compatibility with older scenes), setting brute force as the primary GI engine, hiding the per-material samples, and reworking the adaptive antialiaser to use just a noise threshold as the main quality setting.
Then, obviously, there are more settings for fine-tuning specific scenarios but, more or less, you play with the noise threshold and you're happy… much the same as we do with Cycles.
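For the Cycles side of that comparison, the equivalent "one knob" workflow is roughly the following (a minimal sketch; the values are just examples):

```python
import bpy

scene = bpy.context.scene

# Adaptive sampling: the noise threshold is the main quality knob,
# the sample count is just an upper bound.
scene.cycles.use_adaptive_sampling = True
scene.cycles.adaptive_threshold = 0.01   # lower = cleaner, slower
scene.cycles.samples = 1024              # ceiling; adaptive sampling stops earlier
scene.cycles.use_denoising = True
```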

I think V-Ray's big advantage today is the Light Cache as a secondary GI engine… if the devs develop a similar caching system, I think Cycles would gain a powerful tool for improving final quality with nice render times.

Another impressive comparison of how the same renderer with the same textures, geometry, lighting and postprocessing (if any) can look totally different (i.e. better) by only altering the diffuse component of some (or most?) surfaces. Stunning.
And it looks like there could be movement in terms of “Oren-Nayar in Principled” soon:

2 Likes

It's just a term for setting global samples and a noise threshold and hitting render. If you remember the V-Ray pre-GPU days, you could still do this by setting samples to 100 and a low noise threshold, with both primary/secondary GI methods set to brute force. This avoided the need to individually tweak all the light/scene/material samples.

1 Like

Totally different? O_o Those images are nearly identical. If you put them in front of a random person, they will tell you the images are very slightly different, but they would be hard-pressed to tell you which one is more realistic. They are all equally realistic.

You know what else could make the same scene in the same renderer look "totally different" by your standards? Not changing any shading model/BSDF of the materials, but just randomly shifting the roughness and diffuse values of all the materials in the scene by about 0.05 :slight_smile:

3 Likes

We're talking about realism and fake-looking renders, where tiny nuances come into play.
The two kitchen renderings are extremely different in that sense. It’s not like the Lambert one looks totally fake and the Oren-Nayar was the ultimate goal of realism.
But they’re different enough to make you take a closer look and think “yes, the second one offers detail that wasn’t visible in the first one.”
And no, it's not about specular roughness and not about diffuse values, but about diffuse roughness, which is entirely missing from the first rendering. This is not random, but a step closer to a more realistic representation of physical properties, just like displacement is much closer to how a "real" surface behaves than a bump map is.
Nonetheless: having Oren-Nayar for the diffuse is low-hanging fruit that can help a lot, and it can be ignored entirely and behave like Lambert if you'd rather achieve your final look by shuffling random roughness and diffuse values around.

7 Likes

Can you imagine if the 'final missing piece' required for photorealistic rendering was something that has been in Cycles the whole time? Even BI had Oren-Nayar almost since the NaN days. It kind of reinforces my theory that a big part of it is people using the Principled shader and not changing the render settings (as the former was originally more for artist directability than realism, and the latter is tuned out of the box more for speed than for realistic lighting).

Though you might find that the Cycles Oren-Nayar is actually a somewhat more complete implementation than in other renderers; most render engines do not use the full Oren-Nayar model because of how much it slows down rendering. It was added through a patch submitted by a new developer not long after Cycles became part of master.

2 Likes

Honestly, I thought your example was some kind of '90s fractal 'landscape' rendering in psychedelic colors.
Not sure there’s an intrinsic telltale hint of a depiction of ‘reality’ here.

greetings, Kologe

1 Like

The kitchens are different, but do we have a "certificate of photorealism" to conclude which one is "more photorealistic"? Or do we just say which one is "better"?
And which one is better? There will be 10 people and 10 opinions, especially if you don't show them side by side but one after the other.
And if you then say, "the one that is worse in your opinion was done in Octane", a person's brain will turn over; they won't sleep and will change their opinion within two days.

edit
I took a closer look at the kitchens and I have to say, I like the one with higher contrast and brightness. It's hard to determine the merit of the BSDF in this case.

2 Likes

"More photorealistic" might be something that can be measured or evaluated in some way, but "better" is not something worth debating. It is extremely subjective and not worth spending endless hours on unless that activity itself brings you some joy. And of course any bias has to be put aside as well.

1 Like

Photorealism and physical correctness are different things. How do we measure photorealism?

It boils down to definitions, doesn't it? Photorealism by its literal definition is a pretty useless term, and most people mean "physical realism" by it. So in most cases the two terms are interchangeable. Let's not get philosophical, as mentioned above.

1 Like

Hmm. Then we should address reality. What is a circle? An infinite number of points. What is the math behind ANY renderer? I think it's a little different. In that vein, none of it is physically correct.

and what does “physical realism” mean?

But here are two, let’s include them in the discussion, shall we?

https://www.mitsuba-renderer.org/
https://luxcorerender.org/

Can we conclude which BSDF is physically realistic and which one we like better? People can define the second, but how will it correlate with the first?

Realism: a highly detailed model will be more realistic than a low-detail one. Not always, but in general. Thus Nanite gives us photorealism, but without a path tracer. Likewise, displacement without subdivision in Octane makes it possible to obtain a highly detailed surface. These are the facts.

Good assets give realism, as does good work with the light.
We must believe what we see; we don't care how it was obtained.
And that's why I wrote many times above that the renderings are different, as are the workflows behind them. But this does not come down to their technical limitations.

Unless we're talking about laboratory tests comparing pixel by pixel.

Otherwise, the path tracer does not limit us. The question may be which is better, or with which it is easier to achieve results. But no one here discusses HOW it works; they just ask "does it work?"

A photo can be without blur, with everything in focus; with our eyes we focus in the opposite way.
In photography, drawing, and 3D we use composition and perspective; if they're broken, we won't believe the image. And then people say "the balls have a lighter right side, so Cycles is not photorealistic." Someone compare a few other renderers; I'll give you this scene. Maybe we will get two more different results, and then what will be the reference?

1 Like






1 Like

I've seen more of these than I ever wanted, and I've never seen a fractal shader that can create these patterns and this kind of distribution of contrasts and values.
Simple proceduralism without physics-based simulations will never get you there.
Even with complex simulation tools and other complex rule-based procedural methods layered on top, you would still have to use your artist's brain and exercise artistic control, making artistic decisions.

Even without color, there is more than meets the eye.
How much of your time, effort and expertise would it take to painstakingly recreate these shapes alone in black and white, or something equally detailed that appears just as natural?

Yes, but we can get closer each and every day by becoming more skilled technicians and more experienced artists, and by using better and faster tools.

3 Likes
1 Like

Everything would be fine, if it were not for a few "buts".
Comparing Corona and Octane, for example, one can arrive at surprising results due to sampling alone. And is this a test of the artists, or of their tools?

But I don't want to argue or prove it.
I simply oppose the idea that such comparisons carry any evidentiary weight. Even a non-digital drawing can be photorealistic.

The problem is that this discussion goes as follows:
Octane is better → Octane is realistic → Octane is correct
or
Octane is realistic → Octane is better → Octane is correct, etc.

BSDF 1 is better than BSDF 2 → BSDF 1 is more realistic than BSDF 2
or
BSDF 1 is more realistic than BSDF 2 → BSDF 1 is better than BSDF 2

I don't see the "measurements" of realistic, photorealistic, or physically realistic yet. Everything ends up coming from "better", "like", "don't like", and then smoothly devolves into "it's physically realistic because I LIKE it".

4 Likes