@devs > Biased Cycles

There is the upcoming Disney shader (which is essentially that uber-shader), but it hasn’t been committed to master yet, and the developer doesn’t seem to have much time at the moment.

People talk about Cycles shading lacking effects like aberration, but you can actually approximate those effects thanks to the building-block approach the engine uses. A whole world of possibilities opens up once you discover how to abuse vector and normal data to create effects that would otherwise be impossible (such as aberration and glinting).
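To make that concrete, here is a minimal sketch of the classic RGB-dispersion trick (my own illustration, using 2.7x-era Blender Python API names; `make_dispersion_glass`, `base_ior` and `spread` are invented for the example): three Glass BSDFs with slightly offset IORs, tinted red, green and blue so their sum restores white, combined via Add Shaders.

```python
import bpy

# Build a node group approximating chromatic aberration/dispersion by adding
# three Glass BSDFs whose IORs are offset per color channel. A sketch, not a
# physically exact dispersion model.
def make_dispersion_glass(base_ior=1.45, spread=0.02):
    group = bpy.data.node_groups.new("DispersionGlass", 'ShaderNodeTree')
    nodes, links = group.nodes, group.links
    group.outputs.new('NodeSocketShader', 'BSDF')
    out = nodes.new('NodeGroupOutput')

    glasses = []
    for color, offset in [((1, 0, 0, 1), -spread),   # red bends least
                          ((0, 1, 0, 1), 0.0),
                          ((0, 0, 1, 1), spread)]:   # blue bends most
        g = nodes.new('ShaderNodeBsdfGlass')
        g.inputs['Color'].default_value = color
        g.inputs['IOR'].default_value = base_ior + offset
        glasses.append(g)

    # Sum the three tinted glasses; R+G+B adds back to full energy.
    add1 = nodes.new('ShaderNodeAddShader')
    add2 = nodes.new('ShaderNodeAddShader')
    links.new(glasses[0].outputs['BSDF'], add1.inputs[0])
    links.new(glasses[1].outputs['BSDF'], add1.inputs[1])
    links.new(add1.outputs['Shader'], add2.inputs[0])
    links.new(glasses[2].outputs['BSDF'], add2.inputs[1])
    links.new(add2.outputs['Shader'], out.inputs['BSDF'])
    return group

make_dispersion_glass()
```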

Most people by now simply use Cycles by building up their own library of node groups (which makes things much easier and faster).

It’s all fake anyway. If you wanted to be anywhere near physically correct, you’d have to model everything as a volume to begin with, since every dielectric material involves some degree of subsurface scattering.

Anyhow, the initial question was “Why doesn’t anyone write an irradiance cache for Cycles?” and the answer is “because it takes someone with skill and time, and the ones who have both are already busy”.

I had a long story, but I’ll make it short :slight_smile:

At the moment, for me personally, physically correct means that on the specular level there is no faking of reflected highlights from light sources; they’re just regular reflections of particularly bright spots in the environment (even though our models, like GGX, still don’t do everything correctly). 20-25 years ago, physically correct meant we had actual raytraced reflections and shadows rather than some fake reflection or shadow map. Remember those? :smiley:

Specular is fairly unified nowadays (disregarding how photons actually work), but the other side is still a mess. We use diffuse+specular for most things for efficiency reasons, but in physics diffuse doesn’t really exist: everything is really SSS, or better yet volume based. Perhaps at some point even separating specular and volume isn’t physical enough, and we have to define materials by connecting atoms. But no, that wouldn’t work either, since we can’t simulate quantum effects (to take it to the extreme).

So even if we have approximations (closures) and fakes (node based) that can do a lot of things, we still lack a whole bunch (thin film iridescence and diffraction, as well as volume caustics) which cannot be faked, because the required information isn’t visible to the node system and there’s no way to manually trace rays.

Physically correct is entirely in the eye of the beholder. I doubt a gem maker, physicist, or optician would agree with any of our arguments for Cycles being physically correct :slight_smile: And my own standpoint has certainly evolved over the years.

The Add shader by itself doesn’t break anything, but it has the potential to. Energy conservation doesn’t have to happen at the shader stage; it could happen in the colors fed to the shaders.
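As a toy illustration of that idea (plain Python; the function name is invented for the example), you can pre-scale the two colors so that a Diffuse+Glossy combination through an Add Shader can never reflect more than 100%:

```python
# Scale diffuse and glossy colors so their per-channel sum never exceeds 1,
# keeping an Add Shader combination energy conserving.
def conserve(diffuse_color, glossy_color):
    worst = max(d + g for d, g in zip(diffuse_color, glossy_color))
    scale = 1.0 / worst if worst > 1.0 else 1.0   # only ever scale down
    return ([c * scale for c in diffuse_color],
            [c * scale for c in glossy_color])

d, g = conserve([0.8, 0.8, 0.8], [0.5, 0.5, 0.5])
# d + g now sums to exactly 1.0 per channel.
```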

My main issue with the Fresnel node is that it doesn’t have a roughness input (one common workaround is sketched below).
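For reference, one widely used workaround from the game-PBR literature (not a Cycles built-in; in Cycles you’d rebuild it from Layer Weight and Math nodes, or OSL) is to pull the grazing reflectance down as roughness rises in Schlick’s approximation. A plain-Python sketch:

```python
# Schlick's fresnel approximation with a roughness-dependent grazing term:
# rough surfaces never reach a fully white rim.
def schlick_fresnel_rough(cos_theta, f0, roughness):
    grazing = max(1.0 - roughness, f0)
    return f0 + (grazing - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel_rough(0.1, 0.04, 0.0))  # smooth: strong fresnel rim
print(schlick_fresnel_rough(0.1, 0.04, 0.8))  # rough: much weaker rim
```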

I think that steering the discussion towards “everything should be a volume, with only a density parameter. THIS is being physical!” is taking it a little too far. (It’s like saying you don’t listen to real music unless you drop your CDs and go to a live event.) It’s obvious to anyone into CG that render engines need simplifications of real-world phenomena. So we have the diffuse BSDF, glossy, etc. The coder’s job is to make them good. Then, coming to Cycles, the artist’s job is to combine the BSDFs to mimic real-world materials. Nothing new to Cycles users.
Cycles has its shortcomings of course, but as AD says, there’s a plethora of tricks thanks to its building-bricks approach. Also, the infamous fresnel “bug” has its workaround.
I’m all in favour of uber-shaders, but surely not those with a kilometre-long list of parameters!

I don’t care how other people “define” it, but I can see differences in lighting and materials when comparing, for example, Cycles and Luxrender. Maybe it’s the settings, I don’t know. But what I do know is that when I render something with Luxrender, the lighting is natural and makes the scene look good right away, no special tweaking required. Then again, material editing for Lux is not that great.

I don’t know what settings you’re using in LuxRender, but a significant difference between Lux and Cycles is that Lux has built-in tone mapping, which is an essential component of a natural look. In Blender, tone mapping is done in the compositor, not in the render engine.

It’s a good thread as a start. It’s also a valid question whether it’s possible to add biased solutions to Cycles.

To begin with, I think it’s important to understand that biased solutions are geared toward producing fast and clean GI, whereas unbiased solutions are geared toward ease of use and a higher rate of efficiency. So to answer your question, I think Cycles development is already on the right track. There is no need to add a biased solution to it, because production renderers that can do both are very rare and difficult to build.

If you take a look at Red Shift or VRay: even though those renderers have a brute-force mode, it is never optimal, and they always lose performance compared to Arnold or 3Delight, which are pure path tracers. So at the end of the day, every renderer had better stick to what it is really good at. Having too many solutions in one renderer tends to overcomplicate a production pipeline, because there are too many ways of doing things, whereas in production environments consistency is key.

And yes, Red Shift is insanely fast, but only up to a point. I’m currently doing a major production using Red Shift and, oh boy, despite all the marketing about the renderer being super fast and cheap, let me inform you that Blinn’s Law still applies. Red Shift can be very fast at a certain quality and complexity; once we had to deliver that quality threshold with bigger and more complex environments, the only solution was to add more hardware for render power. So much for fast and cheap, because eventually both biased and unbiased renderers can deliver good images given proper render power and proper knowledge of how to use them optimally.

It is very important to understand that there is no insanely fast renderer out there; depending on the kind of production you do, hardware requirements and an efficient pipeline are still king. For example, how long does it take Red Shift to render 2K images full of displacement, motion blur, SSS, tons of textures and physical lights on an NVidia Quadro K4000 with 3GB, versus rendering in Arnold or Cycles on dual 6-core Xeons with 16GB of memory? In that matchup Cycles or Arnold will clearly win, due to the higher-spec hardware. For Red Shift to catch up, you would have to discard the Quadro and buy a GTX 1080, which costs somewhere around $700, and on top of that you still need 16GB of RAM to be ready for any out-of-core memory situations. By then we unconsciously feel that Red Shift is fast, but actually we made it fast by upgrading our hardware.

To summarize, adding a biased mode to Cycles is not a good idea; there are always more ways to improve Cycles’ path-tracing capabilities at different levels. If you want biased solutions, feel free to use Red Shift, VRay or Yafaray, which clearly focus on and excel in that area.

Cheers…

It’s not physically correct until you start modeling matter as a function of time; until then I’m boycotting Cycles.

There are ‘some’ tonemapping options available at render time (curves, looks, gamma, exposure, display device, color modes, and white point/black point). The thing to note is that the curve does not have to reach a value of 1 at the right side of the window (so compressing highlights is possible). Personally, I used those settings to get a tonemap that’s a bit more realistic than what comes out of the box.

Now it becomes a different story if you need something like Reinhard tonemapping (which cannot be applied automatically like in Lux).
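For anyone who wants to roll it by hand anyway, global Reinhard is just L' = L / (1 + L) applied via luminance; a plain-Python sketch of the operator (my own illustration, not a Blender API):

```python
# Global Reinhard tone mapping on a linear RGB pixel.
def reinhard(rgb):
    # Rec. 709 luminance
    lum = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
    # L' = L / (1 + L), so every channel is scaled by 1 / (1 + L)
    return [c / (1.0 + lum) for c in rgb]

print(reinhard([4.0, 2.0, 1.0]))  # highlights compressed toward 1.0
```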

Please try this other approach and give me feedback:
Set the display device to sRGB and the view to Log, with Look and Curves set to None; then set gamma = 0 and raise the exposure until you burn bright materials (not lights); then raise the gamma until you are pleased with the overall contrast. Optionally you can fine-tune with curves, but Looks get totally screwed by this.
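For convenience, the same recipe expressed through Blender’s color-management properties (2.7x-era identifiers; the exposure value is just a starting point, raise it per the steps above):

```python
import bpy

scene = bpy.context.scene
scene.display_settings.display_device = 'sRGB'
scene.view_settings.view_transform = 'Log'
scene.view_settings.look = 'None'
scene.view_settings.use_curve_mapping = False  # curves off to start
scene.view_settings.gamma = 0.0     # start at zero, raise for contrast
scene.view_settings.exposure = 2.0  # raise until bright materials burn
```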

In theory, both renderers use the same equations for light transport, therefore the result should be the same (after enough samples).
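For reference, what both engines estimate is the rendering equation; they differ only in how they sample the integral:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```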

In practice, the materials, settings and attributes are slightly different, but the overall results will still be very similar if you use comparable settings.

Using words like “looks natural to me” is rather meaningless. If the calculations are correct, the result is correct. I’d agree that it’s probably the tonemapping in Luxrender (which happens after rendering). Maybe you’re expecting some film response or gamma setting, or maybe you’re used to Reinhard-style tone compression (which is rather unnatural and doesn’t exist in Cycles).

I really have to render my new project with both Cycles and Lux and then post pics to show what I mean. If it’s possible to get similar results with Cycles, that would be a lot easier. It’s not like I want to use Luxrender.

Well, why don’t we try it. Both images rendered for about the same amount of time, although you don’t need that much time to see that LuxRender is more realistic. In Cycles I used the presets: Final, Full Global Illumination, Branched Path Tracing. In Lux: Bidirectional, sun only (set in the lamp settings to exclude the sky from rendering). More concerning, I had to use a sun light, because Cycles rendered black with other lamp types, which must be some kind of bug, right?

When you look at the renders, Cycles fails at basically everything: how light is bounced, the shadows, and also the caustics, which you really need when there is a big object made of glass.

http://paulkp.mbnet.fi/pics/lightingtest.png

I think you need to drop Branched Path Tracing in Cycles and use plain Path Tracing in Lux. Then get a render out of Lux with no tone mapping, and make sure of the same in Blender.

That doesn’t make sense. It’s clear to me that you are doing something wrong here.

The last time I used Lux (which was years ago), there wasn’t really an option to set the size of the sun lamp like Cycles lets you do (they strictly stick to the realistic model, which means shadows are always very sharp). The default size in Cycles produces softer shadows, and you need to set it to something like 0.010 to make it correct for sunny days (the sun lamp in the Lux image is also brighter, by the way, so the energy value needs to be increased a little).
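If it helps, this is the kind of tweak I mean, runnable from the Python console (2.7x property names; it assumes the lamp datablock is called 'Sun', and the strength factor is just to taste):

```python
import bpy

sun = bpy.data.lamps['Sun']    # assumes a sun lamp named 'Sun'
sun.shadow_soft_size = 0.01    # the Cycles "Size" field: hard, sunny-day shadows
sun.use_nodes = True
strength = sun.node_tree.nodes['Emission'].inputs['Strength']
strength.default_value *= 1.2  # nudge the energy up a little
```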

Also, much of the remaining difference can be accounted for with some tonemapping tweaks (bringing out the dark areas a bit) and the material setup (realistic materials in Cycles use at least both the Diffuse and Glossy shader nodes).

Always post your portfolio when saying things like that. You could be the next paid Cycles developer if you can really do it.

And it is a little unfair to compare glass and caustics renders from a unidirectional and a bidirectional renderer; it’s just the way things work that a bidirectional renderer does caustics a lot better. If Cycles needs different integration methods, an unbiased method like bidirectional path tracing or Metropolis light transport would be the way to go. But both are quite complex and not that easy to add to an existing renderer, especially because Cycles is to a large extent a GPU renderer, and more code often makes it slower. But maybe the split kernel makes it easier.

Krice, try this https://sobotka.github.io/filmic-blender/

Thank you for this explanation and for sharing your own rendering experience. This is a great, on-topic post.

There is, however, one thing I miss: some kind of speed-vs-realism-per-watt measure (for lack of a better term). This already exists in DX/OGL. Everybody knows, for example (more or less), what a 10 Tflops DX11 card will do versus a 2 Tflops DX9 one. Even assuming maximum optimisation, the algorithms will still run faster on the 10 Tflops DX11 card.

The point of why I started this thread (and no, this has IMHO never been seriously discussed, only mentioned in passing) is that there should be a path towards a formula like in gaming, but for rendering (and I am not talking about the rendering equation). For example, we all knew that games cheat, but who cares; now the PBR viewport is coming, and of course it’s not fully “physically based” (who cares), but along those lines there should be a formula where we can say: OK, for that many watts and flops I SHOULD get this level of speed and realism, EVEN if it’s not 1337% physically real, as long as it’s enough for the eye.

In short: aside from featuritis and the abstract rendering equation, there is no way to tell what, say in 2016, I should get speed-wise across all renderers.
What we have now are tens of thousands of academic papers and different renderers that each implement this or that, but there is no “qualia” of algorithms that a 2016 renderer SHOULD have. In DX/OGL there is, even across different PBR game engines.

This is what IMHO is missing when choosing the programming path, algorithms, etc. for building or benchmarking a renderer. And this problem is by no means unique to Cycles; nearly all renderers have it. For many years people said it was impossible. Then there was a “benchmark”, Arnold (though not really, there are still things missing): it was easy to use and accurate, and everyone jumped on the MC wagon. Manuka, Rman, Hyperion, etc., and the rest is history.
But I don’t think this is the end of the road. With so many papers out there, I don’t believe the problem is the implementation (even with limited resources), but the choosing itself. In other words, we have a “SNAFU” situation rather than a technological one.

Most go for the features, OK, but what about the formula: (samples/sec + all features) per watt and Tflop, multiplied by the limit of one’s patience until the render is noise-free :smiley:
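Half-jokingly, one way to write that down (my own formalisation of the idea, nothing rigorous):

```latex
\text{score} \approx
  \frac{\text{samples/s} + \text{all features}}
       {\text{Watt} \times \text{Tflop}}
  \times \text{patience}_{\text{until noise-free}}
```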
I hope I’ve put it in an understandable way.