No procedural materials. The weird thing is that models with similar materials often work and often don't.

Right now, the biggest drawback I can see with Cycles-X is the lack of Branched Path Tracing. I am assuming this was removed to make way for better sampling options in the future? However, if you are using many lights, it is a significant regression compared to regular Cycles.

From what I understand, Brecht feels that he can make one integrator that will render branched path tracing obsolete.


Many-light sampling should be able to remove that sampling regression and then some (because for starters, it would also work with emission materials).

The point is to sample each shading component and every light in a smart way rather than brute-forcing everything (i.e. sampling every shading component and every light for every sample, regardless of their contribution).
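To make the idea concrete, here is a toy sketch in plain Python (purely illustrative, not Cycles' actual code — the light names and contribution values are made up): instead of evaluating every light for every sample, you pick one light per sample with probability proportional to its estimated contribution and divide by that probability, so the Monte Carlo estimate stays unbiased while samples go where they matter.

```python
import random

# Hypothetical scene: each light with an estimated contribution at the shading point.
lights = [("sun", 10.0), ("lamp", 1.0), ("led", 0.1)]

def sample_one_light(lights, rng=random):
    """Pick one light with probability proportional to its estimated
    contribution; return (name, contribution / pdf) so the Monte Carlo
    estimate stays unbiased."""
    total = sum(c for _, c in lights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for name, contrib in lights:
        acc += contrib
        if r <= acc:
            pdf = contrib / total
            return name, contrib / pdf
    # Floating-point safety net: fall back to the last light.
    name, contrib = lights[-1]
    return name, contrib / (contrib / total)

# Averaging many one-light-per-sample estimates converges to the
# brute-force sum over all lights:
est = sum(sample_one_light(lights)[1] for _ in range(1000)) / 1000
brute = sum(c for _, c in lights)
print(round(est, 1), round(brute, 1))  # both ~11.1
```

In a real renderer the per-light contribution estimate is itself approximate (using distance, intensity, orientation, or a light tree), which is exactly what makes many-light sampling non-trivial.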


Cycles X IS ABSOLUTELY DOPE! :smiley:

Guys, whatever you smoked, I also want a piece of it! This thing is so freaking FAST AND SNAPPY I got an 8-hour RENDERGASM! :smiley:
WOW!, just WOW!

Just for fun I tried it on CPU only, and it is still snappy. You get the feeling you're working on an older GPU while actually being on CPU!!! :smiley: DOOOOPE!

I am already migrating to Cycles X, and only if I can't use it will I go back.

Now only 2 wishes remain…

  1. A GPU domainless fluid/smoke sim like NVIDIA's FLIP with VDB voxels, and
  2. a good caustics solver for Cycles X.

Then I'd have no more needs at all. :smiley:

You are amazing guys!


This is the Cycles X we need, but not the one we deserve. :smiley: Absolutely AMAZING!

The SSS is so freaking FAST I can’t handle it! :smiley: IT’S INSTANT!

I for one, welcome our Brecht & Co overlords… :smiley:


That’s what I call enthusiasm!
Now change your nick to XselcyC… :wink:


Are you sure? I think we need the missing features back first (so many of my scenes use both volume shaders and AO color/Bevel shading).


AHHHhhhhhhh, you got me there! I was happily rendering out an image with Cycles X, browsing the forum while I waited, and after reading this realised I was missing my Bevel-node grunge edges! Quit the render, back to 2.92, and starting again. :sob:

Hopefully at some point there will be a chance to integrate this:


Sebastian is already working on that, according to the weekly meeting notes:


As much as I want real-time fluids in Blender, the UX of Mantaflow is horrible; he should fix that before tackling new features.

Sorry for diverging from the topic.

What’s wrong with it? It’s at least way better than Maya was when I last used it, maybe three years ago. Clunky as hell!

I think those are two different things? The one in the Realtime GPU smoke simulation thread is not part of Blender atm.

Yes, I only meant that Sebas already has that topic (real-time fluids) in mind. Anyway, we should refocus on the subject here.

Way better than Maya might not be the glowing endorsement that it once was! :slight_smile:

Most modern renderers can automatically enable render-time tessellation in order to achieve whatever subdivision level is necessary to properly render displacement maps. In the case of 3Delight, it’s not even necessary to enable any specific setting – it just does it automagically. Octane, Arnold, Redshift all have simple switches that enable this functionality.

In Blender (unless I am completely overlooking something), in order to enable proper render time displacement there are several steps that need to be taken, in this order:

  1. Connect the Height/Displacement map to the Displacement node and the Displacement output
  2. In the Object Settings, change Displacement from Bump Only to Displacement Only (or Displacement and Bump)
  3. In the Object Settings, add a Subdivision Surface modifier
  4. In the Render Settings, change the Cycles Feature Set from Supported to Experimental
  5. Go back to the Subdivision modifier and enable Adaptive Subdivision

Did I miss something?


Yes, I agree that this process is embarrassingly overcomplicated. Getting displacement up and running should be a matter of simply plugging something into the displacement slot of the Material Output node.


You forgot that you also have to set the dicing rate or subdivision level, and/or the dicing camera.

The feature is considered experimental, so the developers are neither proud of the current situation nor against changing the current workflow.

But it is not obvious how to improve the workflow.
Each setting is placed where it is according to its effect, and each corresponds to a real customization need.

  • We have different displacement nodes for different displacement map types (bump map, normal map, displacement map, vector displacement map). That is not really something people would want to change.
  • The dicing rate or subdivision level is a choice that has to be made by the user.
    Using a displacement map does not automatically mean you want adaptive subdivision.
    You can’t set the dicing rate per node; it has to be managed at the mesh/object level.
    It trades off against the subdivision level because it replaces it, so it is logical to set it in the Subdivision Surface modifier or via a global characterization of the displacement type.
  • But the global characterization of the displacement type (Bump Only, Displacement Only, Displacement+Bump) is clearly the part that could be automated away, based on the nodes used in the shader tree, or moved to the Simplify panel.

The only developers who have worked on displacement are Brecht Van Lommel and Mai Lavelle. Mai has not been active on d.b.o for two years, and if Mai were working on something, the reviewer would be Brecht.
So, with Brecht occupied by Cycles X, and probably by GSoC and Blender 3.0, it is not something that will happen soon.
Vector displacement baking has been requested ever since the Displacement modifier gained support for vector maps,
and it has not been a priority for Brecht for years.
He will probably spend a lot of time on making volume rendering possible, as announced in the blog article.


Well, yeah, I think it’s a bit strange that displacement is still experimental, but I’ve personally set that as the default anyway, so I only have three steps; and if one could somehow have that Object Setting set as a default as well, there would only be two. So sure, it could be a bit easier, but it’s not really that complicated at the moment.

I have rendered plenty of scenes with displacement, and from my point of view there is a good reason it is still experimental. I had scenes where the results were not consistent between renders at 800x400 and 1600x800: the larger one could show a quite different shape, and not just in the details.

The dicing rate is per-pixel, so yes, the result is going to differ when you change your render resolution.
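A back-of-the-envelope illustration of why (hypothetical numbers, not Cycles' actual dicing code): with a dicing rate of 1 pixel, an edge that covers N pixels on screen gets split into roughly N micropolygons, so doubling the render resolution doubles the tessellation, and a coarser dice can miss displacement detail and change the visible shape.

```python
import math

def diced_segments(edge_px: float, dicing_rate_px: float) -> int:
    """Roughly how many micropolygon segments an edge gets split into
    when the target micropolygon size is `dicing_rate_px` pixels."""
    return max(1, math.ceil(edge_px / dicing_rate_px))

# The same model edge covers twice as many pixels at 1600x800 as at 800x400,
# so it is diced twice as finely at the larger resolution.
print(diced_segments(50, 1.0))    # 800x400  -> 50 segments
print(diced_segments(100, 1.0))   # 1600x800 -> 100 segments
```

If you need resolution-independent results, setting a fixed (non-adaptive) subdivision level, or dicing against a fixed dicing camera, avoids this coupling at the cost of more memory.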