Cycles new microdisplacement testing, discussion and blend sharing

There’s still the other method of rendering detail through the bump node and the shader normal input (should you decide there are parts of a material you don’t want to actually displace).

Just make sure you don’t have those details in the chain of nodes leading to the displacement output.
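
If it helps, here’s a minimal sketch of that split in Python (node and property names are my assumptions for the 2.78-era experimental builds, not something confirmed in this thread): one height map drives true displacement via the Material Output, while a second detail map only bends the shading normal through a Bump node, so it never enters the displacement chain.

```python
import bpy

# Minimal sketch, assuming a 2.78-era experimental build.
mat = bpy.data.materials.new("disp_plus_bump")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

out = nodes.new("ShaderNodeOutputMaterial")
bsdf = nodes.new("ShaderNodeBsdfDiffuse")
disp_tex = nodes.new("ShaderNodeTexImage")    # coarse height map -> displaced
detail_tex = nodes.new("ShaderNodeTexImage")  # fine detail -> bump only
bump = nodes.new("ShaderNodeBump")

links.new(disp_tex.outputs["Color"], out.inputs["Displacement"])
links.new(detail_tex.outputs["Color"], bump.inputs["Height"])
links.new(bump.outputs["Normal"], bsdf.inputs["Normal"])
links.new(bsdf.outputs["BSDF"], out.inputs["Surface"])

# "TRUE" displaces only; "BOTH" would also run the displacement
# output through bump shading (enum values are my guess for 2.78).
mat.cycles.displacement_method = "TRUE"
```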

What I mean is that the high amount of subdivisions leads to sharp edges.
The idea of “Sculpting with UVs and Displacement” was to find a way to create displacement textures and UV layouts that do not need an absurd amount of subdivisions. The parts that would still look too low poly were hidden with bump and AO textures.

This balance could also smooth out less-than-ideal textures: if there is a sudden drop from white to black, the displacement modifier would only find a couple of verts to push, leading to “ramps” rather than sharp edges.

Now the adaptive subdivisions create a denser mesh, which removes those “ramps”.
Even worse, I think it actually splits the verts.

You can see what I mean if you compare the two images. Look at the bigger, smooth elements right next to the nose.
The old version is kind of smooth; the new version is literally cut off at the edges.
(The AO version makes it even more clear)

Another test:


I added the same displacement texture onto itself, just scaled 5 times.
This works really well, and the level of detail is just awesome.
But take a close look at where the scale of the UVs and the “flow” of the geometry changes. The lower-poly old method didn’t result in those cuts.
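
Roughly, the node setup I mean looks like this in Python (a sketch of my own; the Mapping node’s scale property is a 2.7x-era assumption): the same height map is sampled twice, the second time scaled by 5, and the two heights are summed before the Displacement output.

```python
import bpy

# Sketch of "the same displacement texture added onto itself at 5x scale".
mat = bpy.context.object.active_material  # assumes a node-based Cycles material
nodes, links = mat.node_tree.nodes, mat.node_tree.links

coords = nodes.new("ShaderNodeTexCoord")
mapping = nodes.new("ShaderNodeMapping")
mapping.scale = (5.0, 5.0, 5.0)            # the 5x-scaled second layer

base = nodes.new("ShaderNodeTexImage")     # both would point at the same image
fine = nodes.new("ShaderNodeTexImage")
add = nodes.new("ShaderNodeMath")
add.operation = "ADD"

links.new(coords.outputs["UV"], base.inputs["Vector"])
links.new(coords.outputs["UV"], mapping.inputs["Vector"])
links.new(mapping.outputs["Vector"], fine.inputs["Vector"])
links.new(base.outputs["Color"], add.inputs[0])
links.new(fine.outputs["Color"], add.inputs[1])

out = nodes["Material Output"]             # the default output node's name
links.new(add.outputs["Value"], out.inputs["Displacement"])
```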

Now, I’m not complaining. I just have to either find a setting that doesn’t produce such a dense mesh and/or be more careful with the textures.

That would be more to do with the fact that there’s a break in the coordinate data rather than a bug with the microdisplacement (and I don’t know of any rendering solution that can easily handle that without the need for a transition zone).

Well, I’m just trying to figure out if and how I can use the feature for my style of images.
Whether I have to use a combination of the displacement modifier and microdisplacement, or just have to “sculpt” better transitions, remains to be seen.
Right now I’m opening random old files and keep being amazed by the details I can add to them. :wink:


@nudelZ, thanks for sharing. I’m still trying to figure out how it works :slight_smile:

@Sir Richfield, I do not know if you have already experimented with the new microdisplacement. Just in case, see here for the options:

There are also more options in Render tab > Geometry.
Edit:
OK, I think if you are already experimenting with it, then you already know where the options are. Perhaps the only options you could have missed are those under Render tab > Geometry.
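
For reference, the same options are reachable from the Python console; the property names below are my best guess for the 2.78 test builds:

```python
import bpy

scene = bpy.context.scene
scene.cycles.feature_set = "EXPERIMENTAL"  # needed for true displacement

# Render tab > Geometry
scene.cycles.dicing_rate = 1.0             # target triangle size in pixels
scene.cycles.max_subdivisions = 12         # hard cap on subdivision depth

# Per object: a Subsurf modifier plus the "Adaptive" toggle
obj = bpy.context.object
obj.modifiers.new("Subsurf", type="SUBSURF")
obj.cycles.use_adaptive_subdivision = True
```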

@YAFU, all but one of my posted images use the microdisplacement. I thought that was kind of the topic of this thread. :wink:

asphalt.blend (522 KB)

With this scene, in the Material > Settings panel, when I switch the displacement type from Bump to True, Blender crashes.

The bug occurs because of a too-strong displacement distance.

Tested with the latest graphicall build and daily build 8/9/2016

Config: Win 7 x64

It happens on Linux too. Please report it here (BF Blender | Report Bug):

Edit:
Reported here:

Please, next time open a report yourself. This is not an official forum, and it is unlikely that developers can follow the issues here.

Looks awesome Kai (ô¿ô)

Hey all,
I’m making a tutorial and I don’t want to give out the wrong information. So can anyone help me understand what microdisplacement actually does?

I understand that it subdivides based on distance to the camera and then adds the displacement at render time, but does it do anything else? Because I’ve noticed it gives wayyyy better results than the old method of subsurf + displacement modifier. Even with extreme levels of subsurf, the old method looks far less detailed than this one (and that’s with it only set to True, not Both).

So does it do anything else?

Also, am I correct in saying that there are actually two separate features here: Adaptive Subdivision and Microdisplacement, and they just happen to be in the same release?

Thanks!

In terms of adaptive subdivision, this is my understanding:

It doesn’t necessarily do it based on camera distance, but rather on the number of pixels the object takes up on screen… that’s what the dicing rate does. At 1 px it will create one triangle per pixel at render time; if it’s 2 px, it will create one triangle per two pixels, etc.

In general, objects closer to the camera take up more pixel space… which means they will be subdivided more (so that they end up at 1 triangle per pixel)… and objects farther away take up less pixel space, so they won’t be subdivided as much.

As for the difference between subdivision and displacement: subdivision is in charge of subdividing the object… displacement is in charge of moving the vertices around. Adaptive subdivision needs to be aware of displacement, though, because the displacement map makes the object take up more or less pixel space.
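
To put rough numbers on the dicing rate (my own back-of-the-envelope arithmetic, not Cycles’ actual dicing code):

```python
# Triangle count scales with the screen area an object covers,
# divided by the squared dicing rate.
def approx_triangles(pixels_covered, dicing_rate_px):
    return pixels_covered / dicing_rate_px ** 2

print(approx_triangles(200 * 200, 1.0))  # ~40000 triangles at 1 px
print(approx_triangles(200 * 200, 2.0))  # ~10000 triangles at 2 px
print(approx_triangles(50 * 50, 1.0))    # same object far away: ~2500
```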

Thanks doublebishop! Very helpful.

It doesn’t necessarily do it based on camera distance, but rather on the number of pixels the object takes up on screen… that’s what the dicing rate does. At 1 px it will create one triangle per pixel at render time; if it’s 2 px, it will create one triangle per two pixels, etc.

So does that mean that the higher the resolution of your render, the more subdivisions are used, and therefore the more memory is used?

Correct… However, under the Geometry tab in the render settings there is a Max Subdivisions setting… so you can set the maximum number of subdivisions it will apply to a mesh.
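
As a toy model of how resolution and that cap interact (again my own arithmetic, not Cycles internals):

```python
# Render resolution drives the requested triangle count, while
# max_subdivisions caps the recursion depth.
def capped_triangles(base_quads, pixels_covered, dicing_rate_px, max_subdiv):
    requested = pixels_covered / dicing_rate_px ** 2
    cap = base_quads * 4 ** max_subdiv      # each level quadruples the faces
    return min(requested, cap)

# Doubling the render size quadruples the request, until the cap bites:
print(capped_triangles(100, 200 * 200, 1.0, 3))  # 6400 (capped, not 40000)
print(capped_triangles(100, 400 * 400, 1.0, 3))  # still 6400
print(capped_triangles(100, 200 * 200, 1.0, 6))  # 40000 (under the 409600 cap)
```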

Thank you, glad you like them!!

Yes, they are both right. I used World Machine to generate the landscapes, and the sand ground texture I used is from RDT.

Here is one more Test and the maps:

Color map with Snow:

Color map without Snow:

Main Height:

Snow Height only:

Snow Mask:

Very helpful thanks!

One more question: is this actually micropolygon displacement? I asked Campbell and he said that he doesn’t think it is, but he admitted he hasn’t been following the development closely.

To quote him:

How micro-polygons work for a ray-tracer AFAIK isn’t as well defined - from what I’ve read, ray-tracers make different design decisions to attempt to support this… each with its own pros/cons (caching micropolys to disk, generating on demand… only using for the first bounce… etc.).

I haven’t been following development much, but from what I’ve read, Cycles is just doing adaptive, viewpoint-based subdivision, so it’s not really micro-polygon (the term micro-polygon isn’t used in the release logs, for example).

Anyone know for sure? Thanks

AFAIK it’s not micropolygon as in REYES.
This is recursive subdivision of triangles based on their screen size, which gets done at render time. By OpenSubdiv, if you will.
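
A toy illustration of that recursion (my sketch, not Cycles’ source):

```python
# Keep splitting while the patch's projected size in pixels exceeds
# the dicing rate, up to a maximum depth.
def dice_count(patch_px, rate_px, depth=0, max_depth=16):
    """Number of micro-patches a patch `patch_px` pixels across dices into."""
    if patch_px <= rate_px or depth >= max_depth:
        return 1
    # each split quarters the area, i.e. halves the projected edge length
    return 4 * dice_count(patch_px / 2.0, rate_px, depth + 1, max_depth)

print(dice_count(64.0, 1.0))  # a patch 64 px across -> 4096 micro-patches
```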

It is using micropolygons in the sense that the polygons are small relative to the size of a pixel, which I think is a reasonable definition of the term. But of course with that definition you can also use a plain old subdivision modifier to generate micropolygons.

What it’s not using is the Renderman REYES algorithm, where the term micropolygon originates from and what it is typically associated with. The REYES algorithm lets you render micropolygons with very little memory usage. But it does not work well with path tracing (and was even dropped by PRMan in their last release).

Currently all polygons are kept in memory, though some kind of geometry cache might be introduced in a later Blender release to reduce memory usage.

Thank you for the thorough explanation Brecht! That makes a lot of sense.

Shame it won’t have the memory savings of REYES, but I understand not a lot can be done due to path tracing.
I’m thrilled with the feature either way :wink:

Thanks for replying! Very helpful.

it will - later!