There’s still the other method of rendering detail through the bump node and the shader normal input (should you decide there are parts of a material you don’t want to actually displace).
Just make sure you don’t have those details in the chain of nodes leading to the displacement output.
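To make the distinction concrete, here’s a toy pure-Python sketch (not Blender’s actual node code; the function names are mine) of what the two approaches do: a bump node only perturbs the shading normal from the height gradient, while true displacement moves the vertex itself.

```python
def finite_diff_normal(height_fn, u, v, eps=1e-4):
    """Bump-style shading: perturb a flat +Z normal by the height
    gradient, faking detail without moving any geometry."""
    du = (height_fn(u + eps, v) - height_fn(u - eps, v)) / (2 * eps)
    dv = (height_fn(u, v + eps) - height_fn(u, v - eps)) / (2 * eps)
    n = (-du, -dv, 1.0)
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

def displace_point(point, normal, height_fn, u, v, scale=1.0):
    """True displacement: move the vertex along its normal by the
    sampled height."""
    h = height_fn(u, v) * scale
    return tuple(p + n * h for p, n in zip(point, normal))
```

Keeping a detail texture wired only into the bump path means `displace_point` never sees it, which is exactly the “not in the chain leading to the displacement output” rule above.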
What I mean is that the high number of subdivisions leads to sharp edges.
The idea of “Sculpting with UVs and Displacement” was to find a way to create displacement textures and UV layouts that do not need an absurd number of subdivisions. The parts that would still look too low poly were hidden with bumps and AO textures.
This balance could also smooth out less-than-ideal textures. If there is a sudden drop from white to black, the displacement modifier would only find a couple of verts to push, leading to “ramps” rather than sharp edges.
Now the adaptive subdivisions create a denser mesh, which removes those “ramps”.
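A toy sketch of why that happens (my own illustration, not Blender code): sample a hard white-to-black step texture along a strip of vertices. The “ramp” is exactly one vertex spacing wide, so a coarse mesh smooths the step while a dense mesh turns it into a cliff.

```python
def displaced_heights(n_verts, step_at=0.5):
    """Sample a hard white-to-black step texture at n_verts
    evenly spaced vertices along a 0..1 strip."""
    xs = [i / (n_verts - 1) for i in range(n_verts)]
    return [(x, 1.0 if x < step_at else 0.0) for x in xs]

def transition_width(samples):
    """Width of the 'ramp' the displacement produces: the gap
    between the last high vertex and the first low one."""
    highs = [x for x, h in samples if h == 1.0]
    lows = [x for x, h in samples if h == 0.0]
    return lows[0] - highs[-1]

coarse = transition_width(displaced_heights(11))    # wide ramp on a coarse mesh
dense = transition_width(displaced_heights(1001))   # near-vertical cliff on a dense mesh
```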
Even worse, I think it actually splits the verts.
You can see what I mean if you compare the two images. Look at the bigger, smooth elements right next to the nose.
The old version is kind of smooth, the new version is literally cut off at the edges.
(The AO version makes it even more clear)
I added the same displacement texture onto itself, just scaled 5 times.
This works really well and the level of detail is just awesome.
But take a close look at where the scale of the UV and the “flow” of the geometry changes. The lower poly, old method didn’t result in those cuts.
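A rough sketch of that layering trick (the texture function, scale, and blend weight are my stand-ins; the post only says the same texture was added onto itself at 5x scale):

```python
import math

def base_height(u, v):
    """Stand-in for the displacement texture; any grayscale
    function in the 0..1 range works here."""
    return 0.5 + 0.5 * math.sin(u * math.tau) * math.sin(v * math.tau)

def layered_height(u, v, scale=5.0, weight=0.2):
    """Base texture plus the same texture tiled `scale` times,
    blended in at reduced strength for fine detail."""
    return base_height(u, v) + weight * base_height(u * scale, v * scale)
```

In a node tree this is just the same texture sampled twice with a scaled UV mapping, summed before the displacement output.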
Now, I’m not complaining. I just have to either find a setting that doesn’t produce such a dense mesh and/or be more careful with the textures.
That would have more to do with the fact that there’s a break in the coordinate data rather than a bug in the microdisplacement (and I don’t know of any rendering solution that can easily handle that without the need for a transition zone).
Well, I’m just trying to figure out if and how I can use the feature for my style of images.
Whether I have to use a combination of the displacement modifier and microdisplacement, or just have to “sculpt” better transitions, remains to be seen.
Right now I’m opening random old files and keep being amazed by the details I can add to them.
@nudelZ, thanks for sharing. I’m still trying to figure out how it works
@Sir Richfield, I do not know if you have already experimented with the new microdisplacement. Just in case, see here for the options:
There are also more options in Render tab > Geometry.
Edit:
Ok, I think if you are already experimenting with it then you already know where the options are. Perhaps the only options you could have missed are those under Render tab > Geometry.
Hey all,
I’m making a tutorial and I don’t want to give out the wrong information. So can anyone help me understand what microdisplacement actually does?
I understand that it subdivides based on distance to the camera, and then adds the displacement at the time of rendering, but does it do anything else? Because I’ve noticed it gives wayyyy better results than the old method of subsurf + displacement modifier. Even with extreme levels of subsurf, this method seems incredibly more detailed (and even when only set to True, not Both).
So does it do anything else?
Also, am I correct in saying that there are actually two separate features here: Adaptive Subdivision and Microdisplacement, and they just happen to be in the same release?
In terms of adaptive subdivision, this is my understanding:
It doesn’t necessarily do it based on camera distance, but rather on the number of pixels the object takes up on screen… that’s what the dicing rate does. At 1px, it will create one triangle per pixel at render time; if it’s 2 pixels, it will create one triangle per two pixels, and so on.
In general, objects closer to the camera take up more pixel space, which means they will be subdivided more (so that they end up at one triangle per pixel), and objects farther away take up less pixel space, so they won’t be subdivided as much.
So, the difference between subdivision and displacement: subdivision is in charge of subdividing the object, while displacement is in charge of moving the vertices around. Adaptive subdivision needs to be aware of displacement, though, because the object takes up more or less pixel space after being displaced.
It doesn’t necessarily do it based on camera distance, but rather on the number of pixels the object takes up on screen… that’s what the dicing rate does. At 1px, it will create one triangle per pixel at render time; if it’s 2 pixels, it will create one triangle per two pixels, and so on.
So does that mean that the higher res your render, the more subdivisions are used and therefore more memory?
Correct… However, under the Geometry tab in the render settings there is a max subdivision level, so you can set the maximum number of subdivisions it will do on a mesh.
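A back-of-the-envelope sketch of that relationship (my own estimate, not Cycles’ actual dicing code; the function and its defaults are assumptions): each subdivision level multiplies the triangle count by 4, so the level needed grows with pixel coverage and is clamped by the max subdivision setting.

```python
import math

def subdiv_level(pixels_covered, base_tris, dicing_rate_px=1.0, max_level=12):
    """Estimate the subdivision level needed so the mesh ends up near
    one triangle per `dicing_rate_px` pixels, capped at `max_level`
    (the Max Subdivisions setting, which bounds memory use)."""
    target_tris = pixels_covered / (dicing_rate_px ** 2)
    if target_tris <= base_tris:
        return 0
    level = math.ceil(math.log(target_tris / base_tris, 4))
    return min(level, max_level)

# an object covering a megapixel needs more levels than a distant one
near = subdiv_level(1_000_000, base_tris=100)
far = subdiv_level(10_000, base_tris=100)
```

Doubling the render resolution roughly quadruples `pixels_covered`, which is why higher-res renders dice finer and use more memory unless the cap kicks in.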
One more question: is this actually micropolygon displacement? I asked Campbell and he said that he doesn’t think it is, but he admitted he hasn’t been following the development closely.
To quote him:
How micro-polygons work for a ray-tracer AFAIK isn’t as well defined - from what I’ve read, ray-tracers make different design decisions to attempt to support this… each with its own pros/cons. (Caching micropolys to disk, generating on demand… only using for first bounce… etc.)
I haven’t been following development much, but from what I’ve read, Cycles is just doing adaptive, viewpoint-based subdivision, so it’s not really micro-polygon (the term micro-polygon isn’t used in the release logs, for example).
Afaik it’s not micropolygon as in REYES.
This is recursive subdivision of triangles based on their screen size, which gets done at render time. By OpenSubdiv, if you will.
It is using micropolygons in the sense that the polygons are small relative to the size of a pixel, which I think is a reasonable definition of the term. But of course with that definition you can also use a plain old subdivision modifier to generate micropolygons.
What it’s not using is the Renderman REYES algorithm, where the term micropolygon originates from and what it is typically associated with. The REYES algorithm lets you render micropolygons with very little memory usage. But it does not work well with path tracing (and was even dropped by PRMan in their last release).
Currently all polygons are kept in memory, though some kind of geometry cache might be introduced in a later Blender release to reduce memory usage.