My thoughts on adaptive subdivision in Blender 2.8

The new 2.8 beta release is finally stable, so I decided to run some tests. One of the features I was most looking forward to was adaptive subdivision. It now works fine and has gained new functions, so I want to share my thoughts on it with you. The first great feature is the Lookdev view in the viewport. Although it doesn’t concern adaptive subdivision directly, it is still a great thing for previewing your materials and lighting in real time. It switches the rendering engine in the viewport to Eevee, even if you have Cycles selected as the render engine. It also disables adaptive subdivision and material displacement, so the computer doesn’t have to spend its computing power on tessellation, and you can instantly see how your diffuse, normal, metallic and roughness maps work along with the lighting/world settings.
Another great feature is the new Displacement node in the 2.8 shader editor. I think this was the biggest thing missing from 2.79’s microdisplacement. It now has midlevel and scale sliders, like the regular Displace modifier, so there is no more messing around with math nodes for the displacement. Another difference is that the Material Output node in 2.8 has its displacement input marked as a vector (purple) instead of a value (grey) as in previous versions. The Displacement node also has an additional vector input for the normal. I haven’t played with this input yet; has anyone done it?
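To make the midlevel/scale behaviour concrete, here’s a rough sketch of the mapping the Displacement node applies to a height texture. This is plain Python, not Blender API; the function name is mine, but the formula (offset by midlevel, then multiply by scale) matches what the node’s sliders do:

```python
def displacement_height(texture_value, midlevel=0.5, scale=1.0):
    """Map a 0..1 height-texture value to a signed displacement amount.

    With the default midlevel of 0.5, mid-grey pixels leave the surface
    untouched, darker pixels push it in, and brighter pixels pull it out.
    The surface point is then moved by this amount along the normal.
    """
    return (texture_value - midlevel) * scale

displacement_height(0.5)                          # mid-grey: 0.0, no movement
displacement_height(1.0, midlevel=0.5, scale=0.2) # white: pushed out by 0.1
```

This is also why you no longer need a Math node chain: subtracting 0.5 and multiplying by a strength value was exactly the manual setup people used in 2.79.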

The third feature, just as important as the previous two, is the new Subdivision panel in the render properties. It allows us to manually set the dicing rate for both render and preview. Previously the preview dicing rate was fixed at eight times the render rate. It also gives us access to other values such as the offscreen scale and max subdivisions. There is also an option to choose a dicing camera, but I’m not quite sure what the purpose of that is.
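For anyone who prefers setting this up from a script, the same Subdivision panel values are exposed on the Cycles scene settings in the Python API. A minimal sketch (property names as I know them from the 2.8 API; this only runs inside Blender, and you should double-check the names in your build’s autocomplete):

```python
import bpy  # only available inside Blender

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.feature_set = 'EXPERIMENTAL'  # adaptive subdivision is experimental

# The values from the new Subdivision panel:
scene.cycles.dicing_rate = 1.0             # render dicing rate, in pixels
scene.cycles.preview_dicing_rate = 8.0     # viewport rate, no longer locked to 8x
scene.cycles.offscreen_dicing_scale = 4.0  # coarser dicing outside the view
scene.cycles.max_subdivisions = 12         # hard cap on subdivision levels
scene.cycles.dicing_camera = scene.camera  # camera used to measure polygon size
```

Setting `dicing_camera` to something other than the render camera is what the panel’s camera option does; the last reply in this thread explains why you’d want that for animations.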


I did the tests with this artwork, published here on ArtStation.
The entire mesh is a simple cube with a Bevel modifier on the edges. The rest is done by the displacement map.

In conclusion, I think adaptive subdivision is now a much more comfortable tool with the new functions that came along in 2.8. Unfortunately it is still a memory- and compute-devouring feature. Rendering a scene in which most of the frame is filled with tessellated mesh, at a 1 px dicing rate and 1080p HD resolution, is a real torment for both CPU and GPU (the mesh is tessellated by the CPU before the scene is copied to the GPU; that’s my guess from the “copying to device” message shown during the pre-render scene-building step). The picture with two ornamented planks at 1920x1080 rendered for over 13 hours on the GPU (my configuration: AMD FX-8350 8-core CPU, 16 GB DDR3 RAM, Radeon RX 580 GPU with 8 GB VRAM).


I hope the upcoming 2.81, 2.82 and later releases will have better memory management, and that adaptive subdivision will finally be adapted for Eevee.



Your GPU render is slow because the total memory usage is actually a bit more than what the Mem text at the top is showing.

Go to the status bar at the bottom and you’ll find the likely culprit is your GPU running out of VRAM and needing to move data back and forth to system RAM. If the displacement fits in memory, rendering should then be even faster than if you used a bump map.

The new Radeon Pro Duo with 32 GB of GDDR5 VRAM is already purchased. Switching from 8 to 32 gigs of VRAM should solve the problem. The main reason is that almost the entire frame is filled with subdivided mesh; in the previous render (where the plank goes diagonally through the frame) I didn’t have this problem. The only thing I don’t understand is why Blender sets the number of threads to 8 with GPU rendering selected and the number of threads set to “Auto-detect”. It looks like a bug: Blender takes into account the number of CPU cores even when it is set to GPU rendering.

Just want to make a few things clear…

Has it not been working fine for anyone before? I’ve been using the feature for over a year and it never caused any trouble…

That’s good, but it isn’t a newly developed feature. The Lookdev and Workbench engines are really just Eevee with fewer features, to give you a quick, reliable preview of your model. And since real-time tessellation isn’t a thing in Eevee, it also doesn’t work in Lookdev/Workbench.

This node is designed to replace the Bump node, as far as I know. The reason its output and the displacement input are vectors is that 2.8 supports vector displacement via a Vector Displacement node. To not make things over-complicated, they made the material displacement input and the Displacement (new bump) node vectors as well.

That’s incorrect. We’ve been able to change the dicing scale multiplier ever since the feature was first introduced, but it’s more visible now.

I may be wrong, but isn’t 1 px at full HD insane anyway? Aside from the fact that you never need a 1 px dicing scale, wouldn’t that mean one polygon is subdivided to the size of one pixel? That would be ~20 million faces if the mesh covered the whole canvas. Or am I understanding that number incorrectly?

It was working fine with 2.79, but crashed in 2.8.

It worked fine with “game ready” or baked textures, but with a complex node setup it crashed often. I guess it was some issue with computing the input texture map from nodes, both for Cycles and Eevee. The first beta release crashed even when I tried to bake textures in Cycles. The release from 27.12.2018 does not crash anymore.

For now, both the Bump and Displacement nodes are present, and each does something different. Bump generates a normal map (yeah, the purplish one used in many game engines to alter the direction of light reflecting off a surface). The Displacement node is used for physical deformation of the mesh along with tessellation. Giving up the old Bump node wouldn’t be wise, because it is much better at mimicking tiny surface irregularities and doesn’t need tessellation. Just use the Node Wrangler to see how different the outputs from the Bump and Displacement nodes are.

It really is far more visible, because I couldn’t find it anywhere in 2.79.

If it was insane, would it be the default render value when you add a Subdivision Surface modifier and switch it to adaptive mode?

Not ~20 but ~2 million: 1920 x 1080 = 2,073,600. :) It is still a huge number, but there are hardly any cases in which the tessellated mesh covers the entire canvas.
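To put the back-of-the-envelope math in one place: at a 1 px dicing rate Cycles aims for roughly one micro-polygon per pixel, so the face count scales with frame area, inversely with the square of the dicing rate, and with how much of the frame the mesh covers. A rough estimator (my own helper, not a Blender function; the real tessellator won’t match it exactly):

```python
def estimated_faces(width, height, dicing_rate_px=1.0, coverage=1.0):
    """Rough micro-polygon count: one face per dicing_rate x dicing_rate
    pixel block, scaled by the fraction of the frame the mesh covers."""
    pixels = width * height
    return int(coverage * pixels / dicing_rate_px ** 2)

estimated_faces(1920, 1080)                      # full HD, fully covered: 2073600
estimated_faces(1920, 1080, dicing_rate_px=2.0)  # doubling the rate quarters it
```

So a 2 px rate already drops a fully covered full-HD frame to ~518k faces, which is why the dicing rate is such an effective memory knob.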

Wrong! In most cases the mesh would be jagged and look ugly at a coarser rate. That’s why the devs set it as the default value.

Here’s the discussion on the dicing camera and falloff parameters:

Normally, geometry outside the camera view is not diced, which causes issues when that geometry affects the scene indirectly, or (I think) when it gets diced differently at different times during an animation. The dicing camera is the one used to determine whether geometry is visible and thus needs to be diced, so it can be set to observe a larger area than the render camera, effectively holding the dicing state in place.