Hi, it’s my first time using the adaptive subdivision feature (experimental) in Blender, and it’s amazing. From what I understand, the feature gives meshes closer to the view/camera more detail, while objects farther away get less. Correct me if I’m wrong, but isn’t this similar to Nanite in Unreal? I mean, this could be a huge improvement for optimizing large scene creation in Blender. I’m surprised that this feature has been experimental for 7 or 8 years. Blender should focus on improving it, or it will be ignored and buried. It should be one of the focuses of the Strategic Targets 2025.
It’s not quite the same method as Nanite; in a way, it’s even the opposite. Adaptive subdivision works by adding geometry to objects the closer they are, while Nanite works by removing geometry (like the Decimate modifier) the farther away an object is. There can be a bit of confusion, because Unreal supports both methods.
Nanite could be understood as being similar to a typical game engine’s LOD system, but with an extra feature that splits objects into smaller sections so you can have different detail amounts on different parts of the same object at the same time.
The workflow for each is a bit different. With adaptive subdivision, fine details are done using a displacement map. With Nanite, you can start with the high resolution mesh with all the fine details and it will get reduced with distance.
Adaptive subdivision is pretty much fully working, it’s just that the user experience and interface are a bit rough and unintuitive. I have seen few people who seem to fully understand how it works.
There is a UI issue that you should be aware of: the traditional subdivisions and the adaptive subdivisions are both applied to the object, on top of each other, and this can cause performance problems. When switching a subdivision modifier to adaptive, set the regular subdivision levels to 0 first to avoid this issue.
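If you set this up with a script, a minimal bpy sketch of the safe order of operations could look like this (assuming the modifier still has the default name “Subdivision” and Cycles is the render engine):

```python
import bpy

scene = bpy.context.scene
obj = bpy.context.object

# Adaptive subdivision needs the experimental Cycles feature set.
scene.render.engine = 'CYCLES'
scene.cycles.feature_set = 'EXPERIMENTAL'

mod = obj.modifiers["Subdivision"]  # assumes the default modifier name

# Zero the regular levels *before* enabling adaptive, so they don't
# get applied on top of the adaptive subdivision.
mod.levels = 0         # viewport subdivisions
mod.render_levels = 0  # render subdivisions

obj.cycles.use_adaptive_subdivision = True
```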
Another thing that can be problematic if you don’t understand it is that the dicing rate of meshes is based on the render’s resolution. If your dicing rate is set to 1, the object will be subdivided until each polygon is 1 pixel wide in the render. Set it to 2 and you will get less subdivision, because every polygon will aim to be 2 pixels wide in the render. This isn’t necessarily a problem and is even pretty logical, but if you increase the resolution of the render, the polygon count will also increase because of this pixel-based nature, and you could suddenly run out of memory. If you were to double the render resolution, you would also need to double the global dicing rate in the main render settings to get the same amount of geometry.
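As a rough bpy sketch of that compensation (the 2x factor is just an example value):

```python
import bpy

scene = bpy.context.scene

# Doubling the resolution doubles the pixel count along each edge, so
# adaptive dicing would generate far more polygons. Scaling the global
# dicing rate by the same factor keeps the geometry amount about the same.
factor = 2
scene.render.resolution_x *= factor
scene.render.resolution_y *= factor
scene.cycles.dicing_rate *= factor  # global dicing rate in the render settings
```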
Thanks for the reply and explanation about adaptive subdivision. I hope in the future it will be improved.
This render option keeps making me run out of memory. I was playing with the settings in the Render tab > Subdivision; Max Subdivisions is set to 12 by default, and I run out of memory when I’m too close to the object. I set it to 4 and it runs smoothly with great results.
12 is rarely needed, unless your object is very low on detail before subdivision and you need fine displacement. It can be a good idea to limit that value to something smaller.
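In the UI that’s Render Properties > Subdivision > Max Subdivisions; if you prefer scripting, a one-line sketch:

```python
import bpy

# Cap how deep Cycles may adaptively subdivide (the default is 12),
# so getting close to an object can't blow up the memory usage.
bpy.context.scene.cycles.max_subdivisions = 4
```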
But if you are running out of memory, there are a few other things to check.
First, make sure your dicing rate is set to something reasonable. You would hardly ever need it below 1, as that would make the polygons smaller than 1 pixel on screen and would make very little visual difference. If it’s just for subdividing a smooth mesh (no micro-displacement), 3 or 4 should be fine enough.
There are two dicing rates, one on the modifier and one in the render settings. The two act as multipliers for each other, so make sure to check the “final scale” message on the modifier. You could easily get a value you didn’t intend if you tweaked both rates.
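If you want to check the effective value with a script, a small sketch (assuming the Cycles per-object dicing property is exposed as below):

```python
import bpy

obj = bpy.context.object
scene = bpy.context.scene

# The per-object dicing scale multiplies the global render-settings rate;
# this mirrors the "final scale" message shown on the modifier.
final_scale = obj.cycles.dicing_rate * scene.cycles.dicing_rate
print(f"Final dicing scale: {final_scale:.2f} px")
```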
If you have an object that already has adaptive subdivision enabled, try deactivating it on the modifier to verify that the non-adaptive levels are set to 0, as I explained previously. I once ran out of memory because two extra levels of subdivision were being applied on top of the adaptive levels.
Adaptive subdivision would be as useful as Nanite if it was in Eevee.
It would be useful to have; it would help with detailing continuous surfaces like large walls or landscapes.
But in the meantime, I would say the options we do have are already quite powerful if you know them well.
@masterofnone, since you like adaptive subdivision, may I ask if you know about instancing? It’s a very powerful technique for building huge scenes, especially if you render in Cycles.
Instancing allows you to reuse the same mesh multiple times without needing to store multiple copies of it in memory. Let me give an example. Here is a wood plank with 2 million triangles; it would be unrealistic to make an entire fence out of it by duplicating it 20 times.
However, if I duplicate it using Alt+D instead of Shift+D, it will create instances instead of duplicates. The same plank will have its data reused, and the polygon count of my scene will still read as 2 million rather than shooting up to 40 million. The performance is still surprisingly good.
You can see the same mesh data is being used by 20 objects.
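For the scripting-minded, a minimal sketch of what Alt+D does under the hood:

```python
import bpy

src = bpy.context.object  # the 2-million-triangle plank

# Object.copy() duplicates the object but keeps sharing its mesh data,
# which is what Alt+D does in the viewport.
for i in range(1, 20):
    inst = src.copy()           # note: no src.data.copy(), the mesh stays shared
    inst.location.x += i * 0.2  # space the planks out
    bpy.context.collection.objects.link(inst)

print(src.data.users)  # 20: one mesh shared by all the planks
```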
However, this only works if the instances don’t have modifiers on them; modifiers force them to be counted separately and cancel this memory-saving effect.
Cycles works so well with instances that it can render scenes that would otherwise be over 1 billion triangles. But the viewport and Eevee don’t get as much of a performance saving, so you would have to either make the instances display as bounding boxes in the viewport or reduce their polygon count.
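Setting that display mode for a batch of instances is quick; a small sketch:

```python
import bpy

# Draw the selected instances as bounding boxes in the viewport;
# renders are unaffected and still use the full mesh.
for obj in bpy.context.selected_objects:
    obj.display_type = 'BOUNDS'
```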
I will now boost the performance even more by combining instancing with decimation.
No, this isn’t the same image as the previous one. This time, I decimated the model and removed 3/4 of the triangles before instancing. Decimation can be surprisingly aggressive before the difference even becomes noticeable. This is a destructive workflow, so keep a backup of the original object if in doubt.
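If you want to batch this, here is a sketch of the same reduction with the Decimate modifier (it’s destructive once applied, so back up first):

```python
import bpy

obj = bpy.context.object

# Keep roughly a quarter of the triangles, matching the 3/4 reduction above.
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.25
bpy.ops.object.modifier_apply(modifier=mod.name)  # destructive, no way back
```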
And if you are willing to bake the detail to a normal map, you could get away with a 95% reduction with barely any visual difference. If you combine instancing with objects that are already baked and optimized, you can build truly huge scenes.
If you combined it with vector displacement (which can now be both generated in Blender and rendered in Eevee), it could be used for freeform 3D sculpts as well: statues, rocks, anything that doesn’t require animated deformation. And most importantly, it would provide an automatic LOD system, which no other Blender tool can offer.
(I tried to emulate LOD switching with Geometry Nodes, but came to the conclusion that it will require coding in the end, as you cannot control when geonodes update, and updating every frame drops performance instead of helping it.)
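For what it’s worth, the coding route doesn’t have to be big. A hypothetical frame-change handler could swap mesh data by camera distance (the object and mesh names below are made up):

```python
import bpy

# Hypothetical LOD swap: assumes an object "plank" plus two meshes
# "plank_high" and "plank_low" already exist in the file.
def swap_lod(scene, *args):
    cam = scene.camera
    obj = bpy.data.objects.get("plank")
    if cam is None or obj is None:
        return
    dist = (obj.matrix_world.translation - cam.matrix_world.translation).length
    target = "plank_low" if dist > 20.0 else "plank_high"
    if obj.data.name != target:
        obj.data = bpy.data.meshes[target]

bpy.app.handlers.frame_change_pre.append(swap_lod)
```

It only swaps when the threshold is crossed, so it avoids the update-every-frame cost that made the pure geonodes approach a dead end for me.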
Wow, thanks for the detailed explanation and the examples! Yes, I know about instancing and have used it sometimes when creating large scenes in Blender.
When it comes to adaptive subdivision getting out of experimental, never say never with Blender development (see the SLIM unwrap algorithm and the sculpt mode overhaul for recent examples), but do not hold your breath.
The Blender Foundation itself is not like a commercial vendor by any means as far as development goes: they have no real concept of a roadmap or of development news that is acted upon quickly (which is why they quietly did away with actual roadmap graphics as well as with most development news).
In part, the development fund is (in a few cases) a means of subsidizing development hobbies, since they do not instruct the team on what to work on in a top-down manner. Contributing to the fund is in part a gamble that always has a return, but there’s no guarantee it leads to your personal wishlist being fulfilled in the near future. The reason adaptive micro-displacement is even useful at the moment is the needs of a volunteer developer who improved it, so advocating for a better system of patch review and for keeping volunteers engaged in the project may be what works in granting your requests.
You mean Mai? What’s the story?
Do you have any sources for this or any tangible data to back this claim, or is it really only your interpretation of what’s going on?
I for one am pretty positive they very well do instruct the team on what to work on in a top-down manner.
Do you think all the people working on the Animation-system overhaul just happened to suddenly decide they all wish to do so?
According to his blog, Aras Pranckevičius didn’t start to work on Blender’s VSE because he felt like it, but because he was asked to do so:
How does one accidentally start working on VSE?
[…]
For a spare half-a-day after the conference, you decide to check out Blender HQ. There, Francesco and Sergey, for some reason, ask you whether you’d like to contribute to VSE, and against your better judgement, you say “maybe?”.
And I think it is most unlikely Hans Goudey suddenly just felt like doing major work on the Sculpt module and accordingly spends hardly any more time on the Nodes & Physics module.
Then there are people like Falk David, who was, to my understanding, specifically hired to work on Grease Pencil v3. I assume Huang Weizhen was specifically hired to work on Cycles.
There are probably more examples.
So where do you get this idea that they (figuratively) just pile a load of cash onto a table at Blender HQ in front of the devs and tell them: “Well, here’s our budget for the month, grab what you need and go work on whatever you like. See you next month.”?
I guess it’s just something you made up?
greetings, Kologe
Dude, so where do I go to see the roadmap graphics and development news for Adobe, Maxon, Autodesk, etc.? That seems to only be a part of open source projects. I have no idea when Adobe will make anything new for Photoshop or Illustrator aside from AI integration; it just pops up when they release it. There’s no pre-release of software from those other companies either, nothing prior.
Commercial software development largely takes place behind closed doors, though; any roadmaps they have are likely not accessible to the public.
I did not say it was in all cases, but I thought there was a developer who chimed in (whose message is buried deep by now) who stated that the BF leadership does not specifically determine what tasks they are to work on in what timeframe. There is a difference between which modules they usually work in and which todo item gets worked on (which is why it can be years between the announcement of a development project, or a development tweet, and when code is put down). That is why I say never say never, but don’t rely on the item you are looking for coming quickly if you need it now.
One of those cases where a developer seemingly comes out of nowhere with an amazing, high-quality patch addressing a major shortfall in Blender (Mai was not exactly known by either the BF or the community).
Then why make that comment about the BF roadmap? Just because it makes for better sensational journalism here? You and I enjoy much more access to the inner workings of this project than to any commercial entity behind closed doors, so what is the point of complaining about them changing tactics just because some aspect isn’t put on the front burner?
No they didn’t:
Yes they do:
Stop spreading misinformation
Some Blender developers, like me, are not told top-down what to do. But I am not a paid developer, and I think that, as a pretty good rule, paid developers are answerable to project and development managers as to what they work on. Characterizing their work as “development hobbies” is a pretty bad mischaracterization.
And even as a totally volunteer developer, I personally at least consult with the staff developers on what I work on, and take input from the user members of the Modeling module (and also comments on this forum, which I read all the time) to help prioritize my work. That said, there are way more worthy, high priority items to work on than people able and willing to do them, so in choosing between them, the choice might seem arbitrary and wrong to some people.
Thank you for what you do, we appreciate it a lot!