Blender Edit Mode Performance

The fact that some work is continuing after the sprint ended is an encouraging sign: it means the devs are learning how valuable it is for users to have high-performance tools for the creation of large scenes. Expect a significant expansion of Blender use in pipelines when 3.0 brings major improvements in edit mode, OpenSubdiv, Everything Nodes, Cycles, and perhaps even sculpting.

7 Likes

Let’s not pretend the devs are idiots, please.

6 Likes

I am not accusing the devs of being idiots, but it is a fact that we in the community, for up to 15 years now, have been trying to convince the BF to break out of the old-school FOSS model of organic development, which can lead to rapid growth in the feature set but can leave said features unpolished, unoptimized, or even unfinished.

That is why the recent developments in the way the Blender project is being handled are good news: we are seeing more and more instances where that old model has lost its influence (leading to a much better application).

13 Likes

Then, users will complain about the lack of new features.

I personally would rather have a few solid features with solid performance than plenty of features, none of which perform “as intended”.

Just look at any “specialized” software: it does one thing and does it well, which is why it is adopted in pipelines.

Can’t say the same thing about an app that tries to cater to way too many fields without performing well in any of them (both in terms of features and performance), because the combined core dev and community effort is scattered all over the place pushing new features left and right that “might” be useful to someone at some point in time (or not), increasing the technical debt and maintenance cost with an ever-increasing backlog of bug reports.

How many times has a new feature been added prematurely (hey, we need this new feature asap!), collected a pile of related bug reports, then been scrapped and rewritten from scratch less than a year into its existence?

1 Like

Let me finish that sentence for you. Enormous success. That is what you meant to say, right?

The Blender Foundation has never had more support and more resources than it has now in its entire history. They must have done something right to get here.

Sure, you can look back and say “if only this” or “if only that”, but that is rather cheap, is it not? Nobody starts off doing things “perfectly”. It’s called growth. And in the beginning, when resources are sparse, you might have to work very differently compared to later on, when more resources become available to you.

We are still relatively fresh off a heavy refactor. It is pretty insane for an open source project to pull that off, imo, especially one as complex as Blender. They always said they would optimize when the time came around for it. Now they are doing it with great success. In a way, the fact that they can optimize Blender to the degree they are could be seen as a testament that the foundation they worked so hard on is sound.

I am as happy as everyone to see performance receiving some love. I am also following the OSD thread, and the improvements are marvelous! I haven’t posted my latest experiences yet, but I can move rigs around with subdiv on that I never could before. It’s like butter, and Kevin is still working on it.

Direct link to latest subdiv build (as of this post):
https://builder.blender.org/download/experimental/blender-3.0.0-alpha+subdivision_work.ae100c8957cd-windows.amd64-release.zip

Look, my point is that it’s easy to go “ooh, if only they had done this or that before”, but it’s a little cheap and easy. Just be happy they are focusing on this area now (as they promised they would, btw). There is no need for extreme conclusions about them dissing the oh-so-faulty and bad “old-school FOSS model of organic development”. You really are undervaluing everything they have achieved so far.

It’s a good thing they reevaluate how they work and develop, but a lot of it is simply external factors that allow them to move this way. The way you paint it, where they stop being stubborn and finally come to their senses, really isn’t justified.

4 Likes

I’m quoting this statement mainly for the last part “and perhaps even sculpting”.

I can’t remember which version of Blender I last tried this with, but I once tried to subdivide a whole human mesh to the point that I’d be able to sculpt in fine wrinkles and other skin detail, the idea being that I could then bake it to a normal map. I used to be able to do that when I owned ZBrush, and it’s the only aspect that makes me regret selling it, since the last time I tried, I could not get to that sort of subdivision level in Blender, not without bringing it to its knees anyway.

So I’m completely with you on that one, especially as ZBrush is used in so many pipelines and, the last time I tried, Blender could not match that performance. For me, this made Blender pretty much useless in that regard (unless working on smaller, less complicated objects, I mean). So I welcome the day when I’m able to sculpt detail in Blender with the same silky-smooth response I used to get from ZBrush.

The interface and workflow of ZBrush used to drive me insane (which is why I sold it), but there is no denying its ability, and I can’t wait for that sort of fluidity to reach Blender. Still using the latest LTS here btw, so no idea how it compares to Blender 3.

But yeah, I do believe that when it comes, sculpting performance improvements are going to be one of the most influential moves when it comes to users’ pipelines. Suddenly being able to handle the sort of resolution that normally only ZBrush can handle will cause quite a stir, I think!

What helped Blender survive and thrive in the first 12 years…

  • The fact that it was FOSS and GPL attracted a group of passionate developers; many were volunteers, others saw it as something worth doing despite the pay being below market rates. An app with a small userbase and little presence in pipelines can go on indefinitely as long as you have funding for at least one developer.
  • Massive stagnation on the higher end (along with stricter EULAs and price increases) made it worth it for many people to learn ‘the Blender way’ and deal with the many quirks that defined the app over the years.
  • Major stagnation on the lower end as well (Bryce and Carrara). Had the commercial R&D engine remained at full bore, Blender would not have been able to keep up. A look at the commit logs explains this better, as Blender’s current development rate is comparable to a commercial vendor with a healthy R&D streak (one month of development today is equivalent to the first 4 to 5 years of Blender as an open source project).
  • More focused improvements just when they were needed. Focused projects like what we are seeing now were a bit less common, but when they did occur they came at a time when Blender really needed them. When the focus on Cycles and BMesh came in, for instance, Blender was losing its ability to compete because of an old render engine riddled with limitations and a modeling system that could not have advanced tools due to its aging data structure. I do admit that Modo would’ve been pretty alluring to me if neither of those projects had succeeded (as I was hitting the limits of what Blender could do visually).
  • The price tag being free, which allowed many who would not otherwise have access to CG technology to create with it (as even the cheapest 3D solutions with decent tools were a few hundred dollars at least).

If you look at the historical data, one can argue that the real watershed moment was the 2.8 push (which, among other things, sought to take on and resolve a long list of longtime quirks). This is evident in the dev fund skyrocketing from less than 50,000 euros annually a few years ago to over a million today. Progress in professional circles technically ticked up a bit starting in the 2.6x series, but that pales compared to the progress today.

A bit of a history lesson there. :slight_smile: I remember all of that, including helping to test BMesh. We take a lot for granted nowadays. I still remember playing with modelling-only apps like Nendo and Wings3D. Pretty crazy to think about.

I agree there was a watershed moment with 2.8 with the streamlining of the interface.

I think you make valid points about what contributed to Blender’s success. My point is that, even with those taken into account, I feel that portraying development up till now (or until recently, whichever you prefer) as poorly as you did is unwarranted.

It’s as simple as that. We can argue about the fine details, but I really don’t want to, since it will just clutter the thread and not add anything of value. I can understand where you’re coming from, but I don’t agree with your conclusions, nor with the certainty with which you express them. :man_shrugging:

I do agree that it’s great they are focusing on performance now, because it was necessary and it will benefit a lot of users (everyone, really).

1 Like

I don’t think people on the internet who just read big bullet points, watch a 1-2 minute YouTube video, and draw their conclusions would agree with you. They also think bug fixes or solid performance should just happen.

To operate as anything other than this requires a lot more money and full-time employees. I don’t get why people seem to be ignorant of how small and inconsistent the history of Blender development is. I only started communicating with Blender users in 2019 and even I have an understanding of this. Even with the recent influx of cash (not really all that much compared to the cash invested over the lifetime of some other software), it will still take time to hire more of the right people for the jobs that need to be done. Then it will take time for them to understand the code. Then it will take time to come up with ideas, more time to halfway implement them, more time to discuss the likelihood of that approach being a good solution, then time to find the bugs, get more community feedback, recode and retest some things, preferably write automated tests for some of it, and then add more configuration options… Things take time, and Blender does not have the history of workforce and budget that other programs have enjoyed for decades.

4 Likes

I don’t actually think that will ever happen unless someone illegally reverse-engineers the proprietary albino newborn blood magic powering ZBrush.

It’s a great example of completely unrealistic expectations, really. ZBrush revolutionized digital sculpting, and it’s their specialty. Blender will never beat it at its own game.

What Blender can do, and has been doing, is improving its digital sculpting toolset over time. This, combined with hardware getting ever more powerful, has left us, imo, with a very capable sculpting toolset. I don’t see Blender ever surpassing ZBrush in rough point-pushing potential, though.

1 Like

Yeah, it’s true there’s some seriously clever stuff going on with ZBrush, I always thought so.
That said, never underestimate the skills and determination of the Blender devs :wink:

Most likely they would have to leave some current task half-baked and ignore very important bug reports to go focus on inventing new tech that can beat ZBrush. That would just further piss off a lot of already annoyed, impatient, and unreasonable people. It would still be less than half as performant as ZBrush, and a lot of people would ask why any time was wasted on it at all when there are so many other things so many people consider more important. Many people would assume the full dev team was ignoring the rest of Blender and working on sculpting, even though it was actually a single dev who isn’t being paid by the BF. Not being able to outperform ZBrush would somehow be blamed on Pablo Dobarro’s preference for stylized artwork.

6 Likes

I think it has already been mentioned: ZBrush doesn’t use true 3D data. That’s why it is fast.
It only works because ZBrush is a standalone program living in its own world.
You can’t make a traditional DCC like Blender do the same thing, because all of them are based on true 3D triangle mesh data.

http://docs.pixologic.com/getting-started/basic-concepts/the-pixol/

2 Likes

That “Pixol” thing only refers to doing stuff in document mode, that silly gimmick mode that newbies trip over. All the ‘brushes’ most certainly work on your plain old 3D mesh data.

5 Likes

Yup, I think that Pixol thing only has to do with the special document mode ZBrush uses; I don’t think it applies to an actual mesh. Even if Blender never reaches ZBrush’s level of subdivision handling, its performance still evolves over time.

I often hear Pablo enthusing about performance increases in the Blender Today broadcasts, so there’s clearly a constant push for performance increases right across the program.

One of the recent performance improvements has been reverted because it caused bugs and crashes. So if you see edit mode performance go down a bit… that’s why.

From the reverting commit linked above:

Changing the dependency graph is a can of worms and the result is kind of unpredictable.

A different solution will be planned.

This is what I was worried about. It seems not much QC is going on after code changes. Testing on a handful of files would not be enough.