To operate as something other than this requires a lot more money and full-time employees. I don’t get why people seem ignorant of how small and inconsistent the history of Blender development is; I only started communicating with Blender users in 2019 and even I understand this. Even with the recent influx of cash (not really all that much compared to what has been invested over the lifetime of some other software), it will still take time to hire more of the right people for the jobs that need to be done. Then it will take time for them to understand the code, time to come up with ideas, more time to half-implement them, more time to discuss whether that approach is likely to be a good solution, then time to find the bugs, get more community feedback, recode and retest, preferably write automated tests for some things, and then add more configuration options… Things take time, and Blender does not have the history of workforce and budget that other programs have enjoyed for decades.
I don’t actually think that will ever happen unless someone illegally reverse engineers the proprietary albino newborn blood magic powering zbrush.
It’s a great example of completely unrealistic expectations, really. ZBrush revolutionized digital sculpting and it’s their specialty. Blender will never beat it at its own game.
What Blender can do, and has been doing, is improve its digital sculpting toolset over time. Combined with hardware getting ever more powerful, this has left us, imo, with a very capable sculpting toolset. I don’t see Blender ever surpassing ZBrush in raw point-pushing potential though.
Yeah, it’s true there’s some seriously clever stuff going on with ZBrush, I always thought so.
That said, never underestimate the skills and determination of the Blender devs.
Most likely they would have to leave some current task half-baked and ignore very important bug reports to go focus on inventing new tech that can beat ZBrush. That would just further piss off a lot of already annoyed, impatient, and unreasonable people. It would still be less than half as performant as ZBrush, and a lot of people would ask why any time was wasted on it at all when there are so many other things so many people consider more important. Many people would assume the full dev team was ignoring the rest of Blender and working on sculpting, even though it was actually a single dev who isn’t being paid by the BF. Not being able to outperform ZBrush would somehow be blamed on Pablo Dobarro’s preference for stylized artwork.
I think this has already been mentioned: ZBrush doesn’t use true 3D data. That’s why it is fast.
That approach only works because it is a standalone program living in its own world.
You can’t make a traditional DCC like Blender do the same thing, because DCCs are all based on 3D triangle mesh data.
That “Pixol” thing only refers to doing stuff in document mode, that silly gimmick mode that newbies trip over. All the ‘brushes’ most certainly operate on your plain old 3D mesh data.
Yup, I think that Pixol thing only applies to the special document mode ZBrush uses; I don’t think it applies to an actual mesh. Even if Blender never manages to handle the same level of subdivision as ZBrush, its performance still evolves over time.
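To make the pixol-versus-mesh distinction above concrete, here is a conceptual sketch. ZBrush’s pixol format is proprietary, so every field below is an assumption for illustration only, not the real internal layout:

```python
from dataclasses import dataclass

@dataclass
class Pixol:
    # Illustrative only: a pixol lives at a fixed (x, y) position on the
    # 2.5D document canvas, so only per-pixel attributes are stored.
    depth: float            # distance from the canvas plane
    color: tuple            # (r, g, b)
    material_id: int        # which material shades this pixel

@dataclass
class MeshVertex:
    # A true 3D vertex is free-floating in space and must be connected
    # into triangles, which is what a DCC like Blender works with.
    position: tuple         # (x, y, z)
    normal: tuple           # (nx, ny, nz)

# A 1920x1080 document has a fixed budget of about 2 million pixols no
# matter how "dense" the sculpt looks, while a triangle mesh can grow
# without bound as you subdivide.
canvas_budget = 1920 * 1080
print(canvas_budget)  # 2073600
```

The takeaway is that a per-pixel canvas caps the working-set size up front, whereas a mesh-based sculpt mode has to cope with ever-growing geometry.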
I often hear Pablo enthusing about performance increases in the Blender Today broadcasts, so there’s clearly a constant push for performance increases right across the program.
One of the recent performance improvements has been reverted because it caused bugs and crashes. So if you see edit mode performance go down a bit… that’s why.
From the reverting commit linked above:
Changing the dependency graph is a can of worms and the result is a kind of unpredictable.
A different solution will be planned.
This is what I was worried about. It seems not much QA is going on after code changes. Testing on a handful of files would not be enough.
So them catching a bug in an alpha version that is released to be tested by the public now reads as a failure?
Interesting chain of logic being applied here.
The 2.8 alpha lasting a year and a half conditioned people to treat alphas as if they were actual releases, causing reactions like this and giving them the unrealistic expectation that add-on authors will always stay current with software that is constantly changing.
Also, there is a constant comparison to commercial software. If someone is using a compiled program, they are used to it being largely “done”.
Very few people have experience with beta testing software, let alone being on the bleeding edge of alpha code.
Not to mention, alpha testing usually occurs behind the closed doors of commercial firms, which allow only a select few to do it. Only recently have betas become more publicly accessible, and participation usually requires signing a non-disclosure agreement, with discussion taking place on closed forums.
Where did I say failure? What an interesting chain of logic.
I simply stated that devs need to test more to catch more bugs while in alpha.
It is not only the devs’ job to catch those bugs. It’s yours too. Since Blender is an open-source initiative, user involvement is implied.
You should not be worried about anything, instead test the alpha version and report a bug if you find one.
Name one application that manages to stay rock stable on a consistent basis even in the alpha state. This is harder to answer than it seems, since commercial vendors often don’t give out nightly builds to begin with (and even if they did, they tend to make you sign an NDA). In my experience, Blender’s dev builds are surprisingly stable; the builds of other applications often have serious issues because bugfixing is not a major priority until later in the cycle.
Besides that, Germano has written a new patch that does the same thing, but with a different approach and without the crashing issue.
Couple of points.
- Testing in Blender is lacking. That is not only my opinion. Read here to learn more about the state of testing, mostly from Campbell’s perspective:
Developing and utilizing various tools for testing is just better investment than depending on users to catch and report all bugs.
- I don’t think the success of Blender depends on high-poly performance as much as people think it does. Despite some major performance regressions, 2.80 is still the most successful release so far (I doubt even 3.0 will change that). A rich feature set and interactivity are more important.
- Even taking the previous point into consideration, it’s still perfectly fine for users to request higher performance, explaining how it affects their work and pointing out weak performance in various places in Blender. That’s called feedback, and devs should consider it when thinking about what to do next. There are a lot of other things they weigh (available time, dev preference, priority relative to other projects, etc.), so users should not expect their requests to be implemented in the near future.
Also, it’s worth noting that the edit mode performance project started precisely because of feedback like that.
- Users can wish that Blender will be as good as any other specialized software, but expecting it will only lead to frustration. The chances that Blender will be able to rival ZBrush in performance are very slim, and anything but complete success will be seen as failure. Instead, look at what Blender can do better than ZBrush, what features Blender can develop that will not be possible in ZBrush at all, and how you can use that to your advantage. You can substitute any other specialized 3D software for “ZBrush”. Blender blends its tools and features, or something like that.
I highly recommend watching this video. While it’s a little bit old (relative to Blender’s development speed), most of its points still stand.
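To make the earlier point about investing in testing tools concrete, here is a minimal sketch of an automated regression test. The `subdivide_edge_count` helper is entirely hypothetical, standing in for any real mesh operation; it is not Blender’s actual API:

```python
import unittest

def subdivide_edge_count(edges: int, levels: int) -> int:
    # Hypothetical stand-in for a real mesh operation: modelled here
    # simply as every subdivision level doubling the edge count.
    if levels < 0:
        raise ValueError("levels must be non-negative")
    return edges * (2 ** levels)

class SubdivideRegressionTest(unittest.TestCase):
    # Once the expected behavior is encoded in a test, any future
    # "performance improvement" that changes the result is caught
    # automatically instead of waiting for a user bug report.
    def test_zero_levels_is_identity(self):
        self.assertEqual(subdivide_edge_count(12, 0), 12)

    def test_doubles_per_level(self):
        self.assertEqual(subdivide_edge_count(12, 3), 96)

    def test_rejects_negative_levels(self):
        with self.assertRaises(ValueError):
            subdivide_edge_count(12, -1)
```

Run with `python -m unittest` in CI; a suite like this catches a regression on every commit, long before a nightly build ever reaches users.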
How are they going to provide smooth interactivity without the underlying performance optimizations? (Unless you prefer to interact at ~1 fps.)
Also, not to repeat myself, a rich feature set just for the sake of adding more features is (in my opinion) the wrong way to allocate the scarce resources the dev team has.
Right now there are fewer than 30 “official” Blender devs, maintaining all kinds of features:
- Animation / Rigging
- Texture painting
- Rendering (both a path tracer and a rasterizer)
- 2D workflow
- Motion tracking
- Video editing
- Game engine (removed with 2.80 at least)
And people are still asking for Blender to add more (e.g. become a CAD package).
Compared to any “specialized” software, those vendors have far more devs at their disposal, yet they provide only the core features of a 3D package and then focus all their effort on the one particular area that sets them apart, reducing maintenance cost and improving their signature workflow.
It’s a tough question that I can’t answer with certainty. Perhaps the first step should be developing tools with interactivity in mind and only optimizing after that. For example, Eevee 2.0 is being developed with that in mind: the core motivation is the “possibility to output all render passes efficiently”, specifically for the real-time viewport compositor. In this case performance is not a goal but a stepping stone; if it weren’t necessary for that goal, it could be set aside for later.
I don’t quite see where that comes from. Features exist to enable certain tasks and allow more creativity. If you look at some features in active or past development (Geometry Nodes, Asset Browser, Line Art, Grease Pencil, VR), none of them are “features for the sake of features”. Even Pablo, when sharing his latest tools nowadays, shows practical examples of how they could be used; the fact that some people can’t use them in their workflow is not an argument against developing them.
Yes, that’s exactly why developing tools (features) that blend with each other is the way to go for Blender. You can’t focus on a single area, since competitors will outperform you there. Instead you create synergy across all possible areas. There are no alternatives or competitors to that.