Talking about "production readiness" of features

I got the ideas exactly from that.

Here is the link:

I will paraphrase in these bullet points:

  1. Interaction is key. Blender should continue to have better and better interaction tools.

  2. This will be accomplished by various optimizations, but not by increasing Blender’s ability to handle arbitrarily large amounts of data.

Full stop. We have been asking and asking and asking for better handling of data. Like forever!

To my point: when you are working with high-quality assets like those used in “real productions”, by a “large studio” or an “ambitious freelancer”, and not cartoons… you need to handle large amounts of arbitrary data, be it to retopo a ZBrush sculpt or to create a complex landscape or industrial environment.

This is the increase in interaction we are asking for, not just for our own needs, but because until Blender can do this, it will not be able to work comfortably in a professional pipeline with high demands.

This is not an opinion. It is demonstrable fact.

And so a pipeline has to be smooth from one end to the other. This is why professional packages use “the other method” and try to accomplish this throughout the pipeline.

XSI used Gigacore… Maya is not as good, but instead of refactoring they worked out a viewport cache. Works OK… I will have to play with it more…

Bottom line is it has to be a solution from end to end… Not…

  1. A new Rendering editor designed to handle large amounts of arbitrary data.

Edit: Even the LightWave developers knew this. They developed Hydra, which is what is in ChronoSculpt and which also started with CORE. Rumor is that it exists in LightWave 2018 and beyond…

It was to be the first step in completely refactoring LightWave, giving it a modern mesh and data handling engine. There are some demos of it in action over in those old blogs.

Maybe an unpopular opinion here, but I think that we shouldn’t judge whether software is “production ready” only by high-end production standards. Blender is used by freelancers and small teams as well, and such people earn their money and have careers without ever working in the so-called high-end industry (by that I mean feature animation, big TV series, Hollywood films, etc.). Instead, probably most people working with 3D rely on advertising and design work… which is fine, and the software should, in my opinion, have everyone in mind. Maybe Blender isn’t at “production level readiness”, but I’m pretty sure that the team is trying. And there is life beyond the high end as well.

So, to be honest, I think it is a little aggressive and rather elitist to consider work relevant only if it’s done for high-end productions.

On the subject of Geometry Nodes itself, personally, I would have liked the system to mature a little more. It can and probably will change a fair amount of the workflow in a short time frame, which will only cause confusion and make people mad.

I generally share this opinion too. Yet I believe what is being discussed here is that Blender should aim higher and target the production level with the higher demands that top studios have.
And the only truly reliable way to do that is to have close collaboration with those studios.
The byproduct for individual artists would be better tools anyway, because big studios need to be efficient at all production stages.
But if Blender always aims at the lower, less ambitious goals, I can hardly see how it can satisfy bigger production needs at a high level without tons of workarounds and optimizations on the studio’s side. Maybe only by accident.
In short, if Blender tries to compete at satisfying top production needs, I fail to see any downside for the artists who use Blender in less demanding workflows.
Blender could possibly form partnerships with such studios, which could even donate to development directly or in the form of technologies. I see many ways this could be approached, probably even individually with each studio.

Other software packages are also used by freelancers and small teams; the difference is that they can scale up to bigger projects with ease.

High-end production is the benchmark that pushes software to the highest level, and in that way everyone benefits, the small and the big.

Exactly… If it works for hobbyists, that doesn’t mean it will also handle whatever high-end users and studios need. But if it handles the needs of high-end users and studios, it’s pretty much guaranteed it will also handle whatever low-end users and hobbyists throw at it.

The downside would be that I would not use some tools even though they would perfectly fit my needs. But if they are tagged as “experimental” or something like that, I will probably not use them in production (micro displacement being the exception). I would assume that the tool is too unstable when actually it is not.

“Production ready” is probably just a useless term. It should probably be replaced by “stable”, “scalable”, and the like.

I don’t disagree. As I said, I’m confident that the team is working to get there. This is what 2.8x and beyond is all about, right?

I understand what you are saying, but the huge benefit here is that if Blender becomes stable and ready for high-end production, it creates huge job opportunities for a lot of Blender users. This is already beginning to happen as Blender gains more ground, but it could be 20 times better. It would also save a lot of money for huge corporations working in the 3D industry, and might even mean higher pay for artists due to those corporations having lower production costs.

It would also mean better tools and workflows, since corporations with money would be actively involved in Blender’s development. That would draw in very talented and experienced devs because of how important the software would be in high-end production, which means better job opportunities and higher pay for a lot of people here who can code for Blender.

The benefits are endless. If everything is perfect or near perfect at the top, everything below benefits.

Features would need long and extensive testing (taking into consideration how much time a big studio needs to use a new feature to the fullest) and follow-up development, hence artists would see a slowdown in development. Something like USD might only be announced in a release this year, instead of in the previous one that shipped USD export.
I would like to quote a message by one of the core BF devs from the task about the new OBJ exporter:


With a “production ready” policy, that iterative approach wouldn’t be allowed. Instead of, say, artists getting the new, faster exporter in 3.1, reporting bugs, and getting the altered functionality with bugfixes in 3.2, we would wait until 3.2 or even 3.3 (given how long it takes studios to really dig into a new feature) to get a “production ready” OBJ exporter. That’s 3-6 months later.

Also worth mentioning that, while we discuss this as if the BF were solely responsible for how development goes, that’s not entirely true, since Blender is FOSS.
2.93 got commits from more than 90 people, and while paid BF developers are the main contributors and reviewers of the code that goes in, a policy of “we will approve your code, but even if we don’t see any problems it will sit in beta for 6-12 months before the actual audience, rather than testers, sees it” is not going to make people excited.

I get what you’re saying. I also love Blender’s fast development cycle. Sometimes it’s even hard to keep up with all the changes.

When I suggest that Blender should work with big production studios, I don’t see it in a way where it would necessarily slow down development.

As I see it, the main benefit for Blender in collaborating with big studios is getting insight into what artists and these production houses need to solve their day-to-day challenges. There are a lot of smart people working at studios with valuable experience the Blender Foundation can capitalize on. With a good idea of where the industry is heading and what tools artists and productions require, the BF can plan ahead accordingly, staying ahead of the curve and providing a tool that can solve these current and future challenges. The BF would also serve its users, since they would already be familiar with these tools and thus employable at these studios.

The BF could probably form such a collaboration with one or two studios. Since the introduction of the LTS versions, I think this development could go in separate branches with a longer development cycle, to make sure it meets higher production demands.

But this could help the BF adjust the bigger picture for developing Blender, making it even more desirable in higher-end productions.

I see it as a win for everyone.

From my point of view, a lot of that is covered by the modules.
https://wiki.blender.org/wiki/Modules

I have been using Blender as a freelance artist for 13 years now, for both client work and my own work. The last 10 years have been full time as a paid freelancer.

But even as a freelancer I found the limitations right away and began looking for other software, as soon as I could afford it, to replace Blender in my production pipeline.

I first replaced it with XSI, and then with Maya.

Blender remained as a modeling tool, and then, as rendering and animation improved, it started working its way back in.

The main point is that not everyone who freelances is working on projects where these needs go unnoticed.

The reason is that, like it or not, the benchmarks are not set by the lowest common denominator.

The benchmarks are set by professional software reaching for the highest goal possible.

2.8 and even 3.0 is a huge leap in the right direction.

But unless they fully get that they need to set the goals higher than modest interaction, Blender will never make it.

Yeah, I get it, but I think we should let time tell. Blender pre-2.8 is a thing of the past and the right changes are very new. I’m confident that with 5 more years of steady development, Blender will get there.

Time is not my concern. It is the missing elements of the plan that concern me.

Unfortunately that won’t happen until the developers are forced to support a high end production. Then finally they will say…

“So this is what they were trying to say!”

Now… just to be clear. I have never been trying to say that Blender is not improving fast enough or that the planned changes are not long overdue and welcome.

Here is precisely what I want to hear from the developers, and am not hearing:

Yes, we do have a plan to refactor the data handling. And this is going on in parallel to all of the other projects in development. And the plan is to merge these efforts in the future. This will have a positive effect on all interactivity, in all editors, in all daily editing/animation tasks for small or large productions and also for renders.

What I am hearing that concerns me is this:

We have all of these new projects planned. Oh, data handling, yes of course. But that is just another way to develop. We are not taking that approach. We will make a render editor that can handle lots of data.

What do you mean by data handling? Literally every operation has to do with data handling. Are you talking about something like data-oriented design?
And why does it have such a huge impact on everything?
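
Since I brought up the term: here is a minimal, hypothetical sketch (plain Python/NumPy, not Blender’s actual code) of what data-oriented design usually means in practice. The layout of the data, not the individual operation, is what dominates performance at scale:

```python
import numpy as np

# "Array of structs": every vertex is a separate heap object.
class Vertex:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

def translate_aos(vertices, dx, dy, dz):
    # Chases pointers to scattered objects; cache-unfriendly at large counts.
    for v in vertices:
        v.x += dx
        v.y += dy
        v.z += dz

# "Struct of arrays": one contiguous block of coordinates.
def translate_soa(coords, offset):
    # coords is an (N, 3) float array; one vectorized pass over linear memory.
    coords += offset

coords = np.zeros((1_000_000, 3), dtype=np.float32)
translate_soa(coords, np.array([1.0, 0.0, 0.0], dtype=np.float32))
```

The same edit on the same data can differ by orders of magnitude depending purely on how the data is stored and traversed, which is why a data handling refactor would touch every editor and every operation.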

If you have a slow importer, it’s not that big of a deal to wait a couple more seconds or minutes for something to import, as long as it comes through correctly. It’s not a showstopper.

On the other hand, if you have a fast importer but the data comes in buggy/broken, or doesn’t come through at all, then you are screwed if that’s the format the client has sent you and you are relying on that part of the data.

Seeing that they have the priorities the other way around is a bit ridiculous 🙂

A slow importer is not a showstopper; a broken one is.

(Keep in mind I am talking about importers specifically. I do realize there are other areas where bad performance can be as much of a dealbreaker as lack of stability, such as mesh editing.)
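
To make the priority concrete, here is a hypothetical fragment (not Blender’s actual importer code) of the kind of parsing I mean: validate while parsing and fail loudly on malformed data, even at some cost in speed, so broken input is caught at import time rather than at render time:

```python
# Hypothetical OBJ-style vertex parsing that prefers correctness over speed.
def parse_vertex(line: str, line_no: int) -> tuple[float, float, float]:
    parts = line.split()
    if len(parts) < 4 or parts[0] != "v":
        raise ValueError(f"line {line_no}: expected 'v x y z', got {line!r}")
    try:
        x, y, z = (float(p) for p in parts[1:4])
    except ValueError as exc:
        raise ValueError(f"line {line_no}: non-numeric coordinate") from exc
    return (x, y, z)

def parse_obj_vertices(path: str) -> list[tuple[float, float, float]]:
    # Collect every vertex; any malformed 'v' line aborts the import with a
    # message pointing at the offending line instead of silently dropping data.
    vertices = []
    with open(path, encoding="utf-8") as fh:
        for line_no, line in enumerate(fh, start=1):
            if line.startswith("v "):
                vertices.append(parse_vertex(line, line_no))
    return vertices
```

A fast path can always be added later; data that was silently mangled on import usually can’t be recovered later.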

Perhaps the best person to ask would be Pablo:

So, development and new technical innovations regarding the asset creation workflow will focus on having the most advanced real time interaction possible instead of handling large amounts of data.

This is almost a contradiction in terms. But ask him what he means by data handling.

There is another way of approaching a software design, which is prioritizing handling any arbitrarily big amount of data. The main selling point of these software is that it can actually render and edit the data, leaving interactivity as a “nice to have” feature when it is technically possible to implement. So, when you are designing for handling any data size, you can’t assume that the tools will just scale in performance to handle the data. Software designed like this will try to make sure that no matter how many textures, video files or polygons you want to edit, you will always be able to do it in some (usually, not interactive) way.

In effect, all other professional software has it wrong. But in fact, this entire paragraph is wrong.

I have no idea what other software he is talking about.

Maya?

Much better data handling overall. And a large part of what made Maya so popular, especially in the beginning, was its interactive nature. In its time it was unparalleled, and this tradition has continued to this day, always at the forefront of interactive viewport tools.

XSI?

It is just faster all around. Hands down. Gigacore. Its viewport performance with large scenes is probably still the fastest: 12 years ago it was handling scenes of thousands of objects and tens of millions of polygons.

https://www.qoobee.de/news/artikel_60.html

ICE by the way. Its amazing physics and particle system, with ICE rigging and ICE modeling and fluids and… and…

The name?

Interactive Creative Environment.

It is so named because of the interactive nature of the tool.

Over a decade old.

Zbrush?

Hands down the best handling of data and interactive tools for sculpting. Period.

So…

I really don’t know where the idea he is putting together in this paragraph even comes from. Nothing in the real world actually works that way. It is just an arbitrary idea, made up it seems, to justify not targeting the handling of large amounts of data.

But why don’t you ask him if you still have questions? It seems clear to me.

So, Pablo is working on an improved way to handle data. I wasn’t aware of that, and Google tells me more information is likely in this article (hope that’s the right one):
https://slacker.ro/2021/06/15/asset-creation-pipeline-design/

Some modes had limitations regarding what could be done. This was solved by refining the way Blender handles data and making it more flexible.
Thanks to those modifications, the modes can be redefined to make them more powerful. According to my understanding, this can mean both making operations faster and likely using less memory. That would be positive overall.

According to my understanding, that’s what you would like to hear. Or did I get something wrong?

No. He isn’t.

During the past months I’ve been working on the design of what I call the “Asset Creation Pipeline”.

So, development and new technical innovations regarding the asset creation workflow will focus on having the most advanced real time interaction possible instead of handling large amounts of data. This means that performance will still improve (new features will need performance in order to keep the real time interaction working), but the features and code design won’t be targeting handling the highest possible poly count or the largest possible UDIM data set. The focus on performance won’t be on how high the vertex count in Sculpt Mode can be, but how fast Blender can deform a mesh, evaluate a geometry node network on top of it and render it with PBR shading and lighting.

Is this clear now?

What you quoted changes nothing regarding my comment. I was already aware of that.

Could you explain what is bad about the planned changes?
