Talking about "production readiness" of features

It’s something that runs counter to what has been considered “best practices” for decades, for example object oriented programming.

A lot of Blender’s code is evidently not data oriented. Out of the parts I’m familiar with (I haven’t read much of geometry nodes or Eevee code, so I can’t speak to those for example), Cycles is the one that’s following data oriented principles the most. On the kernel side, that is. The host/API side is OOP.
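For readers less familiar with the term: "data oriented" roughly means organising data by how it is processed (contiguous arrays, one per attribute) rather than by what it models (one object per thing). A toy Python sketch of the contrast — all names here are invented for illustration and do not come from Blender's code:

```python
import array

# Toy contrast between object-oriented and data-oriented layouts.
# Illustrative only; none of these names come from Blender.

class Vertex:
    """OOP style: each vertex is a heap object with its own fields."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

def translate_oop(vertices, dx):
    # Walks scattered heap objects: poor cache locality at scale.
    for v in vertices:
        v.x += dx

def translate_dop(xs, dx):
    # DOP style: one contiguous array per attribute, processed in a
    # tight loop -- the layout that SIMD and GPU kernels want.
    for i in range(len(xs)):
        xs[i] += dx

verts = [Vertex(float(i), 0.0, 0.0) for i in range(4)]
xs = array.array('d', (float(i) for i in range(4)))

translate_oop(verts, 1.0)
translate_dop(xs, 1.0)
assert [v.x for v in verts] == list(xs) == [1.0, 2.0, 3.0, 4.0]
```

Both versions compute the same thing; the difference is purely in memory layout, which is what matters for throughput on large data.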

1 Like

OK, so can we unpack that for us commoners?

I mean OK, so we know something has to be done.

What exactly needs to be done and what is the problem?

I have a suggestion that might be unusual (and I admittedly don’t know shit about coding), but I’d like to hear the opinions of people who know more about writing code and professional CG production.
The way I see it, the easiest way to handle big productions is to create a separate Blender that is similar to software like Clarisse or Katana.
Basically, it would be a Blender without ANY asset-creation tools and without any animation tools whatsoever. Modelling, unwrapping, sculpting, texture painting, animation tools - all of it can be removed.
What is needed is a (USD- and node-based) scene description and everything that has to do with shading, lighting and rendering. It is essentially a bare-bones version of Blender that is only concerned with these three disciplines (four actually, since the compositor has to stay too and could play a large role).
Assets would come in via USD, Alembic and OpenVDB (everything cached), but for static assets, formats like .blend, OBJ or FBX could also be supported.

The advantages are clear as well as the disadvantages.
USD is able to do deferred loading and it is super fast in loading tons of data. The ability to control everything in the scene via nodes makes shot editing and overwrites easy and non-destructive. With USD at its core, teams can work on gigantic scenes at the same time and updates and changes can be pushed through the network in quasi real-time.
USD is open source and done by Pixar with many other big names in the industry using and pushing it.
Implementing it seems to me the main work, everything else should be rather trivial. (I really have no idea, somebody correct me!)
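To make the "deferred loading plus non-destructive overrides" point concrete, here is a minimal hypothetical USD layer (the file and prim names are made up; this is a sketch of the mechanism, not anything from Blender or a real pipeline):

```usda
#usda 1.0
# shot.usda -- a hypothetical shot layer.
# The payload arc defers loading: a stage can be opened with the
# asset unloaded and only pull it in when an artist needs it.

def Xform "HeroAsset" (
    payload = @./assets/hero.usd@
)
{
    # Sparse, non-destructive override of a prim authored inside
    # hero.usd -- the asset file itself is never touched.
    over "Geometry"
    {
        over "Body"
        {
            token visibility = "invisible"
        }
    }
}
```

Because overrides live in their own layer, many artists can author opinions about the same scene at once, which is the collaborative workflow described above.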

The downsides: it’s a ton of work, and a lot of extra stress maintaining two programs instead of one. Blender would again have to compete with big companies in the 3D industry that put a lot of money and resources into their products.

There is a ton more to say about this, but I am in a hurry; I just want to put this idea out.
Let’s hear your opinions.

1 Like

Would something like that be a lot faster than a dedicated mode?
I mean there would be some overhead but would it be that much?

I suppose whether or not “Production Ready” really means anything may or may not be important, but I can tell you that while many parts of Blender can do good things reliably, at least some cannot. For instance, the rigid body world feature and the particle systems work well until you start pushing the numbers/accuracy higher; then errors start happening. I recently had trouble with the particle system in 2.93, in which Cycles was unable to initialize a render. Fortunately, the 3.0 beta was able to render that scene without an issue, so I use that. But the point remains that Blender is great and can most definitely be relied upon to produce films and games, but probably not AAA ones.

Stop putting the cart before the horse. You’re all naming cures and not the disease. Just asking for a rewrite from scratch or using some special APIs or development approach is premature.

It’s a simple thing: What is slow? Where is the slowdown, what part of the code? Why is it slow - is it compute bound, memory bound, a poor algorithm?

Only once you identify the bottleneck can you reasonably look for a solution. Otherwise, you might end up optimising something that wasn’t the problem to begin with (and waste hours of work), or throw the baby out with the bathwater and end up with a brand new version that is fast but has only half the features and crashes all the time.
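As a minimal sketch of that "measure first" step, here is Python's built-in cProfile pointed at two stand-in functions (the function names are invented for illustration; only one of them is the real hotspot):

```python
import cProfile
import io
import pstats

def cheap_step():
    # Trivial work -- not the problem.
    return sum(range(100))

def expensive_step():
    # Quadratic work -- the actual bottleneck.
    total = 0
    for i in range(300):
        for j in range(300):
            total += i * j
    return total

def frame():
    cheap_step()
    expensive_step()

profiler = cProfile.Profile()
profiler.enable()
frame()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats()
report = stream.getvalue()

# The profile names the hotspot; that is where the effort should go.
assert "expensive_step" in report
```

The same discipline applies to C/C++ with a native profiler: let the measurement, not intuition, pick the target.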

It’s only engineering, not voodoo.

8 Likes

Gaffer?

3 Likes

Doesn’t need to be a separate app. Take a look at Houdini Solaris for example.

2 Likes

That depends on the dedicated mode - if it works with deferred loading, then it would give similar performance, I guess.

You mean the overhead of the rest of the program compared with a separate tool stripped of everything else?
Good question - I have no idea. If it is done well, I wouldn’t expect there to be much overhead.

I am rather concerned with manageability - having these separate is probably MUCH easier on the devs, since they wouldn’t have to compromise in case of conflicting interests with core Blender, and also on the user, who can concentrate on what is needed in their workflow.

Pretty much, but easier to use and more mainstream, with more renderer support. Last time I checked, Gaffer didn’t have OpenVDB support, and to be quite honest, the program is too hard for my silly brain.

Gaffer has had OpenVDB support since v0.42.0.0. :slight_smile: Personally I love Gaffer a lot. It can be hard to use because there are a LOT of ways to use it, but it’s really powerful on larger productions where you need to templatize shaders or workflows.

3 Likes

Ok so who is working on this?

Anyone?

And how long do you think it would take to sort out where and how to develop so that we can manipulate and render large polygon counts?

Without breaking everything else?

Is anyone in the process of sorting any of this out?

Depends on what “this” is. There are countless ways of modifying geometry in Blender, there are three render engines, and are we talking about one object with 100M triangles or 1M objects with 100 triangles each? Plus various options on top.

One example: take a single, say, 20M polygon object, put a diffuse texture on it, render in Cycles. Easy. Now add a tangent space normal map. Press render. Yes. That slow. (Try for yourself.)
I know what causes this, and I can assure you, you do not need to rewrite Blender to fix it. Nor will fixing this performance problem fix any of the other bottlenecks.

2 Likes

Ok.

Polygons and objects.

Both scenarios. Let’s stick to that. Insane numbers of single objects with low poly count and a small number of objects with insane poly counts.

Object mode mostly for a point of discussion.

Then now, again. Who is working on improving this?

Is anyone working on this?

I’m curious: are there situations where object-oriented programming is preferred over data-oriented programming for performance reasons (based on modern coding practice)?

No offense, but what you’re asking is like telling your doctor “it hurts when I do this, what are you going to do about it?” on the phone. There are countless things that it could be, and I have no idea which one you’re talking about.

1 Like

Well, leaving aside all good programming practices and advice, ignoring code maintainability, etc., and taking runtime performance as the only criterion, I can’t - in this short amount of time - think of a scenario where OOP code would have a significant performance advantage over DOP.

It would be wrong to dogmatically make DOP the one true way and to categorically ban all other approaches from the code base. It’s not like one concept would supersede the other and completely replace it. They’re concepts, they’re tools, they’re ways of approaching a problem, and work best when they’re used in the right place.

What is the right place?

The answer is the one I gave earlier: don’t put the cart before the horse. Don’t let DOP be your hammer that makes every problem look like a nail.

Describe the problem in detail. Narrow it down. Identify the source of it. Determine the bottleneck. Then, and no earlier, is the time to look at your toolbox and think about what tool can help you solve the specific problem you have in front of you.

You can of course do all the examination first on a whole class of issues, even a whole portion of the program. If they all happen to show the same bottleneck, then it could be the time to take your tool and apply it broadly to an entire section of the code base.

If you’re reading a common theme, then I hope it is this: a lot of the work does not involve writing code, but reading and analysing code. Parts of that don’t require you to be good at writing fast code; some don’t even require you to read or write code at all. And what can be very hard and long work may end up looking like just a tiny commit in the code base.

5 Likes

I am not asking for your diagnosis. Nevermind how long it will take.

Again, is anyone working on any part of all of or any aspect of or taking this one subject on of…

Polygon and Object performance in Blender?

It really is a simple question.

Aside from Cycles-X, which is a many-month effort by several developers to improve performance?
Look here: https://developer.blender.org/project/view/103/

Other than this tag (which is probably not being used as often as it should be), you can look at every performance related issue on developer.blender.org and see who it is assigned to and whether or not that issue has seen activity.

And of course, if you’re experiencing an issue that does not have a ticket yet, file one and make sure to respond timely and in detail to any questions for clarification that may come back. That is the best way to improve the situation.

1 Like

In short, the answer to my question is no.

Thank you.

I had a quick Google before I posted, so I assumed (maybe incorrectly) that DOP is harder (takes more time) to understand, hence I was more interested in the performance aspect.

So just to check I understand your explanation of “fixing” code: let’s say the extrude tool is slow. You find what portion of code controls the extrude tool, and maybe it is just an inefficient code loop/path, so you fix/tweak that. Maybe it would benefit from DOP code or something else, so you implement that. Or it could be that the code is actually acceptable, and you need to step up a level and find any issues there.
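As a toy illustration of the “inefficient loop/path, so you fix/tweak that” case (nothing to do with the actual extrude code; all names invented):

```python
# Two versions of the same deduplication routine. slow_dedup is
# O(n^2) because `in` scans a list; fast_dedup keeps identical
# behaviour but uses a set for O(1) membership tests.

def slow_dedup(items):
    seen = []
    out = []
    for item in items:
        if item not in seen:   # linear scan on every iteration
            seen.append(item)
            out.append(item)
    return out

def fast_dedup(items):
    seen = set()
    out = []
    for item in items:
        if item not in seen:   # constant-time hash lookup
            seen.add(item)
            out.append(item)
    return out

data = [3, 1, 3, 2, 1, 3]
assert slow_dedup(data) == fast_dedup(data) == [3, 1, 2]
```

The observable result is unchanged; only the cost of producing it drops, which is what a targeted performance fix looks like.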

Is that a correct interpretation?