Hi, I listened to this Blender podcast… and Campbell said that, to him, Blender was getting worse… and that he didn’t like the direction Blender was taking?
Does anybody know what he means? Because I thought Blender was getting cooler: a cooler interface, more interactivity, Cycles, camera tracking…
I don’t have an exact quote, but he wasn’t talking about Blender simply “becoming worse”, rather about how it’s getting harder to manage and keep stable now that it’s so big.
What does “getting worse” mean in software?
Taking more memory, being slower, taking more disk space, being crammed with buttons, having bad, unintuitive controls, too much code, being hard to debug, etc. etc. etc.…
As I imagine it, every piece of software that expands the way Blender does gets worse in those areas, but gains more features.
I don’t know if it’s a good idea, but there is the possibility of breaking the Blender code into a suite of different programs, like the Adobe design suite: 3D, render, 2D/3D tracking, sequencer, compositing nodes…
with common libs (OpenGL, RNA, DNA, etc.…).
Would it be easier to maintain the code?
Well, I think that breaking Blender into different programs is not a good idea. Using 10 programs, especially when doing game development, is a nightmare. I know it is still harder to manage Blender’s user interface (from a developer’s point of view), but having everything in one place is the best option. Later, if Blender gains even more tools, the menus could be divided into groups, for example animation, graphics, etc.
Looking at the source, with the little I know of C, it seems extremely well structured. Code-wise, both the Python and the C could have guidelines; there is the MVC-style design written up at the beginning of 2.5 by Ton, a PDF with a flowchart of how events are passed, etc.
But I guess, just as there are many 3D tutorials, Blender could have coding tutorials.
Splitting is a bad idea, and Creative Suite is a really bad example of how to do it. The Adobe suite was supposed to get a unified/shared GUI (AppInterface) based on a non-overlapping workflow for Premiere; it hasn’t, and 5 years down the line the separate programmes still have corresponding menu entries in different places, shortcuts are still not unified, you cannot do calculations (e.g. 297/4) in Photoshop variable boxes, Illustrator finally got a Place command but no shortcut for it, the UIs are different shades of grey, the fonts look slightly different in each programme, etc. I downloaded the latest Nvidia drivers, which, joy oh joy, crash if I have a browser open and swap between the various open programmes. I have been told I HAVE to upgrade to CS 5.5 if I want this to go away. The pre-loaded design suite (INDD, PS, AI, Bridge) plus Thunderbird and Firefox takes up 4.31 GB of memory, with no data loaded, on Windows 7. WTF? Only Photoshop is fully 64-bit, and I need the 32-bit version if I need to acquire scans. The situation is slightly better on Mac, if you avoid the AppInterface and use the old GUI.
We need some context here. Please specify where Campbell mentioned this (in which podcast and at what time)? Otherwise all you’ll get is groundless speculation…
I would think it is all about people having reached the limits of the C programming language. It would be a good time to think about a change in programming paradigms.
I think they’re talking purely from a coding point of view.
From a user point of view, with a few exceptions Blender has never been better.
It still loads extremely quickly, and if you’re not digging into the outliner with a multi-million polygon mesh, then it’s still very quick and responsive. Of course it has rough edges, but it’s a work in progress.
In general, the larger the code-base, the harder it will be to maintain and add new features without introducing bugs unless you were somehow able to maintain some superb coding paradigm that is enforced with every new line and every patch (which in a large program like Blender that has many developers may be nearly impossible to do without slowing everything to a crawl).
As I’ve read on CGTalk, Autodesk apparently has similar issues with their apps due to the sheer amount of functionality they contain, and the same might be said for Adobe as well in some cases.
Anyway, the issue of large patches introducing new bugs and stability problems is one of the things the code review tool is supposed to help address (Brecht has been looking for people to make increased use of it).
I’ve touched on this somewhat related subject before. Every time a new feature is introduced by a volunteer coder, a large portion of the work eventually falls on the people who are manning the ship. Most of the time it’s not an easy job. That’s why the solution is to stop introducing new features until the main features are stabilized.
It’s a choice between a few stable features and tens of half-baked features with loads of instability.
Not to be so gloomy: I think it’s getting better at the same time, a lot of interesting projects are being worked on, and bug fixes too. But I do lament the extra complexity and the difficulty of getting into some areas of the Blender source these days.
2nd (back to my original comment about it getting crappier):
Yes, code becoming more complicated and growing faster than our core team is a worry.
Add to that GSoC projects with many students not maintaining their work.
3rd. No, this is not only from a code point of view; there are real issues/slowdowns with new code.
However, many of them end up not being much of a problem in practice.
… to rattle off a few:
The 2.5x animation system does many, many string lookups on properties, where 2.4x used a fast switch statement (we could cache lookups, but so far we don’t).
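To illustrate the sort of cost this adds up to, here’s a plain-Python stand-in (the class and timings are hypothetical, not Blender’s actual RNA code): resolving a property by string on every access versus resolving it once and reusing the result:

```python
import timeit

class Channel:
    """Hypothetical stand-in for an animated datablock with a 'location' property."""
    def __init__(self):
        self.location = [0.0, 0.0, 0.0]

ob = Channel()

def lookup_per_access():
    # string lookup on every evaluation, like resolving a property path each frame
    return getattr(ob, "location")

resolved = ob.location  # resolve the string lookup once...

def lookup_cached():
    return resolved      # ...then reuse the cached reference

print("per-access:", timeit.timeit(lookup_per_access, number=1000000))
print("cached:    ", timeit.timeit(lookup_cached, number=1000000))
```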
Moving vertices from Python is ~5x slower than it was in 2.4x, IIRC, because of update functions being called after each one.
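For anyone who wants to see what this means in a script: the loop below pays the per-element overhead on every vertex, while foreach_get/foreach_set (which do exist in the 2.5x Python API) move everything in one flat buffer. A minimal sketch, assuming it runs inside Blender with a mesh object named "Cube":

```python
import bpy

me = bpy.data.objects["Cube"].data

# Slow path: every assignment goes through the RNA layer per vertex.
for v in me.vertices:
    v.co.z += 0.1

# Faster path: copy all coordinates out, edit them, and write back in one call.
coords = [0.0] * (len(me.vertices) * 3)
me.vertices.foreach_get("co", coords)
coords[2::3] = [z + 0.1 for z in coords[2::3]]  # shift every z component
me.vertices.foreach_set("co", coords)
me.update()
```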
In 2.59, drawing a single button gives over 500 glVertex2fv calls (32 verts per button, drawn 8-10 times each with jitter to get a fake AA effect).
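To make the numbers concrete, here’s the rough shape of that loop, sketched with the 2.5x-era bgl module; the jitter offsets and function below are made up for illustration, this is not the actual interface code:

```python
import bgl

# 8 hypothetical sub-pixel offsets; the real table lives in the C interface code
JITTER = [(0.47, 0.77), (0.12, 0.56), (0.56, 0.12), (0.84, 0.44),
          (0.44, 0.84), (0.28, 0.28), (0.72, 0.72), (0.06, 0.06)]

def draw_button(verts):
    # 8 jittered passes over a 32-vert outline is already 256 glVertex2f calls,
    # before outlines/emboss push the total past 500 for a single button
    for dx, dy in JITTER:
        bgl.glBegin(bgl.GL_POLYGON)
        for x, y in verts:
            bgl.glVertex2f(x + dx, y + dy)
        bgl.glEnd()
```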
Compared to 2.4x there are more loops over objects with the animation system and the particle system (looping over all objects to check which keyframes to draw in the timeline, for example); in 2.4x we kept these kinds of loops to a minimum.
The 2.5x RNA API does many string lookups while drawing the UI.
Python 3.x has all strings as unicode, but Blender does not, so if you want to draw a string in Blender’s UI you can get 1-2 conversions per draw:
utf8 (Blender/C) -> unicode (py var in your script) -> utf8 (Blender/C, which draws).
For static strings Python caches the unicode -> utf8 conversion, so it’s not all that bad.
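Written out from the Python side, the round trip looks like this (plain Python, just to make the two conversions visible):

```python
raw = "Suzanne".encode("utf-8")  # utf8 bytes, as stored on the C side
label = raw.decode("utf-8")      # conversion 1: utf8 -> unicode str for the script
back = label.encode("utf-8")     # conversion 2: unicode -> utf8 handed back to C for drawing
assert back == raw
```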
Undo being used for operator redo: this basically saves and loads the file in memory each time. Crazy! But hey, it mostly works.
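A toy model of the pattern (the names here are invented, this is not Blender’s undo code): each redo restores a full snapshot of the state and re-runs the operator, instead of patching only what changed:

```python
import copy

class Scene:
    """Hypothetical stand-in for the whole file state held in memory."""
    def __init__(self):
        self.verts = [[float(i), 0.0, 0.0] for i in range(1000)]

def redo_operator(scene, snapshot, op, **params):
    # "load the file from memory": restore everything, even untouched data...
    scene.verts = copy.deepcopy(snapshot.verts)
    op(scene, **params)  # ...then re-run the operator with the new parameters

def subdivide(scene, cuts=1):
    pass  # imagine a real operator here

scene = Scene()
snapshot = copy.deepcopy(scene)  # undo push taken before the operator ran
# every tweak of an operator slider repeats the full restore:
redo_operator(scene, snapshot, subdivide, cuts=2)
```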
UI layout being written in Python is of course far slower than C.
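For context, here’s a minimal (real-API) 2.5x panel; its draw() function is re-executed in Python on every redraw, which is where the Python-vs-C cost above lives:

```python
import bpy

class OBJECT_PT_example(bpy.types.Panel):
    bl_label = "Example"
    bl_space_type = 'PROPERTIES'
    bl_region_type = 'WINDOW'
    bl_context = "object"

    def draw(self, context):
        # runs in Python every time the region redraws
        layout = self.layout
        layout.prop(context.object, "location")
        layout.operator("object.shade_smooth")

bpy.utils.register_class(OBJECT_PT_example)
```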
Some libs, mainly COLLADA, are huge. Blender can still compile to <10 MB (3 MB if you disable almost everything), but for users who don’t build we’re getting bloated because of the extra libs.
OK, OK… I think this is enough examples. Nevertheless, these issues are generally NOT bottlenecks, but they do accumulate to make Blender slower.
Regarding this being a problem with C and needing a new language:
eh, I don’t think so, really. Perhaps moving from C to C++ while limiting ourselves to certain features of C++ would help us, but after working on some of Blender’s existing C++ code, I’m not convinced.
So!
Hope this answers your question. I’m not trying to be gloomy or down on Blender; I just notice Blender doing stupid stuff sometimes and get grumpy! But it’s good motivation to work on improvements too.
So are there any plans to tackle these code issues one by one to remove their contribution to slowing down Blender? Getting these taken care of could help some users move their work from 2.49 to 2.6x (it’d surely help address the concerns about speed from those with either very complex scenes or a low-end PC).
They may not be “bottlenecks”, but getting better performance out of Blender in a way that doesn’t involve the user upgrading their hardware is always good.