the default (open collada) addon VS the "better collada" addon - help gather examples!

sigh

Am I now supposed to come up with a stupid joke analogy about the guy that always talks about doing optimization later, but then releases anyway without ever optimizing at all?

I’ll also point out that the Schlemiels in Blender are so pervasive and ingrained in the basic architecture that fixing them after two decades wouldn’t be quite so easy anymore.

Also, let me say this: string concatenation - especially in Python - is trivial. None of the other methods described in the article I linked are any harder to get right. Concatenating strings with the “+=” operator may well be idiomatic (and efficient) in other languages; in Python it is not. Doing it this way despite equally simple and better alternatives is a bad decision from the outset.
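
To be concrete, the kind of equally simple alternative I mean is the join idiom - a one-line sketch, not code taken from the addon:

result = ''.join(str(i) for i in range(100000))

All the pieces are built first and joined once at the end, so there is no quadratic copying.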

There certainly are situations where a simple O(n^2) algorithm is preferable to a more complicated but more efficient one. This is not one of them: you don’t have to measure performance to know it. Just don’t do string concatenation like this in Python and you’ll always be fine.

The problem is not that efficiency is unimportant - it matters, if only because you’re spending less to do the same work. The problem is that you can’t optimize while the program is not feature complete and its output is not verified. Again, this comes down to the simple fact that to compare efficiency you have to start from correct input and output.
In the case of this Collada importer, the key questions at the current stage shouldn’t be whether the thing is as fast as it can be or whether the code style is fancy enough, but whether it works and whether it can be integrated into the Blender (add-on) code base.
Once you get that, absolutely go for performance.

For what it’s worth, the comments in the code review aren’t just about performance. They’re also about maintainability. Without a consideration of maintainability, the add-on can’t be considered suitable for inclusion in master. That doesn’t mean it can’t be worked on in a branch (like the one that was set up), but if there’s any expectation that other people (i.e. core Blender developers) maintain the code, it has to meet the same standards as every other included add-on.

I’ve come to believe that these are just things you read somewhere and are now parroting without actually understanding what I am talking about.

When there are two ways to solve a problem which are equally simple (as in this case), arbitrarily choosing the massively less efficient solution just results in worse code for absolutely no good reason. If it is a common problem and you went through with this choice throughout the entire project, you would have a lot of completely pointless work to fix the issue at the end. However, chances are that you have no time to fix it at the end of the project, and since your crappy solution still works, you just release it.


One of the major reasons why it was refused is the (supposedly) inefficient string concatenations, which would just be bad code quality.

However, as I found out, this (anti)pattern of string concatenation has actually been optimized for certain cases, at least in the Python implementation that Blender is using. I didn’t know about this, even though it’s been like that for quite a while - shame on me!

I have tested three cases:

Case A: similar to how it is done in the Collada script


s = ""
for i in range(100000):
    s += str(i)

Case B: string concatenation is performed on a class member, from the outside


class WrappedString:
    def __init__(self):
        self.s = ""
    def add(self, e):
        self.s += e

s = WrappedString()
for i in range(100000):
    s.add(str(i))

Case C: joining a list of strings, generally recommended


l = []
for i in range(100000):
    l.append(str(i))
s = ''.join(l)
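
To reproduce such timings, a minimal harness could look like this (my own sketch; cases B and C can be wrapped the same way):

import time

def measure(label, func):
    # time a single run of func and report milliseconds
    start = time.perf_counter()
    func()
    print("%s: %.0fms" % (label, (time.perf_counter() - start) * 1000))

def case_a():
    s = ""
    for i in range(100000):
        s += str(i)

measure("A", case_a)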

The timings for the CPython implementation (as used in Blender) are:
A: 59ms
B: 3221ms
C: 40ms

As you can see, string concatenation with += in case A is almost as efficient as joining a list, but more importantly it has the same O(n) complexity. However, the optimization fails to kick in when concatenating to a string that is a member of an object, where the complexity is once again O(n^2). (As far as I understand, CPython can only grow the string in place when it holds the only remaining reference to it; with an attribute, the instance keeps a second reference alive, so every += has to copy the whole string.)

The timings for PyPy, which is supposed to be faster than CPython by virtue of its JIT compilation, are quite different:
A: 59339ms
B: 59736ms
C: 18ms

The O(n) optimization found in CPython does not exist in PyPy and, on top of that, naive string concatenation is about 1000x slower than in CPython. At least case C does show a speedup of about 2x.

Conclusion: The “better” Collada script gets lucky and avoids most Schlemiels through CPython’s optimization. Considering alternative Python implementations, if one wants to write portable Python code, it is still best practice to avoid naive string concatenation. It’s also worth noting that the same quadratic-concatenation issue exists for repeated strcat calls in C and for Java string concatenation in a loop, so it is definitely something programmers should at least know about.
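
For completeness, another portable way to build a large string piece by piece - one that does not rely on the += optimization at all - is io.StringIO (just a sketch, not what the script uses):

import io

buf = io.StringIO()
for i in range(100000):
    buf.write(str(i))
s = buf.getvalue()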

I don’t really need to try to understand what you write, I already know it, just in a better way. What interested me was that your post felt like a fart in the wind: not very much added, but a bad smell. I felt compelled to rectify it, so that other participants in the discussion wouldn’t lose focus on the relevant questions. Like an air cleaner, if you want.

Personally, I think the biggest challenge going forward in terms of this addon would be the reconciliation of the different development styles between the Godot developers and the Blender developers.

Each side has already made arguments as to why they think their development style is superior (even though there are indeed pros to both, as seen by the quality of both Blender and Godot). Reduz has now put the development task on pause so he can concentrate on the release of Godot 2.0, but I do hope that afterwards he and Campbell can really sit down and work out a plan that would be a good compromise between their styles and their ideas (along with acknowledging the possibility of not having all of their preferences remain intact).

This thread illustrates one of the development problems Blender has, I think. Instead of betting on multiple horses, it wants to have everything as a single best solution, while we don’t live in a world where there is a single solution for everything.

Why not simply add multiple export options, e.g. game Collada AND normal Collada? Problem solved.

We might then also add previous similar cases like this, e.g. include alternative Cycles engines, things that never came out of beta phase, or alternative viewport rendering (for those who liked colored wireframes; I didn’t, but some did).
There is a strong wish to reduce screen clutter, but all serious 3D programs have it, and a few more options wouldn’t hurt.

It’s listed as a blocking issue in the original review (which links to documentation on the topic).

Not when there are conditions like this set by the contributing developer:

Not sure if this is related, but whenever I try to export a DAE file from Blender to import into Aurasma in a .tar file, the file never works. If I export the same model from Maya it works immediately, no issue, and the two DAE files look completely different. Right now I can’t use Blender for something as ridiculously simple as exporting a DAE file.