Any issues with the optimized builds?

Quite frankly, I hesitate to use them on a regular basis, but if they are proven to be as reliable as the official build it would be stupid to spend 30-40% more time rendering than necessary.
My experience with http://www.graphicall.org/builds/builds/showbuild.php?action=show&id=380 has been flawless in 4-5 hours of use.

What’s your experience?

Jean

I’ve been using that build since it came out (first comment :D) and haven’t seen any problems with it. The files it saves are identical to a regular 2.43’s, and the render output is also the same. Everything seems stable to me. I also can’t figure out why the Foundation doesn’t release optimized builds.
Eugene’s build also includes ffmpeg support, which is really nice :smiley:

That is good news and then better news.
I’ve been using it myself to test answers to problems, mine and others’, on files I knew I wouldn’t keep, and it performed flawlessly.
As I said earlier, the Foundation would distribute optimized builds if a reliable maintainer came forward. Apparently reliability is not an issue anymore; then there’s the ‘forward’ thing, and I guess the comfort of the ‘at your own risk’ GraphicAll builds is hard to let go of. :slight_smile:

Thanks for the feedback.

Jean

One thing to keep in mind: different builds can produce different renders. For example, one build may place random elements like procedural textures in one spot while another build places them elsewhere. The reason is how the CPU executes the floating-point math: an optimized build crunches the numbers through a different code path (e.g. SSE instead of x87), so results can differ in the last bits. If you stick to the same build all the time, including an optimized one, the resulting renders stay very consistent. For a render farm you’ll want the exact same version of Blender on all the machines; mixing various builds across machines will result in inconsistencies in colors, patterns, etc.
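
A toy sketch of why the patterns shift (not Blender code, just an illustration): procedural noise typically hashes the lattice cell floor(coord), so two builds whose coordinate differs by a single last bit near a cell boundary sample completely different cells.

```c
#include <stdio.h>
#include <math.h>

/* Sketch only: a noise lookup usually hashes floor(coord) into a
 * lattice cell. If two builds compute a coordinate that differs by
 * one ulp around a cell boundary, they land in different cells and
 * the texture pattern visibly changes. */
int main(void)
{
    double a = 2.0;                 /* coordinate as computed by build A */
    double b = 1.9999999999999998;  /* same coordinate, build B, 1 ulp lower */

    printf("build A cell: %d\n", (int)floor(a));  /* prints 2 */
    printf("build B cell: %d\n", (int)floor(b));  /* prints 1 */
    return 0;
}
```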

Well, stability somewhat depends on which optimization options are enabled: there are safe ones and less safe, potentially dangerous ones (in terms of program stability and numerical stability).

Although using e.g. SSE2 instead of x87 already affects the exact outcome of float operations (due to the rather non-standard 80-bit precision of the x87 register stack), an SSE2 build should actually be closer to other platforms like PPC, SPARC or x86_64 (once 64-bit is finally officially supported). Correct algorithms should still produce correct results, although not identical to the last bit.
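
A minimal sketch of the x87 effect, assuming a 32-bit gcc (whether the mismatch actually shows up depends on compiler version, optimization level and flags like -ffloat-store): with -mfpmath=387 an intermediate result may live in an 80-bit register while the stored copy is rounded to 64 bits, so even a trivial comparison can change; with -mfpmath=sse -msse2 both sides are rounded identically.

```c
#include <stdio.h>

/* Try e.g. "gcc -O0 -mfpmath=387 demo.c" versus
 * "gcc -O0 -mfpmath=sse -msse2 demo.c" on ia32.
 * The volatiles keep the compiler from folding the division
 * away at compile time. */
int main(void)
{
    volatile double a = 1.0, b = 3.0;
    double c = a / b;       /* stored to memory: rounded to 64 bits */

    if (c == a / b)         /* rhs may sit in an 80-bit x87 register */
        printf("both sides rounded the same (SSE2-like behaviour)\n");
    else
        printf("excess x87 precision detected\n");
    return 0;
}
```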

Also, allowing the compiler to optimize for modern CPU pipelines and/or to use newer x86 instructions than the i386 had is usually safe.

But there are also optimizations that take shortcuts affecting float precision, error signalling, integer conversion etc. Certain things cost a lot of time, like handling denormalized floats or switching rounding modes, so programs get faster by rounding differently, flushing denormals to zero and so on. That is likely to break algorithms that specifically depend on IEEE float behaviour, as scientific computation often does… so be careful with e.g. gcc builds using -ffast-math.
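
For the denormal case specifically, here’s a small hedged sketch: on x86 targets with SSE, gcc’s -ffast-math links in startup code that sets the flush-to-zero/denormals-are-zero bits, so a subnormal result silently becomes exactly 0.0, and any algorithm relying on “x != y implies x - y != 0” can break.

```c
#include <stdio.h>
#include <float.h>

/* Sketch: compile with plain "gcc demo.c" versus
 * "gcc -ffast-math demo.c" on x86/SSE and compare the output.
 * The volatiles prevent constant folding at compile time. */
int main(void)
{
    volatile double x = DBL_MIN;     /* smallest normal double */
    volatile double tiny = x / 4.0;  /* subnormal under strict IEEE 754 */

    printf("tiny = %g -> %s\n", (double)tiny,
           tiny == 0.0 ? "flushed to zero (fast-math style)"
                       : "denormal preserved (IEEE behaviour)");
    return 0;
}
```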

As said, for render farms the exact same platform+build is important; certain types of procedural textures seem notorious for producing noticeably different results when float precision changes…

There are also optimizations that affect non-float operations and can break “imperfect” source code. For example, assuming all aliasing rules are respected causes gcc to emit several warnings (which in my experience can very well lead to invalid code since gcc 3.4) when you remove -fno-strict-aliasing from the compiler flags…
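
A hedged sketch of the kind of code that bites once -fno-strict-aliasing is removed (the function names are just illustrative): reading a float’s bits through an unrelated pointer type is exactly what gcc’s “type-punned pointer” warning is about, and under -O2 with strict aliasing the compiler is allowed to reorder or drop such accesses. Going through memcpy keeps it well-defined.

```c
#include <stdio.h>
#include <string.h>

/* Undefined behaviour: a float object accessed through an
 * unsigned int lvalue violates the aliasing rules; gcc warns
 * "dereferencing type-punned pointer will break strict-aliasing
 * rules" and may miscompile this under -O2. */
static unsigned int bits_punned(float f)
{
    return *(unsigned int *)&f;
}

/* Well-defined alternative: copy the object representation. */
static unsigned int bits_memcpy(float f)
{
    unsigned int u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void)
{
    printf("%08x\n", bits_punned(1.0f));  /* may break when optimized */
    printf("%08x\n", bits_memcpy(1.0f));  /* always 3f800000 */
    return 0;
}
```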

And last but not least, every compiler has a couple of bugs, which sometimes cause wrong code at certain optimization levels. For example, my kd-tree for yafray goes nuts with MSVC 7.1 when using /Ox, which turned out to be a well-known inlining bug that was never fixed until MSVC 8…