Ubuntu beats OSX 10.6 in 3D & system benchmarks.

Since people are talking about speedups in the new OSX 10.6, here’s a plug for Ubuntu.
http://www.phoronix.com/scan.php?page=article&item=ubuntu_karmic_leopard&num=1

How’s this relevant to Blender, you ask?

  • At last, someone benchmarks with NVidia and their proprietary drivers - which is what you’ll use if you’re serious about Blender/3D on Linux.
  • It tests system performance across different areas; Blender is a bit of a beast, so I’d expect Blender to perform better too. A pity Blender isn’t tested though - who uses Sunflow anyway?

With 64-bit OSX support nowhere in sight and Durian using an all-Linux setup, maybe it’s time to… (here’s where the OSX porting devs can show up!)

…OK, this isn’t totally Blender news, but when people start threads whenever their GL driver hiccups, I’m not so worried about posting it.

Yeah it’s nice to see Ubuntu (and Linux in general) performing so well, but somehow I’m not so surprised :wink:

About PTS: is there some way to include Blender? There’s the “benchmark” under Help->System right now, but I don’t know how useful that is… If a “real” standard Blender benchmark exists or could be made, so it could be added to the suite, I think it would bring a lot of good with it - primarily exposure, but also so ATI and Intel can be made aware of where their drivers fall short with regard to Blender. Nvidia might have a use for it too, but most likely so they can boast about their superior Linux performance :stuck_out_tongue:

Maybe I should migrate to Ubuntu when I start studying soon. The games in Windows are killing me (Steam games). With Ubuntu I’d at least be trying to learn something.

Re: ATI/AMD/Intel - sometimes I must sound like an NVidia fanboy, but really it’s only that Intel and ATI are so far behind when it comes to OpenGL for anything other than games.
There are parts of the OpenGL API that Blender uses but hardly any games do (the selection buffer, 2D texture drawing, front-buffer drawing).
This sucks because it means that even though we have a nice API, we can’t be sure everyone’s hardware will support it.
Or, more often, it’s supported but not hardware accelerated (the driver does it in software), which still gives a poor user experience.
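To make that concrete, here’s a minimal sketch (not actual Blender source) of the GL_SELECT picking path - the sort of call sequence Blender leans on and games never touch:

```c
/* Hedged sketch, not Blender code: the legacy GL_SELECT picking path.
 * Games never call this, so consumer drivers often leave it as an
 * unaccelerated software fallback. */
#include <GL/gl.h>

#define SELECT_BUF_SIZE 512

int pick_objects(void (*draw_scene)(void))
{
    GLuint select_buf[SELECT_BUF_SIZE];
    GLint hits;

    glSelectBuffer(SELECT_BUF_SIZE, select_buf); /* hit records land here */
    glRenderMode(GL_SELECT);   /* nothing is rasterized from here on */
    glInitNames();
    glPushName(0);

    draw_scene();              /* caller tags objects with glLoadName() */

    hits = glRenderMode(GL_RENDER); /* back to normal; returns hit count */
    return hits;
}
```

If the driver punts that path to software, you get the slow selection lag some Radeon users complain about on dense scenes.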

Apparently (from AMD/ATI guys at Siggraph last year), they are not even interested in proper OpenGL support for Radeons - those are gaming cards. For OpenGL support they push their expensive FireGL line.
I even heard from a graphics developer I worked with that ATI disables features on purpose (in this case, deliberately via a driver update) on the Radeon cards, so as to stop non-gamers using them. Slow GL select and older drivers working better than newer ones ring a bell?

As for benchmarks, we should at least have a benchmark pack that can run from the command line and give some useful performance results - something like the sketch below. 3D view performance would be cool too, but less easy to automate.
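Even a rough harness would be a start (a sketch only - the .blend name is a made-up placeholder; the -b background and -f render-frame flags are real Blender options):

```c
/* Rough sketch of a command-line benchmark runner. Assumes a
 * hypothetical "benchmark.blend" test scene in the working dir.
 * "blender -b file -f 1" renders frame 1 with no UI. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    time_t start = time(NULL);
    int status = system("blender -b benchmark.blend -f 1");
    double secs = difftime(time(NULL), start);

    if (status != 0) {
        fprintf(stderr, "render failed (exit status %d)\n", status);
        return 1;
    }
    printf("frame 1 wall-clock time: %.0f seconds\n", secs);
    return 0;
}
```

Wall-clock seconds are coarse, but for renders that take minutes it’s plenty; a real pack would loop over a set of scenes and average a few runs.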

Yeah, and this is Phoronix, THE over-hype site. :slight_smile:

I’m running OSX 10.5 and Ubuntu 9.04 on my iMac and I must say I prefer being in Ubuntu. GIMP and Inkscape perform way better here.

From what I tested, Blender performs a little better in Snow Leopard than in Leopard. There’s something odd going on where some apps are a lot slower and others are quicker with OpenGL. Hopefully that will be fixed in 10.6.1, but Blender isn’t affected.

Sunflow being 50% faster is quite a jump and there’s a big improvement in the Monte Carlo benchmark and the Tachyon raytracer, probably all related.

I wonder if the 64-bit tests were accurate, as 10.6 boots a 32-bit kernel by default on anything except the Xserve - you have to hold down 6 and 4 while booting to get the 64-bit one. That brings dramatic improvements for server-type stuff like databases.

Yeah, the devs need to get the OpenGL context ported to Cocoa so we can get 64-bit and multi-threaded OpenGL (needs 1 line of code or something).

Of course, if you guys are doing raytracing on Durian, and Macs cost 50% more but do raytracing 50% faster, then I guess it comes out even - except you get a nicer desktop (system-wide font anti-aliasing, PDF printing, window double-buffering, a system-level standard media API) and you can natively run Photoshop, Illustrator, Final Cut, Premiere, Fireworks, Dreamweaver, Flash, Logic, Acrobat Pro, After Effects; you can actually afford Shake, Color, Cinema 4D, ZBrush, Mudbox; and if you just need to chill, Call of Duty 4. :slight_smile: <- is this the smug emoticon? He looks kinda smug.

Ok, that’s just being cruel.

@osxrules

  • To infer that Macs are twice as fast for all raytracing because of Sunflow is drawing a long bow. Since Sunflow is written in Java, it seems more likely that some part of Java on the Mac is faster, especially given such a big difference in performance.

  • Having Blender work on 64-bit Macs is not a small project; a developer was working on it for quite some time but didn’t finish it.

  • The situation with Mac/Windows having better commercial app support is well known, but using commercial apps would make Durian less open - people couldn’t use our files/tutorials etc. without buying some app, which goes against the purpose of an open movie.

  • Comparing APIs and other aspects of an OS can go on forever… OK, it’s slower, but it has nicer fonts and APIs?.. It’s those same tricky APIs that mean Blender isn’t running in 64-bit yet (or our lazy devs, or the BF/Apple for not funding it - take your pick).

I’ve never been a fan of Mac… well, anything. I think you hit the nail on the head when you said “who uses Sunflow anyway” - there was a time it was NEARLY an attractive renderer, but that was what, 2-3 years ago? I use Windows… I like it about as much as I like OSX… unfortunately a lot of commercial development is restricted to those two OSes… I hate it; I would love to go back to Ubuntu.

50% not double. It was also around 50% in the separate Monte Carlo benchmark and the other raytracer benchmark. I was kidding though, benchmarks are just another category of statistic and can’t be trusted no matter who comes out on top. Well unless it’s Geekbench of course (ignore servers and overclockers) :RocknRoll:

http://browse.geekbench.ca/geekbench2/top?page=1

I guess you’d have to compile all the dependencies as 64-bit too but if it’s been done for other platforms, it should be possible. Maybe mixing the C++ stuff in causes some headaches. Since the OpenGL stuff would be Cocoa, you’d have to use a special Objective-C++ style of working, which isn’t that nice, especially with a code-base the size of Blender’s.

Call of Duty 4 would be ok though. You have to take a break some time. :wink:

I don’t think the APIs are completely to blame, nor the devs. The transition to 64-bit causes issues for a lot of people. On the Windows side, those who chose to go 64-bit have had to deal with driver issues and software incompatibilities. Apple tried to make it so that you didn’t pick either 32-bit or 64-bit but got both at once, so that you don’t hit so many incompatibilities. This meant it wasn’t ready until 10.5, plus they still had a separate architecture to deal with, as did the devs.

After 10.6, PPC is no longer supported and Adobe have even dropped CS3 support. There’s no rush to 64-bit but PPC support is officially done so if it’s easier, Blender from 2.5 onwards can be Intel-only and highly optimized. There comes a time when legacy code holds projects down in the interests of compatibility. Houdini for the Mac doesn’t even have a 32-bit version.

Call of Duty 4? nethack, bzflag and Tux Racer should be enough for anyone!

The 64-bit Blender port was a problem because Apple doesn’t have a 64-bit Carbon API, so Blender needs to use Cocoa.
From what I understood, having our C++ GHOST code interface with Objective-C’s Cocoa isn’t simple.

Building deps and other issues are trivial in comparison.

These tests don’t prove anything, as they don’t use the features in Snow Leopard that speed things up.

What makes 10.6 Snow Leopard fast is GCD (Grand Central Dispatch), which handles threading in a clever way. Also OpenCL, which offloads tasks from the CPU to the GPU (or vice versa), can speed things up enormously. Snow Leopard is also designed to run apps compiled with Clang & LLVM. The test linked above only compares GCC performance, which is beside the point.

Even considering all this, the only areas where OS X loses are with games that are not well optimized for the platform.

Hey William! Fair play :) - to reply to your points…

  • Can’t speak to the improvements in OSX’s threading, but I remember Brecht had to fix some bugs that only affected OSX because its threading model was so much slower than on Linux and Windows. So good on them for improving, but it’s yet to be shown how well it compares against other systems.

  • You mention OSX losing in games only; if you read on, there are many tests with other applications that measure threading/IO etc.

  • Clang and LLVM are awesome projects, but Clang is only a C/ObjC compiler; its C++ support is incomplete - if you look at their webpage there’s a lot missing. For example, it fails to compile Blender.
    Aside from that, I did some tests with Clang on Linux recently and found it slightly slower than GCC for rendering (but very close - 4% slower, using GCC for the C++ and Clang for the C, which all the render code is written in).

So GCC will be used for a while yet. OpenCL and Clang have great potential, but saying they make the benchmarks irrelevant is speculation. OpenCL will be available on other OSes too, probably well before Blender makes any use of it.

Edit: some Blender renders with Clang, and Blender from ZR: http://t1.minormatter.com/~ddunbar/blender/times.html
There are some artifacts; no doubt these glitches can be improved over time by the Clang/LLVM/OpenCL devs.

I think the point being missed is that benchmarks are supposed to be fair across different platforms; it doesn’t make a lot of sense to compile with Clang here, GCC there and ICC somewhere else and expect to compare the results in any meaningful manner.

So (other than marketing departments) they take a standard set of ‘tests’ and run them to ‘measure’ the abilities of the underlying OS without resorting to optimizations that would skew the results towards one system or the other.

One could also argue that this test is biased against Ubuntu, since they didn’t compile a custom kernel with all the latest whizz-bang features turned on and fine-tune the system for these tests… well, if anyone really cared enough to make that argument, that is.

–edit–

Oh, and as a bit of an aside, I read a while back that one of the BSDs was working on adopting Clang as its default compiler, so it shouldn’t be too long before C++ support is up to speed.

But Carbon has been slated for deprecation for a while. While the issue may not be simple to fix, if you guys are really going to support OSX, that is something that was known and should have been addressed already, no?

It’s the same with Adobe: they knew Carbon was getting dropped, but they act like they got caught with their pants down or something.

Either way, let’s hope the issue gets resolved at some point.

@apoclypse, right on - probably because it’s a nasty job, no Mac devs have wanted to pick this up yet.
“if you guys are really going to support OSX” - that’s a problem with volunteer projects, since it’s confined to developers who use Macs too.
Maybe Mac users can raise some funds to employ a Mac dev to do the port if they’re really keen.
…or switch Blender to use X11 on OSX :) - it’s supported, isn’t it?

@Uncle Entity, to me it’s not about the benchmark being fair, it’s about it reflecting what the user would experience when doing these tasks (without tweaking the system much, or at all).
If Ubuntu applies some patch to GCC, or OSX uses Intel C++, OpenCL or whatever, that’s fine as long as it’s what you’d get as a user.

The reason I don’t buy William’s argument is that so many of these technologies are hyped up, and until we see the final result you can’t be sure. A few years ago you could have claimed that CUDA would be the next big thing and that any C program benchmark was invalid because it didn’t test with CUDA.
Probably OpenCL will make a few things blazingly fast in certain situations; otherwise things will trundle along as usual.

Re: OpenCL supporting C++ - it’s taken them 3 years to get this far, and at the speed Clang C++ development is moving it could be a while yet. Remember CUDA was C only, so I’m not optimistic about this getting done soon.

Ideasman: It depends what you want to measure, and what your definition of performance is. As far as I’m concerned, performance isn’t about kernel IO speed and RAM access times; it’s about real-world application usage - the cumulative effect of optimizations and technologies combined to create a responsive and pleasing piece of software.

Performance can even be subjective, because an app may feel faster than it actually is, by ensuring that the UI is constantly responsive, and using other tricks that make things feel fast.

Additionally, technologies like OpenCL and Grand Central Dispatch are not hazy, vague ideas. The built-in apps in SL use these technologies now, and you can feel the difference when using them. The advantage of GCD over other threading solutions is that it manages threads so that you always get just the right amount of granularity, which is the problem with manual or fixed threading, such as what’s used in Blender today - too few threads and the system sits idle; too many and the system spends too much overhead managing them. Additionally, with GCD, making apps threaded is almost no work, while avoiding issues like deadlock.
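To illustrate the point - a sketch of libdispatch’s plain-C API, not anything from Blender: the app just describes the iterations and GCD decides how many threads service them.

```c
/* Illustrative sketch of GCD's C API (libdispatch). The library,
 * not the app, picks the worker-thread count for the loop. */
#include <dispatch/dispatch.h>
#include <stdio.h>

#define NUM_TILES 64

static void render_tile(void *context, size_t i)
{
    float *results = context;
    results[i] = (float)i * 0.5f;  /* stand-in for real per-tile work */
}

int main(void)
{
    float results[NUM_TILES];
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* All 64 iterations are parcelled out across whatever cores are
     * free right now - granularity becomes the system's problem. */
    dispatch_apply_f(NUM_TILES, queue, results, render_tile);

    printf("last tile: %f\n", results[NUM_TILES - 1]);
    return 0;
}
```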

Agreed on the GCD thing. It’s pretty damn cool if you ask me. SL does in fact use it, and for heavy applications like the ones I use (which aren’t usually related to 3D, but audio), SL’s performance sweeps the floor with its previous incarnation. The Finder alone has seen a significant boost in performance, as has general disk IO. This is all due to threading and how the system smartly uses the resources it has.

OpenCL is not some nebulous lib that may or may not see any real-world use. While this isn’t related specifically to OpenCL, since it was written with CUDA, V-Ray RT is a shining example of using the GPU for what it’s good at. OpenCL may not have any real implications now, but I’m sure it will be used heavily in the future, especially when it comes to 3D software. Perhaps most of the compositing stuff in Blender could use OpenCL for blurs, masks, point tracking, etc.
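For instance, a compositing blur maps naturally onto an OpenCL kernel. A hypothetical sketch (kernel only; the host-side context/queue/buffer setup is omitted):

```c
/* Hypothetical OpenCL C kernel: horizontal box blur over an RGBA
 * float image. Each work-item produces one output pixel - the
 * embarrassingly parallel shape compositing nodes tend to have. */
__kernel void box_blur_h(__global const float4 *src,
                         __global float4 *dst,
                         const int width,
                         const int radius)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    int row = y * width;

    float4 sum = (float4)(0.0f);

    for (int dx = -radius; dx <= radius; dx++) {
        int sx = clamp(x + dx, 0, width - 1);  /* clamp at image edges */
        sum += src[row + sx];
    }
    dst[row + x] = sum / (float)(2 * radius + 1);
}
```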

First off, I love Macs - been using them since 8.5, and I still keep my G3 nearby for sentimental reasons - but sometimes their great technologies and advancements are hard to support, so you don’t really see their true potential in a lot of cases; plus they advance so fast that the next big thing comes along, requiring new support all over again. Apple is really bleeding edge, almost too bleeding. The iApps support their new features, but it seems hard for 3rd-party companies to keep up, or to want to keep up with the limited user base. Just my opinion from a long-time user.
As a side note, as a broke-ass user who can’t afford Adobe and other commercial apps, it’s hard for open-source projects to keep up with Apple, and who really wants to do graphics in X11 with no Wacom support? To me, Linux was the best choice of OS for FOSS.

@William & apoclypse, I wasn’t suggesting these technologies were vaporware, just that many real-world apps don’t take advantage of them; to say they WILL make a difference in the future is speculation - what if Blender never gets OpenCL support?
Agreed there is a lot more to it than raw performance, but in the same breath - with all these new technologies, the fact that OSX loses most data-crunching benchmarks to Ubuntu is not insignificant.
Even though the tests don’t go into UI responsiveness (where I’m sure Ubuntu would lose), compiling software, compressing a file and game FPS are still real-world tests.

To be fair to OSX, they may also be making a tradeoff: a better user experience for slightly worse performance. That’s reasonable for a desktop OS versus a more server-oriented system (the Linux kernel).

@sausages, X11 Wacom support sucks at the moment (though it works for me); before 2.5 I’d like to fix this - using hard-coded strings at the moment is pretty crap.