blender on G5 with GF6800

ok, it’s time - i’ll order a new mac with a GF 6800. the specs say it should run like hell. so i ask - does anyone have experience with blender running on that card?

marin

nope… but… a g5!? :smiley: ooooh that’s awesome. hey dude, let me know how it works out. are you going to run osx or linux? cu

just tested one - unfortunately not with the GeForce i wanted…

still, this machine is really amazing. working with huge files is a breeze (opening a 50MB Photoshop file took about 2 seconds, a 350MB one about 10 seconds).

the blender related stuff:

working with the sequence editor happens in realtime - PAL images can be streamed straight from the hard disk (so you’re not that dependent on RAM), and the new (and normally extremely slow) glow effect is almost realtime. rendering performance is good, but the difference compared to the old G4 is not huge - at the same clock speed (extrapolated), the G5 is about 25% faster.

viewport performance was “poor” - i forgot to ask which video card was inside, but according to the online store the 2.0GHz models are equipped with a GF 5200. i got some 7-8 fps on a 1/2 million faces scene in shaded mode, compared with 3 fps on my Athlon 600 with a GF 4MX… that is not a big improvement at all. of course other OpenGL applications like Maya or Lightwave should be tested as well - i hope this is something blender-related and thus can be fixed. i also hope the 6800 is much better.

if anyone has experience or important information to share about performance issues, please feel free to post it here.

marin

Wow I’m so jealous. The G5 with the liquid cooling sounds really nice. My laptop scares the crap out of me when the fan kicks in.

Also, I’ve heard the GeForce FX 5200 is fairly poor in terms of performance. I lose track of which graphics cards are the best. Video memory definitely helps though.

I don’t know if you will have a lot of RAM or not, but maybe a RAM disk could speed stuff up.
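If you want to try that, something like this should set one up on OS X - just a sketch from memory, so double-check the man pages; the size here and the device node it prints are only examples:

# make a ~256MB RAM disk (ram:// takes a sector count, 512 bytes each)
hdid -nomount ram://524288
# hdid prints a device node, e.g. /dev/disk1 - format and mount that
newfs_hfs -v RAMDisk /dev/disk1
mkdir /Volumes/RAMDisk
mount -t hfs /dev/disk1 /Volumes/RAMDisk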

I reckon that for Blender to really fly on the G5, there’s going to have to be some optimization in the code. As it says on Apple’s website, the biggest advantage the G5 has over the G4 is the 64-bit architecture and faster system bus, not the clock speed. Without optimized apps, this advantage is partly lost.

It’s exactly the same with the G4 compared to the G3. Without Altivec optimization in the software, a 500MHz G4 will run the same as a 500MHz G3, but with Altivec it can run up to 4 times faster. There was a link I posted before about how to automatically optimize for the G4:

http://gravity.psu.edu/~khanna/autovp.html
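For what it’s worth, just turning on Altivec code generation is a compiler switch (the spellings below are for Apple’s gcc and stock gcc respectively, and somefile.c is just a placeholder), though from what I understand the compiler alone won’t vectorize much - the real gains need vector code in the source:

# Apple gcc: allow Altivec instructions and tune for a G4 (7450)
gcc -O3 -faltivec -mcpu=7450 -c somefile.c
# stock FSF gcc spells it like this instead
gcc -O3 -maltivec -mabi=altivec -mcpu=7450 -c somefile.c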

Anyway, this might help for your G5:
http://developer.apple.com/performance/g5optimization.html

Lukep might be able to implement these optimizations, I’m not sure. At the very least, the dual processors would speed up rendering if they were taken advantage of in Blender. Maybe you could use a 3rd party renderer that supports them.

was the version you tested self-compiled? I think to get the real advantages of a G5 you’d have to compile it yourself…?

i used the official precompiled binary.

the last thing i compiled was… TurboPascal, “draw-me-a-rectangle”… ok, i have “programmed” some JS and AS, but i struggled even to set up and run a dev environment based on MinGW (C++). when it comes to modern compilers/libs/headers/includes etc., i’m kinda lost.

anyone have experience with XCode? ppl say it’s very good when it comes to porting, but what about optimization?

marin

I have used XCode and it’s brilliant at helping to port code. Making GUIs is easy, especially with AppleScript (though I haven’t done much AppleScript). As for optimization, XCode itself doesn’t do much except the usual function inlining and processor-specific code generation. I don’t think this helps very much for performance, but I haven’t benchmarked different optimizations, so I don’t know.

To get the real boost, as I said, you have to change the code. I’m pretty sure this entails looking for certain pieces of code that are processor-intensive and encapsulating them in new code that tells the compiler to optimize them using processor-specific features like Altivec. Without altering the code itself, I doubt XCode settings will affect the performance much.

Having said that, I reckon you might still get a speed boost over the general official binary if you compiled Blender yourself, because it would use the libraries etc. on your machine. I generally don’t compile stuff myself except for small projects that don’t have a binary, because I always get errors that take me days to figure out, and then I find I’m really no better off because the binary runs the same anyway.

You don’t use XCode at all when compiling blender, only GCC from the command line (I advise using the scons method). The way blender is built makes creating an XCode project a lot of work for no real advantage.
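Roughly, a command-line build looks like this - the CVS path is from memory, so check the developer pages for the exact server and module name:

# get the sources via anonymous CVS
cvs -d:pserver:anonymous@cvs.blender.org:/cvsroot/bf-blender co blender
# build with scons from the source root
cd blender
scons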

As far as optimisation goes, my official 10.3 build is only optimised for a generic PowerPC (at the max level, but without taking advantage of G4 specifics). I don’t even tune it for the G4, as that may cause problems for G3s and I have none to test on. My goal is making blender run smoothly on all macs.

For sure, a custom build optimised for the G5 (and all the libs too) should render noticeably faster (my guess is a good 15-20%, as building for the G4 alone is already a big improvement, around 10%). But that breaks compatibility with other PowerPCs, so I cannot do that for the official builds. You would also need to build yafray with exactly the same settings.

Adding Altivec code would scream (see the Intel-optimized port, which is 50% faster than normal), but it’s a huge and difficult task, and it needs someone experienced.

I have made some experiments on generic code optimizations which could benefit all platforms, but once again it is a huge task, and I have a more urgent errand on mac-specific code (cf. below).

As for interface speed, that is actually what I’m working on. Performance is not good because the windowing system used by blender (GHOST) is cross-platform, but it was ported to the mac at a time when the new optimized APIs were not all available. It relies heavily on old OS9 legacy code, which is slow and costly to execute. This will change. So don’t compare with a PC for now - it’s not a measurement of graphics card quality, but of the state of the code. In fact wireframe performance is not bad, but textured performance definitely is.

J-Luc (OS X 10.3 platform manager)

thanks j-luc for that information. so you say altivec optimization needs to be done at the source level? any chance of seeing an at least G4 compiler-level optimized version? i’m willing to make compile efforts myself (once i have my new mac - the current one is used as a server and i don’t wanna mess with it), maybe you can direct me to some specific information pages?

best regards

marin

Yes. It’s even harder than that. You need to refactor parts of the code around Altivec, and that requires a deep understanding of how it works.

It’s largely above my head - you need an IQ of 250 and at least 5 years of study.

[quote]any chance of seeing an at least G4 compiler-level optimized version? i’m willing to make compile efforts myself, maybe you can direct me to some specific information pages?[/quote]

the standard build is done with these optimization options:

-O3 -ffast-math -mpowerpc

those are generic and, while efficient, not the fastest, but they work on all processors.

For the G5, basically you change those options to:

-fast

this restricts the code to run only on a G5, and it may break some libs, which will then have to be recompiled.

then you add:

-falign-labels=16
-finline
-fobey-inline
-finline-limit=1024

and finally:
-mno-update
-mno-multiple
-fprofile-arcs
-freorder-blocks-and-partition
-fbranch-probabilities

The latter group must be applied file by file using profiling software, as those options can have side effects.
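To make that concrete, a full pass on one source file would look something like this - render.c is only an example name, and the exact profile-data files depend on your gcc version:

# standard generic build
gcc -O3 -ffast-math -mpowerpc -c render.c

# G5-only build with the extra inlining/alignment options
gcc -fast -falign-labels=16 -finline -fobey-inline -finline-limit=1024 -c render.c

# the last group needs two passes: compile instrumented, run blender
# on a typical scene to record branch data, then rebuild with it
gcc -fast -fprofile-arcs -c render.c
# ... run blender here; the profile data lands next to the object file ...
gcc -fast -mno-update -mno-multiple -fbranch-probabilities -freorder-blocks-and-partition -c render.c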

i wrote everything down. now i’ll need some time to fully understand it :slight_smile:

thx

marin

[quote]Yes. It’s even harder than that. You need to refactor parts of the code around Altivec, and that requires a deep understanding of how it works.

It’s largely above my head - you need an IQ of 250 and at least 5 years of study.[/quote]

… or just $500 should do the trick:

http://www.crescentbaysoftware.com/end_user.html

I think there’s a demo though.

Maybe that’s why I’m seeing glitches in textured planes. I’ve noticed that wireframe performance is fine, but as soon as texturing is on, it’s a bit slow. I look forward to the updates and to expunging that ugly OS 9 code once and for all.

BTW, as for optimizing for G4s and not having a G3 to test on: I could test it, as I have a G3, or you could just release 2 versions of the software - a general build and an Altivec build. I’ve seen developers do that. In fact, I think Apple’s MPEG-2 encoder does that, although they don’t release a G3 version because it would be too slow. If you try to use the MPEG-2 encoder on a G3, it just gives an error saying you need a G4.

I think that getting rid of the old code is the most important thing for now.

Sorry man, but macs aren’t good!
Get a
Windows XP machine, 1.5GHz or higher
512MB DDR RAM or higher
50GB hard drive or higher
GeForce FX 5700 or higher

if you get those specs, wow, blender will run amazingly!!!

Ooooo BAD thing to do in a thread full of Macheads like me. We chose Macs for a reason, and we will stick with them, end of story.

And why don’t you wait until some optimization is performed? Oh, I know why, cuz if you did your PC would run home crying…

lol

believe me, i know what i’m doing

i’m NOT running ONLY blender. i still do a lot of DTP. i’ll do a lot of video editing. and blender runs fast enough. apart from that, i hope that further development will improve performance even more.

so, please, don’t tell me what to do :slight_smile:

Macs are by far the best personal desktop computers you can buy today.

I mean, you might be able to get better raw benchmarked performance out of an x86 machine for the same money, but dealing with PCs is a nightmare. If you need raw power, use the PCs as render slaves, and use a decent computer (Mac) for your workstation.

It’s normal for PC manufacturers to ship buggy BIOSes, firmware, drivers etc. and fix them a few months down the track with Windows-only patches - basically the x86 platform as a whole is complete garbage, and x86 vendors are happy to ship you complete garbage.

The only place you won’t see these corners cut is the x86 server market, where machines usually cost more than their Mac counterparts.

Trying to plug in lots of PCI devices is a sure way to experience IRQ/ACPI/APIC problems, and you need major skills and experience in troubleshooting this stuff to get it all to work right.

With my iBook, you plug it in, turn it on, and you’re off and running. Anything you can plug into it just works, and I’ve never had a FireWire/USB/internal AirPort related issue. My IIfx is still happily running Apache on A/UX, and it rolled off the production line in 1991 or thereabouts.

I’ve had to deal with the insanity PC manufacturers put you through with their ‘buggy as f*ck but shipped to keep up with the other 16 cut-price garbageware vendors in the price category’ way too often, and I am basically not a happy PC-camper.

If you enjoy wasting hours troubleshooting the symptoms of massive design flaws in the x86 platform and its modern implementations to save a hundred bucks or so, then by all means buy a PC.

I own x86 PCs, SPARCs, SGIs, Macs and a bunch of others, and I have had a lot of experience messing round with them all. They all have problems and quirks, but if you want a good computer that isn’t going to make you want to throw it out the window in frustration, I’d say you can’t go wrong with a Mac.

running and understanding macs requires, IMO, the same amount of time as wintel. setting up a mac for home use is a piece of cake, but configuring it for certain networking scenarios, user rights administration, getting certain software to work etc. is not, and can be as “frustrating” as on the XP counterpart. at least that is my experience.

i admit, however, that things get better with every release of OSX. i have tried Panther only for a short time, but at least on the surface it’s impressive.

to stay on topic - i would still like to know if anyone has tried the new GF 6800 - especially with blender.

marin