Blender on PS3

“… and have fun with Fedora Core 5! You will be able to install any app as long as it has a PPC build of it.”


And since Blender has a Linux PPC build, I am assuming it will work out of the box, with 6 cores to render with?! :smiley:

If no one has done the test yet, I am willing to give it a try… But first, I need to ‘acquire’ a PS3. :confused:

[edit]: it seems no distro has hardware acceleration for the RSX yet, so there is only a framebuffer, up to 1920x1200.

PS3 Linux Documentation

I want to try this out too. I hope Nvidia will release a graphics driver in the near future; that would rock. Maybe we will even see companies add memory to them for a fee, like they have for some of the Pocket PCs?

Did you see Ton’s comment on one of the last patches about baking? Now Blender has dual-core baking, and multi-core is in the works… That would be great…

I read an interesting article about using the PS3 for a render farm… This actually seems to be a quite heated debate right now. We will see, I guess. But there are a lot of people trying to make it work.

fun Fun fun

Check it out: it looks like there is a shipment coming in here on the 1st. I wonder how many people know. I am tempted to make some money before I get one for myself.

It would be really wonderful if Blender could run on the PS3. But I cannot afford one anyway, so…

I wouldn’t buy a PS3 specifically to run Blender, actually:

  1. The memory of the PS3 is limited to 256 MB, which limits the editor and the sculpt
    mode (maybe memory use will be optimized in the future, but right now that amount of memory will force you to stay under ~300k polygons).

  2. It is still not clear whether the Cell SDK released by Sony already takes advantage of the 6 SPEs, or whether unoptimized code can take advantage of them (through GCC).

So I would wait a while for more info, if the only purpose of
buying a PS3 is Blender/rendering. (Things look better if you also need a media center,
some home computing, and/or a console.)
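A back-of-the-envelope version of that polygon estimate. The reserved-memory and per-polygon figures below are my guesses, not measured numbers:

```c
/* How many polygons fit in RAM, given a guessed fixed reservation for the OS
 * plus Blender itself, and a guessed all-in cost per polygon (vertices,
 * faces, normals, editor overhead). */
long max_polygons(long ram_bytes, long reserved_bytes, long bytes_per_poly)
{
    return (ram_bytes - reserved_bytes) / bytes_per_poly;
}
```

With 256 MB of RAM, ~100 MB reserved, and ~512 bytes per polygon, `max_polygons(256L * 1024 * 1024, 100L * 1024 * 1024, 512)` comes out around 320k, the same ballpark as the 300k figure above.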

Not feeling so cagey about PS3 anymore

I am pretty sure the PS3 is making use of the cores. The video I watched on Google showed lots of penguins at boot, which I believe means lots of cores recognized; I counted 8. I doubt Terra Soft Solutions and the others would want to do a big release for an 8-core system that would only run on one core. There is enough interest from Folding@home etc. to make this really happen.

Nvidia has been decent about releasing Linux drivers. The only thing I see holding them back would be money. They recently released their own render system.

But bringing it back home: check out what Ton has gone and done.

Making Blender or any other application actually use these 7 SPEs requires changes to the code. It won’t happen automatically, since there are a number of hardware constraints that must be dealt with.

Personally I think that, without significant re-engineering and low-level optimization of the render engine specifically for the Cell processor, a typical Intel or AMD chip with 4 cores will easily outperform a Cell processor. You can get great performance out of the Cell if you are willing to spend time optimizing for it, like PS3 game developers do. And given the limited memory and the (current?) lack of hardware graphics acceleration, I’m not sure this is such a great machine for running Blender on.

I am not sure I understand. Please explain: what needs to change?

If Linux is running and making use of the PS3’s cores (which it looks like it is, with 8 penguins showing at boot),

then isn’t it up to the developer of the app (Blender) to make sure it makes use of the multiple cores?

I think the GCC version for the PS3 will compile Blender for that platform, and there is recent news about multi-threaded (8-core) support. So where is the hang-up? Maybe it’s not an optimized build, but it should work, right?

Interesting article: “A Revolution In 3d Rendering And Graphic Applications”
An interesting forum discussion.
Development is happening.

An interesting tidbit about memory:
“But don’t forget that 256mgs of GDDR3 planted directly on the Nvidia RSX chip, that the North Bridge is not a chip, but the RSX and Cell in a direct link! That the “Cell” speed is 3.2 ghz coupled with that 256mgs of XDR memory also clocked at 3.2 ghz through the Flex IO direct to “Cell BDE”! Well that’s just the way the 512 mgs of XDR work on IBM’s and Mercury’s Cell Blade Server! EXACTLY!!! What everyone fails to realizes is that when you have memory running the same speed as the processor, it reduces the size requirements of using slower larger memory. There is no need for the physical buffer above that which can run off the Sata Hard Drive Virtual Memory!”

I think you are right that a quad core will outperform a PS3 right now. In fact, I saw a test, which wasn’t that fair since it didn’t really take advantage of the PS3’s multiple cores, and it made the PS3 perform like a 1.6 GHz G5.
But there is a chance that it might make a nice render cluster if set up right. And hey, if it works, I will do that for $500 instead of $5000 for a top-end quad system.

I am not sure I understand. Please explain: what needs to change?

The other cores are not normal CPU cores. They are specialised vector ones, and if you want to use them you need to change the code and/or compile it with a very good compiler that knows about them (I think IBM is developing such a compiler).

It’s not like running an 8 core computer.

Usually when you run multiple threads, the application will create some threads and the operating system will divide them over the cores that you have. With the Cell processor, the SPEs have a number of limitations that make it impossible to run the same code as on the main processor.
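To make the contrast concrete, here is a minimal sketch (the function names are mine) of the normal SMP model just described: the app simply creates threads and the OS spreads them over whatever cores exist — exactly the model the SPEs do not support, since SPE code must be built and loaded separately:

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical per-slice work: each thread doubles one element. */
static void *worker(void *arg)
{
    int *slice = arg;
    *slice *= 2;
    return NULL;
}

/* One thread per slice; the OS scheduler decides which core runs what.
 * No knowledge of the core count or memory layout is needed -- that is
 * the convenience the Cell's SPEs give up. */
void run_workers(int *slices, int n)
{
    pthread_t tid[8];                 /* up to 8, echoing the 8-penguin boot */
    for (int i = 0; i < n && i < 8; i++)
        pthread_create(&tid[i], NULL, worker, &slices[i]);
    for (int i = 0; i < n && i < 8; i++)
        pthread_join(tid[i], NULL);
}
```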

For example, each SPE only has immediate access to 256 KB (not MB!) of memory, for storing both the code to execute and the data to work with; anything beyond that has to be specifically requested, preferably in advance if you don’t want to suffer too much performance loss. The Blender executable and typical scenes take up much more than that. To make it work, changes need to be made (request memory as needed, split up the code, …), and Blender’s render engine is not a small piece of code.

Some changes (or optimizations) cannot be done by a compiler, but require a human to put in a significant amount of work. So until someone volunteers to do this, these SPEs will go unused.
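A rough, portable sketch of the "request memory in advance" pattern such a rewrite would need. On a real SPE the transfers would be MFC DMA calls (`mfc_get`/`mfc_put` in the Cell SDK) overlapping with compute; plain `memcpy` stands in here just to keep the sketch runnable, and all names are mine:

```c
#include <stddef.h>
#include <string.h>

#define LOCAL_STORE (256 * 1024)                   /* SPE local store: 256 KB */
#define CHUNK (LOCAL_STORE / 2 / sizeof(float))    /* two buffers fit in it   */

/* Stand-ins for async DMA transfers between main memory and local store. */
static void dma_get(float *local, const float *main_mem, size_t n)
{
    memcpy(local, main_mem, n * sizeof(float));
}

static void dma_put(float *main_mem, const float *local, size_t n)
{
    memcpy(main_mem, local, n * sizeof(float));
}

/* Hypothetical per-element work, done entirely out of "local store". */
static void process(float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] *= 2.0f;
}

/* Double-buffered streaming: start fetching chunk k+1, then process chunk k.
 * On real hardware the fetch and the compute would run concurrently. */
void stream_process(float *data, size_t count)
{
    static float local[2][CHUNK];
    size_t done = 0;
    int cur = 0;
    size_t n = count < CHUNK ? count : CHUNK;

    dma_get(local[cur], data, n);                  /* prefetch first chunk */
    while (done < count) {
        size_t next = done + n;
        size_t next_n = 0;
        if (next < count) {
            next_n = count - next < CHUNK ? count - next : CHUNK;
            dma_get(local[cur ^ 1], data + next, next_n);
        }
        process(local[cur], n);
        dma_put(data + done, local[cur], n);       /* write results back */
        done += n;
        n = next_n;
        cur ^= 1;
    }
}
```

This is the kind of restructuring a compiler can’t do for you: the code itself must be split into chunk-sized work units and the data traffic scheduled by hand.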

when you have memory running the same speed as the processor, it reduces the size requirements of using slower larger memory.

That is plain wrong. Size != speed when it comes to memory. Can you store 100 GB of files on an uber-fast 36 GB Raptor drive? No. End of story. 256 MB of RAM is really limiting, and I guess the PS3 will only be good as a render-farm node for high-res, heavily ray-traced scenes launched from the command prompt.

On second thought, considering the price of a Core 2 Duo, the PS3 isn’t all that much better…

Hardware acceleration is not an issue. The Cell can talk to the framebuffer at 4 GB/sec. That’s over 800 fps at 1024p! It should be possible to optimize Mesa for the Cell (using the SPEs), and it would spank anything lower than a GeForce 6.
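The arithmetic behind that figure, assuming "1024p" means a 1280×1024 framebuffer at 32 bpp and reading 4 GB/s as 4 GiB/s (both assumptions mine):

```c
/* Upper bound on frame rate from raw bus bandwidth alone -- no actual
 * rendering work is counted, so real numbers would be lower. */
double framebuffer_fps(double bytes_per_sec, int width, int height, int bytes_per_pixel)
{
    double frame_bytes = (double)width * height * bytes_per_pixel;
    return bytes_per_sec / frame_bytes;
}
```

`framebuffer_fps(4.0 * 1024 * 1024 * 1024, 1280, 1024, 4)` gives about 819, which matches the "over 800 fps" claim.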

The other thing people don’t realize is that each SPE has two dedicated DMAs, which means it can tell the DMA to load the next 128 KB of memory while the SPE is processing another 128 KB.

Look at the Cell as a stream processor; that’s what it really is. Stream the data in, process it, and stream it out. I think we should be able to see some really cool renderers for the PS3. It may mean writing a new one from scratch, but it is possible.

The newly released GeForce 8800 is also a stream processor, with about 100 scalar (not vector) processors running at 1.5 GHz and ~600 MB of GDDR3 RAM. The best part is that Nvidia is releasing a CUDA SDK, which is supposed to be like a C compiler for the GPU…
It would be interesting to see which offers better performance, although the GeForce 8 would probably require a full rewrite of the code for any app to run on it.

Yeah, I hope we can get more performance from Blender with either of these. We will see, I guess. Hey, I checked out your demo reel, Mpan3. Nice! I am starting to work on one too…

I actually have a copy of the CUDA SDK. I can’t talk about it (one of those stupid trade-secret agreements), but I will say that it will rock the HPC world. Writing a renderer to use it would be a cinch. It basically allows you to write C code as if you had 12–16 processors in the system when you only have one CPU plus a GeForce 8800.

Really?! OK, questions:

when will the NDA become void? if ever?
is there a software path that can emulate an 8800, useful if the dev machine doesn’t have one?
how the f**k did you get the CUDA SDK? :confused: :mad: :frowning:
does it depend on an existing API such as DX10 or OpenGL 2?

I know someone who got the CUDA SDK too.

I think everyone can apply for it, and later (I think at the moment it’s still a beta or something like that) it will be a free download without the requirement for an NDA, if I understand this correctly.

CUDA also works in software if you don’t have an 8800; of course, this is slow.

It doesn’t depend on OpenGL or DirectX.

Both points are a relief to hear. /me refreshing every 10 minutes :smiley:

He’s right about CUDA on all points.

Apply here for the SDK: