Hi there. Long time since I’ve been here, nice to have some time to spend in Blender again.
I’m nearing completion of the Raspberry Pi Cluster v2. When it’s all wired up I’ll be using it for rendering; it may not be the most efficient use of the Raspberries, but if they’re not doing anything else they may as well be churning away at Blender files.
I posted a topic on this a long time ago, but I’ve lost track of it, so I thought I’d start this one.
The cluster is 128 Raspberry Pi 3B+ boards, each with 4 cores (512 cores in total). In some rudimentary testing a while back, the output rate was actually pretty good.
Another reason for building this cluster is to venture into MPI coding with Python, which got me thinking about this subject again. With mpi4py, I see no practical reason why the render code couldn’t scatter tiles of a frame out to all available processors and gather them again. I’m making some pretty large assumptions here, and there may be technical reasons why this would be extremely difficult; I really don’t know. But wouldn’t it be kind of fun to have a branch of the render code that could do this?
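To make the scatter/gather idea a bit more concrete, here’s a minimal sketch of the tile bookkeeping side. The tile maths is plain Python; the commented lines show where mpi4py’s `comm.scatter`/`comm.gather` would slot in. Everything here (function names, the `render_tile` placeholder, the frame and tile sizes) is illustrative, not anything from Blender’s actual render code.

```python
def split_into_tiles(width, height, tile_size):
    """Return a list of (x, y, w, h) rectangles covering a width x height frame."""
    tiles = []
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tiles.append((x, y,
                          min(tile_size, width - x),
                          min(tile_size, height - y)))
    return tiles


def deal_to_ranks(tiles, nranks):
    """Round-robin the tile list into one work bucket per MPI rank."""
    return [tiles[r::nranks] for r in range(nranks)]


# Hypothetical mpi4py wiring (run under mpiexec, e.g. one rank per core):
#
#   from mpi4py import MPI
#   comm = MPI.COMM_WORLD
#   if comm.rank == 0:
#       buckets = deal_to_ranks(split_into_tiles(1920, 1080, 64), comm.size)
#   else:
#       buckets = None
#   my_tiles = comm.scatter(buckets, root=0)          # each rank gets its bucket
#   results = [render_tile(t) for t in my_tiles]      # render_tile = whatever does the work
#   finished = comm.gather(results, root=0)           # rank 0 reassembles the frame
```

On 512 cores a 1920×1080 frame at 64-pixel tiles gives roughly one tile per core, so the round-robin deal is close to balanced, though in practice tiles vary wildly in render cost, so a dynamic work queue would likely beat a static scatter.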
Setting Blender running on each node of the cluster to build an animation is not difficult at all, but I wonder whether it’s the most efficient use of the resources. An MPI version of the render routines could also churn away at a single still frame, letting the cluster build a test render without tying up my development/design machine.
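For the per-node animation approach, one common trick is to give node k every Nth frame using Blender’s command-line frame-jump option (`-b` background, `-s`/`-e` start/end frame, `-j` frame jump, `-a` render animation; these must precede `-a`). A sketch that just builds the command line, assuming 128 nodes and a made-up `scene.blend`:

```python
def blender_command(blend_file, node_index, node_count, start=1, end=250):
    """Build the Blender CLI for one node: start at frame start + node_index,
    then jump node_count frames between renders, so the nodes interleave."""
    return ["blender", "-b", blend_file,
            "-s", str(start + node_index),
            "-e", str(end),
            "-j", str(node_count),
            "-a"]
```

Each node would run its command (e.g. via `subprocess.run` or an SSH loop), writing frames to shared storage. This interleaving spreads slow and fast frames across nodes better than giving each node one contiguous chunk.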
I really do think there are legs to this: many, many people are building small clusters out of Pis and other small single-board PCs. There’s a practical use for this, there’s an academic opportunity, and again, let’s consider the fun value!
I would love to hear back from a dev or two, let me know what you think.