Smart Phone / Android Render Farm?

So, today, as I realized I was coming to possess a surplus of unused (but not broken or completely obsolete) smartphones…I was trying to think of some uses, and the idea of a smartphone-based render farm (no display, all command-line) seems like it might be plausible with the speed of today's and tomorrow's smartphones…

I did some Google searches and came up with basically nothing. I was wondering if any smartphone/Android enthusiasts might be willing to play devil's advocate for me and explain why that is the case?

My first port of call in this thought exercise is the fact that Blender HAS been ported to an Android tablet. But those projects haven't really gone anywhere, and I think that is because the “creative” portion of CGI is not well suited to the tablet/smartphone arena. In addition, it seems like a waste of a tablet/smartphone's processor to put all the energy into drawing 3D objects in realtime.

If we recall, Blender is also a command-line application. What is stopping someone from porting the Blender command line, or even JUST the renderer and its necessary file-storage structures, to a smartphone? And then using WiFi and a laptop/desktop/server as the “brain” to divide up and send out tasks? It's not inconceivable to procure 50 smartphones rather cheaply (especially ones with bad ESNs that cannot easily be activated on any network…we only need WiFi)…what if each were to render one frame?
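The “brain” here needn't be complicated: at its core it is just a work queue that hands a frame number to whichever phone asks next, collects results, and re-queues frames from phones that vanish. A minimal sketch in Python (the class and method names are hypothetical, not any existing farm tool; real use would wrap this in a tiny HTTP server the phones poll over WiFi):

```python
from collections import deque

class FrameFarm:
    """Minimal render-farm coordinator: hands out frame numbers to
    workers and re-queues frames whose worker never reports back."""

    def __init__(self, first_frame, last_frame):
        self.pending = deque(range(first_frame, last_frame + 1))
        self.in_progress = {}   # frame number -> worker id
        self.done = {}          # frame number -> result (e.g. path to PNG)

    def request_frame(self, worker_id):
        """A phone asks for work; returns a frame number, or None when done."""
        if not self.pending:
            return None
        frame = self.pending.popleft()
        self.in_progress[frame] = worker_id
        return frame

    def submit_result(self, frame, result):
        """A phone reports a finished frame."""
        self.in_progress.pop(frame, None)
        self.done[frame] = result

    def requeue_lost(self, worker_id):
        """If a phone dies mid-render, put its frames back at the front."""
        for frame, wid in list(self.in_progress.items()):
            if wid == worker_id:
                del self.in_progress[frame]
                self.pending.appendleft(frame)
```

The re-queue step matters more here than in a normal farm: cheap second-hand phones will drop off WiFi, run out of battery, or overheat, so the coordinator has to assume any frame handed out may never come back.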

If that worked…perhaps there is even the possibility of a dedicated Android ROM that strips everything unnecessary from the phone and lets it devote maximum resources to its function as a mini-renderer? I'm sure there ARE certain processing and memory limits, given that phones are lower powered than desktop machines. But I'll tell you, my phone is generally faster than any computer I owned prior to 2009 or so…and I rendered MANY complicated scenes on those lower-powered machines.

If I had the chops…I would try to put my money where my mouth is…but I don't, really. I would love to facilitate it happening, though. I would imagine that for an initial proof of concept, neither Blender nor Cycles is the first place to start (although, again, we DO know it is possible to port them)…

But perhaps this could be a place to begin:

MiniLight is a minimalist proof-of-concept global illumination renderer that accepts specially formatted text files as scenes. Its author encourages it to be rewritten in as many different languages as possible (about six of which are on that site). Interestingly, as far as Android goes, it has also already been ported to Java:

Perhaps a proof of concept could start with this minimalist renderer: simply use a desktop server to farm out several different scenes to several different phones running a port of it. The farmed scene could then be raised in geometric complexity to verify that there isn't an exceedingly low complexity ceiling. I am not sure about MiniLight's capability for texturing, but for a simple proof of concept, the fact that it can output a REALLY GOOD physically lit Cornell box is as good a starting point as any?

From there, if there is success…perhaps we could then look to Blender's GUI-deprived command line, and perhaps the production of an open-source, freely distributed ROM and application devoted to Blender network rendering across smartphones…

In my own time I am certainly trying to learn the basics of writing some simple applications for Android…but if we have any expert Java/Android developers here…would anyone be willing to make a MiniLight proof of concept? I would be floored by a 5-frame, 5-smartphone-rendered Cornell box animation. And from there…

EDIT: It turns out there's even an exporter script from Blender to the MiniLight renderer on the page linked above. Seems like the proof of concept could even start with Blender scenes, just in a simplified renderer! And then move on to Cycles for the coup de grâce…

TL;DR: Will someone please build an Android smartphone render farm, either with the MiniLight minimalist renderer or, ultimately, with Blender's Cycles? If this is NOT possible or feasible…why not? Detailed explanations, please. With a detailed explanation, even if I do not know enough to understand the reply, it gives me the opportunity to learn and COME to understand it.

Thank you

Are you aware of the typical amount of RAM in a cellphone? Everything you ask for is a pipe dream currently, and the outcome would not even be faster than a single modern PC.

Yeah, after you’d gone through all the trouble of setting that up and buying a bunch of high-end phones, you would just have been better off buying yourself a nice GPU, which would be way faster and have more memory.

I am aware of the amount of RAM in a cellphone, how much faster a “modern single PC” is, and that a nice GPU is fast. That's missing the point entirely.

The point is that upgrade cycles on smartphones are absurdly short; phones that are barely a year old can be had for less than 20 bucks in many cases (free if you ask your family), they take up barely any space, and they WON'T interfere with your ability to work on your primary computer during rendering.

I’d envision it more as a “set-it-and-forget-it” kind of contraption, where speed is NOT the goal, but rather a system of distribution that is non-intrusive. It is about harnessing otherwise unused processing power.

Sure, one computer is faster than one phone, and one good GPU is faster than one computer. But I have ended up with ~20 smartphones, all less than 3 years old (most less than 2), in a very short period of time, and for virtually no cost. I know each phone isn't going to come anywhere near a computer with a nice GPU…

But say I get 20-30 running, each working on just one frame before getting farmed another, and I'm not worried about getting “this next scene” done ultra-fast so that I can regain use of my super awesome GPU for more modeling or games or whatever you want to do. I could let 30 phones run 24 hours a day for a week, not think twice about the rendering during that week, just be content that it is happening, and go about my day, week, month as normal. Then I come back to what could be a rather substantial set of frames for an animation that, had I rendered it on my primary or even secondary workstation, would have been Hell to manage around render time and work time.

Consider this:

I understand that the point of that article is that the guy put a graphics card in his old computer and ran the Cycles car benchmark in 53 seconds. That's what I'm pointing out, though. He ran the same benchmark with just the CPU in just over 7 minutes. Sure, that's annoying if it's your primary workstation and you are waiting for it to be freed up. BUT. The specs on the computer he talks about are, on paper, IDENTICAL to the cell phone I have in my hand.

So, let's consider. I'll even let the cellphone lag by 3 minutes, so that it's 10 minutes on an identical frame.

His (better) computer rendered the benchmark in 44 seconds (yes, this was two years ago, but that benchmark and Cycles haven't changed that much). So, if he uses his primary computer 24 hours a day for one week to render (10,080 minutes), and each frame renders in 44 seconds (.733 minutes), then he would render out roughly 13,752 frames that week.

If each phone has the same specs as his old computer, and we assume they render slower (10 minutes per frame instead of 7), that means that 1 phone would render out 1,008 frames in the same time period. That's a whole lot less.

But I have 20 phones sitting here. That's 20,160 frames that week. That's a whole lot more. Plus, I can keep adding to it every time some friend who isn't tech savvy tosses off a phone for cheap, or family gives one away for free, or anyone who thinks a bad-ESN phone is shot, or people who would otherwise sell theirs to those trade-in machines at big box stores for less than 20 bucks (often free, just to be recycled).
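Those weekly numbers are easy to sanity-check with a few lines of arithmetic (the post rounds 44 s to .733 min, which is where 13,752 comes from; exact division gives about 13,745, the same ballpark):

```python
# Back-of-the-envelope throughput for one week of 24/7 rendering,
# using the figures quoted above (all are the post's assumptions).
MINUTES_PER_WEEK = 7 * 24 * 60          # 10,080 minutes

desktop_frame_min = 44 / 60             # his better desktop: 44 s per frame
phone_frame_min = 10                    # one phone, padded to 10 min per frame

desktop_frames = MINUTES_PER_WEEK / desktop_frame_min   # ~13,745
phone_frames = MINUTES_PER_WEEK // phone_frame_min      # 1,008
farm_frames = 20 * phone_frames                         # 20,160

print(round(desktop_frames), phone_frames, farm_frames)
```

So under these (generous to the desktop, pessimistic to the phones) assumptions, 20 phones do out-produce the single fast desktop over a week, which is the whole argument.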

And that is all SPARE processing power. Not primary. For the same price as a decent GPU ($250?), if I played my cards right (and yeah, it takes some enterprising to do), I could get 8-12 phones. Which, if you do the math according to my numbers above, gives you performance from about 2/3 that of the GPU to roughly equal. But that's just the cost of the GPU, not the extra box and processor, etc. Plus, with the phones, you get to add processing power an extra $20 at a time, since the opportunity to snag a nice phone that cheap isn't constant (but it IS frequent enough).
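Spelling out that cost comparison with the same assumptions (the article's 53-second GPU benchmark time, 10 minutes per phone frame, and roughly $20-$30 per used phone so $250 buys 8-12 of them):

```python
# How a $250 pile of phones compares to a $250 GPU over one week,
# using the rough figures quoted in this thread.
MINUTES_PER_WEEK = 7 * 24 * 60                 # 10,080 minutes

gpu_frames = MINUTES_PER_WEEK / (53 / 60)      # GPU at 53 s/frame: ~11,411
phone_frames = MINUTES_PER_WEEK // 10          # one phone: 1,008 frames

for n in (8, 12):
    ratio = n * phone_frames / gpu_frames
    print(n, "phones =", round(ratio, 2), "x the GPU's weekly output")
```

Run the numbers and 8 phones land at about 0.71x the GPU and 12 phones at about 1.06x, so “2/3 up to roughly equal” holds, with the 12-phone case actually edging slightly past it.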

So, is this for everybody? Absolutely not. But it seems like it’s a great idea for some people.

If it’s not POSSIBLE, then it’s not possible.

But both of the above replies basically just said “it's stupid” without looking at the numbers, the reasons, and the benefits.

“It’s stupid” is a very different answer than “It’s not possible.”

Especially if the performance of a phone and a computer with matching specs are within 30% of each other.

Currently, Blender wouldn’t be able to do this because it doesn’t have support for mobile devices.

Second, even after doing that, it would have to be limited to Android, because the GPL license isn't compatible with the iOS or Windows mobile app stores.

I was using the Blender Player build to render with Blender Internal on my Nexus…but a single core of my FX-8350 was faster than all 4 cores of the Tegra 3. It was a very simple shadeless scene, and the Tegra 3 took over 1 minute to render where my desktop didn't take 2 seconds with 1 core.

The Android limitation mentioned above is fine. Most people I know work strictly with Android.

LordOdin, that was really helpful in understanding the feasibility. Thank you.

Keep in mind that a single CPU core of a smartphone is nowhere near as fast as a desktop CPU core. Just look at desktop processors: even at the same clock speed, a single core of an i7 is a lot faster than a single core of an i3. Desktop CPUs have a completely different architecture and set of functions. The number of cores doesn't help so much either; there are octa-core phones out there slower than a quad-core or even a good dual-core. Rendering is also a very CPU-intensive task, and while desktops are well ventilated, a smartphone has only a passive cooling solution, so the system throttles down the speed to not harm the CPU. You would also have to think about a cooling solution.
Then the RAM frequency is way slower, there would be a lot of lag between the systems, and the CPUs would also have to handle the communication with each other, making the process even slower. It IS possible, but thinking about all these factors, I suspect it would take a WHOLE day to render a single frame. I would not do it; of course it would be a COOL experiment :slight_smile: but not productive, and time-consuming. I would like to see it though :slight_smile:

Some phones can run a GNU/Linux distro (such as Arch); in that case Blender works out of the box (at least for rendering).
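In that setup the worker side really is just invoking Blender headless once per frame. Blender's actual command-line flags for this are `-b` (background, no UI), `-o` (output path pattern), and `-f` (render one frame); a small helper to build that command might look like the sketch below (the function name and file paths are made up for illustration, but the flags are Blender's real CLI):

```python
def frame_render_cmd(blend_file, frame, output_dir="//frames/"):
    """Build the headless Blender command line for a single frame.

    -b : run Blender in background mode (no UI)
    -o : output path pattern; '####' becomes the zero-padded frame number
    -f : render exactly this frame
    Blender parses its arguments in order, so -o must precede -f.
    """
    return [
        "blender",
        "-b", blend_file,
        "-o", output_dir + "frame_####",
        "-f", str(frame),
    ]
```

A worker script on the phone would then loop: ask the coordinator for a frame number, run this command with `subprocess.run`, and upload the resulting image back over WiFi.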

The GPL is only an issue with the iOS and Windows stores (maybe that means you need to do some trick - I never tried it & don't own a mobile phone, but installing your own software onto your own device should be possible - if you're motivated).

Still don't think this is really an especially ‘useful’ project, but AFAICS there are no real road-blocks if you're prepared to put in the time to set the system up (meaning - no programming/porting required).