Cluster of Clusters of Clusters!

Hello, world!

Not been on here in a long time. I really have no excuse for that… At any rate, I wanted to post this somewhere with a good community full of computer geeks, where I knew people could point at it, give feedback, laugh, etc…

Sounds like Blenderartists forums (though I still want to say Elysiun sometimes).

Anyway… I’ve made a sort of outline for what I want. The idea is that a bunch of people connect remotely (over a high-speed LAN) from dumb terminals to a server, but the speed and resources feel as if each of them is using a local computer, even though 300 other people are reading/writing data, running simulations, plotting the end of the world, and downloading the Internet… all at the same time.

The server would act as if it were just one computer: a single filesystem with /home/ directories for all the users on the dumb terminals, a single operating system, and so forth. However, I want it to actually be a cluster of computers.

In order for this to be as fast as possible, I figured… maybe have a separate cluster of computers for each task: a cluster dedicated to storage (a RAID 5 array, perhaps), a cluster dedicated to CPU-intensive tasks (a diskless cluster with just a ton of CPUs), one for GPU-intensive tasks, and so on. The dumb terminals could all have local graphics cards and connect via remote X11 to the server… So how about one cluster that is dedicated to splitting up all the data for the other clusters? Something like the toy sketch below.
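Just to make that “splitter” cluster idea concrete, here’s a minimal Python sketch of what I mean. Everything in it (the TaskKind names, the node hostnames, the dispatch function) is made up for illustration; a real setup would use an actual job scheduler rather than a toy round-robin:

```python
from enum import Enum, auto

class TaskKind(Enum):
    DISK = auto()  # reads/writes handled by the storage cluster
    CPU = auto()   # number crunching on the diskless CPU cluster
    GPU = auto()   # rendering/viewport work on the GPU cluster

# Hypothetical map from task kind to the nodes of each sub-cluster.
SUBCLUSTERS = {
    TaskKind.DISK: ["storage-01", "storage-02", "storage-03"],
    TaskKind.CPU:  ["cpu-01", "cpu-02", "cpu-03", "cpu-04"],
    TaskKind.GPU:  ["gpu-01", "gpu-02"],
}

# Round-robin counters so requests spread evenly within a sub-cluster.
_next = {kind: 0 for kind in SUBCLUSTERS}

def dispatch(kind: TaskKind, payload: str) -> str:
    """Pick the next node in the matching sub-cluster for this task."""
    nodes = SUBCLUSTERS[kind]
    node = nodes[_next[kind] % len(nodes)]
    _next[kind] += 1
    return f"{payload!r} -> {node}"

# A user saving a .blend file hits the storage cluster, a physics bake
# hits the CPU cluster, and the viewport goes to the GPU cluster.
print(dispatch(TaskKind.DISK, "save scene.blend"))
print(dispatch(TaskKind.CPU, "bake physics sim"))
print(dispatch(TaskKind.GPU, "draw viewport frame"))
```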

Is there something like this already in existence? I sorta want a hybrid between a load-balanced cluster and a Beowulf cluster… or a Beowulf cluster of Beowulf clusters, made of possibly more Beowulf clusters.

Here’s a rough diagram of what I want. The different colors/line styles represent different tasks (GPU, CPU, Disk, or whatever):


Welcome back!

Hey Tynach: Haven’t seen you in a while, welcome back.

I believe a variant of your proposal is already well developed, to an extent.

This is already being used by a company called OnLive to let you play current-gen PC games at maximum settings over the internet without needing the latest hardware; all you need is a small set-top box:
http://www.onlive.com/#1

Thanks kbot and Ace Dragon :slight_smile:

Cloud computing isn’t quite what I want. I got a taste of this idea when I first tried running a program over SSH X forwarding from my desktop to my laptop… OnLive is sorta kinda what I want, but not quite. It also needs to be all local-ish, in one building. I do NOT want my resources/data sitting at a completely remote location. Too much network lag. Some rough numbers on that below.
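To put ballpark numbers on the lag concern (these figures are assumptions for illustration, not measurements): a same-building gigabit LAN round trip is typically well under a millisecond, while a cross-country internet round trip is often 50 ms or more, which blows the whole frame budget at 60 fps:

```python
# Ballpark round-trip times (assumed, not measured).
LAN_RTT_MS = 0.5        # same-building gigabit LAN
INTERNET_RTT_MS = 50.0  # typical cross-country internet round trip

FRAME_BUDGET_MS = 1000 / 60  # ~16.7 ms per frame at 60 fps

for name, rtt in [("LAN", LAN_RTT_MS), ("Internet", INTERNET_RTT_MS)]:
    share = rtt / FRAME_BUDGET_MS * 100
    print(f"{name}: {rtt} ms RTT = {share:.0f}% of a 60 fps frame budget")
```

With those assumed numbers, the LAN round trip eats about 3% of a frame, while the internet round trip costs roughly three whole frames. That’s why it has to stay in one building.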

You mean like a localized version of Grid computing?

Just change it to having a bunch of computers in the same building (or room even) working in concert rather than a bunch of computers scattered across the country.

Hmm, not really.

The idea is that there is one operating system for this group of computers, so it acts as one single computer. Say everyone in the building wants to run Blender. There will be one copy of Blender stored on the cluster of computers, and a ton of disks, CPUs, GPUs, etc. that the different sub-clusters use to actually run Blender for 300 users. The 300 users see it appear on their desktops, and they simply model/texture/light/whatever using the same copy of Blender, but with read/write speeds, CPU resources, RAM, graphics resources, etc. all seemingly dedicated to THEIR workstation, because there are so many computers in the cluster working on it. The back-of-the-envelope math below shows why I think that could work.
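As a back-of-the-envelope check on whether pooled hardware could feel “dedicated” (every number here is a made-up assumption, just to show the arithmetic):

```python
# Made-up cluster inventory: what the pooled hardware adds up to.
USERS = 300
TOTAL_CPU_CORES = 2400  # e.g. 150 nodes x 16 cores (assumed)
TOTAL_RAM_GB = 9600     # e.g. 150 nodes x 64 GB (assumed)
TOTAL_GPUS = 600        # assumed

# If demand were perfectly even, each user's slice of the pool:
print(f"CPU cores per user: {TOTAL_CPU_CORES / USERS:.1f}")
print(f"RAM per user:       {TOTAL_RAM_GB / USERS:.0f} GB")
print(f"GPUs per user:      {TOTAL_GPUS / USERS:.1f}")

# In practice not everyone peaks at once, so a scheduler could promise
# each user a baseline slice and let bursts borrow idle capacity.
```

With those assumed totals, each user’s even share is 8 cores, 32 GB of RAM, and 2 GPUs, which is already a decent workstation, and since people rarely all peak at the same moment, bursts could borrow whatever is idle.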

I sorta had this as a thought exercise, and wondered if anything like this already existed.