Hello, world!
Not been on here in a long time. I really have no excuse for that… At any rate, I wanted to post this somewhere I knew people could point at it, give feedback, laugh, etc., and where there's a good community full of computer geeks…
Sounds like Blenderartists forums (though I still want to say Elysiun sometimes).
Anyway… I’ve made a rough outline of what I want. The idea is that a bunch of people connect remotely (over a high-speed LAN) from dumb terminals to a server. But the speed and resources of the server should feel as if each user were on a local computer, even while 300 other people are reading/writing data, running simulations, plotting the end of the world, and downloading the Internet… all at the same time.
The server would act as if it were just one computer: a single filesystem with /home/ directories for all the users on the dumb terminals, a single operating system, and so forth. However, I want it to actually be a cluster of computers.
To make this as fast as possible, I figured maybe each task gets its own cluster: one dedicated to storage (a RAID 5 array, perhaps), one dedicated to CPU-intensive tasks (a diskless cluster with just a ton of CPUs), one for GPU-intensive tasks, and so on. The dumb terminals could all have local graphics cards and connect to the server via remote X11… So how about one more cluster dedicated to splitting up all the data for the other clusters?
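To make the "splitting up" idea concrete, here's a toy sketch of that front-end dispatcher layer: jobs come in tagged by workload type, and the dispatcher routes each one to the queue for the cluster built for it. The cluster names and job fields here are made up just to illustrate the routing, not any real system.

```python
# Toy sketch of the "dispatcher cluster" idea: a front-end node
# classifies each incoming job and routes it to the specialized
# cluster (storage, CPU, GPU) suited to its workload.
# All names ("disk", "cpu", "gpu", job fields) are hypothetical.
from collections import deque

# One job queue per specialized cluster.
clusters = {
    "disk": deque(),
    "cpu": deque(),
    "gpu": deque(),
}

def dispatch(job):
    """Route a job to the queue of the cluster that handles its kind."""
    kind = job["kind"]
    if kind not in clusters:
        raise ValueError(f"no cluster handles {kind!r} jobs")
    clusters[kind].append(job)

# A few example jobs, tagged by workload type.
for job in [
    {"kind": "disk", "task": "write /home/alice/scene.blend"},
    {"kind": "cpu", "task": "physics sim, frame 42"},
    {"kind": "gpu", "task": "render frame 42"},
]:
    dispatch(job)

print({name: len(q) for name, q in clusters.items()})
# → {'disk': 1, 'cpu': 1, 'gpu': 1}
```

In a real setup the queues would live on separate head nodes and the dispatcher would talk to them over the network, but the routing decision itself stays this simple: classify the job, hand it to the right cluster.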
Is there something like this already in existence? I sorta want a hybrid between a load-balanced cluster and a Beowulf cluster… Or a Beowulf cluster of Beowulf clusters, of possibly more Beowulf clusters.
Here’s a rough diagram of what I want. The different colors/line styles represent different tasks (GPU, CPU, Disk, or whatever):