Remote Renders... an idea


(acasto) #1

What would people think about this? (There’s no way I could afford it now, but maybe in the future, and with a little clustering technology!)

It would be like a remote render farm, with pricing like a monthly unlimited-access plan or a per-package deal. The system would be set up with remote interfaces, network render daemons, raytracers, shaders, etc… Then you would have the ability to send things to it to be rendered… then it would send them back or drop them in your remote bin for you to pick up. This way you could create things on your system without worrying about wasting system time, and be able to make bigger animations and such. You could do more for less.

It’s an idea I just had, so I have no idea if it’s already out there somewhere, but let me know if it is, and what you think.

later 8)

Adam Casto


(acasto) #2

Would it be worth it, even if it wasn’t a super fast render farm to begin with, just so you could continue to work while something is rendering? I was thinking maybe a collection of 350–450 MHz systems would handle things pretty decently. They are relatively cheap nowadays, and it’s mostly the number of systems that determines the speed: just divide the total number of frames by the number of systems.
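That frame-splitting arithmetic could look something like this (a minimal sketch, assuming every machine renders at about the same speed; the helper name is made up):

```python
# Split a frame range evenly across N render nodes.
def split_frames(start, end, num_nodes):
    """Return a list of (first, last) frame ranges, one per node."""
    total = end - start + 1
    base, extra = divmod(total, num_nodes)
    ranges, frame = [], start
    for i in range(num_nodes):
        count = base + (1 if i < extra else 0)  # spread leftover frames
        ranges.append((frame, frame + count - 1))
        frame += count
    return ranges

# e.g. a 250-frame animation across 5 machines -> 50 frames each
print(split_frames(1, 250, 5))
```

With equal machines, total render time divides roughly by the node count; the slowest chunk sets the finish time.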

A cable or DSL connection would probably suffice, don’t you think? All that would really need to be transferred is the file and textures. There could be a database hooked into it with a pre-made library of shaders and materials; then you could have a shader/material menu from which you could access and call them straight from the server.


(pannomatte) #3

You know, I was wondering the same thing, but with a twist. In theory, and this would be a community project, you could write a program that makes use of all these broadband cable/DSL connections to create a virtual render farm. In other words, active participants would all be running a custom “screen saver” application that would, when the computer was inactive, log itself into a virtual render farm. I mean, how hard could it be? NSPR is open source and cross-platform; it was a wise choice, IMHO, to build the Blender network renderer on. Suppose you had 100 machines/users in the “community” and 30 or 40 were available at any given time. How fast would it be? You could potentially make it really easy to implement by embedding all the relevant IP-tracking info into a custom homepage that each user would use as their homepage. Done with the machine for a while? Hit Home and join the farm… Just musing…
There are no stupid questions, but there are a lot of inquisitive idiots.

regards
Daniel


(haunt_house) #4

Maybe it would be possible for stills. But wouldn’t the traffic slow the whole thing down? If I take a normally complex scene, how many hi-res targas have to get shipped? I still have problems grasping the concept, especially for animation. It would be another thing on an intranet.

:-?

HH


(acasto) #5

That might really work… there are ways to control how much of a resource it would use, so it couldn’t hurt anyone’s system. Kind of like SETI does it; I think this is called distributed computing. You could combine the capabilities of router technologies to calculate and store a table of IP addresses and hop counts, so it could build a virtual network of paths to send frames over. Then, with the network render daemon, all you would need is a network/process control program to interface between the internet and the daemon. But the issue would be how to decide project priority and access, so that not everything gets rendered, just decent-sized projects, to save bandwidth and resources. Maybe a system in which you could email for a project ID to enter into your interface, which would assign resource and process limits and give permission to start the job.
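That project-ID idea could be as simple as a lookup with limits attached (a sketch; the registry, IDs and limit numbers here are all hypothetical):

```python
# Hypothetical project registry: a job is only accepted if its ID was
# issued beforehand and the request stays within the assigned limits.
PROJECTS = {
    "proj-0042": {"max_frames": 500, "max_megabytes": 50},
}

def accept_job(project_id, frames, megabytes):
    """Return True if this render job is allowed to start."""
    limits = PROJECTS.get(project_id)
    if limits is None:
        return False  # no permission was granted for this ID
    return frames <= limits["max_frames"] and megabytes <= limits["max_megabytes"]

print(accept_job("proj-0042", 250, 20))  # within limits
print(accept_job("proj-9999", 10, 1))    # unknown project, refused
```

The point is just that the farm never renders anything it didn’t hand out an ID for, which caps bandwidth and CPU use per project.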

@haunt_house… that would indeed be an advantage of having a single render farm that you interface with from your system. That way you would have a personal space in the account; you upload your textures and images once, and the server would call them as needed. It would be an intranet with an internet link.


(haunt_house) #6

ouch

I start getting scared :frowning:

any millionaires here?

this could really work.

But until then, I’ll save money for an AMD 1.8 and a crossover cable.

HH


(pannomatte) #7

Well, you could do two things to keep it democratic: do what the fine folks at elysiun.com do and take PayPal donations, and/or build up credits for the amount of time you have made your resource available to the project. (For those of us who are credit hazards as opposed to credit risks.) Actually, you could probably make it an income generator for a worthwhile project such as the aforementioned elysiun.com. Make it work with Blender, then expand the project to include BMRT and other programs, Maya, etc. Am I wrong, or can you use a simple packed file for network rendering, or do you need a separate locale for textures etc.? I can’t remember.
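The credit idea could be as simple as a ledger: earn credits for CPU time you donate, spend them when you submit a render. A sketch, with made-up exchange rates (1 credit per donated CPU-minute, 1 credit per frame rendered for you):

```python
# Hypothetical credit ledger for a volunteer render farm.
class CreditLedger:
    def __init__(self):
        self.balances = {}

    def donate(self, user, cpu_minutes):
        """Credit a user for CPU time made available to the farm."""
        self.balances[user] = self.balances.get(user, 0) + cpu_minutes

    def submit(self, user, frames):
        """Deduct credits for a render job; refuse if the balance is too low."""
        if self.balances.get(user, 0) < frames:
            return False
        self.balances[user] -= frames
        return True

ledger = CreditLedger()
ledger.donate("adam", 120)         # two hours of idle CPU donated
print(ledger.submit("adam", 100))  # 100-frame job: allowed
print(ledger.submit("adam", 100))  # only 20 credits left: refused
```

The rates would obviously need tuning so that donating roughly pays for what a typical user renders.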
Muse on…


(acasto) #8

I’m not sure about packed files for rendering, but I had a couple of ideas on how to take care of files and textures. You could have a private space on the main (or a linked) server that you would upload your necessary files to, and/or set up a texture database that users could add to. Then you just use the global server links from your root file.


(pannomatte) #9

To test the theory, maybe we could round up some volunteers, have them install the renderer and NSPR on their machines, get someone with a T1 line to set up renderd and NSPR, and do a test file? Each participant would enter their dynamic (cable/DSL) IP address into NSPR, and we’d set it up for a specific time and date. We’d probably have to generate a quickie homepage for the project with detailed instructions on how to participate in the test. I’d be happy to host it at:
http://www.aaaskywatcher.com/BlendArchive


(acasto) #10

If you could write a little structured post with some explanation and links to where to get some of this stuff, I’m sure we can get something going.


(pannomatte) #11

I need to update my site anyway, and I think I’ll do that tonight or tomorrow. We need a catchy project title. How about
RENDERAMA
suggestions? :smiley:


(acasto) #12

or RENDERNET… Or, I just checked, and if anybody ever wanted to get a domain name, “myrenderfarm.com” (or .org or .net) is available. Or “renderme.org” sounds good…


(pannomatte) #13

There’s already a
http://www.rendernet.com
I like Renderme.org


(rivenwanderer) #14

To me this sounds like it would work best on scenes with long render times but relatively small output files (<5 MB), but I admit I’m biased, being a 33k dialup user :slight_smile: Wouldn’t the “pack data” thing take care of everything in terms of shipping the .blend off to someone else to render?


(Timothy) #15

The only way I can see this working is if you code a render daemon which connects to a central server.

Then, to render something, you connect to the central server and upload your .blend to be rendered. The rendering process would then be shared by everyone connected to the server.

If, however, you have, say, one central render farm, and a community of 600 or more people uses it, it becomes just as fast as your own computer if, say, 5 people are using it at the same time…
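That dilution point in numbers (a back-of-the-envelope sketch; the node counts are made up):

```python
# If a farm of N equal nodes is shared fairly by J simultaneous jobs,
# each job effectively gets N / J nodes' worth of speed.
def effective_speedup(nodes, concurrent_jobs):
    return nodes / concurrent_jobs

# 5 nodes shared by 5 people rendering at once: no better than one machine.
print(effective_speedup(5, 5))    # 1.0
# 100 volunteer nodes shared by 5 jobs: each job still sees a 20x farm.
print(effective_speedup(100, 5))  # 20.0
```

So the farm only wins if the node count grows faster than the number of people rendering at the same moment.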


(acasto) #16

Good idea about the central server, Kib. That way there would be one point to organize everything. With a central server farm, it could get bogged down without a good process organizer. A Scyld cluster may be able to do it though; the BProc process access system is very efficient at handing out jobs. But I’ve never tried it, so I have no idea.
:wink:


(LethalSideP) #17

This is quite a coincidence. About a week or so ago I read an article in Scientific American about distributed processing, and my mind wandered straight to rendering :slight_smile: Did any of you catch that article?

It was really interesting, actually. It raised a number of very interesting pointers about where the internet could be headed in a few years’ time. It all started a few years back when SETI released that screensaver which scans through radio data, analyses it for alien signals, and sends the results back to SETI. Soon others started catching on - there’s one for finding a cure for anthrax, one for finding the next prime number, one for fighting cancer - they’re catching on. What Kib said is right though - you do need some sort of central server coordinating the whole thing, and this is the one part of distributed programming which is really holding things back at the moment. The fact is that there’s nothing available at the moment which can handle inputs from clients left, right and centre. Just think about the logistics of it - you have one job to do, and two connections to spread it across. Fine. But one of those PCs is a crappy 386 (well, you get the idea) sitting on an ADSL line, while the other is a Pentium 4… except the Pentium 4 is only connected via 33k. Which do you use? Halfway through the job, the 33k user decides his phone bill is going through the roof, and disconnects. Ah. There goes your job. So now what? Do you send the job to the other PC? Do you wait for another PC to connect, and send it the job? Or do you hope that the 33k user will eventually reconnect and give you the results of that job you sent so long ago? You see what a nightmare this becomes with tens, hundreds, nay thousands of CPUs…
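The usual way around the vanishing-node problem is to lease work with a timeout: if a node doesn’t return its frame in time, the frame simply goes back in the queue for the next node that asks. A minimal sketch (nothing Blender-specific; the class and timeout are made up):

```python
import time

# Hypothetical work queue with leases: a frame handed to a node must be
# returned within `timeout` seconds or it is reassigned to someone else.
class FrameQueue:
    def __init__(self, frames, timeout=300):
        self.pending = list(frames)
        self.leased = {}          # frame -> lease deadline (epoch seconds)
        self.timeout = timeout

    def checkout(self, now=None):
        """Hand out the next frame, reclaiming any expired leases first."""
        now = time.time() if now is None else now
        expired = [f for f, deadline in self.leased.items() if deadline < now]
        for f in expired:         # node vanished mid-render: recycle its frame
            del self.leased[f]
            self.pending.append(f)
        if not self.pending:
            return None
        frame = self.pending.pop(0)
        self.leased[frame] = now + self.timeout
        return frame

    def complete(self, frame):
        """A node returned its result; release the lease."""
        self.leased.pop(frame, None)
```

A 33k user who disconnects simply never calls `complete()`; a few minutes later the frame is handed to the next node that asks, so no single flaky connection can sink the job.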

You also have to realise that not everyone will want to give their PC up to something like this. The fact of it is (I’m sorry!) that humans can be very self-centred, and the ‘what’s in it for me’ approach kicks in (you’ve got to admit, it’s true!). Some people have suggested that a way around this is to pay people for their CPU time. But then you get into the even trickier area of ‘how much is a computer worth?’. Going back to my example above, does the slower CPU on the faster connection or the faster CPU on the slower connection get paid more? Is a faster processor more important than the connection, or vice versa?

Programming these sorts of things is also a nightmare, unfortunately. Those logistical problems I mentioned further up are difficult enough to think up solutions to - imagine trying to program them!!

Provided people do eventually code their way around these things, we can expect some very powerful things to emerge from the dust cloud. Suppose Pixar get a few days in between projects with that large render farm of theirs doing nothing - the economical thing to do is to put those resources up on the net!! (I know - wouldn’t that be great :wink: )

Eventually, selling your CPU time over the net could become a viable possibility. People may even start setting up renderfarms in places like Alaska, where cooling isn’t a problem - they just open the doors to the place! Less electricity, more economical…

So where does Blender fit into all this then? Well, I’ve been thinking long and hard about that one, and it’s one of the reasons why I think Blender becoming open source would be such a good idea. If Blender did become open source, with the size of the Blender community we probably could eventually put together a ‘stand-alone renderer’ screensaver type of thing. People download it, it renders what it’s sent, and it sends the result back to some central server which coordinates all these pieces and puts them back together. This would put Blender head and shoulders above the likes of Maya, Softimage and 3DSMax, because all of a sudden you’d have a FREE modelling, animation and rendering package that could render across as many CPUs as you like, regardless of cost. With packages like Maya you pay for a per-CPU licence, but with Blender you could have as many as you like. Your only limit is hardware, not software, all of a sudden.

It’s not an easy problem to program a solution to, but the rewards would be out of this world… Anyway, just me adding my little bit.

LethalSideParting


(pannomatte) #18

One thing that came to mind, and could be a possible problem, is that broadband connections generally have a much slower uplink rate than downlink rate. What do you think the minimum uplink would have to be to get satisfactory speed?
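A rough way to answer that: the uplink only has to keep up with how fast finished frames come back, i.e. sending a frame should take less time than rendering the next one. Back-of-the-envelope (all figures here are assumptions for illustration):

```python
# Minimum uplink so a finished frame ships while the next one renders.
def min_uplink_kbps(frame_size_kb, render_seconds):
    """Kilobits/sec needed to overlap one frame's upload with one render."""
    return frame_size_kb * 8 / render_seconds

# A 500 KB targa that takes 2 minutes to render:
print(min_uplink_kbps(500, 120))  # ~33 kbps -- even dial-up keeps up
# A 5 MB frame rendered in only 30 seconds needs far more:
print(min_uplink_kbps(5000, 30))  # ~1333 kbps
```

So for long renders of modest frames, even a thin uplink is fine; it’s short renders of big frames that choke.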
Daniel


(Zsolt) #19

Kib_Tph is right, think about it:
If you have many people, you have many available machines too, but since we’re talking about long renders, there will be several renders taking place at any one time. Based on, let’s say, an average PC speed of 800 MHz (I’m just guessing), that will be slower than what you could do if you just saved some money and bought a faster PC, or several older ones.

Around here a 500 MHz system costs around $200 or less without a monitor; you could get five of them for less than $1000, which is a decent render farm, considering only you’ll be using it.
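Zsolt’s point in rough numbers (a sketch; frame counts, render times and sharing factors are all made up):

```python
# Wall-clock time for one job, assuming even frame splitting and that
# a shared farm divides its nodes among the jobs running at once.
def render_time(frames, secs_per_frame, nodes, jobs_sharing=1):
    return frames * secs_per_frame * jobs_sharing / nodes

# 250 frames at 60 s each on 5 personal boxes, used only by you:
print(render_time(250, 60, nodes=5))                     # 3000 s (~50 min)
# The same job on a 40-node community farm shared by 10 jobs:
print(render_time(250, 60, nodes=40, jobs_sharing=10))   # 3750 s -- slower!
```

In other words, a small farm you own outright can beat a bigger shared one whenever the sharing factor outgrows the node advantage.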

Zsolt


(Pablosbrain) #20

How about BlenderFarm for the name…? :smiley: