Internet distributed 3D rendering with Blender

Hi there all!

One of my first posts here on Elysiun, so better make it spectacular - or at least lengthy :smiley:

Yesterday, after going through the usual reading of cool 3D art posts in this forum, I got an idea. I was wondering why there weren't more cool animations - and obviously it is because it takes far too long to render a decent high-res animation when the rendertime/frame is >10mins.
I was also wondering whether there was some way to make use of all those wasted CPU cycles out there on the internet to do 3D rendering.

A couple of hours later I had a basic client and server working, and today I rendered the first frame using them. I use BOINC (a framework for distributed computation, made by the developers of SETI@home) and YafRay for the rendering.

Well. After having seen that it is actually possible to do distributed rendering that way (although very simple at the moment) I decided to post here to see if anyone else would be interested in this. (Yes, you are reading the post right now :wink: )

The idea (I’m not even close to that yet) is that a lot of people sign up on this “project” and download some client-software that is capable of rendering. By rendering frames for animations you earn points.

If you would like to use the distributed network to render one of your own animations, you can go to the website and submit the animation for rendering. It is completely free to do so (but only non-commercial stuff may be rendered).
If the distributed network is not already busy rendering you can use it without having any points at all. If another render is in progress you need to have at least XXXX points to start yours (XXXX determined by the size of your animation).
That way you will even be able to get something rendered as a newcomer if you are lucky. People who have rendered a lot of frames for others will get higher priority when they submit something themselves.

As I mentioned above no commercial animations are allowed. Also the finished animation should be uploaded to the website so that all contributors can see what they helped make possible. Copyrights will of course still remain the animation-maker’s.

If 100 people sign up - each of them having on average 1.5 computers with an average of 1.5 GHz that are idle 50% of the time - you get the equivalent of 112.5 GHz of rendering power (not taking network transfer, dual CPUs, hyperthreading etc. into account, as that would make it difficult to calculate). Since the application only uses idle time, the 100 people won't even notice a difference on their systems. Whenever they need the CPU power to do something else (gaming etc.) the application simply stops using as much of the CPU.

Usually computers are inactive more than 80% of the time that they are turned on - but I wrote 50% so that I don't promise too much :wink:
Just want to mention that SETI@home and other projects have not just hundreds of participants but millions! (now try to redo my calculation with those numbers instead :smiley: )
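The back-of-the-envelope estimate above can be checked with a tiny sketch; the participant count, machines per person, clock speed and idle fraction are all the assumed numbers from the post, not measurements:

```python
# Rough capacity estimate for the distributed render pool.
# All inputs are assumptions from the post, not measurements.
def pool_capacity_ghz(people, machines_per_person, avg_ghz, idle_fraction):
    return people * machines_per_person * avg_ghz * idle_fraction

print(pool_capacity_ghz(100, 1.5, 1.5, 0.5))   # 112.5 GHz equivalent
print(pool_capacity_ghz(100, 1.5, 1.5, 0.8))   # 180.0 GHz if machines are 80% idle
```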

A few more facts:
- It renders an entire animation in parallel (i.e. as many frames at the same time on different computers as possible)
- It uses YafRay (Blender supports export to YafRay XML, so that shouldn't be a problem - only very little needs rewriting)
- It's meant to be easy to install and be almost transparent once installed
- Other distributed projects use the same client, so you can share your CPU with them as well
- The other projects are sometimes idle, which means that their participants' CPU time will go to this project instead!
- As a community we can help each other


But!

Getting from where I am now to something that just slightly does what I wrote above isn’t going to be easy. So I need your help!
There is a lot of stuff that I don’t know about - and you do. And there is a lot of stuff that people can help doing as well; I don’t think I’m able to do all this alone.

Right now a lot is missing:
- A cool looking website (I’m a programmer, not that much of an artist)
- A nice looking test-animation to use for debugging and for testing new features
- Loads of information about how you usually use blender and YafRay
- What’s the average size in MB (with all textures and meshes) of an animation?
- What’s your favourite animation framerate, size, bitdepth, format etc? 30fps? 720x500? 24bit? tga/avi?
- Export directly from Blender (as the YafRay plugin now, just modified a bit)
- Some kind of easy-to-use upload and animation submission page as well as a way to download the result.
- Hosts with large (>=2 Mbit/s) bandwidth to use as upload/download servers for textures etc.

It is probably going to take at least 3 months to half a year to get this working perfectly, but it’s not at all impossible.

I hope that with the help of our great community we will be able to make this happen!

Anyone interested in the idea?
Anyone interested in helping out?
Any comments and ideas? Questions?

The link to the project (everybody keeps asking about it): http://burp.boinc.dk

I’m very interested in the idea, and I read a few topics about renderfarms for blender animations (frame by frame ones) or even distributed rendering of single images. These are recent posts (less than 2-3 weeks old, perhaps less); you should search for them and see how those projects could merge with yours.

I can’t help with anything but sharing the idle resources of my computer (Athlon XP 3000+, 512 MB RAM), which I’m willing to do, even if I’m not an animator myself (but perhaps I could use the web resources to render stills by this means, one day :wink: ).

Something puzzles me, though: my current scene (see my sig for the WIP) is packed and tar.gz’ed, but the blend file is more than 30 MB (almost 70 MB un-tar.gz’ed…). This is for a STILL, and I suppose that for an animation the blend can get even heavier. So the biggest issue would be that the blend files have to reach ALL client computers, and be unpacked and unzipped, before any calculation can even start. I suppose the XML files for YafRay are not that different in size from Blender’s own. So my question is: how will the network handle many huge files at a time???

Cool idea, anyway 8) just hoping it’s technically possible.

I could draw up a website later, possibly. I like the idea. A pretty powerful server would be necessary for this kind of work, using a database of frames done, points, members, and the files. At 100 MB a pop, for 100 people, that is 10 GB of bandwidth. Maybe Google Mail could handle that? But that wouldn’t be very integrated.

Hi, I would like to participate in giving up some idle CPU, and if you need any artwork just specify!

couple of questions… what language are you using? And what license is it under?

I’m very interested in the idea, and I read a few topics about renderfarms for blender animations (frame by frame ones) or even distributed rendering of single images.

I also took a look at some of those posts - but the idea is quite different. They plan to use in-house rendering farms, I plan to use computers on the internet.
YafRay supports rendering parts of an image, but doing it in a distributed fashion is only going to speed up the process if there are loads of reflections and raytracing and the resolution is very (!) high. But it’s possible - and quite easy actually. So that’s probably going to be one of the first features in this project. (just remember that only frames where the render-time would have been more than 1 day are good to distribute like this. Anything lower would be easier to render locally)
This is very much an experiment (perhaps a bit ahead of its time) - but in a few years internet speeds will have doubled, and then we will suddenly have a far more capable project. And if, at that time, we already have a userbase of 20-30 people supplying the project with CPU power, it will be a pretty powerful renderfarm.

So my question is: how will the network handle many huge files at a time???

and

At 100 MB a pop, for 100 people, that is 10 GB of bandwidth. Maybe Google Mail could handle that? But that wouldn’t be very integrated.

There is no doubt that getting the files from the maker to the renderers is one of the big difficult parts of this.
The plan is that the maker uploads a single file to a master server. This server then cuts the file into frames and textures (or even subframes) and uploads them to 10 other (fast) servers. These 10 servers are then used to distribute the work to the renderers (perhaps not all of them, depending on the size of the animation).
When work is done, each renderer returns the rendered frame as a gz’d TGA file (everything happens automatically - no need to copy files and stuff). Using Google Mail would be weird, and a pain in the %#"! :smiley:
The maker could also set up a fileserver to further speed up the downloads (especially if he/she also has several computers in-house. Then they can download the media files over the local network instead of from the 10 servers).
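As a sketch of the plan above, the master server's job-splitting step might look something like this; all names, URL layouts and the WorkUnit fields are purely illustrative assumptions, not the actual BOINC/BURP format:

```python
# Hypothetical sketch of the master server cutting a submitted animation
# into per-frame work units spread over mirror servers.
from dataclasses import dataclass

@dataclass
class WorkUnit:
    job_id: str
    frame: int
    scene_url: str       # per-frame YafRay XML on a mirror server
    texture_urls: list   # shared media, downloadable from a mirror

def split_job(job_id, first_frame, last_frame, mirrors, textures):
    """Cut an animation into one work unit per frame, round-robin over mirrors."""
    units = []
    for frame in range(first_frame, last_frame + 1):
        mirror = mirrors[frame % len(mirrors)]   # pick a mirror for this frame
        units.append(WorkUnit(
            job_id=job_id,
            frame=frame,
            scene_url=f"{mirror}/{job_id}/frame{frame:04d}.xml",
            texture_urls=[f"{mirror}/{job_id}/{t}" for t in textures],
        ))
    return units

units = split_job("anim42", 1, 250, ["http://m1", "http://m2"], ["wood.tga"])
print(len(units))   # 250 work units, one per frame
```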

A pretty powerful server would be necessary for this kind of work, using a database of frames done, points, members, and the files.

Yes, or a powerful set of servers. This is distributed computing - even the controlling servers can be spread over several computers and internet connections.
A beta test of one of the projects I mentioned before, with 20,000 users, was run on a single server with only 2 GHz and 1 GB RAM. Once again, in this project it is hard drive space that counts - the returned images will use up 1.7 GB for 2 mins of animation.
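As a rough sanity check of that 1.7 GB figure, here is the arithmetic for two minutes of gzip'd 24-bit TGA frames; the resolution, framerate and compression ratio are assumptions for illustration, not project settings:

```python
# Back-of-the-envelope storage estimate for returned render results.
# Resolution, framerate and compression ratio are assumed values.
width, height, bytes_per_pixel = 720, 500, 3   # 24-bit colour
fps, seconds = 25, 120                         # 2 minutes of animation
gzip_ratio = 0.5                               # assumed ~50% compression

frames = fps * seconds                         # 3000 frames
raw_bytes = frames * width * height * bytes_per_pixel
compressed_gb = raw_bytes * gzip_ratio / 1e9
print(f"{compressed_gb:.2f} GB")               # ~1.6 GB, near the 1.7 GB quoted
```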

I could draw up a website later, possibly.

Cool! - make it look 3D-like - it has to match the idea of the project :wink:
You don’t have to make it right now - take your time (this project isn’t going to be finished tomorrow).
If you would like to know what should be the contents of the site, then have a look at this temporary website (no, don’t create an account or download stuff - there’s not even any work to render yet, the only client is for linux as of now…and don’t expect things to be working): http://burp.boinc.dk/

  • and I haven’t found a nice name for the project yet…

Hi, I would like to participate in giving up some idle CPU, and if you need any artwork just specify!

It’s great to see that people are willing to share their idle CPU time with the rest of the Blender/YafRay community. That makes working on this so much more interesting.
Artwork - actually I need something to use while developing. 2 things:

  1. A test animation without textures that uses a few of the features in YafRay. Preferably half a minute’s worth of animation. Each frame should render in about 1 min (it is for testing purposes, to see if the client handles the files correctly, so I don’t want stuff that takes an hour to render each frame).
  2. Later on, a larger test animation with textures that takes far longer to render each frame (more than 5 mins). Possibly with a high poly count and detailed textures, to test transport of media from maker to servers to renderers.
    Number 1 would be very nice to get quite soon actually - I still need to learn how to animate using Blender :smiley:
    The animations don’t have to look cool, but it sure would be a plus :smiley:

I have distant plans to add the Blender renderer to this project as well. But it requires different file handling and can only do an entire frame at a time, so this is probably a feature that will be added once the YafRay part has been thoroughly tested and proven to work. Other command-line renderers could be added later as well.
YafRay works on almost any platform (anyone tested it on Playstation or XBox?) which is why I want to start out with that.

couple of questions… what language are you using? And what license is it under?

Everything is programmed in C, C++, PHP, Python and a few data-oriented languages (such as HTML, XML, SQL etc.).
The website, as well as the client you will download, is multilingual (several translations of the client already exist - 27 different languages if I remember right).
The project is going to be open source because the components involved (YafRay, Blender, BOINC, Apache, MySQL etc.) are all open source.
YafRay is LGPL and BOINC is under a special BOINC license (compliant with LGPL or GPL I think - hopefully LGPL). Since the project uses a modified version of YafRay we need to stick with those licenses (which is fine by me since both are pretty free).

I still would very much like comments and ideas

  • and perhaps a few answers to these questions:
  • What’s the average size in MB (with all textures and meshes) of an animation?
  • What’s your favourite animation framerate, size, bitdepth, format etc? 30fps? 720x500? 24bit? tga/avi?

sounds very interesting.

I don’t know if a centralized server would necessarily be needed. I was looking at BitTorrent: http://bitconjurer.org/BitTorrent/
and it occurs to me that the model it uses to transfer files keeps track of each user’s contributions. Therefore, a distributed renderer project might be able to use a similar model. Aspects of BOINC http://boinc.berkeley.edu/ and BitTorrent could be the ideal (free) methodology.

Regards,
Mike

Bit-torrent is a very interesting concept - with working implementations. However, not all of its aspects and features are needed to accomplish what a distributed render project requires. A simple, tiny HTTP-protocol bit-torrent-like server that some of the users agree to download would be able to do the same. However, I want to keep these two things (rendering and distribution of data) very separate, so it would be a second client that users can optionally download in order to contribute even more than they do when they render.
There are a lot of aspects to look into with data transfers.
Currently everything is sent uncompressed and the rendered images are returned as uncompressed TGAs. So my first priority is to get everything working, and then optimize it afterwards.

But the bit-torrent idea is noted - thanks! :smiley:

Would it be possible for a user that wanted to share their computer for rendering to just log on and wait for data to be sent, or would there need to be a list on a main server that you would wait to be added to? Also, what would happen if a computer crashed? Would that frame go missing?

A solution for the huge files is rendering in parts! Kinda like Blender’s render-in-parts feature. A computer might render half a frame while another renders a quarter of the same frame, and two others render 12.5% each. Each will send the computed segment of the image back to the headquarters, where it is stitched together into one frame.

You could thus send packets containing only portions of a frame (these packets could be anywhere between 100 kB and 1 MB)

Just an idea you might want to play with.

What happens when one of the computers receives a file to render, but then becomes busy, preventing the frame from being rendered for a couple of hours? The point of using a renderfarm is to speed up rendering, but in this case it might slow it down, because this client has the frame reserved but isn’t rendering at the moment… How would a situation like this be handled?

Don’t make every client render whole frames; that would definitely slow things down. And what happens if you have, say, a 250-frame animation where every frame takes 2-3 hours to render on a top-of-the-line machine? That means we could only have 250 clients running the render, and most of them would take a whole lot longer to render it. What happens if one client doesn’t have enough memory for a whole frame? Break it down into smaller segments. Let’s say we have every client render a 128x128-resolution chunk, or even smaller. That way one job would definitely finish quickly, and at the same time we could use a whole lot more processing power (and clients).
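The chunking suggested above is easy to sketch; the 128x128 tile size is just the example value from the post:

```python
# Sketch of splitting a frame into fixed-size chunks for distribution.
def split_into_tiles(width, height, tile=128):
    """Return (x, y, w, h) chunks covering a width x height frame."""
    tiles = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            w = min(tile, width - x)   # edge tiles may be narrower
            h = min(tile, height - y)  # ...or shorter
            tiles.append((x, y, w, h))
    return tiles

tiles = split_into_tiles(720, 500)
print(len(tiles))   # 24 tiles: 6 columns x 4 rows
```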

It’s been discussed many, many times in many forums like FuBo etc.

And it’s been ruled out as impractical for most applications, and is useless for stills.

Don’t make every client render whole frames; that would definitely slow things down. And what happens if you have, say, a 250-frame animation where every frame takes 2-3 hours to render on a top-of-the-line machine? That means we could only have 250 clients running the render, and most of them would take a whole lot longer to render it. What happens if one client doesn’t have enough memory for a whole frame? Break it down into smaller segments. Let’s say we have every client render a 128x128-resolution chunk, or even smaller. That way one job would definitely finish quickly, and at the same time we could use a whole lot more processing power (and clients).

That is my basic idea. Computers would be given enough to render based on their specs and connection speed. If they stay connected long enough, we could take an uptime pattern and base the amount of data on that. If some machines are connected to the internet longer than others, it would make sense to use those machines a lot more!

What happens when one of the computers receives a file to render, but then becomes busy, preventing the frame from being rendered for a couple of hours? The point of using a renderfarm is to speed up rendering, but in this case it might slow it down, because this client has the frame reserved but isn’t rendering at the moment… How would a situation like this be handled?

How does your computer work? I’m just curious, because I have what is called a hard drive. It is a piece of hardware that stores information for long periods of time when I am not using it. They are pretty common nowadays and most people have them (especially the 3D artists :D)
(end of joke).

Computers are meant to store info. What has been rendered and what is to be rendered would be stored in two files. A third file would be used to store temp data from the renderer. On WinXP or Linux, you could set the renderer to low priority. It would take longer, but with millions of computers, it would not make much of a difference.

Hah, this could be secretly installed with Blender! Muahaahh!

:slight_smile: I was simply referring to the fact that it would delay the render, and thus it might impede the progress of the final animation. E.g. there are 10 computers, each given 1 frame to render. All the computers but 1 finish within a minute, while the other computer’s processor is so busy that it doesn’t finish rendering for an hour. This slow computer has the current frame checked out, so it won’t be passed to any of the other computers. How do we deal with that?

How about this: if a computer finishes, and all the files are checked out but some are not done yet, it takes on those files and works on them. When the other, slower computer is done, the faster ones stop working and discard their progress; but if the faster ones finish first, they notify the slower one to stop working.
Also, maybe a computer can signal to the rest that it will not be able to finish the render, as someone has just increased the processor load, and the others take up the slack.
Or, just have a screensaver that also shows the progress; when someone interrupts it (not just bumps the mouse accidentally), the computer signals the other ones that the render is dropped, or it just un-checks out the file and it becomes available to any other computer, just like a new render normally.

How about this: if a computer finishes, and all the files are checked out but some are not done yet, it takes on those files and works on them. When the other, slower computer is done, the faster ones stop working and discard their progress; but if the faster ones finish first, they notify the slower one to stop working.
Also, maybe a computer can signal to the rest that it will not be able to finish the render, as someone has just increased the processor load, and the others take up the slack.
Or, just have a screensaver that also shows the progress; when someone interrupts it (not just bumps the mouse accidentally), the computer signals the other ones that the render is dropped, or it just un-checks out the file and it becomes available to any other computer, just like a new render normally.

GREAT IDEA! It could be like a public library (for those of you in the USA). Processors could check out and check in parts of the render. When the processor becomes busy (or the screensaver is interrupted), the computer checks in the render and submits what it has already rendered plus the data it still needs to render. Then other computers that are still idle (or still running the screensaver) could check out what hasn’t been rendered and go to town.
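A minimal sketch of this check-out/check-in idea, assuming a per-frame deadline after which an unreturned frame goes back into the pool (class and method names are made up for illustration):

```python
# Toy model of the "library" idea: frames are checked out with a deadline,
# and a frame whose deadline has passed becomes available to other clients.
import time

class FrameQueue:
    def __init__(self, frames, timeout=3600):
        self.timeout = timeout
        self.pending = set(frames)   # not yet checked out
        self.checked_out = {}        # frame -> (client, deadline)
        self.done = set()

    def check_out(self, client, now=None):
        now = now if now is not None else time.time()
        # Reclaim frames whose holders have gone silent past the deadline.
        for frame, (holder, deadline) in list(self.checked_out.items()):
            if now > deadline:
                del self.checked_out[frame]
                self.pending.add(frame)
        if not self.pending:
            return None
        frame = self.pending.pop()
        self.checked_out[frame] = (client, now + self.timeout)
        return frame

    def check_in(self, frame):
        self.checked_out.pop(frame, None)
        self.done.add(frame)

q = FrameQueue(range(3), timeout=10)
f = q.check_out("fast-box", now=0)
q.check_in(f)                          # one frame finished
g = q.check_out("slow-box", now=0)     # slow client takes a frame...
h = q.check_out("fast-box", now=20)    # ...and loses it past its deadline
```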

Nice to see all this interest in the project!
I’ll try to reply to you all:

It works like this (no, not now :wink: ):

  1. You go to the project website and register with your email
  2. An email is sent to you with a code
  3. You enter the project URL and the code in the client
  4. The client takes care of the rest.
    It will basically sit back and wait for work. Once in a while it will check if there is any. You could run other projects than BURP (the render project) while it is waiting - for instance finding cures for illnesses, searching for aliens etc. (these projects already use the same client).
    The client has the ability to weight the projects that you participate in. That way you can give BURP weight 99% and everything else <1%.

Not much to do about it - except rendering the frame again on another computer. The distributor (the server) already takes care of this when the client times out.

This is only effective enough when dealing with very large raytraced images. Also, you would still have to include the entire scene in the work units, so the bandwidth savings are not really that large. However, it may save a lot of time on large animations. I plan to support this feature at some point; YafRay already does.

The idea is to send each frame to 2 computers. This is for several reasons:

  1. If one of the computers becomes unavailable (crashes, program uninstalled, not turned on, etc.) the other one will finish the job
  2. If one computer is heavily overclocked / unstable / hacked etc., and sends back wrong results, the validator will see this and request a 3rd computer to render the frame.
  3. YafRay has a little randomness in the way light is calculated; by merging 2 almost identical images on top of each other, the image quality becomes very (!) high.
  4. In order to compute the amount of credit each user should be given, there is a need to find out how much that data is worth - this is more stable with 2 sources of information instead of 1.

If the maker of the animation wants, he will be able to disable this feature and simply render each frame on a single host (renderer).
This gives a lot of flexibility (speed versus quality versus correctness).
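A toy sketch of how the validator described above might compare two returned renders and merge them; the pixel format and tolerance value are assumptions for illustration, not the actual BOINC validator logic:

```python
# Two-copy validation sketch: compare the two returned renders pixel by
# pixel; accept and merge if they agree, otherwise ask for a third result.
# The tolerance allows for YafRay's slight randomness in light sampling.
def validate(render_a, render_b, tolerance=8):
    """Each render is a flat list of 0-255 channel values."""
    if len(render_a) != len(render_b):
        return None                   # incompatible results, re-render
    for a, b in zip(render_a, render_b):
        if abs(a - b) > tolerance:
            return None               # mismatch: request a 3rd computer
    # Merge the near-identical images to smooth the sampling noise.
    return [(a + b) // 2 for a, b in zip(render_a, render_b)]

print(validate([100, 200, 50], [102, 198, 50]))   # [101, 199, 50]
print(validate([100, 200, 50], [100, 90, 50]))    # None: 3rd render needed
```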

As mentioned before: subframe rendering will be an option that the maker decides to use or not. Some animations will get faster render speeds, others will not. Also, a 250-frame animation only equals about 8 seconds of high-quality movie; I expect the average animation to be a bit longer than that.
On memory: YafRay uses a memory-friendly way of rendering; you can render some scenes on 32 MB RAM if you want to. Also, the scheduler (the server) will try to only send out work to clients that can handle it. If a client is given work that it cannot handle, it will fail and report that back to the server. The server will then reassign the work to someone else.

There is no doubt that its strength is with animations with a rendertime/frame > 5 min (or perhaps even > 30 min for animations with loads of textures).
It can, however, be used for very large stills with a large amount of raytracing and lighting.
The good part is that you can use your computer to blend the next animation while the network renders your current one. And when you are not blending, you can render other people’s animations in return. So instead of having to do: blend, render, blend, render etc., you can now blend, render-and-blend, render-and-blend, render. (Was that one clear?)

Yes, this is the plan. Uptime patterns etc. are important when doing stuff like this.

By sending the work to more than 1 machine, as mentioned before (if the maker of the animation wants it, of course).

This is called trickling. BOINC supports this but I haven’t yet looked into how it works. This is probably going to be one of those features that are added at some later stage when everything else is up and running. But the idea is great.

(I guess I have the longest average post length on Elysiun at the moment, I better stop writing these long posts :wink: )