Render Farm Questions

I am a little curious about maybe setting up a small render farm and have a couple questions about how it works. First off, is there a way to get each node to render part of a still and then have it combined into one image when sent to the master, or are they only able to each render one full frame (e.g. each render a frame of an animation so it gets done faster, but each single frame still takes the same amount of time)? Also, is a CPU render farm worth it for Cycles, or should I go for a GPU render farm instead? And if so, will I need a very strong processor at all, or just one that has enough power to open the scene?

Thanks!

1.) I do it by rendering the same scene on 8 CPUs, each with a relatively low sample count.
I then combine the results in Gimp to get the equivalent of an 8x sample render. The problem with this method, apart from being a pain in the ass, is that it does not scale linearly with the computing power you add. A working netrender would fix this. Let's say a frame takes 10 min on my current PCs (which are all identical). I now add a nice Intel 8-core rig that renders the frame in 3 minutes. How long would it take to get a 9x sample image? 20 minutes. How long would it have taken if I had just added another cheap Phenom X6? Right, 20 minutes.
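
For illustration, the Gimp combine step amounts to averaging the seed variants. A minimal Python sketch (this is not the script I actually use; I do it by hand in Gimp, and the filenames are made up for the example):

```python
# combine_seeds.py - a minimal sketch of the Gimp combine step, assuming the
# 8 low-sample renders were saved as seed_0.png ... seed_7.png (made-up names).
import numpy as np
from PIL import Image

files = [f"seed_{i}.png" for i in range(8)]
stack = np.stack([np.asarray(Image.open(f).convert("RGB"), dtype=np.float64)
                  for f in files])

# Averaging 8 equal-sample renders with different noise seeds gives roughly
# the same result as one render with 8x the samples.
mean = stack.mean(axis=0)
Image.fromarray(np.clip(mean, 0, 255).astype(np.uint8)).save("combined.png")
```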

2.) The maintenance overhead, together with the way you have to render, makes it unfeasible for small scenes that work well on GPU.
If they had a Luxrender-style netrender that would rectify the situation a bit IMO, but to be honest GPU rendering is much more convenient than having an assload of computers sitting around in your crib, collecting dust until you use them once in a fortnight. Not saying that it doesn't have its benefits, but switching to GPU is simply a faster, more convenient workflow if your scene fits in GPU memory.

3.) The real advantage of GPU to me is that you have the power readily available at your fingertips. This requires having the cards installed in your main rig, but that is very doable. Unfortunately it is not possible to cram 10 CPUs into your main rig. I wouldn't build a dedicated GPU box because it just slows your workflow down: you only benefit from rendered preview if the cards are in your own rig; if they are tucked away in another machine, you cannot use it.

This post describes how I set up a CPU render farm. The numbering has nothing to do with my previous post.

1.) First of all, you should ask yourself if you truly need it. I have always liked hardware, so I had a strong urge to build one. At the time I built it there were no really good tutorials on how to optimize your scene, and the branched path integrator was not in Cycles yet, iirc. Overall, Cycles was brand new at the time.

2.) Do not underestimate the choice of hardware. Back in 2011 I settled on Phenom X6 CPUs because they were dirt cheap, I could get mobos for 29.99€ with an integrated GPU just to have a picture for troubleshooting, and RAM prices were about half of what they are now. If I were to do it now I would most likely settle on a Xeon 1230 v3, get the cheapest mobo I could find, and pop in PCI graphics cards and HDDs from the junkyard. You do not actually need more than one HDD, but going diskless is a pain to set up. Also make sure you have a separate room where you can put the render farm. I put the nodes in 10.00€ cases because it was quicker and cheaper than slaughtering an IKEA Helmer, and it lets you easily sell machines off if you need to. I use the stock coolers. Of my 10 nodes, 8 are still running. The other two had failing RAM, and upgrading them to 16GB again would have cost more in 2013 than a whole node did in 2011… so I sold them. Also, do yourself a favour and buy absolutely identical hardware.

3.) Software: this is actually where the trouble starts. You've got to have a good understanding of Linux, simple as that. You can probably use Windows, but it is a memory hog and you (still) lose a good deal of performance compared to Linux. If you are savvy you can actually PXE-boot all the nodes from one HDD, if you settle for a lightweight distro like Ubuntu Server. That actually worked fine as long as I didn't use extended partitioning, but I needed 10 primary partitions and to this day I still don't understand the errors I was getting. I was following this guy's summary, but used tftpd-hpa instead: http://www.calvin.edu/~adams/research/microwulf/ . With something like 3 systems his setup is guaranteed to work. Like I said, I ended up putting in junkyard HDDs simply because I didn't know how to fix the PXE errors when booting from one giant extended partition. Linux geeks would probably have figured it out easily.

So what I basically have is a notebook sitting next to 8 PCs, all on the other side of my crib. I can only reach the notebook via WLAN; the notebook in turn is connected to the 8 PCs through a switch. You cannot remotely power machines on over WLAN, and that is actually the only reason the laptop is used at all. So I log in from my main machine into the laptop, where I have placed scripts that remotely power on the 8 PCs. I upload 8 different blender files, each set to a seed from 0-7 and named according to a fixed scheme, into a designated folder on the laptop. Each render PC runs a script that looks for the file matching its name, renders it, and saves the result back onto the laptop. Once I have the 8 renders, I run the script that shuts the 8 PCs down. I then transfer the 8 renders from the laptop to my main machine, stack them up in Gimp, and get more or less the equivalent of an 8x sample render. The notebook only draws something like 10 watts when idle, so it is always on.
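
To give an idea of what the node-side script does, here is a minimal sketch in Python. The mount points, naming scheme and node id are assumptions for the example, not my actual setup; the Blender flags just render frame 1 in the background:

```python
# node_watch.py - rough sketch of the per-node polling loop described above.
# Paths, naming scheme and node id are made up for the example.
import subprocess
import time
from pathlib import Path

NODE_ID = 3                            # 0-7, unique per render PC
INBOX = Path("/mnt/laptop/jobs")       # share exported by the laptop
OUTBOX = Path("/mnt/laptop/results")

while True:
    job = INBOX / f"scene_seed_{NODE_ID}.blend"
    if job.exists():
        # -b: run Blender without a UI, -o/-F: output path and format,
        # -f 1: render frame 1 (argument order matters: output options first)
        subprocess.run([
            "blender", "-b", str(job),
            "-o", str(OUTBOX / f"render_{NODE_ID}_"),
            "-F", "PNG",
            "-f", "1",
        ], check=True)
        job.unlink()                   # remove the job so it is not re-rendered
    time.sleep(30)                     # poll the shared folder twice a minute
```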

Yes, it is a tedious process. It could be simplified if I had better scripting skills and if Blender finally got a working network render like Luxrender has. When that happens, all you would have to do is autostart the netrender and be done.

If this stuff scares you, let it be. I probably will not do it again myself. Too much lifetime wasted on trivial things IMO. I got a better understanding of Linux and networking out of it, though. If you are still interested, be sure to set up the number of machines you want to buy as VMs in VirtualBox first and test everything out thoroughly.

Awesome, thanks for the response! I was mostly just curious to see how it works, and to get some advice in case I end up getting one down the road, because I've decided that as much as I'd like one, I don't need it right now. Your posts were extremely helpful, so thanks for taking the time to write them! Just curious, how exactly did you go about combining your 8 renders? Did you just stack them on top of each other at 12.5% opacity each, or did you give them a blend mode like Overlay?

No, the opacities go 50%, 33%, 25% and so on until you reach the last layer, i.e. each new layer is set to 1/k opacity where k is its position in the stack, which works out to a plain average of all 8 renders. It is one of the redundant tasks I mentioned; you could probably turn it into a script inside of Blender or Gimp as well. My point is: with a lot of GPUs you will not have to go through that much hassle at all. Personally I think the rig I have now would be much better suited to animations than stills, and in that case I think you could use the network renderer. But I never do animations; I like OSL, and the proposed ability to render "anything" with it is quite intriguing. That being said, I would rather have bought something that boosts my capabilities by 8 times. Truth be told, as a single individual I hardly make good use of that rig, and I spent a lot of time and effort building it. At the moment the only things that justify it for me are OSL and Blender/Cycles' lack of render-time subdivision.
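
For what it's worth, here is a tiny numpy check, purely for illustration, that stacking layers in normal blend mode at 50%, 33%, 25%, … opacity comes out the same as a straight average of all of them:

```python
# opacity_stack_check.py - verify that the 1/k opacity sequence (50%, 33%,
# 25%, ... 12.5%) in normal blend mode equals the plain mean of all layers.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.random((4, 4)) for _ in range(8)]   # stand-ins for the 8 renders

stacked = layers[0]
for k, layer in enumerate(layers[1:], start=2):
    opacity = 1.0 / k                 # 2nd layer 50%, 3rd 33%, ..., 8th 12.5%
    stacked = (1.0 - opacity) * stacked + opacity * layer

print(np.allclose(stacked, np.mean(layers, axis=0)))   # prints True
```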