This post describes how I set up a CPU render farm. The numbering has nothing to do with my previous post.
1.) First of all, you should ask yourself if you truly need it. I always liked hardware, so I had a strong desire. At the time I built it there were no really good tutorials on how to optimize your scene, and the branched path integrator was not in Cycles yet, iirc. Overall, Cycles was brand new at the time.
2.) Do not underestimate the choice of hardware. Back in 2011 I settled on Phenom X6 CPUs because they were dirt cheap, I could get mobos for 29.99€ with an integrated GPU just to have a picture for troubleshooting, and RAM prices were about half of what they are now. If I were to do it now I would most likely go for Xeon E3-1230 v3s, get the cheapest mobo I could find, and pop in PCI graphics cards and HDDs from the junkyard. You do not actually need more than one HDD, but going diskless is a pain to set up. Also make sure you have a separate room where you can place the render farm. I put the nodes in 10.00€ cases because it was quicker and cheaper than slaughtering an IKEA Helmer, and it lets you easily sell nodes off if you need to. I use the stock coolers. Of my 10 nodes, 8 are still running. The other two had RAM fail, and upgrading them to 16GB again would have cost more in 2013 than a whole node did in 2011… so I sold them. Also do yourself a favour and buy absolutely identical hardware.
3.) Software: This is actually where the trouble starts. You have got to have a good understanding of Linux, simple as that. You can probably use Windows, but it is a memory hog and you (still) lose a good deal of performance compared to Linux. If you are savvy you can actually PXE-boot them all from one HDD, provided you settle for a lightweight distro like Ubuntu Server. That actually worked fine as long as I didn't use extended partitioning, but I needed 10 primary partitions, and to this day I still don't understand the errors I was getting. I followed this guy's summary, but used tftpd-hpa instead: http://www.calvin.edu/~adams/research/microwulf/ . With something like 3 systems his setup is guaranteed to work. Like I said, I ended up putting in junkyard HDDs simply because I didn't know how to fix the PXE errors when booting from one giant extended partition. Linux geeks would probably have figured it out easily.
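For orientation, the head-node side of a PXE setup can be as small as a few lines of dnsmasq config. This is only a sketch under my assumptions, not what the Microwulf guide uses; the interface name, subnet and paths are placeholders you would swap for your own:

```
# /etc/dnsmasq.conf -- hypothetical head-node config for PXE-booting render nodes
interface=eth0                        # NIC facing the render-farm switch (placeholder)
dhcp-range=192.168.10.50,192.168.10.70,12h
dhcp-boot=pxelinux.0                  # bootloader the nodes fetch over TFTP
enable-tftp
tftp-root=/srv/tftp                   # put pxelinux.0 plus kernel/initrd here
```

dnsmasq can replace a separate DHCP server and TFTP daemon in one process, which is why small clusters often use it; with tftpd-hpa you would instead keep your existing DHCP server and only point its boot filename option at the TFTP root.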
So what I basically have is a notebook sitting next to 8 PCs, on the other side of my crib. I can only reach the notebook via WLAN; the notebook is connected to a switch together with the 8 PCs. You cannot remotely power machines on over WLAN, and that is actually the only reason the laptop is used at all. So I log in from my main machine into the laptop, where I have placed scripts that remotely start the 8 PCs. I upload 8 different blender files to a designated folder on the laptop, each set with a seed from 0-7 and named after a special naming scheme. Each render PC runs a script that looks for the file with its name, renders it, and saves the result back onto the laptop. Once I have the 8 renders, I run the script to shut the 8 PCs down. I then transfer the 8 renders from my laptop to my main machine, stack them in Gimp, and get more or less the equivalent of an 8x sample render. The notebook only uses like 10 watts when doing nothing, so it is always on.
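The per-node script is nothing fancy. Here is a rough sketch of how such a loop could look; the share path, node ID, naming scheme and poll interval are all placeholder assumptions, not my exact setup, and the loop is guarded by an environment variable so the sketch can be read without a running farm:

```shell
#!/bin/sh
# Hypothetical render-node loop; paths and the naming scheme are placeholders.
SHARE=/mnt/laptop/jobs      # share exported by the laptop, mounted on every node
NODE_ID=0                   # unique per node, 0-7, matching the seed in the .blend

job_file() {
    # naming scheme: job_<node id>.blend
    printf '%s/job_%s.blend\n' "$SHARE" "$1"
}

# Set RUN_NODE=1 to actually start polling.
if [ "${RUN_NODE:-0}" = 1 ]; then
    while :; do
        f=$(job_file "$NODE_ID")
        if [ -f "$f" ]; then
            # headless render of frame 1, output written back onto the share
            blender -b "$f" -o "$SHARE/out_${NODE_ID}_" -F PNG -f 1
            mv "$f" "$f.done"   # rename so the same job is not rendered twice
        fi
        sleep 30
    done
fi
```

On the laptop side, powering the nodes up would just be one Wake-on-LAN packet per node over the wired switch (e.g. `wakeonlan 00:11:22:33:44:55` with each node's real MAC), and shutting down an `ssh`-plus-`poweroff` loop. Averaging the 8 outputs can also be scripted instead of done in Gimp, for instance with ImageMagick's `convert out_*.png -evaluate-sequence mean average.png`.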
Yes, it is a tedious process. And it could be simplified if I had better scripting skills and if blender finally had a working network render like Luxrender does. When that happens, all you would have to do is autostart the netrender and be done.
If this stuff scares you, leave it be. I probably will not do it again myself. Too much lifetime wasted on trivial things, IMO. I got a better understanding of Linux and networking out of it, though. If you are still interested, be sure to set up the desired number of machines in VirtualBox first and test everything out thoroughly.