I’ve been plotting and planning this project for the past couple of months. After tinkering with animation and really enjoying it, but then spending 18 hours to render a 6-second clip in 4K, I started thinking about some alternative methods.
At first I was contemplating picking up some old second-hand servers and setting them up for network rendering, but after doing the math on costs, I figured I could get a lot more bang for my buck with a GPU render farm.
So I picked up a six-pack of used mining RX 480s on eBay and connected them with PCIe adapters…
…via USB, to more adapters…
…which are plugged into the two 1x PCIe slots on my Crosshair VII mobo. Those slots run through the chipset, so they don’t take any PCIe lanes away from my GTX 1070.
Also needed: moar powah.
Now…as it turns out, the 1600 W PSU, even in this configuration, is overkill. Under a full rendering load the entire system pulls 800 W, where I was expecting closer to 1200. I guess that just means I have room for two more RX 480s, considering the PSU still has two more VGA 8-pins to spare. (The freaking monster has nine 8-pin outputs!)
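For anyone who wants to check my math, here’s a rough power-budget sketch. The measured 800 W is the real number; the ~150 W per extra card is my own worst-case assumption (roughly the RX 480’s stock board power; used mining cards are often undervolted and draw less):

```python
# Rough power budget for the render farm (all figures in watts).
PSU_CAPACITY = 1600      # the 1600 W PSU
MEASURED_LOAD = 800      # whole system (6x RX 480 + GTX 1070) under full render
RX480_BOARD_POWER = 150  # ASSUMED worst-case draw per additional card
SPARE_8PIN_CONNECTORS = 2  # VGA 8-pins still unused on the PSU

headroom = PSU_CAPACITY - MEASURED_LOAD
extra_by_wattage = headroom // RX480_BOARD_POWER
extra_cards = min(extra_by_wattage, SPARE_8PIN_CONNECTORS)

print(f"Headroom: {headroom} W")                 # 800 W to play with
print(f"Cards by wattage alone: {extra_by_wattage}")
print(f"Cards I can actually add: {extra_cards}")
```

By wattage alone there’d be room for five more cards, so the real limit is the two spare 8-pin connectors.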
OK, OK, so how does it actually perform? Well, for a baseline, I rendered the classroom scene in 2.79b with a tile size of 320x180 on the GTX 1070 by itself, and it finished in 8:18.47.
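A quick note on why 320x180: assuming the classroom scene renders at its usual 1920x1080 (my assumption; check the .blend if yours differs), that tile size splits the frame into an even 6x6 grid, so every GPU tile is full-sized with no skinny leftovers at the edges:

```python
import math

# Tile-grid math for the baseline render.
# ASSUMES the classroom scene renders at 1920x1080.
frame_w, frame_h = 1920, 1080
tile_w, tile_h = 320, 180

tiles_x = math.ceil(frame_w / tile_w)
tiles_y = math.ceil(frame_h / tile_h)
print(f"{tiles_x}x{tiles_y} = {tiles_x * tiles_y} tiles")  # 6x6 = 36 tiles

# No partial tiles at the edges -- the frame divides evenly:
print(frame_w % tile_w == 0 and frame_h % tile_h == 0)  # True
```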
With the render farm…
Now…can I render it even faster without sacrificing any quality? Gimme some ideas if you think I can.