GPU render farm from mostly spares

Hi guys,

I know these topics no doubt pollute the forum, but in fairness I’ve not done a good job of searching around here. (Hands up - sorry!)

I am looking at the start of a third day rendering out a 1080p, 260 frame animation on my GTX 980; it’s chomping away at 1024 tiles at around 35-40 mins per frame.
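For scale, the figures above work out to roughly a week of continuous rendering on the single 980. This is just back-of-envelope arithmetic using the numbers from the post:

```python
# Rough total render time for a 260-frame animation at 35-40 min/frame.
frames = 260
mins_low, mins_high = 35, 40

low_hours = frames * mins_low / 60    # ~151.7 hours
high_hours = frames * mins_high / 60  # ~173.3 hours

print(f"Total: {low_hours:.0f}-{high_hours:.0f} hours "
      f"({low_hours / 24:.1f}-{high_hours / 24:.1f} days)")
```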

I happen to have an X58 motherboard and i7-930 sat in a box doing nothing, and the thought crossed my mind to pick up 3x GTX 650/750 Ti on the used market and put them to forced labour rendering. However, that would be ~£100 to £150 of graphics cards that would more than likely be outpaced by a single £150 card like a GTX 780?

This machine won’t get a whole lot of use, so the onward march of technology will soon eclipse the second-hand equipment, what with Eevee just around the corner.

So if you’ve got experience, or some tasty links to Blender benchmarks of cards like the 650/750 Ti, I can soon evaluate what my return could be. The total spend is looking to be £250-300 anyway, what with case/PSU/storage.

So I found this thread

Pulled the spreadsheet and answered my question.

Yes, a trio of 650 Tis would perform very well, with a single 650 Ti a mere 30 seconds behind the average GTX 980.

That’s a lot of bang for buck, considering a GTX 650 can be found for £30-50 on eBay.

Just remember that they may have significantly less RAM than you need to render some scenes.

Ah, good point and well noted. I’m glad you mentioned it, as I’ve not run into this problem since the early days with my GTX 280.

Do you know if rendering a frame across 3 GPUs would split the memory requirement across all the cards, or would it be limited to a single card’s capacity?

i.e. with 3x “650 Ti 1GB”, would that be a 1GB restriction or a 3GB limit?

Is your bucket size really around 32 pixels on a GPU? GPU performance drops at that size, causing longer renders. Some cards handle a 16:9 tile ratio better than a square one. I use 480x270 in GPU mode and 32x32 on CPU. This will also lower the peak memory use. Try lower samples, reduce bounces to 4 (rarely higher than 6), and use the new denoiser to your advantage.

Of course I have no idea how complex your scene is, so I could just be talking out of my ass.
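To put the tile-size advice in numbers, here’s a quick sketch of how many tiles a 1080p frame gets cut into at the sizes mentioned above (pure arithmetic, not the Blender API; partial edge tiles count as tiles):

```python
import math

def tile_count(width, height, tile_w, tile_h):
    """Number of tiles a frame splits into at a given tile size."""
    return math.ceil(width / tile_w) * math.ceil(height / tile_h)

w, h = 1920, 1080
print(tile_count(w, h, 32, 32))    # 60 * 34 = 2040 small tiles, good for CPU threads
print(tile_count(w, h, 480, 270))  # 4 * 4 = 16 large tiles, keeps a GPU saturated
```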

3x 650 Ti 1GB would be restricted to 1GB scenes; each card gets its own full copy of the scene data, so VRAM across the cards doesn’t pool. If I recall correctly there is some work in progress to get around this, for example using the system’s main memory when GPU memory is nearly full, but performance would be lower.

So depending on your scene complexity, it might not render at all right now.
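The key point is that every GPU holds a full copy of the scene, so the binding constraint is the smallest card, not the pooled total. A tiny illustration (the scene and card sizes are made-up numbers):

```python
def scene_fits(scene_mb, card_vram_mb):
    """Each GPU uploads the whole scene, so the limit is the
    smallest card's VRAM, not the sum across cards."""
    return scene_mb <= min(card_vram_mb)

cards = [1024, 1024, 1024]     # three hypothetical 1GB 650 Tis
print(scene_fits(800, cards))  # fits: under 1GB per card
print(scene_fits(2500, cards)) # fails: 3GB combined doesn't help
```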

Also, you might consider that if you only render rarely, what about renting time on a render farm? Granted, that involves spending money, but it might be cheaper over a year than building your own local system that almost never runs.

Additionally, your main system with the single GTX 980: is there room to add a card to that system instead of the X58 one? If there is a spare PCIe slot, my recommendation is to save up and get another GTX 980 (or a 970).

And agreed on tile size.

For the GTX 980, start with at least 256x256, or the recommended 480x270. Either will give you a significant boost in render times.

Sorry, a little confusion methinks: the tile size is 1024x1024, which splits each frame into 4 tiles. I found minimal difference in total elapsed time shifting from 512x512 to 1024x1024.

I could go the route of adding cards to my current machine (R7 1700 & X370), however I want to free it up from rendering tasks so I can create stuff, and only add it to the render farm pool when required.

Nice call on the VRAM limitations. I think I’ll follow the advice here and add another GPU for now, then start collecting worthy additions until I can justify offloading the demand onto the i7 platform.

Thanks, you possibly saved me from making a costly mistake.

Thanks for the feedback on the tiles. The original post (“it’s chomping away at 1024 tiles at around 35-40 mins per frame”) made it sound like you had an HD scene split into 1024 tiles. My bad.

One thing to remember when you have more GPUs in the main system: with Blender 2.79 you can now select which of the GPUs to use. I have 4 RX 480s in my main node; I can set 3 to render and keep the 4th for display and working, since the display only uses a single GPU at the moment.

A slight issue with the above is that Blender doesn’t identify which GPU is driving the screen. But if you have different GPU models it’s easy to tell them apart, set one lot to render in the background, and continue working on the other.
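For anyone wanting to script that per-device selection, something along these lines should work in 2.79. This is a sketch from memory of the 2.79 Python API, run inside Blender only; the device list and which index drives your display are machine-specific, so adapt the one you skip:

```python
# Blender 2.79 sketch: render on all Cycles GPUs except the last one,
# which is left free for the display/UI. Attribute names assume the
# 2.79 user_preferences API; verify against your build.
import bpy

prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPENCL'  # 'CUDA' for the NVIDIA cards above
prefs.get_devices()                   # refresh the detected device list

for i, dev in enumerate(prefs.devices):
    dev.use = (i != len(prefs.devices) - 1)  # last card stays on display duty
    print(dev.name, 'render' if dev.use else 'display')

bpy.context.scene.cycles.device = 'GPU'
```

Which index is the display GPU isn’t exposed by Blender here (as noted above), so with identical cards you may need trial and error.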