Building a GPU Renderfarm

Hi,

I’m currently running Blender on a machine with 2 x GTX 780 (3 GB). While rendering a scene with just one GTX 780, the memory peaks at, say, 300 MB. When I add the second GTX, the memory peak doubles to 600 MB. I find this both logical and illogical (but more illogical), because with just one card I can render a scene up to 3 GB. But with both cards active, I can’t render any scene larger than 1.5 GB.

It’s obvious that Blender is sending the whole scene to both cards. And since I have a total of 3 GB of memory, Blender can only handle 1.5 GB per card. This means that the more cards I add, the smaller my scenes have to be. Shouldn’t Blender be able to split the scene into 2 or more parts (depending on how many cards you have installed)? I’m not a programmer, but this should be possible?
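For what it’s worth, the reason Cycles mirrors the whole scene is that it splits the *work* (tiles) across GPUs, not the *data*: any ray traced by any tile can hit any object, so every device needs the complete scene resident in its own memory. A minimal conceptual sketch (not Blender’s actual code — all function and variable names here are hypothetical) of that design:

```python
# Conceptual sketch, NOT Blender's real implementation: work (tiles) is
# divided across GPUs, but the scene data is copied to every device,
# because each device must be able to trace rays against the full scene.

def device_upload(device, scene):
    # Stand-in for per-device memory allocation: every GPU gets a full copy.
    device["mem_used_mb"] = scene["size_mb"]

def render_tile(device, scene, tile):
    # Stand-in for actually rendering one tile on one device.
    return f"tile {tile} rendered on {device['name']}"

def render_frame(scene, devices, tiles):
    """Upload the full scene to every device, then hand out tiles round-robin."""
    for device in devices:
        device_upload(device, scene)           # data is mirrored...
    results = {}
    for i, tile in enumerate(tiles):
        device = devices[i % len(devices)]     # ...while work is divided
        results[tile] = render_tile(device, scene, tile)
    return results

devices = [{"name": "GTX 780 #1"}, {"name": "GTX 780 #2"}]
scene = {"size_mb": 300}
render_frame(scene, devices, tiles=list(range(4)))
total_reported = sum(d["mem_used_mb"] for d in devices)
print(total_reported)  # 600 -- each card holds the full 300 MB scene
```

Under this model, a readout that sums the per-device copies will report double the memory with two cards, even though each card individually only holds one copy of the scene.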

Anyway, to my point (as mentioned in the title). To get around this, I was thinking about building my own renderfarm with, let’s say, 2 extra PCs and 2 GPUs per PC. But maybe I would run into the same limitations as above, even with network rendering?

Thank you in advance.

Regards,

Lennie

The memory should be mirrored: a scene that takes 1.5 GB should require 1.5 GB on each card, plus whatever the viewport needs. Have you disabled SLI? Because you should.

Are you saying Blender crashed when you attempted to render a scene larger than 1.5 GB with both cards active?

Blender is apparently just reporting the total of all the data moving to the GPUs. But it does seem the kernel should sometimes fit when it doesn’t, unless Blender is miscounting the memory used. Here’s some data. I have a scene open now that shows between 1111.51M and 1183.00M in the status bar. Reported numbers for rendering:

(Blender 2.70a)

780Ti with 3GB
Mem:1320.88M, Peak:1320.88M
*will render the scene

780Ti with 3GB + 2x 660Ti with 2GB each
Mem:3984.84M, Peak:3984.84M
*will not render the scene

660Ti with 2GB alone
Mem:1311.61M, Peak:1311.61M
*will not render the scene

If I open the same scene in 2.69, the last reported memory usage for the render task is different.

(Blender 2.69)

780Ti with 3GB
Mem:1916.19M, Peak:1925.46M
*will render the scene

780Ti with 3GB + 2x 660Ti with 2GB each
Mem:5748.58M, Peak:5748.58M
*will not render the scene

660Ti with 2GB
Mem:1916.19M, Peak:1916.19M
*will not render the scene

Sorry all, this is somewhat embarrassing. I was tricked by how Blender/Cycles presents memory consumption while rendering. I just tried with a larger scene: Blender showed 2222.22 MB while rendering with one GPU and 4444.44 MB using two GPUs, and the render still went through.
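So the readout is apparently the sum over all devices, and the figure that actually has to fit in each card’s VRAM is that total divided by the number of GPUs. A trivial sketch of the arithmetic (the helper name is mine, not a Blender API, and the numbers are just the ones from my test above):

```python
def per_card_mb(reported_mb, num_gpus):
    """If the readout sums the mirrored per-device copies, the actual
    per-card usage is the reported total divided by the GPU count."""
    return reported_mb / num_gpus

# With two GPUs active, Blender reported 4444.44 MB, but each 3 GB card
# only needs to hold half of that total:
print(per_card_mb(4444.44, 2))  # 2222.22 -> fits on a 3 GB card
```

That matches what I saw: the render continued fine even though the reported total was well above any single card’s VRAM.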

So let me change my question a bit. Should I buy 2 x GTX 780 6 GB now, or wait a generation (or two?) for a graphics card that supports sharing memory with the system? I think I have read somewhere that this will be possible in the future. What would you do?

Thanks!