Posting this here, maybe it will be useful to someone out there in Blenderland.
I have a computer with a good render card and a good CPU for a homebrew - a Hackintosh with an i7-4790K and a GTX 780 Ti. I thought to myself, “self, I could probably render two images at the same time”. Indeed, a short experiment proved that to be the case. I rendered two of the benchmark BMW scenes simultaneously (one on the CPU, one on the GPU), and rendering both took only about 10 seconds longer than a single CPU render, finishing at about 2:59. (A single CPU render takes 2:49, and a single GPU render takes 1:20. Optimizing tile sizes reduces those times to 2:40 for the CPU and 1:01 for the GPU.)
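If you want to try the basic experiment yourself before touching Netrender, here is a minimal sketch of the idea (this is not the attached DeviceManager.py): launch two background renders of the same scene at once, forcing one instance onto the GPU with a Python expression. The Blender path and the “bmw.blend” name are placeholders, and it assumes CUDA is already configured as the compute device in your user preferences and that your Blender build supports --python-expr.

```python
import subprocess

# Placeholder paths - point these at your own install and scene file.
BLENDER = "/Applications/Blender/blender.app/Contents/MacOS/blender"
SCENE = "bmw.blend"

# Tell Cycles to use the GPU for one of the two instances.
gpu_expr = "import bpy; bpy.context.scene.cycles.device = 'GPU'"

# Start both background renders at the same time.
cpu_job = subprocess.Popen([BLENDER, "-b", SCENE, "-o", "//cpu_#", "-f", "1"])
gpu_job = subprocess.Popen([BLENDER, "-b", SCENE, "--python-expr", gpu_expr,
                            "-o", "//gpu_#", "-f", "1"])

cpu_job.wait()
gpu_job.wait()
```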
Then I thought, “self, I could use this with Netrender.” I made the script (attached), and indeed I can use both GPU and CPU simultaneously.
In actual use, it is equivalent to about 1.5 to 1.75 render nodes, because the feedback mechanism I made steps down the CPU thread count when needed, and also switches the GPU node over to CPU rendering when the GPU gets too hot for my comfort. I find that a CPU render takes roughly 3x as long as a GPU render. When the GPU node shifts to the CPU, the GPU gets a break for the equivalent of 3-7 frames, depending on what I am rendering. So, during that time, the CPU is handling twice as many jobs… usually just two, and each takes about twice as long as it would if rendered individually. That is where I lose efficiency while the GPU cools off.
If I had better cooling, I could shave even more time off the jobs. My temperature threshold is 60 C, but I could probably go up to 65 C without too much risk.
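As a rough illustration of that feedback loop (the real logic, with all its assumptions, is in the attached script): poll the GPU temperature through a helper and only hand the next job to the GPU while it is under the threshold. The helper path and read_gpu_temp() below are hypothetical stand-ins for whatever temperature tool you have on your system.

```python
import subprocess
import time

GPU_TEMP_LIMIT_C = 60.0  # my comfort threshold from above

def read_gpu_temp():
    """Hypothetical helper: parse whatever your temperature app reports."""
    out = subprocess.check_output(["/usr/local/bin/gpu_temp_helper"])
    return float(out.strip())

def pick_device():
    """Use the GPU only while it is below the limit; otherwise fall back to CPU."""
    return "GPU" if read_gpu_temp() < GPU_TEMP_LIMIT_C else "CPU"

while True:
    device = pick_device()
    # ...dispatch the next job to the GPU or CPU Blender instance here...
    print("next job goes to the", device)
    time.sleep(30)  # give the GPU time to cool before checking again
```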
I have written up additional assumptions, notes, etc. inline in the attached script, “DeviceManager.py”. Of course, it is written for OSX in my case, but it can easily be modified. To use this you will need a helper app and two copies of Blender - one contained in a folder called “BlenderCUDA” and another in a folder called “BlenderCPU”.
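As a sketch of how the two copies might be picked at launch time (assuming the folder layout above lives somewhere like ~/Render - adjust the base path for your machine):

```python
import os

BASE = os.path.expanduser("~/Render")  # placeholder for wherever the two copies live

def blender_binary(device):
    """Return the OSX binary inside BlenderCUDA or BlenderCPU for the given device."""
    folder = "BlenderCUDA" if device == "GPU" else "BlenderCPU"
    return os.path.join(BASE, folder, "blender.app", "Contents", "MacOS", "blender")

print(blender_binary("GPU"))
```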
See the images below. The first shows the log and the CPU and GPU usage graphs just after the Netrender has started… the graph and log show how the GPU node instance switched to CPU rendering to let the GPU cool. The second shows a detail of the CPU and GPU usage graphs when the render is well underway, showing cycles of GPU use and cool-down. You can also see that the actual CPU temps are up near 70 C even with my 60 C threshold. This second shot is after 25 minutes of use.
Attachments
DeviceManager.py.zip (4.37 KB)