Yet another question about Cycles running out of memory, but I haven't found this specific situation on the forum.
The problem is that my Cycles render fails with “STDOUT: CUDA error: Out of memory in cuMemAlloc(&device_pointer, size), line 568” when I submit it through Thinkbox Deadline. The same scene, with the same settings, renders without problems when I do an Animation render from within Blender. But I would like to use Deadline, because then I can distribute the render over a couple of workstations.
The error also pops up on the same PC that successfully rendered the scene from Blender itself. In other words, when I submit the job to Deadline to render on my local Deadline slaves, it runs out of memory.
- Windows 10 PC
- 4 GPU cards (2x GeForce 970, 2x GeForce 1070)
- 4 Deadline slaves configured, one for each card
- A simple test scene renders without problems
- The scene I want to render uses the Blenderguru Grass Essentials, so it’s quite heavy due to a lot of instanced particle hairs (but it does render locally)
Is anyone aware of a difference in VRAM management between rendering directly from Blender and rendering through Deadline?
I will try to optimize the scene some more, but I suspect I’m doing something wrong in my render setup that causes the scene to load more data than needed. I’m also not sure what the best way is to manage multiple GPUs on our Deadline render farm.
Any help is greatly appreciated!