Rendering a scene on a GPU cluster

I’m trying to render a scene on a remote server with a GPU cluster. This is the setup:

  • CentOS 7
  • Blender 2.8
  • 4× NVIDIA GTX 1080 Ti

While rendering I can see that all the GPUs take part, but their utilization peaks at around 33% and memory usage is also low, around 15% at peak. Rendering one scene takes ~5 minutes on the server, while on my laptop with a single GTX 1060 the same scene takes ~2 minutes.
What could be the reason for this? How can I make more efficient use of the GPU power?
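
For reference, the utilization and memory figures above are per-GPU values sampled while the render runs; a small helper around nvidia-smi along these lines can log them (just a convenience sketch, not part of the render script):

    import subprocess
    import time

    def sample_gpu_usage(interval=2.0, samples=10):
        """Print per-GPU utilization and memory usage a few times while a render runs."""
        query = 'index,utilization.gpu,memory.used,memory.total'
        cmd = ['nvidia-smi', '--query-gpu=' + query, '--format=csv,noheader,nounits']
        for _ in range(samples):
            out = subprocess.check_output(cmd).decode()
            for line in out.strip().splitlines():
                idx, util, mem_used, mem_total = [s.strip() for s in line.split(',')]
                print(f'GPU {idx}: {util}% utilization, {mem_used}/{mem_total} MiB')
            time.sleep(interval)

    if __name__ == '__main__':
        sample_gpu_usage()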

This is the code used to assign the GPUs:

    import bpy

    # Cycles add-on preferences hold the list of available compute devices.
    preferences = bpy.context.preferences
    cycles_preferences = preferences.addons['cycles'].preferences

    # Query the device lists, then select CUDA as the compute backend.
    cuda_devices, opencl_devices = cycles_preferences.get_devices()
    cycles_preferences.compute_device_type = 'CUDA'

    # Enable every CUDA device (the list also contains the CPU, which is skipped).
    for device in cuda_devices:
        if device.type != 'CPU':
            print(f'Activating {device.name}')
            device.use = True

    # Tell the scene to render on the GPU.
    bpy.data.scenes[0].cycles.device = 'GPU'
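
To double-check that the assignment actually sticks, I also print the state afterwards (just a diagnostic snippet, separate from the code above):

    import bpy

    # Diagnostic: confirm the compute backend and which devices are enabled.
    prefs = bpy.context.preferences.addons['cycles'].preferences
    print('Compute device type:', prefs.compute_device_type)
    for device in prefs.devices:
        print(f'{device.name} ({device.type}): use={device.use}')
    print('Scene render device:', bpy.data.scenes[0].cycles.device)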

The resolution of the scene is really low (300×200), hence I would expect the rendering to be much quicker than 5 minutes.
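
In case it’s relevant, these are the render settings I can inspect from the same script (I’m including sample count and tile size since I assume they affect how the work is spread across the four cards):

    import bpy

    scene = bpy.data.scenes[0]

    # Output resolution (300x200 in my case) and the percentage scale applied to it.
    print('Resolution:', scene.render.resolution_x, 'x', scene.render.resolution_y,
          f'at {scene.render.resolution_percentage}%')

    # Cycles sample count and the tile size used to split the frame between devices.
    print('Samples:', scene.cycles.samples)
    print('Tile size:', scene.render.tile_x, 'x', scene.render.tile_y)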