Out of memory problem while running two mixed CUDA cards

So when I use two mixed CUDA cards (a 1070 and a 750 Ti), Blender doesn't actually check how much memory is left on each card. If the scene is too big for the 750 Ti, it just won't render on 1070 + 750 Ti, but if I set it to use only the 1070, it renders just fine. Adding the 750 Ti isn't a big deal anyway, I'm just wondering if there's a way around this so Blender really utilizes both cards.

For dual card rendering, the whole scene needs to fit into the memory of each card.

As Deadalus_MDW stated, your scene has to fit into both cards' memory buffers.

As the GTX 750 Ti has 2 GB of memory whereas the 1070 has 8 GB, you are stuck with the lowest capacity of 2 GB for the scene.
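
If it helps, here is a minimal sketch of making that switch from a script or the Python console instead of the UI. It assumes the Blender 2.8x+ preferences API and that the device name Cycles reports for your card contains "1070"; check what get_devices() actually lists on your machine before relying on the match.

```python
import bpy

# Cycles add-on preferences hold the compute device settings.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"
prefs.get_devices()  # refresh the detected device list

for dev in prefs.devices:
    # Enable only devices whose name contains "1070" (assumption about the
    # reported name); everything else, including the 750 Ti and the CPU
    # entry, gets switched off so the 2 GB card no longer caps the scene.
    dev.use = "1070" in dev.name
    print(dev.name, "->", "enabled" if dev.use else "disabled")

# Make the current scene render on the GPU device(s) selected above.
bpy.context.scene.cycles.device = "GPU"
```

Run it once per session (or put it in a startup script) and only the 1070, with its full 8 GB, will be used for rendering.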

I'm not sure when, but wasn't there a "fix" for GTX cards to let them use system memory for rendering? I guess that hasn't been implemented yet, based on the issue reported here.