For NVIDIA CUDA GPUs on current Blender:
I do know that memory size does not stack up between multiple cards: let's say I have 3 GPUs, one with 11 GB, one with 6 GB and one with 3 GB. That does not mean I have 20 GB of potential memory; from my understanding it means I have 11 GB.
So, first question: do I have to manually set which card Blender should pick as the "main" GPU?
The memory limit is determined by the GPU with the lowest amount of VRAM among all active devices, so in your example you would be limited to 3 GB, not 11. There is no "main" GPU to pick, but you can deactivate devices in "Preferences->System->Cycles Compute Devices".
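The "lowest active device wins" rule can be sketched in a few lines of plain Python. This is only an illustration of the logic, not Blender's actual code; the device names and the dict layout are made up for the example.

```python
def effective_vram_gb(devices):
    """Usable VRAM is limited by the smallest *active* device.

    `devices` maps a device name to (vram_gb, active).
    Deactivated cards are ignored entirely.
    """
    active = [vram for vram, on in devices.values() if on]
    if not active:
        raise ValueError("no active compute devices")
    return min(active)

# Hypothetical three-card setup from the question:
gpus = {"GPU A": (11, True), "GPU B": (6, True), "GPU C": (3, True)}
print(effective_vram_gb(gpus))  # 3 -- not 20, and not 11 either

# Deactivating the 3 GB card (as you would in Preferences) raises the limit:
gpus["GPU C"] = (3, False)
print(effective_vram_gb(gpus))  # 6
```

So a small card does not just fail to contribute; it actively lowers the ceiling for the whole setup unless you switch it off.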
Given that I have 11 GB at my disposal, how does it work at render time? Can a GPU with less RAM manage that, or should I assume the whole scene needs to be transferred even to render a single bucket? So fundamentally, am I "bottlenecked" here as well?
Each GPU gets a tile to work on. A slower GPU may end up occupying the last tile while the faster cards sit idle, which can slow the render down. A GPU without enough RAM for the scene will cause rendering to fail; otherwise, RAM size has no effect on rendering speed.
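The "slow GPU grabs the last tile" effect can be simulated with a toy greedy scheduler, assuming equal-cost tiles and ignoring data-transfer overhead. The tile counts and relative speeds below are invented for illustration; this is not how Cycles actually schedules work internally.

```python
import heapq

def render_time(tile_costs, gpu_speeds):
    """Greedy scheduler: whichever GPU frees up first takes the next tile.

    Returns the wall-clock time until the last tile finishes.
    """
    # Min-heap of (time_when_free, gpu_index)
    heap = [(0.0, i) for i in range(len(gpu_speeds))]
    heapq.heapify(heap)
    finish = 0.0
    for cost in tile_costs:
        t, i = heapq.heappop(heap)
        t += cost / gpu_speeds[i]       # this GPU renders the tile
        finish = max(finish, t)
        heapq.heappush(heap, (t, i))
    return finish

# 8 equal tiles; one GPU twice as fast as the other.
print(render_time([1] * 8, [2.0, 1.0]))  # 3.0 -- the slow GPU takes the
                                         # final tile, so the fast card
                                         # idles from t=2.5 to t=3.0
```

In this toy model the slow card still helps overall (the fast GPU alone would take 4.0), but the tail tile it holds is exactly the slowdown described above, and it gets worse with fewer, larger tiles.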
In the future, the hard memory limit may be relaxed through developer effort and better hardware support.
For AMD GPUs:
Several users report being able to render scenes larger than VRAM on recent AMD GPUs (using OpenCL). This is a driver-side feature, not something Blender controls. In that case the amount of VRAM does affect performance when the scene does not fit.