I have two swappable HDDs.
On the first one I have Blender 2.78 running on Ubuntu 16.04.
On the second one it's Blender 2.78 on Windows 8.1.
I also have two GTX1070s.
On Windows I get simple options similar to:
- GeForce GTX 1070 (1)
- GeForce GTX 1070 (2)
- GeForce GTX 1070 x2
However, in Linux the options are totally different (and confusing):
- GeForce GTX 1070 + (Display)
- GeForce GTX 1070 (Display)
- GeForce GTX 1070
I have tried all three settings and I get different results for each.
To test, I used the Rendered viewport shading mode with the default cube scene and then rapidly panned and zoomed.
By issuing the command "nvidia-smi" I get a snapshot of my graphics cards' statistics.
By issuing "watch -n 0.5 nvidia-smi" I get statistics that update in near real time.
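For a more compact readout than the full nvidia-smi table, a query like the one below prints just the per-card numbers I'm watching (this uses nvidia-smi's documented --query-gpu interface; the exact fields available can vary with driver version, and obviously it only runs on a machine with the NVIDIA driver installed):

```shell
# Poll both cards every 500 ms, printing only the GPU index, name,
# memory in use, and GPU utilization as comma-separated values.
nvidia-smi --query-gpu=index,name,memory.used,utilization.gpu \
           --format=csv,noheader -lms 500
```

This avoids the screen-clearing of watch and is easier to log to a file if you want to compare runs.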
So, if I select the compute device GeForce GTX 1070 + (Display):
Both cards' memory usage ramps to 2073 MB and GPU utilization peaks around 40%.
When I select the compute device GeForce GTX 1070 (Display):
Both cards' memory usage ramps to 1360 MB and GPU utilization peaks around 40% (the peaks seem less frequent).
When I use the compute device GeForce GTX 1070:
Both cards' memory usage ramps to 1360 MB.
Card 1 = GPU utilization steady at 24%
Card 2 = GPU utilization peaks ~40%
The responsiveness of the rendered viewport doesn't appear to change much (if at all) regardless of which option is selected, but it was a very simple scene.
I assume the option "GeForce GTX 1070 + (Display)" is the one that fully utilizes both cards' memory and GPUs, but I'm still not quite sure what those "(Display)" options actually imply. Does anyone have an explanation?