I’m running Linux Mint 18 with KDE Plasma 5 on a desktop PC.
lspci | grep -i vga
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)
I’m trying to get the Intel GPU to handle the display while using the dedicated NVIDIA card for CUDA only.
I’ve set the Intel GPU as the primary display in the BIOS and plugged my monitors into the motherboard.
I’ve modified my X.org config to use the “intel” device instead of “nvidia”, leaving the “nvidia” device unused by X.
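For reference, the relevant part of my xorg.conf looks roughly like this (the BusID values match the lspci output above; exact option names may differ in your setup):

Section "Device"
    Identifier "intel"
    Driver     "intel"
    BusID      "PCI:0:2:0"    # Intel IGP at 00:02.0 above
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "intel"
EndSection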
I seem to have mostly got it working, but a few things are really weird:
When X.org starts, Plasma Desktop warns that the GPU doesn’t support OpenGL, and Blender won’t run; it reports that no GLX extension was found.
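If it helps diagnose this, I believe the active GLX renderer can be checked with glxinfo (from the mesa-utils package); on a working Intel setup it should report a Mesa/Intel renderer string:

glxinfo | grep -iE "vendor|renderer"
# e.g. "OpenGL renderer string: Mesa DRI Intel(R) ..." when the Intel driver is active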
If I switch to the Intel GPU from the NVIDIA Optimus panel, Plasma and Blender work, but my thermal widget no longer reports the NVIDIA GPU’s temperature.
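If I understand correctly, that Optimus panel just drives nvidia-prime underneath on Mint 18, so the active profile can also be checked from a terminal:

prime-select query
# prints "intel" or "nvidia" depending on the selected profile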
I tested CPU vs. GPU rendering in Blender and the GPU renders 3-4 times faster than the CPU, so I guess CUDA really is working.
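This should also be visible in nvidia-smi while a render is running:

nvidia-smi
# during a render the GTX 960 should list the blender process and show
# non-zero GPU utilisation, with no Xorg process attached to the card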
If I switch the Optimus panel back to the NVIDIA GPU, I can’t run Blender anymore (without even logging out; it’s the same X.org session!), but the thermal widget displays the GPU temperature again!
Another strange thing happened. I have a project with quite complicated groups of meshes (the scene has 25,000 vertices); when I select different objects, Blender hangs for a moment, sometimes 10+ seconds. When working with a scene that doesn’t have as much geometry, everything is fine. Could this be a limitation of the Intel GPU, or could it be a bug?
Even switching the selection between Empty objects takes about 500 ms, while it always used to be instant. It’s quite random; even a scene with 500 vertices lags a bit when selecting objects.
I wrote the above paragraph before I realised I was rendering with CUDA in another Blender process, so the lag was probably caused by that. It still puzzles me, though: even turning the niceness of the CUDA-rendering process all the way up doesn’t get rid of the selection lag in the other Blender instance. Still, it’s much better than trying to work while a single GPU handles both CUDA rendering and the display.
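For reference, this is roughly how I lowered the render’s priority (19 is the highest/nicest value Linux allows):

renice -n 19 -p 12345    # 12345 = PID of the Blender instance doing the CUDA render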