Problems using Intel HD for display + nVidia for CUDA

I’m running Linux Mint 18 with KDE5 on a desktop PC.

lspci | grep -i vga
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)

I’m trying to get the Intel GPU to handle the display, while using the dedicated nVidia card for CUDA only.
I’ve set the Intel GPU as the primary display in the BIOS, and plugged my monitors into the motherboard.
I’ve modified my X.org config to use the “intel” device instead of “nvidia”, leaving the “nvidia” device unused by X.
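For reference, a minimal sketch of the kind of Device/Screen sections I mean (the identifiers are arbitrary; the BusID matches the 00:02.0 address from the lspci output above):

Section "Device"
    Identifier  "intel"
    Driver      "intel"
    BusID       "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier  "screen0"
    Device      "intel"
EndSection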

I kind of seem to have got it working, but there are some things that make it really weird:

When X.org starts, Plasma Desktop warns that the GPU doesn’t support OpenGL. Blender won’t run - no GLX extension found.
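(For anyone checking which renderer the X session actually picked up, glxinfo from the mesa-utils package can show it, e.g.:)

glxinfo | grep "OpenGL renderer"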

If I switch to the Intel GPU from the nVidia Optimus panel, Plasma and Blender work, but my thermal widget doesn’t report the temperature for the nVidia card anymore.

I tested CPU vs. GPU rendering in Blender, and the GPU renders 3~4 times faster than the CPU, so I guess CUDA really is working.

If I switch the nVidia Optimus panel back to the nVidia GPU, I can’t run Blender anymore (without logging out! it’s the same X.org session!), but the thermal widget displays the GPU temperature again!

Also, a strange thing happened. I have a project with a quite complicated group of meshes (the scene has 25,000 vertices): when I select different objects, Blender hangs for a moment, sometimes for 10+ seconds. When working with a scene that doesn’t have as much geometry, everything works fine. Could this be a limitation of the Intel GPU, or could it be a bug?

Even switching the selection between Empty objects takes around 500 ms, while it always used to be instant. It’s quite random. Even a scene with 500 vertices lags a bit when selecting objects.

I wrote the above paragraph before I realised I was rendering with CUDA in another Blender process, so the lag was probably caused by that. It still puzzles me why, though - even turning the niceness all the way up to 25 for the CUDA-rendering process doesn’t get rid of the selection lag in the other Blender instance. Still, it’s much better than trying to work while using a single GPU for both CUDA rendering and display.
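(For reference, a rough sketch of lowering a running process’s priority from a terminal - the PID is just a placeholder, and nice values effectively cap at 19 on Linux:)

renice -n 19 -p 12345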

Which iGPU model does your CPU have?

Regarding monitoring the temperature:
https://devtalk.nvidia.com/default/topic/876441/monitoring-nvidia-gpu-when-intel-igpu-as-primary-display/?offset=3

A year has passed and they have not even answered. It would be good if someone else joined the thread and added to the request.
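As a possible workaround (I have not verified it in this iGPU-as-display setup), nvidia-smi can usually query the temperature directly from the driver, without needing an nvidia X screen:

nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader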

I reported this issue on the nVidia forums:
https://devtalk.nvidia.com/default/topic/970144/linux/intel-for-display-nvidia-for-cuda-optimus-bug-/

(and I also bumped your thread, YAFU)

I’m not sure; the Intel CPU is an i5, but it reports as Xeon in the lspci output.

Thanks.

With “cat /proc/cpuinfo” from a terminal you can find out the CPU model, and from that the iGPU, by searching on the internet.
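For example, something like this prints just the model line (the exact wording of the output varies between systems):

grep -m 1 "model name" /proc/cpuinfo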

Regarding the problems you have in Blender when using the iGPU, see if changing any of these settings helps (mainly OpenGL and Window Draw Method):
https://www.blender.org/manual/preferences/system.html

Be careful with the changes and make them one at a time, so you know exactly whether each one improves or worsens the situation. For example, a while ago, without reading the documentation, I had chosen “Full” as the Window Draw Method, mistakenly thinking that “Full” was the best. And no, it is quite the opposite - it is the worst.

Edit:
I was referring to what you said in that paragraph about the lag. But if it was because of the other Blender instance running at the same time, then don’t modify anything in Blender’s configuration.

Edit 2:
Regarding the problems you have when going back to nvidia from nvidia-settings - I’m not sure I understand correctly.
The last time I tried with recent drivers, I did not need to edit xorg.conf manually. When you choose nvidia or the intel iGPU from the nvidia-settings GUI, it automatically creates a new ‘xorg.conf.xxxxxxxx’ file in “/etc/X11”, where xxxxxxxx are numbers. If you are having problems, to get back to the original state just manually delete all these ‘xorg.conf.xxxxxxxx’ files, then edit xorg.conf for nvidia. You may also need to rename the hidden folder “/home/YOUR_USER/.nv”. Restart the system and select nvidia (PCIe) as the primary display in the BIOS.
(I think in Ubuntu you do not even need to have an xorg.conf file, but I’m not sure if that is the case in Linux Mint.)
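Roughly, the cleanup would look something like this (double-check what the glob matches before deleting; the backup name for the .nv folder is just an example):

ls /etc/X11/xorg.conf.*            # see which generated files are there
sudo rm /etc/X11/xorg.conf.[0-9]*  # remove only the numbered copies
mv ~/.nv ~/.nv.backup              # rename the hidden nvidia cache folder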

I did a quick test with the recent 370.28 drivers in Kubuntu 16.04. When the intel iGPU is selected from the PRIME menu in the nvidia-settings GUI, GPU compute is not available in Blender after a reboot. When I have more time I will analyze the problem.

Anyway, my HD 4000 is working really well in Blender. The only problem I found is that if “Automatic” is chosen under “Selection” in the System preferences, then selecting objects in the viewport is very laggy. I have chosen “OpenGL Occlusion Queries” and it works well.

Edit:
I have found that “OpenGL Occlusion Queries” is not very appropriate when you have overlapping objects, for example when you use bones with X-ray.

@unfa, I have been experimenting again. I saw your report on the nvidia forum. It seems that your xorg.conf is not set up for intel. Try editing your xorg.conf similar to the second one I posted here:
https://devtalk.nvidia.com/default/topic/876441/linux/monitoring-nvidia-gpu-when-intel-igpu-as-primary-display/

That will not solve the temperature monitoring issue anyway, though.
If you cannot get nvidia working as the primary display again, try doing what I explained in previous messages (in Edit 2).