Cycles and Optimus

Has anyone tried using Cycles with Nvidia Optimus? I was just thinking that if I set Blender to use the internal GPU and then set Cycles to use the Nvidia card, I could potentially run a two-GPU setup on my laptop. The internal chip should be good enough to run the UI. Anyway, just wondering if someone has tried this. I'll test it on my lunch break.
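For reference, in the Python console the Cycles half of that would look something like this (property names are from the 2.6x Python API, so treat it as a sketch and double-check them on your build):

import bpy

# Tell Cycles to do its compute work on the discrete Nvidia card
bpy.context.user_preferences.system.compute_device_type = 'CUDA'

# Make the scene render on the GPU rather than the CPU
bpy.context.scene.cycles.device = 'GPU'

# The internal GPU keeps drawing the UI, since Blender itself was
# launched on it via the Optimus application profile.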

I tried running the official Blender 2.62 build on my netbook with an Atom D525/ION2 + Optimus, but ION2 only has compute capability 1.2, and we need 1.3 or better for Blender. IIRC, some people got it running on ION2 with a Graphicall.org build, though, using experimental settings and OpenCL or something.
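If anyone wants to check what compute capability their card reports, a few lines of PyCUDA will do it (assuming you have PyCUDA installed; this is just a sanity check outside Blender):

import pycuda.driver as cuda

cuda.init()
# Print the compute capability of every CUDA device the driver sees;
# Cycles wants 1.3 or better, which is why ION2's 1.2 gets rejected.
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print("%s: compute capability %d.%d" % (dev.name(), major, minor))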

I tried it on my rig and just got an error message when it tried to render. I set Blender to use the Intel chip, which works fine, and then set Cycles to use the Nvidia GPU… but like I said, it just came back with an error.

I'll check out the other thread as well, thank you.

How does one set the GPU that's used for the UI in these two-GPU scenarios?

When running Blender on the Nvidia card, Cycles functions; on the Intel chip it's the same as Meteor_Freak reported, a CUDA error when rendering…

In Optimus I set Blender to run on the Intel chip, then in Blender I tell Cycles to use the Nvidia card. It could be an Nvidia thing… Optimus might not allow this. I thought it might work since you can run two programs at the same time, one using the Intel chip and the other using the Nvidia chip, but I don't know the finer workings of Optimus and how it achieves this.

Using the Nvidia card to render with OpenCL, strangely enough, does work (although crippled) while using Intel for the 3D View. So it IS possible to use both video chips at the same time.
Trying CUDA, though, gives this error:
mesh_ensure_tessellation_customdata: warning! Tessellation uvs or vcol data got out of sync, had to reset!
CD_MTFACE: 0 != CD_MTEXPOLY: 1 || CD_MCOL: 0 != CD_MLOOPCOL: 0
CUDA error: Unknown error
CUDA error: Invalid context in cuMemAlloc(&device_pointer, mem.memory_size())
CUDA error: Invalid context in cuMemsetD8(cuda_device_ptr(mem.device_pointer), 0, mem.memory_size())
CUDA error: Invalid context in cuMemAlloc(&device_pointer, mem.memory_size())
CUDA error: Invalid context in cuMemcpyHtoD(cuda_device_ptr(mem.device_pointer), (void*)mem.data_pointer, mem.memory_size())
CUDA error: Invalid context in cuGraphicsGLRegisterBuffer(&pmem.cuPBOresource, pmem.cuPBO, CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE)
mesh_ensure_tessellation_customdata: warning! Tessellation uvs or vcol data got out of sync, had to reset!
CD_MTFACE: 0 != CD_MTEXPOLY: 1 || CD_MCOL: 0 != CD_MLOOPCOL: 0

CUDA works fine when using the Nvidia card for everything…
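For what it's worth, you can see which chips the drivers actually expose to OpenCL with a few lines of PyOpenCL (assuming it's installed; if a chip doesn't show up here, its driver simply doesn't offer OpenCL):

import pyopencl as cl

# List every platform/device pair the installed OpenCL drivers expose
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print("%s -> %s" % (platform.name, device.name))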

I see similar behavior. I suppose we can wait for OpenCL performance to improve, but it would be nice to get CUDA working in a dual-GPU laptop scenario. Any suggestions as to where to go from here?

BTW, as if by magic, the specs for Intel Ivy Bridge with the HD 4000 integrated GPU were just announced. The on-chip GPU supports OpenCL 1.1. Here comes Blender on Ultrabooks…
http://www.engadget.com/2012/04/23/intel-ivy-bridge-core-i5-i7-quad-core-processors/