Multi graphics card support without SLI? Utilizing DirectX 12

Please review this video. No SLI or Crossfire needed to utilize many heterogeneous graphics cards on a system.
But Blender does not use DirectX.

Any news on rendering with GPU and CPU simultaneously? Will it be a 2.80 feature?

Neither CUDA nor OpenCL requires Crossfire or SLI to render in Cycles. I’m not sure how the viewport handles multiple GPUs, though…

edit: It appears that Blender doesn’t use more than one GPU in the viewport

GPU+CPU rendering is already available in the 2.79 buildbots, and multi-GPU rendering is not news either. In Blender, SLI and Crossfire will actually slow you down.

So can you direct me to such a build (CPU & GPU rendering)?
Is it stable? Any restrictions or limitations?
Why is it not an official release?
Ty

Here is the link to the buildbot https://builder.blender.org/download/

Check ‘Cycles Compute Device’ under the System tab in the User Preferences. In recent builds your CPU should show up as an option there; tick its checkbox to add it to the rendering devices and it should work.
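If you’d rather script it, this is roughly the same thing through the Python API (just a quick sketch, assuming the 2.79 property paths; swap ‘CUDA’ for ‘OPENCL’ on AMD cards):

```python
import bpy

# Point Cycles at the GPU backend and refresh the list of detected devices.
prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'   # use 'OPENCL' for AMD cards
prefs.get_devices()

# Enable every detected device, CPU included, so F12 renders use GPU + CPU.
for device in prefs.devices:
    device.use = True
    print(device.type, device.name, 'enabled')

# The scene also has to be set to render on the selected devices.
bpy.context.scene.cycles.device = 'GPU'
```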

Limitations:
- It only works for F12 rendering; viewport rendering still uses only the GPUs.
- CPU rendering is still slow with large tiles. Thankfully GPUs now perform well with small tiles, so 16x16 is a good tile size.
- If you are using denoising, it currently slows down a lot with small tiles, so I’ve had the best luck with 32 or 64 px tiles when denoising. (Lukas recently committed a change which should reduce this slowdown a fair bit; if you wait a day or two to download the buildbot version it should be in there.) There’s a quick Python sketch of these tile/denoising settings right after this list.
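For reference, here is roughly how those settings look from a script (again only a sketch, using the 2.79 property names; double-check them in your build):

```python
import bpy

scene = bpy.context.scene

# Small tiles keep GPU + CPU rendering busy and balanced.
scene.render.tile_x = 16
scene.render.tile_y = 16

# When denoising, larger tiles currently work better.
use_denoising = False  # flip to True to try it
if use_denoising:
    scene.render.tile_x = 64
    scene.render.tile_y = 64
    scene.render.layers.active.cycles.use_denoising = True
```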

It isn’t in the official release because it is too new. The code was committed after 2.79 was released. There has been some talk of possibly including it in a 2.79a release sometime around the end of the year.

Personally, I have four RX Vega 64 cards, and rendering on GPU looks faster compared to my old SLI GTX 680 setup. I don’t know why, but OpenCL seems to take a long time to build the first render.

Otherwise, GPU + CPU doesn’t look interesting to me, because if you have very strong GPUs like mine, the CPU costs you time: it requires small tiles, and that is bad.

The best approach would be separate custom render tile sizes for the GPU and the CPU, so that everything runs at a good speed, but I don’t know whether the developers have planned that change.

kengi, have a look at this thread: https://blenderartists.org/forum/showthread.php?439146-2-8-Branch-update-with-Cycles-CPU-GPGPU-rendering-together/page3 (the title is a bit misleading, since all of this is in 2.79.1 already); it has all the info you could ask for. Also, the latest news is that Lukas Stockner just committed code to make 16x16 tiles faster on GPU, so you might be able to squeeze a bit more out of your box. Just test it.

Thanks, I will look into that.

Thanks @SterlingRoth, @m_squareGFX, @kengi,

I use Blender for archviz, producing still shots (not videos). Those two points above are big disappointments compared to what I was hoping for:

  • Checking my lighting, materials, and camera angles in the viewport is important. The performance of F12 renders matters less, since I typically produce 4-10 chosen camera shots for a presentation; final-rendering those is no big deal given the improvements in hardware and Cycles performance (especially denoising). For example, turning a 10-minute render into an 8-minute one is welcome but not critical for a few shots. Fast viewport previews are very desirable!
  • Denoising was, for me, THE greatest performance gain for Cycles, so not including it in a rendering optimization is a serious omission. I did read the reasons why this is harder to do, and that work is now being done to address it. I hope it all turns out well, and thanks to Lukas and everyone who works on this.

To my surprise, I was able to render with 3x GTX 1080 Ti cards. It’s actually a cryptocurrency mining rig, using risers. I can’t play any games with it, I think because those risers have just a few pins, but rendering with Blender goes well, with all three cards at the same time.

I don’t know if the BF wants to commit resources to DX12; it would not be compatible with Linux or macOS. Vulkan would be better, but that would break compatibility with older cards, since only newer cards can run Vulkan. However, an Eevee on Vulkan that supports multiple GPUs would be amazing. On Mac, I think Apple wants Metal to be used instead. It’s complicated; I think ideally everyone should be on Vulkan and OpenCL.

Updating the codebase for 2.8 from OpenGL 2.x to 3.x was a huge task that took many people many months to complete. A DX12 backend would probably take longer, would only benefit Windows 10 users, and on top of that only Windows 10 users with multiple GPUs, while at the same time making maintenance more difficult.

I’m pretty sure there are better places where the devs can spend their time.

Mmm, a little confused here. When I read the thread title I thought this was related to the viewport/Eevee… But csimeon seems to be referring to CUDA/OpenCL rendering in Cycles. How could DirectX influence Cycles rendering?

It seems there’s a little confusion on what some of these terms mean and what they do.

DirectX 11 and OpenGL are high-level APIs for drawing 3D graphics, with DirectX being Windows-only.

Crossfire and SLI are two terms that mean essentially the same thing: they take draw calls from a DirectX 11 (or older) program and, at the driver level, split them across multiple graphics cards. This means high driver overhead and reduced scaling; having two cards doesn’t give 2x the performance, more like 1.3x to 1.6x.

OpenGL is not compatible with either of these as it would require AMD and Nvidia to support it.

Vulkan and DirectX 12 are low-level graphics APIs that let a program use multiple GPUs itself, without any driver trickery, which makes Crossfire/SLI obsolete. Games that have utilized this on DX12 have achieved near 100% scaling. Vulkan multi-GPU support is coming soon.

It’s worth mentioning here that DX12 is Windows 10 only, so I’m 99% sure it will never be included in Blender. Apple does not currently support Vulkan; however, MoltenVK is a library that translates Vulkan calls to Apple’s Metal API.

Blender’s viewport uses OpenGL, so it does not support multiple GPUs. Blender’s renderer (Cycles) uses either CUDA or OpenCL; these are very different from OpenGL and DirectX, and they do support multiple GPUs.

Because of this, the only part of Blender that isn’t using all your GPUs is the viewport. Changing that would require moving the viewport from OpenGL to Vulkan. Vulkan support has been mentioned a few times as ‘maybe in the future’, but then you see the developer who said it run away quickly.

CPU + GPU rendering is now a thing, so more of either is a good thing. But again, this is due to the properties of OpenCL/CUDA and will not work with DirectX, OpenGL, or Vulkan.
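If you want to see what Cycles itself detects, independently of any SLI/Crossfire setup, something like this (a rough sketch against the 2.79 Python API) prints every compute device it found:

```python
import bpy

# List the devices Cycles detected for the backend currently selected
# in the user preferences (CUDA or OpenCL).
prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.get_devices()

for d in prefs.devices:
    print(d.type, d.name, '(enabled)' if d.use else '(disabled)')
```

Each card shows up as its own entry, which is exactly why no SLI/Crossfire bridge is needed.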

For those of you who know more about this subject: yes, I have glossed over a few bits, but I fear explaining it in more depth would only add to the confusion.