Eevee Multi-GPU

‘Give someone an inch and they’ll take a mile.’
I have a GTX 1080 Ti. It would be nice to make 4x 1080 Ti useful in Eevee…
What’s more, I think it’s a necessity if Eevee is to be a real groundbreaker.

Multi-GPU rendering is significantly more complicated, requiring a large amount of developer effort for a small group of users with exotic hardware configurations.


You are totally WRONG. Multi-GPU (on-site or on a render farm) is absolutely fundamental for large Cycles projects.

For Cycles, yes indeed. But breaking a frame into tiles is much easier for an offline renderer like Cycles. For a realtime renderer, that is a lot more complicated.


There seems to be a general misunderstanding about how this works:
Multiple graphics cards CANNOT work together to render a single frame using rasterization (which is what Eevee uses).
This only works for Cycles because the calculation process (via CUDA/OpenCL) is broken down into fragments and each card calculates its share on its own!
That’s why you see different parts of the render image start/finish at different times. This is NOT possible for rasterization.

Eevee always uses the graphics card to which your monitor(s) are connected via cable(s).

The only way you can use 2 cards at the same time is by using SLI (or AMD’s equivalent), and that requires 2 (or more) compatible cards connected via a special cable. The GPU driver then spreads the work across these cards. OpenGL (the graphics API Blender uses) is actually unaware of this. So basically this is nothing the Blender developers can ‘program’; it’s something done by the hardware/driver if you have the correct setup for it.
Just having 2 or more cards installed is not enough, the way it is for Cycles.

It seems that you are wrong:

“Unlike SLI and Crossfire of old, where the task of divvying up the rendering between GPUs was largely handled by the driver, this support gives control to the developer.”

Dear Awesome Blender Developers:
Go for multi-GPU in Eevee and change the world even more. Yes you can.

Given the limited budget and what else they have left to complete, this is probably not going to happen anytime soon.

Also, Vulkan is NOT supported in Blender or Eevee; OpenGL is. Supporting it would require another major update/rewrite of the render engine, and would drop a lot of hardware support as well.

As mentioned above, this would require developers to work on it, and developer time is a limited resource.

Of course Vulkan is not supported now, but I think it is a must. OpenGL has no future.
Any graphics card that is usable in Eevee would work with Vulkan, so there would be no ‘dropping a lot of hardware support’. I’d say more: a transition to Vulkan/Metal would increase performance on any capable graphics card. So no drop-off at all.

Well, don’t hold your breath. Transitioning off of OpenGL would require about as much development work as the 2.8 transition is taking now.

I would be surprised to see a Vulkan version of Blender within the next 3 years.

I know it’s not easy, but it’s exciting. 3 years doesn’t sound bad, and 2 years would be really great.

Dear Awesome Blender Users:
Go learn coding for Blender and change the world even more. Yes you can.


For what it’s worth, I brought this up early when they first started discussing improvements to Blender’s viewport render (pre-Eevee) since I regularly do large render jobs using the current OpenGL renderer and would like to take advantage of multiple GPU devices (even the ability to specify a single GPU device from the command line so I would just launch four instances of Blender; one per GPU). The developers are aware that this is a desirable feature and are interested in finding a way to support it. The current complication is that in order to do an OpenGL render, you typically need an OpenGL context… that usually only happens if the GPU device is drawing to the screen; and usually only one device is dedicated to that task.

TL;DR the devs are aware and want this feature, too; it’s just not easy to do.
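
The “four instances, one per GPU” idea above can be scripted today. Below is a minimal Python sketch (not from the thread) that splits a frame range into contiguous chunks and builds one `blender -b … -s … -e … -a` command line per GPU. Those CLI flags are real Blender options, but `SELECT_GPU=<n>` is a hypothetical placeholder for whatever GPU-pinning mechanism your OS/driver actually provides.

```python
# Sketch: split an animation across N Blender instances, one per GPU.
# The -b (background), -s (start frame), -e (end frame) and -a (render
# animation) flags are real Blender CLI options; SELECT_GPU is NOT a
# real variable, just a stand-in for a platform-specific pinning method.

def split_frames(start, end, workers):
    """Divide the inclusive range [start, end] into near-equal chunks."""
    total = end - start + 1
    chunks = []
    offset = start
    for i in range(workers):
        # Spread any remainder over the first few chunks.
        size = total // workers + (1 if i < total % workers else 0)
        if size == 0:
            continue
        chunks.append((offset, offset + size - 1))
        offset += size
    return chunks

def build_commands(blend_file, start, end, gpu_count):
    """Build one command line per GPU; nothing is executed here."""
    return [
        f"SELECT_GPU={gpu} blender -b {blend_file} -s {s} -e {e} -a"
        for gpu, (s, e) in enumerate(split_frames(start, end, gpu_count))
    ]

for cmd in build_commands("shot.blend", 1, 250, 4):
    print(cmd)
```

Each printed line would then be launched as its own process; contiguous chunks keep each instance’s scene-evaluation cache warm across neighboring frames.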


So potentially each GPU connected to a separate screen would provide a context. Or, for example, if a screen has multiple inputs, connect the other cards to the inputs that won’t be used… Still, I hope they figure out a better solution.

I connected monitors to different graphics cards and fired up two separate instances of Blender, but still just one (main) graphics card is under load in Eevee. It is not what I expected. Perhaps it depends on the OS (mine is macOS); I wonder how it works on Linux or Windows.

If they port Eevee to Vulkan then yes, it should be no problem, but don’t expect that to happen soon.

Some solution for now.

Errrrm, nope. He’s on a Mac.

If a human can start multiple instances of Blender to speed up animation processing time, then why is Blender not able to render two subsequent frames of an animation simultaneously?

I mean seriously, if a human is able to open an instance of Blender for each GPU, then why can’t Blender just be a little smarter and fire up two threads to render two frames at once? It would be perfectly linear too (2x GPUs == 1/2 the time).

Who gives a hoot about stills in EEVEE anyway? Not me. EEVEE is great for animations, but it is severely handicapped without any ability to scale on multi-GPU systems!

My suggestion would be to solve this by using as many GPUs as are available to render as many concurrent frames as you can, thus blazing through animations instead of dragging all that extra GPU power around as dead weight…

It would be extremely beneficial for the entire animator community if someone would please add features to Blender allowing multiple GPUs to be used for rendering multiple frames at once during an EEVEE animation render.

It’s so simple. Please consider!

I agree it sounds easy as you put it.
I am not the most experienced coder around, but I know that threaded code in C/C++ is a gnarly beast. Our peeps in Amsterdam may very well be smart enough to pull threading off, but I believe there is more to it.
For now, the mentioned workaround relies on a setting in the NVIDIA driver which is only available on Windows. I don’t know if that feature can be used for threaded workloads, or is exposed in an API to be used programmatically in the first place.
For now there are the aforementioned batch scripts. IMO a feasible solution.
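
Such a batch script could, for example, interleave frames rather than split them into chunks, using Blender’s real `-j` (frame jump) option: instance i renders frames start+i, start+i+N, and so on. A minimal Python sketch, which only builds the command lines and leaves launching the processes and pinning each one to a GPU to the reader:

```python
# Sketch of the frame-interleaving batch approach. The -b, -s (start),
# -e (end), -j (frame jump) and -a (render animation) flags are real
# Blender command-line options; GPU selection per process is not
# handled here and is platform/driver-specific.

def interleaved_commands(blend_file, start, end, instances):
    """Build one command line per instance; instance i renders frames
    start+i, start+i+instances, ... up to end (inclusive)."""
    return [
        f"blender -b {blend_file} -s {start + i} -e {end} -j {instances} -a"
        for i in range(instances)
    ]

for cmd in interleaved_commands("shot.blend", 1, 100, 2):
    print(cmd)
```

Interleaving keeps all instances busy until nearly the end of the range even when frame render times vary across the shot.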

Well, I do know the whys and wherefores of computer programming well enough to know that this would very considerably add to the complexity of the software, and slow it down (slightly) in the most-probable use case, only to support an “exotic” hardware configuration that most people do not have. The entire problem that the computer is being tasked to solve must be, as we say, “hardware parallelized.” But if, most of the time, there is no “parallel” hardware available on which to use it, you haven’t gained anything for the complexity that you would nonetheless incur.

Therefore, I am of the opinion that this is a technically reasonable and appropriate software design decision for this product.