Are eGPUs worth it for Blender?

I can’t buy a desktop because I travel a lot. I use my laptop with an RTX 2080 Max-Q only for Blender; I mainly do 4K animation renders with it… they take time, but I only make 1–2 s animation loops.

I want to buy an RTX 3090 with an eGPU enclosure. Will it work for Blender? Will I have the same speed as a desktop?
I don’t care about gaming; I just need it for Blender.

I don’t own one, but if you’re willing to spend that much, I’d recommend either putting that 3090 in a small-form-factor desktop as a render slave you can send renders to remotely, or looking into render farm solutions.

An eGPU will definitely have a performance drop in Blender. Unlike rendering in games, rendering in Blender is a bit more demanding, especially with Cycles. It might be slightly different with Eevee, though.

Also take a look at Blender Open Data (the search-data section); maybe someone has done tests with an eGPU.

Short Answer (TL;DR):
Disclaimer: I’m not up to date on eGPUs or the new Thunderbolt 4, but here’s what I know.

SPEED/PERFORMANCE:
eGPUs are slower than desktop GPUs,
but the answer might be more complicated than that and beyond my knowledge (explained in the more detailed answer below).
I’ve read that you can expect about a 20% performance drop if your eGPU is hooked up to an external monitor (not sure how much Blender is affected).
It’s even worse on the laptop’s own screen: up to 50%. I’m guessing that carrying an external colour-accurate monitor around was not what you had in mind :( But that’s from a 2017 post.

ANYONE KNOW ANYTHING ABOUT THUNDERBOLT 4?
TB4 (Thunderbolt 4) is new; TB3 is what most eGPU enclosures use right now, AFAIK. Maybe you want to wait for TB4 enclosures, though I have no idea. Apparently it’s the same speed?

CHOOSING A LAPTOP:
Be careful when choosing an ultrabook laptop! There are different generations of Thunderbolt (TB1, TB2, TB3, and now TB4), and the older ones are slower!
Also, eGPU performance may depend on how recent the CPU is, and maybe other things too!

IS AN EGPU REALLY THAT CONVENIENT?
Also, a modern eGPU might not be as plug-and-play simple as you think and might require some troubleshooting.
So read up on https://egpu.io/best-egpu-buyers-guide/ and watch YouTube reviews to see what you’re getting into.

More Detailed Answer:
Knowing about PCIe speeds and Thunderbolt 4/3/2/1 speeds might help you understand how much slower an eGPU is compared to a desktop.
(Also, I’m pretty sure laptop GPUs are slower than their desktop versions in general.)

Here’s a helpful PCIe speeds chart:
[https://www.deskdecode.com/wp-content/uploads/2020/02/PCIe-Version-difference.jpg]
BUT note that 1 GB/s (i.e. 1 gigaBYTE per second) does NOT equal 1 Gbps (i.e. 1 gigaBIT per second),
so use an online Gbps-to-GB/s converter (or just divide Gbps by 8).
PCIe specs have two numbers: the generation (“4.0”) and the lane count (“x16”). The higher both numbers are, the faster the link.
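To save a trip to the online converter, here’s a quick Python sketch of that conversion and the generation × lanes math. The per-lane figures are the commonly quoted effective rates (after encoding overhead), so treat them as approximations:

```python
# Gbps <-> GB/s: 1 byte = 8 bits, so divide Gbps by 8.
def gbps_to_gb_per_s(gbps: float) -> float:
    """Convert gigabits per second to gigabytes per second."""
    return gbps / 8.0

# Commonly quoted effective per-lane throughput per PCIe generation, in GB/s.
PCIE_PER_LANE_GB_S = {
    "1.0": 0.25,
    "2.0": 0.5,
    "3.0": 0.985,
    "4.0": 1.969,
}

def pcie_bandwidth_gb_s(generation: str, lanes: int) -> float:
    """Total one-direction bandwidth, e.g. for PCIe 3.0 x16."""
    return PCIE_PER_LANE_GB_S[generation] * lanes

print(gbps_to_gb_per_s(40))            # TB3's 40 Gbps -> 5.0 GB/s
print(pcie_bandwidth_gb_s("3.0", 16))  # ~15.8 GB/s
print(pcie_bandwidth_gb_s("4.0", 16))  # ~31.5 GB/s
```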

TB3 (Thunderbolt 3) and TB4 are the fastest, AFAIK. (M.2 is, too, but I think that’s a DIY cut-a-hole-in-your-laptop’s-case kind of thing you probably don’t want.)
Apparently TB4 is the same speed as TB3? Does anyone know what advantage a TB4 eGPU enclosure would/will have?

The RTX 3090’s bus speed is PCIe 4.0 x16,
[https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+RTX+3090&id=4284]

  • which is 31.5 GB/s,
  • which is the same as 252 Gbps, much faster than a TB3 eGPU’s 40 Gbps.

But most GPUs aren’t that fast; they only have a bus speed of PCIe 3.0 x16,
[https://www.videocardbenchmark.net/gpu.php?gpu=GeForce+RTX+2080&id=3989]
which is 15.8 GB/s, or 126.4 Gbps. But that’s still much faster than TB3’s 40 Gbps.
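Putting the numbers from those two pages side by side (rough figures, Python again just for the arithmetic):

```python
# Rough side-by-side of the bus bandwidths quoted above, in GB/s.
links_gb_s = {
    "PCIe 4.0 x16 (RTX 3090)": 31.5,
    "PCIe 3.0 x16 (RTX 2080)": 15.8,
    "Thunderbolt 3/4 (eGPU)": 40 / 8,  # 40 Gbps is only 5 GB/s
}
for name, gb_s in links_gb_s.items():
    print(f"{name}: {gb_s:.1f} GB/s = {gb_s * 8:.0f} Gbps")
# PCIe 4.0 x16 (RTX 3090): 31.5 GB/s = 252 Gbps
# PCIe 3.0 x16 (RTX 2080): 15.8 GB/s = 126 Gbps
# Thunderbolt 3/4 (eGPU):   5.0 GB/s =  40 Gbps
```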

But I recently learned that, for example, most AM4 desktop motherboard chipsets (from 2017 to 2020) don’t even offer that full PCIe 3.0 x16 speed. The only one that does is the high-end X570 chipset (it’s actually the faster PCIe 4.0 x16).
[https://en.wikipedia.org/wiki/Socket_AM4#Chipsets]
So have some people been using PCIe 3.0 x16 GPUs at partial speed for a while? Or do most GPUs not reach that speed? I don’t know why, but I noticed that lots of GPUs have “Bus speed PCIe 3.0 x16” on videocardbenchmark.net.

Hopefully someone here who knows about GPUs in general can answer.

But yeah, an eGPU looks cool and convenient. For a while I really wanted one. But it’s extra cash for lower performance. Hopefully it’s as convenient as it seems, or will be in the future.


I just picked one up on eBay: fantastic! (For rendering, that is; no noticeable difference for the UI.)
I’m using it on a laptop with an internal GTX 1060/6GB, with an old GTX 1070 in the external box.

Not exactly a laboratory test, but one of the scenes I’m working on was taking 24 minutes or so a frame on my laptop (5-odd minutes a frame on my 16-core workstation with 2 × 1080 Tis, 8 or 9 minutes a frame on another 12-core machine with 1 × 1070 Ti & 1 × 1070).

The eGPU dropped this down to about 13 minutes a frame on the laptop, from 24-odd. I’d say it’s well worth it!

All the above was rendered in 2.92 Beta, so I get (and am using) hybrid GPU+CPU rendering with OptiX, and OptiX supports more shaders than it did in 2.91.2. None of my machines have RTX cards, obviously, but OptiX is still usually way faster than CUDA from what I’ve found.
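If anyone wants to script that setup instead of clicking through Preferences, here’s a rough bpy sketch of enabling OptiX plus the CPU for hybrid rendering. The property names are from the 2.9x Python API as I understand it, so double-check against your own build:

```python
# Rough sketch: enable OptiX with hybrid GPU+CPU rendering via Blender's
# Python API (property names are from the 2.9x series; yours may differ).
import bpy

cycles_prefs = bpy.context.preferences.addons["cycles"].preferences
cycles_prefs.compute_device_type = "OPTIX"  # or "CUDA" on non-RTX-era cards
cycles_prefs.get_devices()                  # refresh the device list

for device in cycles_prefs.devices:
    # Enable every OptiX-capable GPU (internal + eGPU) plus the CPU,
    # which is what gives the hybrid GPU+CPU rendering.
    device.use = device.type in {"OPTIX", "CPU"}
    print(device.name, device.type, "enabled" if device.use else "disabled")

# Tell the scene to render on the GPU devices we just enabled.
bpy.context.scene.cycles.device = "GPU"
```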

Obviously, with most Thunderbolt 3 eGPUs running over 4 PCIe lanes there’s a bottleneck, and I have to disable the eGPU for DaVinci to run properly at all. But if you google it, rendering does not usually need all the PCIe bandwidth; it really only uses it for loading the scene and saving the result.

With simple scenes there’s not as much benefit: if a frame only takes a short time to render, then a greater percentage of the time is spent loading to VRAM and reading back from VRAM, so the PCIe x4 link is more of a factor. But for anything taking a couple of minutes or so per frame, it’s a huge benefit (see the toy numbers below).
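A toy model of that trade-off, with made-up numbers (the 2 GB scene size and the link speeds are hypothetical, just to show why the fixed transfer cost matters less on long frames):

```python
# Toy model: frame time = data transfer over the link + actual render time.
# The transfer cost is roughly fixed per frame, so it dominates short renders.
def overhead_fraction(transfer_s: float, render_s: float) -> float:
    """Share of total frame time spent moving data instead of rendering."""
    return transfer_s / (transfer_s + render_s)

SCENE_GB = 2.0                   # hypothetical scene size
TB3_GB_S, X16_GB_S = 5.0, 15.8   # TB3 x4 link vs desktop PCIe 3.0 x16

for render_s in (5.0, 120.0):    # a 5-second frame vs a 2-minute frame
    tb3 = overhead_fraction(SCENE_GB / TB3_GB_S, render_s)
    x16 = overhead_fraction(SCENE_GB / X16_GB_S, render_s)
    print(f"{render_s:>5.0f} s frame: TB3 overhead {tb3:.1%}, x16 overhead {x16:.1%}")
# 5 s frame: TB3 ~7.4% vs x16 ~2.5%; 2-minute frame: both well under 1%.
```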

Added bonus: if you render Eevee from the command line, you get twice the throughput by running one process per GPU, like I can on my full-size multi-GPU boxes…
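In case it helps, here’s a minimal sketch of that command-line trick with two processes, using Blender’s documented -s/-j/-a flags to interleave frames between them. The scene.blend name is hypothetical, and pinning each process to a specific GPU is OS/driver-specific, so I’ve left that part out:

```python
# Minimal sketch: launch two background Blender/Eevee renders in parallel,
# interleaving frames between them with Blender's -s (start), -j (frame
# jump), and -a (render animation) flags. "scene.blend" is a hypothetical
# file; binding each process to a particular GPU is not handled here.
import subprocess

BLEND_FILE = "scene.blend"  # hypothetical scene file
NUM_PROCS = 2               # one process per GPU

procs = []
for i in range(NUM_PROCS):
    cmd = [
        "blender", "-b", BLEND_FILE,
        "-E", "BLENDER_EEVEE",  # force the Eevee engine
        "-s", str(1 + i),       # each process starts on a different frame...
        "-j", str(NUM_PROCS),   # ...and skips ahead NUM_PROCS frames
        "-a",                   # render the whole animation range
    ]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()
```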