Eevee: Linux render times for i7-8700K and AMD-FX the same?

I'm getting roughly the same render time (within 5 seconds or so) between these two PCs, and it doesn't seem right. I would think the i7 (12 threads with 64 GB RAM) would render noticeably faster than the AMD (8 threads with 32 GB RAM). Everything is stock, no overclocking, Blender 2.90.1 on both PCs.


i7 PC


AMD-FX PC


Both PCs rendered from and to a network drive (a NAS).
Rendered with Eevee; I did peek at Cycles to confirm Blender was picking up all the CPU threads (8 and 12), then changed back to Eevee.
Render time for each machine was around 5 minutes 40 seconds.

I also have a third PC: Windows 10, i9-9900K (16 threads) overclocked to 5.00 GHz, RTX 2070, and that rendered in around 40 seconds (just 40 seconds, less than a minute).
Should there really be that big of a difference?

One thing: being Linux, I just used stock motherboard "drivers" (I did install the NVIDIA and CUDA stuff, but it was rendered in Eevee). Next I'm going to see if the boards have any software for Linux.

Any thoughts…
Am I off for thinking the i7 should be a little faster than the AMD? And should the i9-9900K really be that much faster?



Here's some info from $ cat /proc/
This info was gathered while the renders were running.

i7 PC

joe@farmer-i7:~$ cat /proc/cpuinfo

vendor_id       : GenuineIntel
cpu family      : 6
model           : 158
model name      : Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
stepping        : 10
microcode       : 0xd6
cpu MHz         : 4430.344
cache size      : 12288 KB
physical id     : 0
siblings        : 12
core id         : 0
cpu cores       : 6
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit srbds
bogomips        : 7399.70
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

joe@farmer-i7:~$ cat /proc/meminfo

MemFree:        50227996 kB
MemAvailable:   60170068 kB
Buffers:          114936 kB
Cached:         10394644 kB
SwapCached:            0 kB

joe@farmer-i7:~$ sudo dmidecode --type memory

Getting SMBIOS data from sysfs.
SMBIOS 3.1.1 present.

Handle 0x000D, DMI type 16, 23 bytes
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: None
        Maximum Capacity: 64 GB
        Error Information Handle: Not Provided
        Number Of Devices: 4

Handle 0x000E, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x000D
        Error Information Handle: Not Provided
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 16384 MB
        Form Factor: DIMM
        Set: None
        Locator: ChannelA-DIMM0
        Bank Locator: BANK 0
        Type: DDR4
        Type Detail: Synchronous Unbuffered (Unregistered)
        Speed: 2133 MT/s
        Manufacturer: 029E
        Serial Number: 00000000
        Asset Tag: 9876543210
        Part Number: CMK64GX4M4A2666C16
        Rank: 1
        Configured Memory Speed: 2133 MT/s
        Minimum Voltage: 1.2 V
        Maximum Voltage: 1.2 V
        Configured Voltage: 1.2 V

joe@farmer-i7:~$ sudo lshw -C display

       description: VGA compatible controller
       product: GM107GL [Quadro K2200]
       vendor: NVIDIA Corporation
       physical id: 0
       bus info: pci@0000:01:00.0
       version: a2
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
       configuration: driver=nvidia latency=0
       resources: irq:127 memory:de000000-deffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:e000(size=128) memory:c0000-dffff

AMD-FX PC

joe@farmer-amd-fx-8320:~$ cat /proc/cpuinfo

vendor_id       : AuthenticAMD
cpu family      : 21
model           : 2
model name      : AMD FX(tm)-8320 Eight-Core Processor
stepping        : 0
microcode       : 0x6000852
cpu MHz         : 1405.641
cache size      : 2048 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 4
apicid          : 16
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb cpb hw_pstate ssbd ibpb vmmcall bmi1 arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
bugs            : fxsave_leak sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass
bogomips        : 7032.25
TLB size        : 1536 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro

joe@farmer-amd-fx-8320:~$ cat /proc/meminfo

MemFree:        17903352 kB
MemAvailable:   28371268 kB
Buffers:          235452 kB
Cached:         10277396 kB
SwapCached:            0 kB

joe@farmer-amd-fx-8320:~$ sudo dmidecode --type memory


Handle 0x002C, DMI type 16, 23 bytes
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: None
        Maximum Capacity: 32 GB
        Error Information Handle: Not Provided
        Number Of Devices: 4

Handle 0x002E, DMI type 17, 34 bytes
Memory Device
        Array Handle: 0x002C
        Error Information Handle: Not Provided
        Total Width: 64 bits
        Data Width: 64 bits
        Size: 8192 MB
        Form Factor: DIMM
        Set: None
        Locator: Node0_Dimm0
        Bank Locator: Node0_Bank0
        Type: DDR3
        Type Detail: Synchronous Unbuffered (Unregistered)
        Speed: 667 MT/s
        Manufacturer: Undefined
        Serial Number: 00000000
        Asset Tag: Dimm0_AssetTag
        Part Number: F3-1866C10-8G
        Rank: 2
        Configured Memory Speed: 667 MT/s

joe@farmer-amd-fx-8320:~$ sudo lshw -C display

       description: VGA compatible controller
       product: GM107GL [Quadro K2200]
       vendor: NVIDIA Corporation
       physical id: 0
       bus info: pci@0000:01:00.0
       version: a2
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
       configuration: driver=nvidia latency=0
       resources: irq:38 memory:fd000000-fdffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:e000(size=128) memory:c0000-dffff

Considering Eevee is a raster engine, the CPU doesn't play a big part except for exporting the scene from Blender to the GPU, saving frames, and compositing. Unless you are exporting giant scenes, you shouldn't see large speedups from faster CPUs.

Your GPU is also kind of old, so the "bottleneck" is more than likely the GPU itself. If you were to test a more modern GPU, you might see more of a difference between the machines.

Thanks.
Is that why the Windows machine with the i9-9900K and RTX 2070 renders so much faster (about 5 minutes faster)?

Somewhere along the line I plan on upgrading the RTX 2070 and moving it to one of the Linux boxes. At that point, one of the Linux boxes will have two GPUs. I'm going to test when the time comes, but would "you" have any recommendations for the combo: two K2200s, or one RTX 2070 and a K2200?
As you might have guessed, this is for a small home render farm.

Eevee doesn't support multi-GPU. You can run two instances of Blender with some fancy console commands to make use of two GPUs, but it's pretty rough to get working right.

If I were to build an Eevee render farm, I'd use a multi-GPU machine with virtualization (which I actually already do).

With a 1950X, 64 GB of RAM, and four 1070s with GPU passthrough, you can have four Eevee render nodes on one machine.
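A minimal sketch of how that setup divides work, assuming each VM runs headless Blender over its own slice of the animation (`scene.blend` and the 240-frame range are placeholders, not from the thread):

```shell
# Each render node/VM would run something like:
#   blender -b scene.blend -E BLENDER_EEVEE -s START -e END -a
# (BLENDER_EEVEE is the Eevee engine id in Blender 2.90.)
# Helper to split TOTAL frames evenly across NODES for node INDEX (0-based):
frame_range() {
  total=$1; nodes=$2; idx=$3
  per=$(( (total + nodes - 1) / nodes ))   # ceil(total / nodes)
  start=$(( idx * per + 1 ))
  end=$(( start + per - 1 ))
  [ "$end" -gt "$total" ] && end=$total
  echo "$start $end"
}

frame_range 240 4 0   # node 0 renders frames 1-60
frame_range 240 4 3   # node 3 renders frames 181-240
```

Each VM only sees its own passed-through GPU, so no per-instance GPU selection tricks are needed inside the guest.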

Thanks, you've been very helpful.
Going to look into a "multi-GPU machine with virtualization," as it's new to me.
With a system like that, can you also render in Cycles and utilize all the GPUs?

Cycles can use all GPUs without any weird workarounds. The reason you would want to virtualize to render with Eevee is that it lets you run four instances of Blender fairly easily. Virtualization and GPU passthrough are no easy task, however, but once it's set up it's a good solution until Eevee supports multi-GPU.

To virtualize, I use Proxmox with Ubuntu.

You can use all the GPUs and CPUs on your system. See the CUDA tab in the system settings. Watch your bucket (tile) sizes, since those make a big difference in render times depending on your compute device layout.
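As a sketch, enabling every CUDA device from a script might look like this. The `bpy` calls assume Blender 2.90's Python API (e.g. run via `blender -b --python script.py`); the helper itself is plain Python so it can be exercised outside Blender:

```python
def enable_all_cuda(cycles_prefs):
    """Switch Cycles to CUDA and enable every listed device.
    Returns the names of the devices that were switched on."""
    cycles_prefs.compute_device_type = "CUDA"
    cycles_prefs.get_devices()  # refresh the device list
    enabled = []
    for dev in cycles_prefs.devices:
        dev.use = True
        enabled.append(dev.name)
    return enabled

if __name__ == "__main__":
    try:
        import bpy  # only available inside Blender
        prefs = bpy.context.preferences.addons["cycles"].preferences
        print(enable_all_cuda(prefs))
    except ImportError:
        pass  # running outside Blender; nothing to do
```

This mirrors what ticking the boxes in the Preferences > System > CUDA tab does by hand.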

Will this work with Eevee?

No, Eevee only uses your active GPU. As LordOdin explained, there are ways to run Blender with different GPUs (one at a time) for the Eevee render engine.

Gotcha, that's how I understood it, just wanted to make sure.

I've used VMware in the past; would the following make sense/work for Eevee and Cycles…

Install Proxmox on my current Ubuntu box and create a virtual OS for each GPU (isolating each GPU per virtual OS).

When rendering with Eevee, use multiple virtual machines (an instance for each card); when rendering with Cycles, use the "base" Ubuntu that Proxmox is installed on (which has "access" to all cards). (Hope that came out right?)

Or strip down and install Proxmox on "bare metal," with an OS for each card and an additional OS that has access to all cards?


I'd install Proxmox on bare metal; VM-in-VM is weird. I'm not sure if FX CPUs support IOMMU, something you will have to look into.

Figured I would post this in case others find it useful.

Looking for info on passthrough and IOMMU:

Yes to the following for the i7-8700K:

  • Intel® Virtualization Technology (VT-x)
  • Intel® VT-x with Extended Page Tables (EPT)
  • Intel® Hyper-Threading Technology
  • Intel® Virtualization Technology for Directed I/O (VT-d)
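For the FX question, the flags already posted above actually answer it. Here's a quick sketch for checking a flags line like the ones from `/proc/cpuinfo` (the `has_virt` helper is just for illustration):

```shell
# On a live box:  grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
#   vmx = Intel VT-x, svm = AMD-V (the FX-8320 flags above do list svm).
# After enabling IOMMU (intel_iommu=on or amd_iommu=on on the kernel
# command line), groups show up under /sys/kernel/iommu_groups/.
has_virt() {
  case " $1 " in
    *" vmx "*) echo "vmx (Intel VT-x)" ;;
    *" svm "*) echo "svm (AMD-V)" ;;
    *) echo "none" ;;
  esac
}

has_virt "fpu vme de pse svm lm"   # -> svm (AMD-V)
```

CPU virtualization (vmx/svm) is necessary but not sufficient for passthrough; the motherboard/chipset also has to expose working IOMMU groups.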

Seems like the Quadro K2200 has no problem,
but people look to have issues with the
ZOTAC GAMING GeForce RTX 2070 (so I might be forced to use two K2200s).

Can't find anything on the ASRock site about how they handle passthrough and IOMMU, but people say it works pretty well. Some forum searches show people talking about where they put the host or slave GPU, but nothing about the CPU.

Need to learn about the whole host/slave thing, though. The motherboard has onboard video; I don't know if it makes sense for that to be the host and the two extra cards to be the slaves.

Always looking for insight.

NVIDIA doesn't allow GPU passthrough on VMs, but Proxmox lets you lie to the VM so it thinks it's a real machine. That's generally why people get error 43.

AMD GPUs should have no problem, though, if you don't want to use CUDA, that is.
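For reference, the "lying" is typically done in the Proxmox VM config. A sketch (the VM ID `100` and PCI address `01:00` are placeholders for whatever your setup uses):

```
# /etc/pve/qemu-server/100.conf (excerpt)
machine: q35
cpu: host,hidden=1               # hidden=1 hides the hypervisor from the guest
hostpci0: 01:00,pcie=1,x-vga=1   # the passed-through GPU
```

With the hypervisor hidden, the GeForce driver's VM check no longer trips, which is what works around error 43.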

If I do… do this, I want to make it as versatile as possible. When I replace that GeForce RTX 2070 and move it to another machine, I'd want to be able to use CUDA with it. I'll have to research more, but I thought I saw people say that Quadros worked fine… probably misread.

To clarify for anyone reading: Quadros will work fine. NVIDIA detects whether you are in a VM when installing GeForce drivers and won't install if you are on an RTX or GTX card. That's why you need to lie to the VM.


How are you getting the GPU passthrough? I looked into that a bit but could not seem to make it work. What VM software are you using? Is it Linux or Windows?

Sorry for the multiple questions. Hope you don't mind. :)

Obviously @LordOdin can give more details, but he uses Proxmox. From the bit of research I did, VMware yanked GPU passthrough, if that helps any.

Thanks @SidewaysUpJoe, scrolling back through the posts, I see he did mention that. I must have blown by it before; thanks for the quick reply. It’s much appreciated.

I haven't used VMware in a while, but as you say, it appears they and VirtualBox have removed the passthrough feature. Nice to know Proxmox is still in it for users and performance.