CPU Wars: Intel's new 18-core monster and X-series chips

If Blender doesn’t improve its multicore performance (viewport playback, softbody, cloth, particle physics, etc.), more CPU cores seem like a waste. Get an Intel 7740K and overclock it to 5 GHz.

A lot of people use the Cycles render engine to render their stuff.

Where is BeerBaron?

OK, I’m out.

You’re right. This thread must not continue without me.

I feel like Skylake-X isn’t going to be that bad, but people want it to be bad. Threadripper isn’t going to be that great, but people want it to be great.

All the EPYC performance numbers I’ve seen (from AMD) do their best not to put like processors next to each other (i.e. 16-core vs. 16-core); rather, they emphasize the price advantage. I’m skeptical that AMD really is going to deliver a 3+ GHz (on all cores) 16-core Threadripper. Also, perhaps quite irrationally, I don’t like the idea of missing out on AVX-512.

PCIE CPU CARD! It’s an x16 gen3 AM4 socket PCIE card, compatible with RYZEN CPUs!
Threadripper socket SP3 PCIE CARD, anyone?

FORGET OPENCL ON AMD GPUs!!!
RYZEN CPU RENDERING ON PCIEs!!!

Does Blender use AVX-512?

Not directly (as of now) and I doubt there will be much adoption for it in general (hence: “irrational”).

However, Embree (which Cycles uses to some degree) is written by Intel, and they did put some AVX-512 optimizations in. Also, I would expect the autovectorizers in the major C++ compilers to support it. AVX-512 isn’t just about extra-extra-wide SIMD, either; features like per-lane masking make programming more flexible. Ryzen (and Threadripper) also only run 256-bit AVX (AVX2) at half rate, since the hardware splits 256-bit operations into two 128-bit ones.

Having said all that, Cycles is not the kind of application to see spectacular speedups from extra-wide SIMD, especially the way it is written now. Maybe as an OpenCL device there would be some more speedup, because then I assume the code will be transformed to be more SIMD-friendly.
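To make the “more flexible” part concrete, here is a minimal sketch of my own (not Embree or Cycles code), assuming an AVX-512F-capable CPU and a flag like -mavx512f: the per-lane mask registers let you write a conditional update directly, instead of the blend/select dance older SIMD needs.

```cpp
// Illustrative sketch only; not taken from Embree or Cycles.
// Assumes AVX-512F; compile with e.g. -O2 -mavx512f (GCC/Clang).
#include <immintrin.h>

// Clamp the negative lanes of a 16-float vector to zero using a mask register.
__m512 clamp_negatives_to_zero(__m512 v) {
    // One mask bit per lane: set where the lane is < 0.
    __mmask16 neg = _mm512_cmp_ps_mask(v, _mm512_setzero_ps(), _CMP_LT_OQ);
    // Overwrite only the masked lanes with zero; the rest pass through untouched.
    return _mm512_mask_mov_ps(v, neg, _mm512_setzero_ps());
}
```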

I mostly agree with BeerBaron, aside from the fact that price to performance does matter. If I compare two different processors that cost equal amounts, I am going to go with the better performer… in this case AMD. I will possibly be making the switch myself… I am just waiting for Intel to make a proper response to AMD… At the moment I do feel like they rushed something out just because they did not expect much from AMD… I know Intel can do better, but they have gotten fat and lazy… I do not doubt their next chip will be a winner… when I say next I am referring to a completely new architecture… but what do I know, I’m just a doctor.

Of course, but it’s not that clear-cut. Like I said, they’re not putting the CPUs side-by-side, so I can only deduce that per-core performance will be quite inferior on the AMD CPUs. All other things being equal, I’d rather have fewer, faster cores (in a workstation), because the vast majority of applications (multi-threaded or not) are limited by single-core performance in some way.

I will possibly be making the switch myself… I am just waiting for Intel to make a proper response to AMD…

Skylake-X is the response. Maybe they’ll adjust prices a bit, but I have a feeling that the 10-core Skylake-X is already fairly competitive with the 16-core Threadripper, just like the 6-core Skylake-X is fairly competitive with the Ryzen 1800x (in both price and performance).

Skylake-X is the response. Maybe they’ll adjust prices a bit, but I have a feeling that the 10-core Skylake-X is already fairly competitive with the 16-core Threadripper, just like the 6-core Skylake-X is fairly competitive with the Ryzen 1800x (in both price and performance).

Do you have some benchmarks in rendering apps (V-Ray, Corona, Cycles, Arnold…)? I’m quite unlucky on Google.

EDIT: I’ve found something…

EDIT: The i7 7820X is around $156 more expensive than the Ryzen 1800X for similar performance, and they are both 8-core CPUs.

Do tell…

The 32-core part has an all-core turbo of 2.7 GHz.

Is Embree compiled by Intel or by developers in their programs? If it’s compiled by Intel, then it won’t matter, as Intel compilers will not run optimized instruction sets on AMD CPUs.

I don’t know about Threadripper, but AnandTech just did a comparison of the new Xeon SP and EPYC, and AMD did fairly well. In large database-type workloads Intel is still better due to the way the cache works in EPYC, but in certain areas, especially floating-point-heavy loads, AMD beat Intel (renderers etc.). So it really depends on what you are trying to do. On top of that, AMD’s stuff is a bit cheaper overall.

Meh on the AVX-512; it also has the disadvantage that the Intel chips have to clock down significantly to use it.

Either way we at least have some competition now. Let’s see where both of them are at next year.

http://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade

2x EPYC 7601 (32 cores) $4200 vs. 2x Xeon 8176 (28 cores) $8719

@BeerBaron: they are putting them on a price scale as far as I can tell. I don’t care about the internet comparisons… most of them are pro-AMD, to be honest… I still have about a year before I upgrade… I’ll decide who I go with then.

Isn’t taking care of that issue what the Infinity Fabric is for? That’s not to mention that Intel also made changes to their data-busing system to make higher core counts work better (moving from a ring bus to a grid-based mesh).

I would think that a major reason why clock speeds always went down with more cores is not just the heat produced, but also the way data is sent around the chip.

The 7820X (8-core) beats the Ryzen in practically everything. The 7800X (6-core) beats the 1800X in some things and loses in others. It also costs less.

There are no two CPUs you can directly compare for either price or performance, but in relative terms, Intel did deliver CPUs that are fairly competitive on price.

The 32-core part has an all-core turbo of 2.7 GHz.

The 16-core part doesn’t clock beyond 3 GHz at all, which makes me concerned about what Threadripper will do.

Is Embree compiled by Intel or by developers in their programs?

There are two ways to use AVX/SSE:

  • explicitly via intrinsics (or asm); the compiler then emits those instructions for you directly (Embree does that)
  • the compiler figures out a way to automatically create AVX code from your non-AVX code (autovectorization)

Both features are supported in all major compilers, though some compilers (GCC or Intel) may do a better job at autovectorization.
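As a rough illustration of those two routes (my own minimal sketch, not code from Embree or Blender), here is the same float addition written once with explicit AVX intrinsics and once as plain scalar code left to the autovectorizer:

```cpp
// Minimal sketch of the two routes; assumes an AVX-capable CPU.
#include <immintrin.h>
#include <cstddef>

// Route 1: explicit intrinsics. The programmer asks for 256-bit AVX directly
// (compile with -mavx on GCC/Clang, or /arch:AVX on MSVC).
void add_intrinsics(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);                // load 8 floats
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));  // add and store 8 floats
    }
    for (; i < n; ++i) out[i] = a[i] + b[i];               // scalar tail
}

// Route 2: plain scalar code. At -O2/-O3 the compiler's autovectorizer may turn
// this loop into SSE/AVX/AVX-512 on its own, depending on the target flags.
void add_autovectorized(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}
```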

If it’s compiled by Intel, then it won’t matter, as Intel compilers will not run optimized instruction sets on AMD CPUs.

For the official Blender build, all dependencies are compiled by GCC on Linux, MSVC on Windows, and Clang on Mac. The Intel compiler is not used. As far as I know, Intel stopped crippling AMD CPUs with their compiler output (that was a bit of a scandal).

EDIT: Had some comparisons here, but mixed up the CPUs. Not gonna do it again.

Meh on the AVX-512; it also has the disadvantage that the Intel chips have to clock down significantly to use it.

It depends on the workload/stress, but when you get up to four times the throughput over SSE, a little hit on the clocks shouldn’t hurt your ego.
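Back-of-envelope version of that trade-off (the clock figures below are assumptions for illustration, not measured numbers): a 512-bit op processes 16 floats versus 4 for SSE, so even a sizeable AVX-512 clock offset still leaves a large theoretical win.

```cpp
// Rough, illustrative arithmetic only; the clock numbers are made up for the example.
#include <cstdio>

int main() {
    const double sse_lanes = 4.0, avx512_lanes = 16.0;  // 32-bit floats per op
    const double base_clock = 2.7, avx512_clock = 2.2;  // GHz, hypothetical downclock
    const double ratio = (avx512_lanes * avx512_clock) / (sse_lanes * base_clock);
    std::printf("theoretical throughput vs SSE: %.1fx\n", ratio);  // ~3.3x despite the downclock
}
```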

Mind that unlike the EPYC CPUs, the Xeon 8176 scales up to 8 sockets. The Xeon E5-2699 v4 (which actually beats the 8176 here) only scales to two sockets and only costs $4100.

Don’t believe everything you read; this old piece of advice rears its head again with a supposed “leak” of Threadripper’s performance.

Right away, a lot of Intel fans got excited at the prospect of Threadripper becoming Bulldozer round 2, if said benchmark was actually real. Not only is the reported socket type wrong, but it would’ve suggested AMD doubled the cores (and the price) and somehow made a chip that is actually worse than standard Ryzen (marginally better multi-core and much worse single-core).

There is still one more unknown regarding Ryzen’s performance: we don’t have a lot of clear-cut benchmarks on the effect of the new BIOS update when fast RAM is used (most of the pre-built Ryzen machines right now don’t use RAM sticks over 3 GHz). The general idea out there is that it would speed up Ryzen far more significantly than a Kaby Lake (whether it will even allow the chip to beat Intel’s X-series, I don’t know).

The results are actually completely plausible. Compare the 7700K (4 cores @ 4.2 GHz) with the E5-2699 v3 (18 cores @ 2.3 GHz): 5646 (single) / 17121 (multi) versus 4080 (single) / 28394 (multi). Higher clocks are more important here than core count; that’s just how Geekbench happens to scale. Many real-world applications don’t scale linearly either.
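For what it’s worth, the scaling can be read straight off those scores (a quick check using only the numbers quoted above):

```cpp
// Quick sanity check on the Geekbench numbers quoted above.
#include <cstdio>

int main() {
    const double kaby_single = 5646, kaby_multi = 17121;  // i7-7700K, 4 cores @ 4.2 GHz
    const double xeon_single = 4080, xeon_multi = 28394;  // E5-2699 v3, 18 cores @ 2.3 GHz
    std::printf("7700K      multi/single: %.1fx on 4 cores\n", kaby_multi / kaby_single);   // ~3.0x
    std::printf("E5-2699 v3 multi/single: %.1fx on 18 cores\n", xeon_multi / xeon_single);  // ~7.0x
    // Neither score comes close to (cores x single-core), which is why the
    // "leaked" result isn't implausible on its face.
    return 0;
}
```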

Well, in the benchmarks they ran, AMD came out on top even compared to some AVX workloads, significantly so in some cases (30-40%).