CPU Wars: Intel's new 18-core monster and X-series chips

AMD has already dethroned Intel; they're outselling Intel right now. Why should anyone buy something twice as expensive just to be 30% faster? Performance-wise it's only about 18% faster per core than AMD. AMD shouldn't hurry to make a faster CPU, because there's no need to. Intel is literally twice as expensive.

Some people want all the performance they can possibly get, regardless of the price. When looking at a machine that is 30% faster, an extra $1000 might be worth it to someone.

It isn't always about performance per dollar; total performance is a critical metric for some users. If they're building a big render farm, then performance per dollar is pretty important, but if they want a blazing-fast workstation, high total performance is what matters.
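To make the distinction concrete, here's a toy comparison. All prices and scores are made-up placeholders, not real benchmark numbers for any specific chip: the farm builder optimizes points per dollar, while the workstation builder just takes the higher absolute score.

```python
def perf_per_dollar(score: float, price: float) -> float:
    """Benchmark points per dollar spent."""
    return score / price

# Hypothetical chips: B is ~30% faster but twice the price.
chip_a = {"score": 3000.0, "price": 1000.0}
chip_b = {"score": 3900.0, "price": 2000.0}

# Render-farm view: value per dollar favors the cheap chip.
print(perf_per_dollar(**chip_a))  # 3.0 points per dollar
print(perf_per_dollar(**chip_b))  # 1.95 points per dollar

# Single-workstation view: absolute score favors the fast chip.
print(max(chip_a["score"], chip_b["score"]))  # 3900.0
```

Same two chips, opposite winners, depending only on which metric you care about.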

I’m just happy we are having this debate in the first place! Just a year ago that same dollar amount bought you less than half the total performance for multi-threaded tasks.

Another thing to be happy about is that AMD even had a plan to counter Intel's enthusiast chips. Recent articles reported that Threadripper was the product of an enthusiastic team of engineers at AMD, and that without them AMD would have nothing like it.

There are also early rumors about a possible 32-core Threadripper chip, but that has yet to be confirmed. I've also read that some applications may not even run when presented with a chip with that many cores, so you take a risk with either company.

Yes, the 32-core rumors, and some 12 nm rumors as well. I'm still not buying anything until next year, so at the moment it's just highly interesting to watch.

The only problem with AMD is that it can't really overclock at all. If you can cool that 18-core CPU from Intel, I'm sure you can get 4.3 GHz+. On 18 cores that's ridiculous and would beat most dual-Xeon setups.

AMD can OC to about 4.0 GHz, while Intel can OC to 4.8–4.9 GHz.

On overclocking, there was that incident where Intel told enthusiasts not to overclock the Kaby Lake chips to avoid overheating issues.

In any case, Intel's recent chips appear to be clocked about as high as they will go if you want stability (especially with Skylake-X and its Turbo Boost Max technology). In other words, they are already effectively overclocked as they leave the factory.

When time equals money and you need the most powerful single-CPU setup, the $2k is easy to swallow. A 30% faster chip means a project that used to render in 10 days finishes in under 8. That is important. And don't forget bragging rights :wink: just ask any Apple fan. The current iPhone X is comparable to a two-year-old Android, but people still buy it.

Now, jokes aside: having the single most powerful CPU means more projects completed, which means more money. So the return on the $2k investment can indeed pay off quickly if you actually do a lot of projects.

I'm personally thinking dual AMD Epyc for my next build. Just waiting for a good dual-socket board to show up.

While I agree there is value in time, the only people who will really be paying for those CPUs are larger studios. You're not just paying $2k for a CPU; in most cases you're also paying for support.
If I had a render farm, I would still use AMD. If I burn up a few $2k processors, it gets costly to replace them; $1k would be a lot easier to swallow. Not to mention I could then afford twice as many of them, so even if each comes in at only 85% as powerful, two of them give me 170% the speed of one $2k chip.
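That budget math can be sketched directly. The numbers below are the ones from the argument (normalized speeds, $2k budget), and it assumes renders scale perfectly across multiple machines, which real jobs only approximate:

```python
budget = 2000  # dollars to spend on CPUs

# One $2k chip (speed normalized to 1.0) vs. $1k chips at 85% the speed.
expensive = {"price": 2000, "speed": 1.00}
cheap = {"price": 1000, "speed": 0.85}

n_cheap = budget // cheap["price"]     # 2 chips for the same money
farm_speed = n_cheap * cheap["speed"]  # assumes perfect render scaling

print(n_cheap)      # 2
print(farm_speed)   # 1.7 -> 170% of the single expensive chip
```

The crossover point is simple: as long as the cheap chip is more than 50% as fast per dollar-matched pair, buying two of them wins on throughput.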

But seeing as NONE of us here are those big studios, no one here can really say; this is all 100% speculation and hearsay. I'm just glad the market is finally moving forward, even if more slowly than I would like. Let's be honest: Intel is still pushing tiny updates, and AMD has just caught up and is hanging on by the skin of its teeth (technically speaking). The only real deciding factor I see at this point is pricing, and AMD wins that hands down on price to performance.

I just re-read this, and I think I sound kinda "AMD fanboy-ish", tbh. I really don't care, but the truth of the matter is I have a family and limited funds, so for me the winner will always be the most reliable and cheapest, and never top of the line. I usually aim for the "runner-up" processor when it comes to my buying decisions.

Agreed that for most of us AMD is the way to go. But plenty of high-end YouTubers who need to render their videos and get them posted on time, which is quite critical on that platform, explain why they go Intel, especially at the high end.

So I don't agree that it's only larger studios.

Large studios go for Xeon workstations from companies like Dell/HP/Seamicro/etc. They commonly don't build their own systems.

This 18-core i9 beast is just a desktop part, so it doesn't come with much support.

And I'm an AMD fanboy, have been since… the K6-2+ days :slight_smile:

You can't be serious expecting the Intel 18-core part to overclock past 4 GHz. The 10-core part already overwhelms every air and AIO cooler on the market at 4.5 GHz. Tom's did some testing and was unable to keep it cool even with an Alphacool chiller (for the uninitiated: a step above a custom loop).

All-in-ones like Corsair’s H100i and Enermax’s LiqTech 240 hit their limits at stock frequencies under Prime95. The custom loop threw in the towel at 4.6 GHz.

Although we’re using some of the highest-end and most expensive cooling hardware available, we still measure up to a whopping 71 Kelvin difference between the cores’ reported temperature and the heat spreader’s top. Obviously, a more mainstream closed-loop liquid cooler under full load would look quite silly.

Or maybe you’re talking about delidding a $1000 CPU and even then there is the problem of overclocking a production machine.

Well, delidding is an option I would totally consider if I could get 4.3+ GHz on the 18-core.

At the moment I have a 7700K on an H100i that hits 95°C at 4.9 GHz, but my 5820K at 4.5 GHz hits 75°C max on a Kraken X61, and my 1700 at 3.975 GHz can hardly hit 65°C on the H100i.

Surely at that price you would be better off picking up a single-socket EPYC CPU?

The problem is not only the temperature (personally I use a self-built water loop); it seems the power delivery (VRMs) on many motherboards isn't sufficient either (many reports of dead motherboards), though maybe some specific boards have fixed this in the meantime.

Intel (like with the 7700K) is already pushing the clocks really high, it seems.

Also, delidding is not an exact science; I'm not sure I want to attempt it on a $2k CPU…

Then you should have a computer with an i7-7700K for work, because it has the fastest single-thread performance, plus two Threadripper systems so you can render over the network. It will cost you a bit more than one Intel system, but it will be about 70% faster.

Well, if I'm spending $2k for the best, I'm going to make it the absolute best… It's a huge risk of course, but the rewards are huge. My 5820K at 4.4–4.7 GHz beats any stock 16-thread i7 in Cycles… it's pretty surprising, honestly.

Price to performance is the key. Right now, Intel is still too expensive relative to the $1k 16-core Threadripper. I expect AMD to release a 32-core desktop chip as the next counter-move; Threadripper is essentially binned Epyc silicon repackaged for desktop consumers. With Intel charging $1k more for 2 more cores, I don't think AMD needs to worry.

Power consumption and heat are quite bad on Intel's 16- and 18-core CPUs; they go well over the advertised and rated TDP. Typical game Intel plays with wording and marketing. Also, due to the bad VRMs on a lot of the X299 boards out there, there is very little overclocking headroom, and still no ECC on top of that. Twice the price of a Threadripper, hotter, more power-hungry, and lacking features?! Yeah, I don't think so. Intel's chips may be faster, but for what you would use them for (workstation-class workflows), no ECC is a no-go. Why do you think OEMs stick with Xeons? Right now these chips are for suckers and "gamers" who have more money than sense, imo. At least Threadripper is a viable alternative to a Xeon for workstation-class builds. Core i9 makes no sense, imo.

Here is the 18-core i9-7980XE at 4.5 GHz: its time is 1:44 on the BMW benchmark, beating the 64-thread servers AWS offers (that's roughly the speed of two 1070s on the same BMW scene). In my tests the 64-thread machines get 1:47 on Linux (the i9 time is on Windows)…

Cinebench scores go from 3300 at stock to 4200 at 4.5 GHz.
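For what it's worth, a quick sanity check on the uplift those two scores imply:

```python
# Cinebench multi-thread scores quoted above (same chip, stock vs. OC).
stock_score = 3300
oc_score = 4200  # at a 4.5 GHz all-core overclock

uplift = oc_score / stock_score - 1
print(f"{uplift:.0%}")  # ~27% more multi-threaded throughput
```

So the overclock buys roughly a quarter more throughput, which lines up with the shorter render times below.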

So tell me again why overclocking isn't worth it xD. Of course it's going to catch fire and everyone will die, but dude… did you see those render times?

The AWS machines are quite impressive on their own… Here's 10 of them on the BMW scene.