The reviews for Kaby Lake are coming in, and the consensus is that the only reason it slightly outshines Skylake in performance is a bump in clock speed, with essentially no improvement in IPC.
The only upside is for those who don’t want a discrete GPU in their machines: the onboard graphics get a significant boost. Still, the disappointing numbers are leading tech writers to put their hope in AMD and wait to see whether Ryzen’s impressive speeds are more than just marketing.
And given that Cannonlake will likely not bring much either, AMD now has an opening; it will be up to them to convince people to upgrade before their existing machines break. The CPU wars were fun times for users. If Ryzen doesn’t impress, PC performance may be at the end of the line until the world moves away from silicon.
Reading over the reviews/benchmarks, I’m more than happy with my two-year-old 4790K, especially with my OC. Kind of pathetic on Intel’s part.
Definitely looking forward to the Ryzen release; I’ll be moving 2 PCs across (home server + general-use PC) purely as a statement, and possibly my own as well if the multithreaded performance is there for the right $$.
That may be true, but Intel hasn’t given any indication that it plans to move beyond 4 cores / 8 threads for consumer-grade chips in the near future (it hasn’t for many years now). So unless you have several thousand dollars in spare change, or the know-how to build a machine from scratch, the benefit of yet more cores on the same die will be out of reach.
The best bet for that changing, in my opinion, is if AMD’s Ryzen makes good on the rumors of bringing octa-core chips mainstream and forces Intel to respond.
If you’re going to be seriously rendering for production animation or VFX, GPU rendering isn’t an option anyway…
Yeah, good luck rendering Moana on a GPU with 8 GB of RAM…
Agreed, but if you’re in that business, consumer CPUs are not what you should be looking at. Also, what happened to render layers and shadow catchers? I remember when we had to be smart to render big.