AMD’s Threadripper, with 16 cores and 32 threads, is out.

Reviews are plentiful

Short version: it’s priced well, and it performs.

PC Perspective has a nice review covering Blender. Using 2.78b, it sits at the top of all available CPUs out there (in a single-socket configuration, that is).

https://www.pcper.com/reviews/Processors/AMD-Ryzen-Threadripper-1950X-and-1920X-Review/Media-Encoding-and-Rendering

Power consumption is also… nice… especially when overclocked to 4 GHz on all cores: 300 W just for the CPU (honestly on par with my two Xeons at 150 W apiece), but even at 300 W it easily outperforms my dual-Xeon setup…

So who’s getting one?

I will admit, I’m more tempted by the lower 12-core/24-thread part and its 64 PCIe lanes (as soon as a motherboard comes out with 7 PCIe slots) :slight_smile:

AnandTech also did a good review; page 8 covers the rendering benchmarks.

http://www.anandtech.com/show/11697/the-amd-ryzen-threadripper-1950x-and-1920x-review/8

Blender’s production performance is laggy and slow, not multicore-optimized:

  1. Viewport playback is slow, ~2% CPU usage
  2. Particle simulation is slow, ~5% CPU usage
  3. Cloth and softbody simulation are slow, ~5% CPU usage

An i7-7740X at 5 GHz is better for Blender production work; Nvidia for rendering.

Looks like if you want to get the most out of the Threadripper, it’s highly recommended to overclock it to 4.0-4.2 GHz (doing that will bring the rendering performance in line with AMD’s marketing claims).

The two different modes of the chip (Game and Creator) are interesting; I wonder if an instant switch (even an automatic one) will be doable down the road with a BIOS update?

Why would you turn your CPU into a little furnace for less than a 10% performance increase? These have even worse power consumption than Skylake-X for even smaller returns.

As far as I know, the Skylake-X series has reportedly had overheating problems unless you go high-end on the motherboard (which might negate the advantage it has in tasks where only one or two cores are needed).

If they didn’t have heat problems, the Skylake-X chips would be perfect for those who want a sizable upgrade in both single- and multi-threaded tasks (noting that Intel is even trying to emphasize that they are supposed to run hot).

For best performance with Threadripper, you also need very fast RAM. G.Skill has that covered with new kits going up to DDR4-3600 (the fastest ever produced for an AMD board):
https://www.pcper.com/news/Memory/GSKILL-Announces-New-DDR4-AMD-Ryzen-Threadripper

This forum is related to Blender, right? There’s an “off-topic” section, right? Blender in its current state (2.78c, 2.79 RC1) won’t overheat Skylake-X, because most of the Blender functions that matter for content creation can’t reach 100% CPU usage; the average sits around ~2%. Sure, compositing can hit 100%, but that doesn’t mean anything if creators are stuck with a laggy viewport, laggy modelling performance, laggy physics simulations, etc. and can’t get work done.

5 GHz on one core of Skylake-X performs better at content creation in Blender than all Threadripper cores running at 4.1 GHz.

They must be keeping the tile size at 256, same as LTT does in their reviews.
At least I assume that’s what they’re doing, since my 5960X at 4.5 GHz gets under a minute and a half on the BMW scene.
Blegh! Unhelpful results!
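
For reference, this is roughly how the tile size can be forced from a script before timing a run; a minimal sketch assuming the Blender 2.78/2.79 Python API, with 32×32 as a purely illustrative value:

```
import bpy

scene = bpy.context.scene
scene.cycles.device = 'CPU'   # render with Cycles on the CPU
scene.render.tile_x = 32      # 256x256 tiles favor GPUs;
scene.render.tile_y = 32      # CPUs usually finish faster with small tiles
```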

Conversely, it can be a frustrating endeavor if your Intel chip throttles or even crashes once you get to the important phase of rendering (Blender is also useless if you can’t get output from it).

Then there’s how the Intel line progressively removes and weakens features as you go further down the stack (PCIe lanes, Turbo, cache, and even memory support). Even the upcoming 8-core Threadrippers will have all the features the 16-core has.

That’s just an issue of cooling though, one I daresay will pop up with these shiny new Threadripper SKUs if one doesn’t wait for new coolers that cover the IHS properly.
It’s almost impossible to throttle any CPU these days if you’re using one of the many AIOs out there.

Just for reference though, I’d certainly OC a Threadripper if I got one; the difference between stock and, say, 4 or 4.5 GHz is night and day.

In a practical sense, from where I come from (viz), the creative process commonly takes the same amount of time at 2 GHz as at 10 GHz, so those percentage points aren’t really lost compared to what is saved on rendering finals (stills or animations). Which, IMHO, means much more creative time at hand. There are also parts that use multiple cores… and hopefully, whatever can still be parallelized in the future is going to be, now that the tool got upgraded.

it’s also easy to point out the slow stuff if that’s the intent :eyebrowlift:

Except the Skylake-X chips have been shown to overheat far more easily, even if you’re doing everything right.

The reason for this is Intel deciding to use TIM for their high-end parts rather than solder (not as efficient in terms of thermal conductivity, for starters). The only way to resolve that is to delid the processor and replace the interface material yourself, but the issue there is that you void Intel’s warranty in the process.

Considering how Intel’s profit margins wouldn’t be impacted that much by using good-quality solder instead, it’s a mystery why they won’t do it. In fact, it’s possible that the only thing needed to kneecap any gains made by AMD is to simply do that and lower prices (but for some reason they are deciding to stay the course).

The reason for that last part is that I was more or less leaning towards Intel for a possible new PC purchase until the story broke about the overheating issues (even at stock clocks). I would perhaps even recommend Intel Skylake-X as the best all-rounder if they ever decide to switch back to solder.

However, processor reviews on Newegg suggest that if you want the best possible PC for Blender 2.8, want a straight upgrade on everything, and want to play it safe, AMD loses again. The Ryzens are crushed by the Skylake-X 8-cores based on early user reviews, and Vega is a mixed bag at best (not to mention that once you start thinking about it, you really question whether you need 16 cores).

My ideal setup for 2.8 if I had the money.

CPU; i7-2820X
GPU; GTX 1080 Ti 11GB
RAM; 32 gigs
Boot drive; SSD 256 gig or more
Hard-Drive; 2 Terabytes or more


After some thought, I’m not concerned about the idea of a hypothetical purchase being AMD-free; there are enough new customers out there to keep them going for a couple more generations (and keep Intel and Nvidia prices from skyrocketing).

AMD’s number-one priority should be investing more in Blender’s viewport FPS and multithreaded performance, if AMD really wants to sell Threadripper/Ryzen to Blender creators.

~3 fps viewport playback of the Gooseberry open-movie production benchmark on a Ryzen 7 1700, ~3 fps, seriously? Maybe I should have bought an i7-7740X and overclocked it to 5 GHz.

The AMD Threadripper 1950X is not the right CPU if you only care about viewport OpenGL performance; that depends mainly on single-thread and memory transfer speed. Tom’s Hardware’s BMW OpenGL viewport loop test shows the 1950X taking 208 seconds, compared to 173 seconds for the Intel 7900X or 134 seconds for the Intel 7700K. The 1950X is not too far behind the 7900X, but the easy winner is the 4-core 7700K.

If you spend a lot of time waiting for Cycles renders to finish, the AMD Threadripper 1950X is the best CPU, and when the 8-core 1900X comes out it will be at an even better price point. Tasks in Blender that can fully utilize all 16 cores benefit the most from the 1950X. Here are Cycles benchmarks by Linus Tech Tips (https://www.youtube.com/watch?v=9voQqU73-Mg&t=345s):

Scene       | 1950X | 7900X | 7700K
------------|-------|-------|--------
BMW         | 02:41 | 03:28 | 07:20
Classroom   | 08:58 | 11:29 | 25:08
Fishy Cat   | 00:27 | 00:29 | 00:38
Pavilion    | 11:02 | 11:50 | 19:11
Racing Car  | 05:20 | 06:42 | 12:18
Gooseberry  | 27:08 | 35:51 | 1:09:03
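
For anyone wanting to reproduce numbers like these, a rough sketch of how a wall-clock render time can be measured from Blender’s Python console (assuming the 2.78/2.79 API; results still depend on tile size, thread count, and clocks):

```
import time
import bpy

start = time.time()
bpy.ops.render.render(write_still=False)   # render the current frame in memory
print("Render took %.1f seconds" % (time.time() - start))
```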

Has anyone tried these processors running OpenCL? Houdini users are seeing GTX 1070/1080-level performance on Ryzen 7, and I’m curious whether this translates to Cycles.
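
For anyone who wants to test it, here is a minimal sketch, assuming the Blender 2.78/2.79 Python API, of pointing Cycles at OpenCL. Whether a Ryzen CPU actually shows up as an OpenCL device depends on the installed OpenCL runtime, so treat it as an experiment rather than a guarantee:

```
import bpy

prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPENCL'      # instead of 'CUDA' or 'NONE'
prefs.get_devices()                       # refresh the device list
for device in prefs.devices:
    print(device.name, device.type, device.use)

# 'GPU' here means "use the selected compute device" rather than the plain CPU path
bpy.context.scene.cycles.device = 'GPU'
```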

Why would you buy an 8-core if single-thread/viewport performance is your goal? An i3 would be the best in that case. And if you want both a fast viewport and fast rendering, you get an 11-slot PCIe mainboard from Biostar with an i3 and a bunch of graphics cards.
Regarding viewport performance, I guess it starts to be interesting again because of OpenGL rendering? For my workflow at least, 95% of the time a laptop iGPU is enough for Blender. Only very rarely do I need to see the fully subdivided high-poly meshes while working.
Regarding rendering with Eevee, I’m pretty sure there will be many other bottlenecks and your GPU will be idling most of the time. Try rendering the default cube to full-precision EXR in Full HD or 4K and you will see that even your iGPU idles most of the time, and that’s not because a current iGPU has problems rendering a cube. Even if you render and write files in real time (30 fps), who cares whether a 30-second shot takes 30 or 45 seconds to render? You grab a coffee or whatever. The 30-40% more performance you would get from a 2-core CPU will be negligible compared to the boost in multi-threaded tasks like video encoding, compositing, etc.
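
If anyone wants to try that exact test, a minimal sketch assuming the Blender 2.78/2.79 Python API (the //exr_test_ output prefix is just a placeholder) that sets Full HD, 32-bit float OpenEXR output and renders the frame range:

```
import bpy

scene = bpy.context.scene
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 100
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '32'   # full-float precision
scene.render.filepath = '//exr_test_'            # placeholder output prefix
bpy.ops.render.render(animation=True)            # much of the time goes to file I/O
```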

It’s about getting a nice upgrade in both single-core and multi-core speed (because when you work in 3D, you need both). Even the Ryzen 1800X sees its single-core performance only just barely surpass what I get with my Ivy Bridge 3770; Intel’s Skylake-X 8-core provides a bit more of an upgrade in comparison thanks to its Boost tech.

As for GPUs, I personally see a multi-GPU setup as a waste of money. The reason is that less than a year after spending all of that money, Nvidia and AMD come out with brand-new cards that are up to 50 percent faster (so what do you do, spend another boatload of money on a new set of GPUs?). A single higher-end card these days will be more than enough for the more GPU-centric viewport in 2.8 (even for highly detailed and complex scenes).

You meant the 7820X, right?
That one seems to have a good balance between multi- and single-thread performance.