Here’s the thing, though: the 24-core and 32-core models are explicitly designed for people in creative fields doing things like creating and rendering CG work (hence the ‘WX’ suffix denoting their workstation purpose). Those who want a more general-purpose processor should look at the models with 16 cores or fewer.
Another big change is the pricing: unlike last time, the top-end models have seen roughly an 80 percent increase in price (though they're still cheaper than Intel’s 18-core chip from last year). However, the new 16-core model is actually cheaper than last year’s equivalent.
In a way, the 32-core should be a pretty beastly rendering solution so long as it has decent cooling. Commenters on one site note that Cinebench actually becomes less accurate with this kind of power because of how quickly it completes. You could say you're essentially getting a render farm on a chip.
The official reviews are yet to come out, but 32 cores on air cooling is still nothing to sneeze at.
What kind of X399 problems are you referring to? I have a Gigabyte X399 Designare EX paired with a 1950X, and I’d like to make sure I avoid the issues you're referring to.
Personally, so far (knock on wood) I have not had a single issue. Rendering, gaming, nothing yet. The only thing that annoys me is that the initial batch did not have 10 Gbit ports, but the new ones do. Luckily I worked around that with a PCIe 10 Gbit card.
Most issues revolve around RAM compatibility and the general problem of running 8×16 GB at all. From what I have seen, the only brand that did it somewhat reliably (at 2400) was ASRock. And in general I am really underwhelmed by most boards' VRM coolers, apart from the Gigabyte Aorus Extreme https://www.anandtech.com/show/12979/gigabytes-x399-aorus-extreme-the-threadripper-2-halo-motherboard If you are not running 128 GB, I think you will be mostly bug-free.
I’m asking what the point of this CPU is if you could have a quad-GPU setup that does the job three times faster. I don’t really see the point of still using a CPU for rendering in 2018.
I’m really hesitating between CPU and GPU for my next build, and there aren't enough pros out there talking about this problem, so I’m gathering information here.
Hi, the only “problem” with GPU rendering is limited memory.
The latest cards have 16–32 GB, but this is not enough to render the latest Blender movie project, for example, and the Blender Foundation is not Pixar.
IIRC the render nodes had 64 GB.
If you are fine with 16 GB, you can save your money for three RTX 2080s or something.
Depending on the scene, one such 32-core Threadripper may compete with two 1080 Tis, all the while being able to run far more software (and more reliably).
Thanks for the feedback. I do recall that the earliest BIOSes had issues on all Zen setups (Ryzen and TR), but that was fixed with continuous BIOS updates.
I’m definitely far away from 128 GB… so far a measly 16 GB (dual channel). Waiting for the Gigabyte memory modules to come out.
Yes, but with a slower clock speed, so it's more of a disadvantage, isn't it? Not a lot of software out there, except for rendering and baking, takes advantage of multithreading; clock speed is still king, and has been for the last decade. I don't see why that would change now in 2019… So having a 2990WX seems more of a curse when you could instead choose a build with quad-SLI last-gen NVIDIA cards and a “simple” 4.8 GHz overclocked 12-core Threadripper with 64 PCIe lanes, which would rip through single-threaded software and render three times faster at the same time…?
Plus, the upgradability of CPUs is just a nightmare, and you can't stack them up to eight per machine like you can with GPU rendering machines (with a PCIe x16-to-x8 splitter, or dual-GPU cards, for example).
And as for the memory: I don't really understand the struggle with VRAM.
1,000,000 faces take 200 MB.
A well-optimized 4K Cycles material takes 140 MB.
(8K textures are total overkill for a 1080p film, let's be realistic here.)
Yes, a good displacement material could take up to 1,500 MB, but there usually aren't many in a scene.
An 8K HDRI takes 600 MB… etc.
How the hell do they need 64 GB for a scene?
I have the impression that 16 GB is more than enough for scenes like the ones I saw from Spring, so why do they need so much memory? (PS: I work in arch viz. I think that if I ever make a scene that goes over 10 GB, I must be doing something wrong, am I right? Should I really be concerned about VRAM in this industry?)
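Those per-asset numbers can be added up as a quick sanity check. Here's a back-of-the-envelope sketch using the ballpark figures from the post above; the constants are those rough estimates, not measured Cycles memory accounting, and the example scene is hypothetical:

```python
# Back-of-the-envelope VRAM budget using the rough per-asset costs (in MB)
# quoted in the post. These are ballpark figures, not real Cycles accounting.

MB_PER_MILLION_FACES = 200   # ~200 MB per 1,000,000 faces
MB_PER_4K_MATERIAL = 140     # well-optimized 4K PBR material
MB_PER_DISPLACEMENT = 1500   # heavy displacement material
MB_PER_8K_HDRI = 600

def scene_vram_mb(million_faces, materials_4k, displacement_mats, hdris_8k):
    """Sum the estimated VRAM cost of a scene's geometry and textures."""
    return (million_faces * MB_PER_MILLION_FACES
            + materials_4k * MB_PER_4K_MATERIAL
            + displacement_mats * MB_PER_DISPLACEMENT
            + hdris_8k * MB_PER_8K_HDRI)

# A fairly heavy arch-viz scene: 10M faces, 20 materials,
# 2 displacement materials, 1 HDRI.
total_mb = scene_vram_mb(10, 20, 2, 1)
print(total_mb)           # 8400 MB
print(total_mb / 1024)    # ~8.2 GB -- under a 16 GB card, as the poster expects
```

By these estimates, even a heavy arch-viz scene stays well under 16 GB, which is exactly the intuition behind the question; the catch, as later replies point out, is that diced microdisplacement geometry breaks these assumptions.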
Are there other disadvantages to GPUs besides the VRAM limitation and OSL? Like the speed of loading assets into VRAM, maybe? Is physics bad on the GPU? Poor handling of particles?
I'm not trying to blame the CPU; I'm just trying to choose which components will go into my next PC. I'm still confused about whether I need to go CPU or GPU…
To Dodododorian96: the main arguments for going with CPU rendering in 2018 are these.
Far larger scenes due to the use of system RAM (there are 16/32 GB pro GPU models, but they're incredibly expensive).
New features committed to Cycles work out of the box, while GPU users might need to wait until speed regressions and other bugs are fixed (GPGPU programming has become less of a headache over the last few years, but there's still a ways to go).
Longer life: GPU models tend to be dropped after 4–5 years due to rapid movement in architecture and missing support for features that make programming for GPUs easier (a hint from my last point).
Rapid movement in CPU power: AMD has made octa-core machines mainstream, and Zen 2 may bring mainstream 12-core systems. A big advantage of the GPU in some cases was its core count (tons of them); now you have CPUs with more and more cores, able to take large kernel sizes and the full x86 instruction set.
Movies are rendered at higher than 1080p today.
Furthermore, movies aren't video games where optimization is key; if optimization can be “ignored”, that only means there is more time for other things.
For a personal machine it's probably better to go for a better GPU. You're not going to render anything that heavy…
I really didn't know about the GPGPU problem. Which Blender features are affected by this?
As for the five-year lifespan: I think it's a bad idea to stay with a five-year-old computer in this constantly evolving 3D industry.
As for multithreading adoption: well, I really hope you are right! But we've had computer modeling for 25 years and dual-core CPUs for 15, and I don't think this is going to happen anytime soon.
As for the VRAM: I think maybe big productions and the film industry need CPUs, but for anyone else, more than 16 GB for a simple render seems ludicrous to me.
When I see videos that encourage people to use four-channel 4K PBR textures for a material they could do procedurally, or by converting RGB to BW plus a ColorRamp for roughness (for example)… or downloading 1.6 GB trees with 4 million polys, or a chair with 500,000 polys and 4× 8K textures from Evermotion… that's just dumb.
Have you started using the new microdisplacement in Cycles at all? Having a bunch of diced meshes can easily take you over 16 GB if you really want to avoid bump maps (which in turn speeds up rendering).
Good luck trying to fit such a scene on a GPU. The memory requirements could be lowered a lot if Cycles gained the ability to cache the displacement, but the cost would be rendering speed, and it obviously isn't implemented yet. There's also technology out there to let the GPU make use of system RAM for very large scenes (which I think Cycles now has for CUDA), but the performance penalty tends to be pretty sizable.
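To see why diced meshes blow past 16 GB so quickly, here's a rough sketch: each subdivision level roughly quadruples the face count. The bytes-per-micropolygon figure is an assumption for illustration (position, normal, UVs, etc.), not Cycles' actual data layout, and the terrain example is hypothetical:

```python
# Rough estimate of memory for diced (micropolygon) geometry.
# Each subdivision level multiplies the face count by 4; the per-micropolygon
# byte count is an illustrative assumption, not Cycles' real internal layout.

BYTES_PER_MICROPOLY = 100  # assumed: position + normal + UV + indices, roughly

def diced_memory_gb(base_faces, subdiv_levels):
    """Estimate GB of memory after dicing a mesh `subdiv_levels` times."""
    micropolys = base_faces * 4 ** subdiv_levels
    return micropolys * BYTES_PER_MICROPOLY / 1024 ** 3

# A 100k-face terrain diced 5 levels deep -> ~100M micropolygons.
print(diced_memory_gb(100_000, 5))  # roughly 9.5 GB for a single object
```

Under these assumptions, one aggressively diced object already approaches a 16 GB budget on its own, which is why avoiding bump maps via true displacement gets expensive so fast.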
The issue with GPGPU programming comes down to how CUDA cores work (Compute cores on AMD cards). They tend to be a lot simpler than CPU cores, but as a result a card can have tons of them. Because of this, they prefer programs whose instructions are a bit less complex; a long list of simple instructions can be executed at blazing speed.
In Cycles scenes not using complex shading networks or shading technology, GPUs render far faster than the same thing on the CPU. When you get to very complex shading, though (heavy SSS and very complex node trees), the cores will choke and can actually give you less performance than rendering the same thing on the CPU (even if the card in question is a 1080 Ti).
It's stuff like this that explains why, in some scenes, a 1080 Ti might not have nearly as much of a performance leap over a 780 Ti as what is seen when a game is being played.
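The "cores choke on complex shading" effect can be sketched with a toy model of SIMT execution (a deliberate simplification, not a real GPU simulator; the path names and costs are made up). Lanes in a warp run in lockstep, so when different lanes take different shading paths, the warp executes each taken path serially with the other lanes masked off:

```python
# Toy model of SIMT warp divergence (illustrative only, not a GPU simulator).
# A warp's lanes execute in lockstep: if lanes disagree on a branch, the warp
# runs every taken path one after another, masking off the inactive lanes.

def warp_cycles(branch_costs, lane_paths):
    """Cycles for one warp: sum the cost of every path at least one lane takes."""
    taken = set(lane_paths)
    return sum(branch_costs[path] for path in taken)

# 32 lanes, two hypothetical shading paths of equal cost.
costs = {"diffuse": 100, "sss": 100}

uniform = ["diffuse"] * 32                    # every lane takes the same path
divergent = ["diffuse"] * 16 + ["sss"] * 16   # half the warp diverges

print(warp_cycles(costs, uniform))    # 100 cycles
print(warp_cycles(costs, divergent))  # 200 cycles: both paths run serially
```

A complex node tree with heavy SSS means many distinct paths per warp, so effective throughput collapses even though the card's peak numbers are enormous; a CPU core following one ray at a time pays no such penalty.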
Actually, yes, I'm a big fan of micropolygon displacement for nature texturing, and I can't wait for vector displacement to be mainstream. As for VRAM, I get good results with only 2 GB max of data used, so I'm not all that worried about that…
Thanks for the very useful answers, but they just make me even more confused now; I still don't know how to choose my components.
The situation with GPU rendering is slowly getting better, though, as each new architecture is a little more capable of running the complex instructions of a fully featured engine at the sought-after speeds. However, it's still more difficult to get them to run advanced shading and sampling techniques compared to the CPU.
It really depends on what type of scene you like to do: a GPU is far better at rendering an arch-viz or simple character scene than a full Hollywood-style movie shot.