Isn’t the holy grail of Cycles rendering the ability to set a render going at a reasonable speed and carry on with your work, or another project, in real time without compromising viewport performance? It seems to me that rendering on the GPU locks the system up. But if I had a 32-core new-generation Threadripper, I could assign, say, half the cores or more to the render and still have plenty of cores/threads left over, plus my graphics card, to carry on working in Blender while the reasonably paced render happens. That would save the cost of buying an expensive GPU just out of impatience and needing a render done in 5 seconds…
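For what it’s worth, Blender does let you cap the render’s thread count: in the UI under Render Properties > Performance you can set Threads Mode to Fixed, and on the command line the `-t`/`--threads` flag does the same for background renders. As a rough sketch (the `render_thread_count` helper below is hypothetical, just illustrating the arithmetic of reserving a fraction of logical cores for interactive work):

```python
import os

# Hypothetical helper: decide how many threads to hand the background
# render, keeping a fraction ('reserve') of logical cores free for the
# viewport and other work. Always leaves the render at least 1 thread.
def render_thread_count(total=None, reserve=0.5):
    total = total or os.cpu_count() or 1
    return max(1, int(total * (1.0 - reserve)))

# On a 64-thread Threadripper, reserving half leaves 32 render threads:
print(render_thread_count(total=64, reserve=0.5))  # 32
```

You would then launch the background render with something like `blender -b scene.blend -t 32 -a`, leaving the remaining threads free for the Blender instance you keep working in.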
Hi, I would like to buy one, but it costs $2000 and I would need a new motherboard as well.
I jumped on the GPU train back with Octane 0.9 Beta and now I am kind of stuck with it.
If I want more power I simply buy another GPU. For example, I recently bought an RTX 2060 and it is seven times faster than my old GTX 760. I use the 760 for display and render on the 2060: no lags, and I can do other work on my PC.
In my situation I will save some money, buy a new CPU/board, and maybe leave the GPU route.
I don’t think there is a definitive answer at all. The CPU’s main advantages are huge memory and sustained performance on complex scenes; the disadvantage is running roughly 20% slower on Windows builds.
The well-integrated engines in Blender right now are Cycles and Octane.
So when industry pros recommend CPU rendering, it is not only for huge scenes but also mostly for engines that are not available to Blender users, such as Clarisse, Arnold and V-Ray.
OTOH, I could imagine that a used 3000-series Threadripper could be good value once the 4000 line drops and you need the memory, at least for Arnold, V-Ray and Clarisse users.
If you have the money, Threadripper is a powerful CPU. If you can switch to Linux, it can really fly.
The only time I hit a wall with my 3970X was while baking a huge smoke sim with two other Blender projects, a web browser, a music app and video files open. You basically need to exhaust all your RAM and CPU resources to experience any hiccup. Other than that, you can render and work on another project at the same time; unless your second scene is another massive project, you can sometimes forget that you are rendering in the background.
The cost of the CPU is only one part; the whole platform is expensive. The second priciest item is RAM, and buying less than 64 GB for a Threadripper is a waste, IMHO.
It is probably easier to get a second, weaker GPU as your display GPU and let Cycles use your powerful one. That is cheaper than upgrading to a Threadripper system (or even to a new system) and can be done on most consumer motherboards.
Definitely, if you have the money, 32/64 cores are great for rendering and multitasking, and work great with multiple GPUs installed in a TR system.
There is a third advantage: the stability of Blender builds is better overall for CPU users (regressions that break Cycles features are a bit less likely when new stuff is added). GPU users often have to wait for follow-up commits before they can grab a usable build with a new feature or a performance boost. This is because CUDA/OptiX compute kernels are simply harder to code for (even though the situation has gotten a bit better over the years).
I haven’t had that happen except when testing limits or making mistakes. In those cases I either exceeded system memory (the machine became unresponsive) or exceeded GPU memory (the render stopped with an error).
I have a modest system compared to many BA members (two used 1070 Tis, 16 GB RAM, AMD FX-8320). Only very large or demanding scenes use much of my CPU or system RAM, apart from the initial preparation for GPU rendering (building the BVH etc.), so there is usually plenty of power left for other things.
If you often work with very demanding scenes, then the options I see are to use a render farm, upgrade your hardware, or make compromises so your scene is less demanding.