Help me DECIDE! renders TOO SLOW ---- Cycles? Vray? New Computer?

Honestly, forget GHz unless you're comparing exactly the same processor (i.e. a 3770 vs. an overclocked 3770K).

Look at real-world benchmark tests relevant to the type of work we need done (Cinebench, LuxMark, etc.). Also consider the overall cost and value of the build, not just the CPU's value. A slower CPU is only better value if it lets you afford another entire build or two to make up for the lost performance.

To get more than 54.4 GHz of new power, all you would need is 2 x 8-core AMD CPUs.

They are not really 8-core, but 4-core CPUs with two integer ALUs each and only one FPU each. For rendering, the FPUs are the most important. They do run 8 hardware threads, though (like Intel CPUs with Hyper-Threading), which does help.

If you want to compare performance, you need to take into account the relative performance of the cores from data like this and absolute data like this.

These old Xeons are based on the Pentium 4 line, which didn’t perform very well.
So here’s my estimation:

AMD 8320:
2x4 (cores) * 3.5 GHz * 0.9 (estimated relative performance) * 1.45 (estimated threading bonus) = 36.54
Xeon:
16 (cores) * 3.4 GHz * 0.5 (estimated relative performance) * 1.15 (estimated Hyper-Threading bonus) = 31.28

For these numbers, I've arbitrarily set the relative performance at 0.5 for the Xeon (based on the measured performance of 0.41 for the Pentium 4) and 0.9 for AMD (assuming that single-core performance did not improve over the Athlon II).
I've also estimated the bonus for two hardware threads per core at around 1.45, based on the improvement over the Athlon II.
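
For anyone who wants to plug in their own chips, the estimate above is just a product of four factors; a quick sketch (the coefficients are the guesses from above, not measured values):

```python
# Rough "effective GHz" estimate: cores x clock x per-clock efficiency x
# bonus for extra hardware threads. All coefficients are the estimates
# from the post above, not measurements.

def effective_perf(cores, clock_ghz, rel_perf, thread_bonus):
    return cores * clock_ghz * rel_perf * thread_bonus

# Two FX-8320s, counted as 2 x 4 "real" cores (one FPU per module):
amd = effective_perf(cores=2 * 4, clock_ghz=3.5, rel_perf=0.9, thread_bonus=1.45)
# Old P4-based Xeon system, 16 cores with Hyper-Threading:
xeon = effective_perf(cores=16, clock_ghz=3.4, rel_perf=0.5, thread_bonus=1.15)

print(f"AMD 8320 x2: {amd:.2f}")  # 36.54
print(f"Old Xeons:   {xeon:.2f}")  # 31.28
```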

It's also worth noting that modern CPUs support newer SIMD instruction sets, which may go unused for now but are nonetheless interesting for rendering.
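
If you want to see which of those instruction sets your own CPU advertises, on Linux you can read the flags out of /proc/cpuinfo; a minimal sketch (Linux only):

```python
# Check which SIMD extensions the CPU advertises (Linux only).
# Flag names follow /proc/cpuinfo conventions; fma4/xop are the AMD
# Bulldozer/Piledriver extensions, avx/fma the newer common ones.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

for ext in ("sse4_2", "avx", "fma", "fma4", "xop"):
    print(ext, "yes" if ext in flags else "no")
```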

Well, I have to disagree here. There's a reason companies are dumping these old servers: they are not power-efficient. Even if these Xeon blades were faster than comparable systems built on modern consumer hardware (which they aren't), they'd still draw 2-3x the power.
A node with the aforementioned CPU and 8 GB of RAM can be built for less than 300 EUR. Such a node requires no hard disk; you can just boot from USB pen drives.

My AMD 8120 8-core renders the Cycles benchmark just as fast as i7 CPUs,
and the newer 8320s and 8350s are about 10-15% faster clock for clock…
so… it's a good chip as far as Cycles goes.

In Mental Ray and the Indigo renderer, the AMD 8350 is faster than the $300 Intel i7 3770K.

Now get the $179 AMD FX-8320 and OC it to like 4.2 GHz,
past even the 8350's default 4 GHz… :wink: and you have savings. :smiley:

This sounds promising. Where can I get the 8350 in a pre-built barebones? { preferably with a GTX 580 for good measure } And SFF is a plus!

Well, it's a small scene: the Japanese army is attacking from the left, the US forces are coming in by convoy to the right… in the sky…

Just kidding! No, it's a medium-sized architectural rendering, but it's under NDA, so I can't post it, sorry.

{ aren't other people maxing out the GPU? I'm amazed if not }

OK, good advice, thanks. Yeah, I'm thinking
hardware is the only way now.

http://www.microcenter.com/site/products/amd_bundles.aspx

if you have one in your area…
Newegg for the rest, usually…

But right now, with the sales going on, Newegg is pretty close to Micro Center
once you include the sales tax that you will be charged with Micro Center in-store pickup
(especially if you do “combo deals” at Newegg).

Generally, when you buy stuff, try to always bundle it (combo deals) with another component that you are going to buy anyhow…
You end up saving lots of $ that way. It's a little more hassle, but worth it if you're on a budget.

For instance…

You are going to buy a mobo and CPU…
Find the mobo you want, then see if they have a CPU you want in a bundle deal; chances are they do…

Buying a case? Check if they've got a power supply that's a good bundle deal to get extra off… etc…

Cheap video card… see if you can bundle in some RAM or an OS
(if you want Windows, this saves you $10-$15 just by ordering them together).

Just did some numbers…

Five i7 3770K systems ($299 per CPU): $1,495 total
$56.86 in energy cost per month running full load 24/7

Five AMD 8320s OC'd to 4 GHz ($169-$179 each): $845-$895 total
$70.90 in energy cost per month running full load 24/7

Difference = $14.04/month.
Now, this is the WORST-case scenario, running at max 24/7
for a full month straight… The 8320 actually has slightly
better idle power efficiency, by 2 watts (basically a wash).

Total CPU price difference:

$1,495 - $875 = $620

Years of energy savings at 24/7 full-load
rendering needed to make up the difference:

over 3.5 years, at my current energy rate.
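
The break-even figure falls out of a two-line calculation; a sketch using exactly the numbers above (my local energy rate is already baked into the monthly costs):

```python
# Payback period for the cheaper-but-hungrier AMD CPUs, using the
# posted numbers (energy costs already include the local $/kWh rate).
intel_price, amd_price = 1495.0, 875.0    # five CPUs each, AMD at the ~$175 midpoint
intel_energy, amd_energy = 56.86, 70.90   # $/month at full load, 24/7

price_savings = intel_price - amd_price   # $620.00 saved up front with AMD
extra_energy = amd_energy - intel_energy  # $14.04/month extra for AMD
months = price_savings / extra_energy
print(f"break-even after ~{months:.0f} months (~{months / 12:.1f} years)")  # ~3.7 years
```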

@holyenigma:
That's not really a fair comparison. First of all, the 3770K does not have the best price/performance to begin with, being 50% more expensive than the 3570K but only 30% faster.
Secondly, you're overclocking the AMD CPU. The 3570K has been reported to be overclockable to around 4.5 GHz with just a good fan.

I do, however, agree that (for multicore rendering) the price/performance of the AMD CPUs is generally better, and that any power savings should be weighed against workload and hardware price.

The 3570K is only 4 cores; it's an i5, with no Hyper-Threading…

I use both; I have an i5 2500K sitting right next to me…
It's off at the moment.

I only OC'd the 8320 to the 8350's default speed, which is not much.
They are the same CPU.

Anyone love doing math and statistics? :eyebrowlift2: What is the best-value render-node CPU, considering everything and any current CPU? A rough starting sketch follows after the list:

-5 computers is the ideal
-over 3 years
-If overclocking, consider heat/fan requirements and the extra expense; stock coolers often don't allow overclocking at comfortable temperatures.
-When computing electricity cost, factor in overclocking, the overhead of PSU efficiency and the mobo, and the CPU's power usage versus its render time.
-Also factor in the entire build cost, with performance per dollar per number of units; the CPU is only roughly 50% of the build cost. If a CPU performs worse, it must save you enough to buy another entire setup or two to recover.
-display benchmark results
-any other factors
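
Nobody has run the full model here, but the skeleton is simple; a rough starting sketch (every input below is a placeholder, not a measurement; fill in real benchmark scores, build prices, duty cycle, and your electricity rate):

```python
# Sketch of a render-node value model over the whole farm's life.
# ALL numbers below are hypothetical placeholders, not measurements.

def points_per_dollar(build_cost, bench_score, watts_load, duty_cycle,
                      years=3.0, rate_per_kwh=0.12, units=5):
    """Benchmark points per dollar: hardware plus electricity over `years`."""
    hours_on = years * 365 * 24 * duty_cycle
    energy_cost = watts_load / 1000.0 * hours_on * rate_per_kwh
    total_cost = units * (build_cost + energy_cost)
    return units * bench_score / total_cost

# Hypothetical example: OC'd AMD node vs. stock Intel node, 30% duty cycle.
amd = points_per_dollar(build_cost=350, bench_score=6.9, watts_load=200, duty_cycle=0.3)
intel = points_per_dollar(build_cost=500, bench_score=7.5, watts_load=130, duty_cycle=0.3)
print(f"AMD:   {amd:.5f} points/$")
print(f"Intel: {intel:.5f} points/$")
```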

Just did the numbers again; they didn't look right. The chart I was using must have been loading the “full system” (including video cards), and I just wanted to calculate the CPU difference.

Intel: $31.82/month in energy cost.

AMD: $40.25/month in energy cost.

Difference: $8.43/month at full load, 24/7.

Over 5.5 years at full load to make up the difference in CPU cost.

Charts, reviews, and graphs over here…

http://www.xtremesystems.org/forums/showthread.php?283627-AMD-FX-quot-Vishera-quot-reviews-info-(again-after-mod-mistake)

Are you using tiles when rendering on the GPU? I had a scene that was using over 7 GB of RAM, but it fit on my 1 GB GTX 460, and I can do whatever I want without lag, even play video games.

Yes, I'm using tiles. Are you saying that's bad?

Fair enough, but you have to consider some other things.
I doubt the usual Blender home user will render 24/7, so it doesn't make that much of a difference in power consumption. You have to do a cost-benefit analysis.

And what I meant by quantity over performance, you already stated.

A Ci7 3970X costs 1,100 USD and is the fastest chip, besides some Xeons of the same generation.
An FX8320 costs 160 USD, and the Ci7 is around 60% faster.

So, ignoring the infrastructure, I can get six FX8320s for the same price, which is roughly 3.75x the raw compute of the single Ci7.

Quantity over maximum speed.
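
Spelled out, with one FX8320 as the unit of performance (the 60% figure is from above):

```python
# Quantity over speed: six cheap chips vs. one fast chip.
fx_price, ci7_price = 160.0, 1100.0
ci7_speed = 1.6                    # one Ci7 3970X ~ 1.6x one FX8320 (from above)

n_fx = int(ci7_price // fx_price)  # 6 FX8320s for the money of one Ci7
farm_speed = n_fx * 1.0            # 6.0x a single FX8320 in aggregate
print(f"{n_fx} x FX8320 ~ {farm_speed / ci7_speed:.2f}x one Ci7 3970X")  # ~3.75x
```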
But I think you didn't disagree with that, rather with my statement regarding old blade centers as a whole…

Another thing is that while OC'ing is nice, the power consumption grows superlinearly with clock speed (IIRC roughly with frequency times voltage squared), not linearly.
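
The usual first-order model for dynamic CPU power is P ≈ C·V²·f, and since a stable overclock also needs more core voltage, power climbs much faster than the clock; a toy illustration (the voltage/frequency pairs are invented for illustration only):

```python
# First-order CMOS dynamic power model: P ~ C * V^2 * f.
# The voltage/frequency pairs below are invented, not measured.
def dynamic_power(v_core, freq_ghz, c=35.0):  # c folds in capacitance + activity
    return c * v_core ** 2 * freq_ghz

stock = dynamic_power(v_core=1.30, freq_ghz=3.5)
oc = dynamic_power(v_core=1.45, freq_ghz=4.2)
print(f"clock +{(4.2 / 3.5 - 1) * 100:.0f}%, power +{(oc / stock - 1) * 100:.0f}%")
# clock +20%, power +49%
```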

Still, the green-IT aspect aside, I am convinced that there is a point where old blade servers for a render farm will pay off compared with “consumer” chips. It might not apply here; I'm too busy to do the math, or not bored enough… :smiley:
Although I think the best option today would be to get cheap-ass boards, stack them with FX8320s (which have the best price:performance ratio), and throw them together in an old cupboard :wink: I'd not even invest in pen drives; I'd netboot Linux from a virtualization server.
You just have to do the math to stay economical with the infrastructure.

And I have no idea how a blade center would perform against 6x FX8320, but my gut tells me it might look grim for the Xeons :smiley:

From my knowledge, tiles help a lot with RAM… How many tiles are you using? Turn it up as high as it will go XD, and try not to blow anything up.
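
For what it's worth, you can also set the tile size from a script; a minimal sketch (property names as of the Blender 2.6x Python API, and it assumes a CUDA/OpenCL device is already enabled in User Preferences):

```python
import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'  # assumes a compute device is set in User Preferences
scene.render.tile_x = 256    # larger tiles generally suit GPU rendering,
scene.render.tile_y = 256    # smaller ones (e.g. 32x32) suit CPU rendering
```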

Concerning the Vishera vs. 3770K thing:
1.) If you bring in the 8320, you can just as well take the 3770 and safely overclock it to 3.9 GHz (which is the max for it). On Newegg both CPUs have the same price; in Europe there is a 30-40€ difference.
2.) You can get insanely cheap boards for the 3770 (at least in Europe), whereas AM3+ boards start at around 30€ more.
3.) Depending on your needs, you get built-in graphics with Intel. If you don't have a spare card lying around, that's extra money to spend on the AMD rig. But if you are doing GPGPU, you will most likely need a dedicated card anyway. But then maybe an Intel Xeon E3-1230 V2?
4.) It might be wise to ask the devs here what they predict for FMA speedups (Vishera has it, Ivy does not).

I would personally go with Yafaray. It's similar to vray in many aspects and can be fast too, if set up correctly. It's a great raytracer; it only lacks a few features, like SSS, a pass system (you can get passes with Blender Internal using the same scene anyway), and displacement (again, you can do that with Blender modifiers). But in terms of speed it is very decent, and it is CPU-based. If you don't have a good GPU, Yafaray is the way to go, and you don't need to buy anything. Personally, I would buy vray only if I needed those fancy features like cool SSS, a great pass system, fast displacement and such, but in most projects I can live without them. Look at Yafaray's website; you'll be impressed by the quality of some of the renders. It even has raytraced volumetrics…

If you use vray with Blender, there are also a few more aspects to it:

  • The vray version used for Blender (the Maya standalone) obviously also works with Maya, and comes with ten(!) render node licenses. Other renderers make you pay for each additional render node.
  • vray is a de facto industry standard, especially for arch and product viz. Even if you're not buying it, developing skills for it with the trial isn't a bad idea.
  • With Blender you can't use any of the vray material libraries; you have to build your own.
  • The exporter for Blender is good, but doesn't include every feature vray has. After all, bdancer develops the exporter as a one-man show.

Yafaray is good advice to start with. If it ain't enough, vray is still up for purchase :smiley:

It doesn't really matter. Numbers matter. The 3570K has better price/performance than the 3770K, especially when you overclock it. You were simply making an unfair comparison by pitting the overclocked AMD chip against the stock Intel chip with the bad price/performance.
And no matter how often you repeat it: calling these AMD chips 8-cores is total marketing bullshit. Also, it's worth noting that for single-core applications, the Intel chips are easily 50% faster at the same clock rate, so for a workstation I would still go with Intel.

To be fair, I'd actually hoped the Xeons would come out ahead. I didn't know you could get such systems so “inexpensively”; that's why I did the math, since it seemed interesting. There are also systems with older Opterons which would likely outperform the old Xeons. Those might actually have slightly better price/performance than a modern system.