AMD Zambezi/Bulldozer - big fail?

Well, if anyone thinks it's more appropriate in Tech Support, report it for moving; personally I think news regarding a CPU fits Blender and CG Discussion, as the CPU is a vital component.

I think by now many have noticed I am more than a little tech-savvy and try to stay objective regarding hardware, and I really would have wanted AMD to have a huge success with the Zambezi processors, the first batch of CPUs based on their Bulldozer architecture.
I mention this because only a few know that the processor is called Zambezi while the architecture is called Bulldozer.

AMD's flagship is the FX8150.

I saw the first benchmarks I trust after the NDA fell and the launch happened today, and to sum it up:

The FX8150 is an octa-core CPU that draws 222W with Turbo and is currently listed at 224-249 Euro, not available in stores yet.

The Ci5-2500K is a quad-core CPU that draws 147W and is listed at 175 Euro, and has been available for a year.

Most benchmarks are for games at this point and the Ci5 is usually 10-20% faster - and it doesn't even have SMT (simultaneous multithreading) and runs with 4 threads, not 8.
In software that supports many-core CPUs the Zambezi might score.

Sorry AMD, that's not how you do it. It just remains to hope that the Zambezis get cheaper fast, or that Zambezi 2 is better. It seems the history of the Phenom 1 fail followed by the Phenom 2 win repeats itself.
Just this time Intel is already over a year ahead, and in Q1 next year the new Ivy Bridge octa-cores should hit the market.

I have yet to see Cinebench scores for the CPU, but with the information I have today I find Zambezi highly unattractive.

It seems my next machine will have an Intel CPU and an AMD graphics card, unless the Nvidia 600 series really has no OpenGL issues as rumor has it, or the HD7000 series has some odd issues. But currently I am more sympathetic to AMD graphics card products as they seem to be the "rounder package".

Cinebench scores:

So it costs more, is more expensive to run as it consumes about 75W more electricity, and it runs your games slower. Way to shoot yourself in the foot. AMD seriously needs a win.

@arexma dude you are like the most seriously hardcore tech head I know, I always include your posts in any research I do if I am about to look for a GPU or such. :smiley:

Why does everyone think games are the only thing processors are made for? :stuck_out_tongue:

For most regular PC users games are about the only apps that can stress a modern PC; email, Word, Excel and surfing the net are not going to cut it.

For Blender heads it maybe comes down to things like rendering or running simulations, but people who use 3D apps or other CPU-intensive apps don't tend to make up a large % of the population.

But hardcore gamers are plentiful, so you run benchmarks for that crowd.

Games create the need for faster CPUs much sooner than scientific applications do, and thus drive the development.
If you have to do scientific calculations you buy a bleeding-edge, insanely expensive machine and use it as long as possible.

The average gamer upgrades his CPU as soon as the games don't run "fast enough" anymore.

In the private sector there are really 3 markets that need CPUs with more than 2 cores:

  1. The gamers, who generate revenue through quantity.
  2. The bleeding-edge scientific applications, which need top-of-the-crop CPUs - usually from Intel, that's not even up for discussion - and generate revenue through high prices.
  3. The small group of users who need a CPU for raytracing or compiling: they need a fast CPU, but not a high-end CPU… with a good price:performance ratio.

In the industrial sector, you buy 100 pre-assembled machines and trust your supplier to do the right thing, or your IT department to order the right thing. You aren't interested in benchmarks.
So for the small end-user community, game, raytracer and GCC benchmarks are what is interesting.

So:
For group 1) a Ci5 is the best choice. Cheap, fast, low power consumption.
For group 2) either a server CPU like the Xeon, or a hexa-, octa- or deca-core CPU from Intel is the weapon of choice; these are insanely expensive and AMD doesn't even have a product to compete with them.
For group 3) a Ci7 is the best choice. "Cheap", with almost as much power as the high-end hexa-cores from Intel.

The alternative for 1) and 3) is the Phenom II X6. It's not very fast in games, the Ci5 wins, but it's a tad faster than the Ci5 in raytracing. It's cheaper but not by much, uses more power and is just 3 dual-core CPUs patched together.

@bleethob:
Thanks for the Cinebench Benchmarks. My worst fears seem confirmed. :frowning:
How is it possible to develop a native octa-core for over 4 years that's almost outperformed by a quad-core with SMT… at reference clocks, that is.
The Bulldozer architecture was announced ~4 years ago IIRC, it's a year late and can barely compete with the first Ci7, not to mention the Sandy Bridge generation. And while Intel is about to pop Ivy Bridge, AMD loses more and more ground.

Personally I think they should focus on their Llano platform. Their general-purpose chips with CPU+GPU in one might be very attractive to a lot of gamers as a console replacement. If they get game studios to use them as reference hardware, you could release games and say: runs on low settings with this Llano, normal with that Llano, and for high settings you need that Llano.
It might bring game development back to the PC again, so we don't get console ports en masse anymore.

They should take the dudes from the Bulldozer project, split them across Llano, Radeon and driver development, and produce 2 good products instead of 3 at-best-average ones.

It's sad because a little competition in this area would be healthy. I'm very happy with my Intel CPU, but I've a feeling that I might have gotten it a little cheaper if there had been a viable alternative. The GPU sector is probably just as frustrating but that's another story. Ultimately, monopolies are not good for consumers and two-company industries aren't much better!

I’m looking to upgrade due to my LGA 775 based motherboard breaking on me.

I’m going to go with AMD this time around to see what they can offer.

I was thinking of getting either a 1090T or a 1100T, but the FX 8120 (8MB, 3.1GHz) is a mere ~£10 more.

I understand single-threaded application performance is abysmal, but I'll be mainly using my system for sculpting (it uses partial multi-threading, right?), heavy visual effects work, i.e. baking simulations, rendering massive amounts of smoke etcetera, and video encoding/decoding, which I'd imagine would favour a CPU that is faster at multi-threaded tasks.

So yeah, do you guys think it would be worth it? I know the current Phenom II series is a very capable line of processors, especially the X6 line, but surely there must be some benefits to the new FX line.

EDIT: I could have an FX 8120, Asus EVO 990X, and 8GB of DDR3 1600MHz for £315, which sounds like a reasonable price for a new base set-up.

So how fast does Cycles run on it? :rolleyes:

Probably slightly faster than the Core i5 2500 and slower than the 2600.

Usually Tom's Hardware has a 3ds Max render and Photoshop test on their site, which is more in tune with what we are doing. I don't know how many cores either Max or PS is aware of, though.

They even did a Blender benchmark. Although I’m not sure which version they used.

Enjoy.

Cheers!

I was puzzled as to how the 'new' octa-core got blown away by the sextacore in the Max test, then I read down and found that it was just edged out by the Intel chip in good old Blender!

So that does look like poor old Max can’t see 8 cores, especially as PS pretty much backed up the Blender result. Looks like I need a new AMD processor!

EDIT: They used 2.59. Also, on the video test, they used BBB :smiley:

EDIT 2: arexma, why on Earth would you want an AMD graphics chip? Especially with Cycles and all the driver issues with the former ATI cards.

Although the Phenom II X6 seems a lot more sexy than the Zambezi, it's really HEXAcore, not SEXtacore :stuck_out_tongue: SCRN :smiley:
But you're right too, for some reason they just chose to go with Greek, not Latin…

It's not always about cores. Not everything is multithreadable, and some things aren't any faster with more threads, so single-thread performance still counts.
It's shown impressively here:


A single thread on a Ci7-2600 will run twice as fast as on an FX8150.
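
A quick back-of-the-envelope sketch of why that matters (Amdahl's law; the 50% parallel fraction is just an assumed example, not a measured value):

```python
def speedup(parallel_fraction, cores):
    # Amdahl's law: the serial part never gets faster, no matter how many cores you add
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# assume a program where only half of the work can be threaded
for cores in (1, 2, 4, 8):
    print(f"{cores} cores -> {speedup(0.5, cores):.2f}x")
# 8 cores only give ~1.78x here, so per-core speed still dominates
```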

John Fruehe, director of product marketing for server products at AMD, says he doesn’t like the performance per core comparison on the server side because it knowingly favors Intel.

Server WTF? Sorry John, you're ignorant. It seems AMD didn't notice that a majority of tools are still either single- or double-threaded.
Even the games that support more than 4 cores can be counted on one hand.
If someone knows "Related Design" and their work for Anno 1701 and Anno 2070 (I think the international title of the game is different), you know they are evil geniuses. Their engine supports as many threads as the machine can handle, but like they say, at some point it's not logical to split it up further. You've got a few basic threads a game needs and that's that; the more threads you create, the more time you lose in the bottleneck of synching the threads.
Obviously all these things have to happen "at the same time". It doesn't help if you have one thread for the weather and one thread for the pathfinding of your units and the weather thread is done 3 times faster than the pathfinding thread: all the other threads have to wait for the slowest thread. Always. And the more threads you have, the higher the chance that everything slows down drastically because of the bottleneck of synching the threads.
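
To make that synchronization bottleneck concrete, here's a minimal Python sketch (the task names and timings are just made up, it's not taken from any real engine):

```python
import threading
import time

def worker(name, duration, barrier):
    time.sleep(duration)   # stand-in for the real work (weather, pathfinding, ...)
    barrier.wait()         # every thread stalls here until the slowest one arrives
    print(f"{name} released after the whole frame step")

# hypothetical per-frame tasks with unequal workloads (seconds)
tasks = [("weather", 0.1), ("AI", 0.2), ("pathfinding", 0.3)]
barrier = threading.Barrier(len(tasks))

start = time.perf_counter()
threads = [threading.Thread(target=worker, args=(name, dur, barrier)) for name, dur in tasks]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"frame step took {time.perf_counter() - start:.2f}s - the slowest task, not the average")
```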
That said, many-core CPUs are not the holy grail, especially for gaming, which AMD's new FX chips want to target.
Many-core CPUs are the solution, though, for a task that is splittable into many threads that can run independently, like rendering (see the sketch at the end of this post). No one cares if chunk A of an image is rendered faster than chunk Y. If a thread is done, a new one is started.
Sadly, in this discipline Bulldozer also fails.
And in tasks that only need a single core, it fails too.
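
To contrast that with a render-style workload, here's a minimal sketch of the tile-pool idea (the tile count and the render_tile work are placeholders, not real Blender/Cycles code):

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def render_tile(index):
    # placeholder for the actual rendering work on one image chunk
    return index, sum(i * i for i in range(200_000))

if __name__ == "__main__":
    # tiles are independent, so completion order doesn't matter;
    # as soon as a core finishes one tile it simply grabs the next
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(render_tile, i) for i in range(64)]
        for future in as_completed(futures):
            index, _ = future.result()
            print(f"tile {index} done")
```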

Just yesterday I may have already bought one. If it's still available and I was the first to mail, I now have a used HD6970 for 140 bucks, and Cycles will be OpenCL at some point. But that aside, with an objective view:

AMD cards' upsides:

  • offer a lot more VRAM for a cheap price -> huge textures for the Blender viewport, high AF and AA levels for games; try to get a GTX 500 series card with 2GB+ VRAM. Also when sculpting and using VBOs you need a lot of VRAM, because ideally the geometry sits as display lists in the video memory, so you don't need to transfer data from the device to the host - which means synching the graphics card memory with the bus clock and moving data between RAM<>VRAM, and that is abysmally slow compared to pure VRAM manipulation.
  • offer a lot more power for AA in high resolutions -> 1080p gaming with 32x SSAA? No problem.
  • offer superior OpenGL performance compared to Nvidia; the downside is that the OpenGL drivers are sometimes buggy.
  • offer superior GFLOPS in single and double precision. Nvidia cards are only (slightly) faster in GPGPU tasks because of CUDA, which is highly optimized for their cards; it's nothing OpenCL can compete with yet, they simply lack the development power.
  • Eyefinity seems a lot better than nView. While Nvidia has supported multiple displays for ages now, it still causes all sorts of strange problems, and the Linux support isn't that grand either, but that's the fault of XFree86 and X.Org.

and the downsides:

  • not the best Linux support; doesn't matter, I'm running Windows 90% of the time
  • flickering AF; doesn't matter for Blending, is an issue when gaming, and it's getting better with each driver version.
  • no CUDA; I'll still have a GeForce for CUDA in my machine, I don't mind running 2 high-end graphics cards if it suits my needs.
  • no 3D Vision; for stereo 3D projects I have a dedicated machine in the studio. Usually it's only needed at the end of a project when setting up the stereo cameras, as Blender doesn't support a realtime stereo 3D viewport. It could, but it's a feature Nvidia made sure to lock out of the driver unless you buy a Quadro. (Did I mention "Nvidia my ass" lately?)

You can't generalize and say Radeon or GeForce is better; the questions always are: what is better for MY needs? Which downsides can I live with?

All I can say is: Sceptical interest over diffuse rejection!

Can you tell us what we can expect in the next 2 years from Intel in terms of CPU speed, memory and threading?

I've got a dual-core with an Intel G31/G33 chipset, which is not as fast as with the best GE card, but I don't really mind for now.

But I would like to get something a little faster later, with more memory, to be able to get speed in Blender while editing and for rendering, and all that at minimal cost if possible!

Thanks for your feedback, very interesting.

Intel develops their processors using their tick-tock model. A tock is a new micro-architecture; we are currently on a tock with the new Sandy Bridge. The next release, Ivy Bridge, will be a tick, so it's not a new micro-architecture: they will shrink the die to 22nm and increase the transistor density, so it will be faster and more energy efficient, but it will be a smaller, more refined version of the Sandy Bridge architecture.

Haswell will be the new micro-architecture, but those chips don't hit stores until 2013. Then you will get another tock and then Skylake, but these chips are years away from being released.

Unless Arexma works for Intel, I don't see how he can tell you more than you can find out if you just Googled it yourself. And even if he worked for Intel, I doubt they would want their engineers flapping their gums to every Tom, Dick and Harry about things they are working on.

Well, for Blender I find it quite interesting that Intel is still pushing; using this 22nm should almost double the speed again,
and that will help everybody, and Blender should be much faster.
Hopefully all the other GPUs will also be made with this 22nm technology and give us another speed increase too.

I don't have any problem waiting another year to get this new technology, and it will be interesting to see it at work!

So Blender will be more powerful than ever and more fun to use.
thanks

A die shrink won't lead to a doubling of speed.

So in other words for my new machine I should stick with my plans to pick up the i7-2600k, eh? :slight_smile: I was kind of hoping AMD would sweep in with a beastmode CPU too.

You should do it quickly. Since yesterday the i7 has already risen 10% in price.
Many were waiting to see if Zambezi would offer competition; now those who waited are buying Intel, and naturally they raise the price due to the huge demand.