NVIDIA's Ferni

Hi everyone,
What do you think of NVIDIA's next chip, which was just announced?

Have a look here: http://www.nvidia.com/object/fermi_architecture.html

Have a nice day, everyone

Almux

EDIT: It’s FERMI… actually, not ferNi… Sorry! :wink:

yup, it’s looking reallllly goooooood.

drooooooooooooooooooooooooooooooool

that's all I have to say. :slight_smile:

One thing that is more than interesting for all of us is that this GT300 will be fully C++ compliant, which means it could be used for general-purpose computing in any application, like Blender for rendering. No more adapting code to get CUDA working like before; this is really good news.
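
Just to make the idea concrete, here's a rough sketch (my own made-up example, not anything from NVIDIA's docs or Blender) of the kind of C++-style device code a fully C++-capable GPU is supposed to allow in CUDA: real classes with operator overloading, and templated kernels.

```cpp
#include <cuda_runtime.h>

// A small value type usable on both host and device.
struct Vec3 {
    float x, y, z;
    __host__ __device__ Vec3(float a = 0, float b = 0, float c = 0)
        : x(a), y(b), z(c) {}
    __host__ __device__ Vec3 operator+(const Vec3& o) const {
        return Vec3(x + o.x, y + o.y, z + o.z);
    }
};

// A templated kernel: the same code works for float, Vec3, or anything
// with an operator+, which plain C-style CUDA makes awkward to express.
template <typename T>
__global__ void addKernel(const T* a, const T* b, T* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
    if (i < n) out[i] = a[i] + b[i];
}

// Host-side launch would look something like:
//   addKernel<Vec3><<<(n + 255) / 256, 256>>>(d_a, d_b, d_out, n);
```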

By the way, the purpose of Intel's Larrabee project is precisely to be able to compute anything you want and to render 3D games like any other conventional GPU (the path is the exact opposite, but the end goal is very much the same).

I would personally go with Larrabee. Running an optimised Blender on a Linux kernel customised for Larrabee may end up being faster than on the GT300.
Developers would know better about it though. I've never compiled a Linux kernel, so I can't say.

Well, for now we don't know much about the GT300's and Larrabee's performance, so I suggest we wait and see what happens when both are out. But this won't change anything for me, because I don't have money for a new high-end GPU (still on an old 7600GT lol).

They'll be in the mainstream market in 2011, I guess? I think there's plenty of time for me to do some work and save money for that upgrade.

@EBrain & ankit_pruthi
Yes, it's for the future for me as well. I shan't make any big hardware upgrade till 2012 or so… But hints like these make me feel good about the next generation! :wink:

Well, I don't know 'bout Larrabee, but supposedly the GT300's shaders are going to be clocked at 1.6 GHz and there will be 512 of those cores.

that's quite a lot, I'd say, like 256 (edit, sorry, 512 without the hyper-threading!) instances of my netbook (Atom based). :slight_smile:
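
Back-of-the-envelope, assuming the rumoured numbers and one fused multiply-add per core per clock: 512 cores × 2 FLOPs × 1.6 GHz ≈ 1.6 TFLOPS single precision, versus maybe a few GFLOPS for a 1.6 GHz Atom. So the "hundreds of netbooks" comparison is only about raw arithmetic throughput, not about how fast any single thread runs.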

Shaders have a different architecture than an Intel Atom processor. Each shader can only carry out one specific calculation at a time (correct me if I am wrong though). I doubt a single one will be faster than an Intel Atom.

But overall, 512 shaders is definitely a lot.
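
As far as I understand it (again, correct me if I'm wrong), those cores are scheduled in groups of 32 threads ("warps") that run in lockstep, so they're nothing like 512 independent Atom cores. A made-up little kernel to show what that means in practice:

```cpp
#include <cuda_runtime.h>

// Threads in the same warp share one instruction stream. If they take
// different branches, the hardware runs both paths one after the other,
// with the non-matching threads sitting idle. Per-thread speed is low,
// and the 512 cores only pay off with lots of uniform, parallel work.
__global__ void absOrDouble(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (in[i] > 0.0f)
        out[i] = in[i] * 2.0f;   // some threads of a warp run this...
    else
        out[i] = -in[i];         // ...while the rest wait, then they swap
}
```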

I also find Larrabee more interesting and promising.

You can't just compare core counts; the cores are very different in their abilities.

Even if Fermi is fully C++ compliant (which I kind of doubt), it doesn't mean that C++ code that isn't optimized for Fermi will run very fast.
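
For example (a toy sketch, not from any real codebase): both of these kernels are perfectly valid C++ and compute the same sums, but the one where neighbouring threads read neighbouring addresses can be many times faster on this kind of hardware. That's exactly the sort of thing "unoptimized" code gets wrong.

```cpp
#include <cuda_runtime.h>

// Same matrix, two access patterns. In the first kernel, neighbouring
// threads read addresses that are 'cols' floats apart (uncoalesced); in
// the second, they read adjacent addresses (coalesced), which the memory
// system handles far better. Both are valid C++; only one uses the GPU well.
__global__ void sumRows(const float* m, float* out, int rows, int cols) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per row
    if (r >= rows) return;
    float s = 0.0f;
    for (int c = 0; c < cols; ++c)
        s += m[r * cols + c];                        // strided across threads
    out[r] = s;
}

__global__ void sumCols(const float* m, float* out, int rows, int cols) {
    int c = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per column
    if (c >= cols) return;
    float s = 0.0f;
    for (int r = 0; r < rows; ++r)
        s += m[r * cols + c];                        // contiguous across threads
    out[c] = s;
}
```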

Until we see numbers for either Larrabee or Fermi, it's completely speculative which will do better in which situations. It will be interesting, to say the least, to see how these two cards perform.

Most important, I think, is to see this tremendous push forward in hardware evolution.
It's very exciting and promising.
:wink:

I read some more articles about Fermi and it seems to be closer to Larrabee than I thought. It supports things like branching, exception handling and recursion: features that are very important for ray tracing and physics calculations.
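
To give an idea of why recursion matters there, here's a toy sketch (the Ray struct and the shading are placeholders I made up, not real renderer code): a reflected ray is just the same function calling itself, which pre-Fermi CUDA forced you to rewrite as an explicit loop with your own stack.

```cpp
#include <cuda_runtime.h>

struct Ray { float3 origin, dir; };   // placeholder ray type

// Toy recursive tracer: each bounce is just another call to trace().
// Device-side recursion like this needs Fermi-class (compute 2.x) hardware.
__device__ float3 trace(const Ray& r, int depth) {
    if (depth == 0)
        return make_float3(0.f, 0.f, 0.f);       // stop bouncing

    // ...intersect the scene and compute local shading here (omitted)...
    Ray bounced = r;                              // placeholder for the reflected ray
    float3 indirect = trace(bounced, depth - 1);  // the recursive call

    return indirect;   // a real renderer would add local + indirect light
}
```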

So far, CUDA and the like have been a hugely over-hyped disappointment, but now it looks like things will finally get interesting once Fermi and Larrabee are out.

I could see it as soon as five years from now: thousands of people with supercomputers on a chip in their towers, making today's computers look like antiques.

Personally, I would like to see Blender use OpenCL before adding full support for that chip; it's best to wait until CG artists, game designers and animators buy it en masse.

Not to hijack this thread, but since I don't want to create a new one when it's somewhat related, I'll just post here. I saw this news item a few minutes ago:

I've always been an NVIDIA user (great Linux and OpenGL support), but I'm a little curious about ATI now with their effort to open up their hardware and work with the open source community (albeit a little slowly). And now there's this talk about accelerated Bullet physics, just when Blender will (finally!) get it outside the GE. If only ATI would treat Linux and OpenGL as first-class citizens across the board, and not only on their ridiculous "FirePro" cards…