Graphics Card

OK, I’ve got a GeForce2 graphics card, and I want to upgrade; my Blender work is getting complicated enough that I’m killing my GeForce2. I’m considering an nVidia GeForce FX 5900 XT. Is this a good card to get, or would I be better off with an ATI? I don’t want to spend much more than $180-$190 on a graphics card.
Thanks

You’re better off staying away from ATI and sticking with nVidia.

Martin

Right now there are the FX 5200 and the 5700, both well within your price range at about $80-120 USD, maybe less. They’re really good all-around cards. nVidia also has a new architecture, the GeForce 6800, which they say is 3x as fast as their last cards, but the budget model is $300 and it’s not released yet.

I think you couldn’t go wrong with the FX series right now, but if you’re hardcore you can save up and get the 6800 when it’s released. It should have been out already; they’re saying within the next few weeks.

I’m looking more at modeling than games, and I’ve wondered if a Quadro wouldn’t be good too. It’s not so much that I don’t have the money, I just don’t want to spend too much on a graphics card.

I have a simple GeForce4 MX 440, and it works fine for Blender. The graphics card doesn’t affect render speed, only interface speed. The 5200 and 5700 are good all-around cards. Anything from the MX 420 up should run Blender just fine. PS: Does any Blender user have a 6800 yet? Comments on it if you do!

I have an nVidia GeForce2 MX 400, and it seems to work fine. Whenever I get slowed down, it seems to be my 525 MHz Pentium III, not my graphics card, that is causing the slowdown.

I’ve got a 1.8 GHz Athlon XP, so that’s not what’s slowing me down. It’s just that I’ve been working on some rather complex models, and my GeForce2 doesn’t cut the mustard when it comes to swinging the models around, even in wireframe; it’s worse in solid mode.

My Velociraptor has 6884 vertices and it runs fine on my MX 440. How many vertices are you talking about? I’ll run a test on mine if you like.

I believe we are having a slight misunderstanding. Does the graphics card affect the speed of the computer’s rendering of the wireframe in the 3D window, or does the processor do most of that? Does anyone have a definitive answer? I would think it would be mostly the processor. From what I understand about computers, the graphics card mostly does fancy visual effects. For this application, I would think a deficit in speed would come from the processor’s inability to process the vertices’ coordinates fast enough.

As far as I know, and I find this quite disappointing, Blender has no way to exploit the power of the graphics card to do, or even to assist with, rendering.

A Quadro card is simply the consumer-level card with more MHz and memory. All the functionality is there on the consumer card; it’s just been disabled through a device ID. This can be overridden by changing the bootstrap driver with RivaTuner (available at http://www.guru3d.com/rivatuner/). After applying this patch, your card will support antialiased lines and other ‘professional’ features.

Two things, though… First, Blender doesn’t support the professional features, so there’s no gain from a Quadro card. Second, applying the patch can slow down some games by a few FPS. I have it enabled for use with Maya, but it doesn’t make a difference in Blender.

I would highly recommend the GeForce FX 5700 over the 5200. The 5200 actually underperforms my GeForce4 Ti 4400 (modded to a Quadro4 750 XGL, BTW). Its only selling point is the increased number of pixel shaders, which have no bearing on Blender’s performance.

CG is all a tradeoff between speed and quality. A full raytracer gives realistic, high-quality effects like reflections, refractions, GI, focal/motion blur, etc, etc. Blender, however, (initially) took shortcuts and approximations (scanline renderer, envmaps, buffer shadows, etc), which is why it’s so much faster than e.g. YafRay (though it’s now getting raytracing features built into it, which of course slow it down).
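To make the tradeoff concrete, here’s a toy sketch (plain Python, not Blender code) of the mirror-reflection direction a raytracer has to compute per bounce, per hit point. A scanline renderer with envmaps skips this recursive tracing and just looks the same direction up in a pre-rendered environment image, which is exactly the kind of shortcut described above.

```python
# Toy illustration of the standard mirror-reflection formula:
#   r = d - 2 (d . n) n
# where d is the incoming ray direction and n is the unit surface normal.
# A raytracer fires a new ray along r (recursively); an envmap renderer
# just samples a pre-baked texture in direction r instead.

def reflect(d, n):
    """Reflect incoming direction d about unit normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray travelling straight down onto a floor whose normal points
# straight up bounces straight back up.
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0)
```

The formula itself is cheap; the cost of true raytraced reflection comes from tracing the *new* ray against the whole scene, possibly many bounces deep, which is what the scanline shortcuts avoid.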

However, your standard realtime game has no need for dynamic GI, or a lot of that other stuff, but it needs to be fast (a game that takes 2-3hr to render a frame would not sell well ;)) and graphics cards are designed to give the fastest result they can, and any extra realism that can be added in the GPU-time remaining is basically a bonus.
Every possible shortcut is taken. Reflections are often pre-calculated with only active objects rendered and composited in. Objects are low-poly with normal maps. Faces are removed left, right and center with backface culling, adaptive decimation with distance, etc, etc.
Admittedly they’re getting better, and I think some of the newer cards even do proper ray reflection/refraction now, but it’s far from perfect.
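As a tiny sketch of the backface-culling shortcut mentioned above (a hypothetical helper, not code from any real engine): a face whose normal points away from the camera can be skipped before it’s ever rasterized, which is a single dot product per face.

```python
# Toy backface-culling test: if the face normal points the same general
# direction the camera is looking, the face is turned away from us and
# can be culled (skipped) entirely.

def is_backface(normal, view_dir):
    """True if the face normal points away from the camera (cull it)."""
    dot = sum(n * v for n, v in zip(normal, view_dir))
    return dot >= 0.0  # facing the same way the camera looks -> back face

view = (0.0, 0.0, -1.0)                      # camera looking down -Z
print(is_backface((0.0, 0.0, 1.0), view))    # faces the camera -> False
print(is_backface((0.0, 0.0, -1.0), view))   # faces away      -> True
```

For a closed mesh this throws away roughly half the faces for the cost of one multiply-add per face, which is why every realtime engine does it.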

Now, I’m not saying this is a bad thing, but you would hardly be able to get a photorealistic image out of it. And that’s where Blender’s renderer seems (to me) to be heading at the moment: fewer shortcuts, more raytracing. Every release there seems to be some new toy that makes stunning renders and takes forever to calculate.

Personally I’d like to see hardware-assisted options in blender, but I don’t think it’s going to happen. Not for a while at least.

Woah, I had no idea I’d typed that much. Sorry :stuck_out_tongue: