NVIDIA Quadro with Blender

Hi all,

On my new workstation I have an NVIDIA Quadro FX 570 GPU loaded with the latest Blender-certified driver downloaded from the NVIDIA site. The question is…is there any other particular setting or option I have to adjust (maybe from the NVIDIA control panel) or turn on to benefit from the GPU?

I want to know your best tips and tricks :smiley: (and please don’t just tell me to buy another GPU…I know there are a lot of better cards).

Thanks

Bye

The only advice I can give you is to turn off anti-aliasing; if it is forced on in the driver, it will slow down your Blender 3D view.
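
If you want to double-check that Blender is actually picking up the Quadro driver rather than a generic one, a quick sanity check (sketched here for Blender builds whose Python API exposes the bgl wrapper; the module name differs in the old 2.4x API) is to print the OpenGL strings from Blender's Python console:

```python
# Minimal sketch, assuming a Blender build that exposes the bgl wrapper module.
# Run from Blender's interactive Python console.
import bgl

# These report the driver/renderer the viewport is actually using,
# e.g. something like "Quadro FX 570/PCI/SSE2" when the Quadro driver is active.
print("Vendor:  ", bgl.glGetString(bgl.GL_VENDOR))
print("Renderer:", bgl.glGetString(bgl.GL_RENDERER))
print("Version: ", bgl.glGetString(bgl.GL_VERSION))
```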

As I told you in your other thread, there are no significant benefits to the FX570. The FX570 is a low-end Quadro based on the G82GL GPU, and you would have been better off with any G92-based 8800/9800 for a few bucks more…

However, with the Quadro driver installed you've done all you can. The only advantages you'll have are a slightly higher framerate on massive amounts of shaded polygons and the fast vRAM of the Quadro for huge texture processing, although the FX570 lacks performance there because it only has 256 MiB of vRAM on a 128-bit memory bus.
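
To put the "128-bit" point in perspective: peak memory bandwidth is roughly bus width times effective memory clock. A back-of-the-envelope sketch in Python, where the clock figures are illustrative assumptions rather than spec-sheet values:

```python
# Rough peak-bandwidth estimate: bus width (bits) x effective memory clock.
# The clock figures below are illustrative assumptions, not quoted specs.
def peak_bandwidth_gbps(bus_bits, effective_mem_clock_mhz):
    return bus_bits / 8 * effective_mem_clock_mhz * 1e6 / 1e9  # GB/s

print(peak_bandwidth_gbps(128, 800))   # ~12.8 GB/s, a 128-bit card like the FX570
print(peak_bandwidth_gbps(256, 1940))  # ~62 GB/s, a 256-bit GeForce of that era
```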

The only real advantage of the FX570 is in low-end CAD workstations, where the certified line anti-aliasing comes in handy.

How about a Quadro FX 1700? Same as above?

By the way, here are my Blender benchmark results; I ran the benchmark that you can find on the Big Buck Bunny dev blog.

Benchmark Results, Screen Size 1680 x 1003

Overall Score (FPS)
gl : 25.6770 fps
render : 5.40 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 58.9040 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 59.8393 fps
Spin solid view, 1000 monkes : 19.5566 fps
Spin wire view, 1000 monkes : 42.6703 fps
OpenGL image load & free, 256x256 px : 229.3683 fps
OpenGL image load & free, 512x512 px : 102.4964 fps
OpenGL image load & free, 1024x1024 px : 26.1428 fps
OpenGL image load & free, 2048x2048 px : 6.6252 fps
Raytracing with AO and area light, 8 threads : 4.23 sec
Shadowbuffer light, 1 threads : 6.57 sec

Those are my results, not tweaked, my computer as is: with another instance of Blender running and all the other crap running in the background, using the default 32-bit Blender build from blender.org on XP64 with dual monitors. I'll boot into Linux later and post the results there too.

You can see the slight advantage of the FX on massive amounts of polys, like the 1000 monkeys.
But I am using an old 8800GTS based on the G82, which now costs about 70 Euro.


Quad core 2 GHz, 4 GiB RAM, 8800GTS (G82)
Benchmark Results, Screen Size 1600 x 1139

Overall Score (FPS)
gl : 18.5016 fps
render : 4.74 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 58.7615 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 49.2203 fps
Spin solid view, 1000 monkes : 7.8636 fps
Spin wire view, 1000 monkes : 14.5081 fps
OpenGL image load & free, 256x256 px : 264.4240 fps
OpenGL image load & free, 512x512 px : 110.3977 fps
OpenGL image load & free, 1024x1024 px : 26.8958 fps
OpenGL image load & free, 2048x2048 px : 6.7140 fps
Raytracing with AO and area light, 1 threads : 3.81 sec
Shadowbuffer light, 1 threads : 5.67 sec


Quad core 2 GHz, 4 GB RAM, NVIDIA Quadro FX 1700
Benchmark Results, Screen Size 1600 x 1200

Overall Score (FPS)
gl : 35.3917 fps
render : 6.25 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 134.3369 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 71.9394 fps
Spin solid view, 1000 monkes : 11.2121 fps
Spin wire view, 1000 monkes : 30.7800 fps
OpenGL image load & free, 256x256 px : 1001.2901 fps
OpenGL image load & free, 512x512 px : 267.5519 fps
OpenGL image load & free, 1024x1024 px : 62.5344 fps
OpenGL image load & free, 2048x2048 px : 16.0537 fps
Raytracing with AO and area light, 8 threads : 5.13 sec
Shadowbuffer light, 8 threads : 7.36 sec

And here is an overview of the Quadro cards:

IMO a Quadro is good for sculptors in Blender dealing with millions of polys… but I'd get an FX1700 or higher, which is out of the question for hobby users.

For now, the only advantage of having a Quadro is in the higher frame rates that can be achieved in the viewport.
But unfortunately, Blender does not use your video card during the rendering process. So, I guess you will have to wait until Blender gets OpenGL rendering in order to take full advantage of your Quadro.
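
To make the distinction concrete: the normal render path runs entirely on the CPU, while an OpenGL render just rasterises what the viewport shows on the graphics card. A rough sketch, assuming a newer bpy API that exposes both operators (the 2.4x Python API current when this thread was written is different), run from Blender's Python console:

```python
# Sketch only; assumes a newer bpy build where both operators are available.
import time
import bpy

t0 = time.time()
bpy.ops.render.render()          # regular render of the current scene, done on the CPU
print("CPU render    : %.2f sec" % (time.time() - t0))

t0 = time.time()
bpy.ops.render.opengl()          # OpenGL "viewport" render, rasterised by the GPU
print("OpenGL render : %.2f sec" % (time.time() - t0))
```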

Not quite sure why there is such a big discrepancy between the Linux and Win XP numbers for the Quadro. All of these were done on the same system.

In Windows with Nvidia 7900GS:

Benchmark Results, Screen Size 1680 x 994

Overall Score (FPS)
gl : 11.6646 fps
render : 14.44 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 59.4629 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 28.9802 fps
Spin solid view, 1000 monkes : 4.9116 fps
Spin wire view, 1000 monkes : 11.3153 fps
OpenGL image load & free, 256x256 px : 225.1920 fps
OpenGL image load & free, 512x512 px : 60.3309 fps
OpenGL image load & free, 1024x1024 px : 15.4588 fps
OpenGL image load & free, 2048x2048 px : 3.8936 fps
Raytracing with AO and area light, 2 threads : 10.02 sec
Shadowbuffer light, 1 threads : 18.86 sec

In Windows with Quadro FX 1400:

Benchmark Results, Screen Size 1680 x 994

Overall Score (FPS)
gl : 17.1026 fps
render : 13.52 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 57.9400 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 59.9152 fps
Spin solid view, 1000 monkes : 15.7021 fps
Spin wire view, 1000 monkes : 36.4647 fps
OpenGL image load & free, 256x256 px : 116.6394 fps
OpenGL image load & free, 512x512 px : 61.7605 fps
OpenGL image load & free, 1024x1024 px : 15.6409 fps
OpenGL image load & free, 2048x2048 px : 3.9369 fps
Raytracing with AO and area light, 4 threads : 10.05 sec
Shadowbuffer light, 1 threads : 16.98 sec

In Linux with Quadro FX 1400:

Benchmark Results, Screen Size 1679 x 997

Overall Score (FPS)
gl : 31.0141 fps
render : 10.25 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 88.2150 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 40.9331 fps
Spin solid view, 1000 monkes : 9.6315 fps
Spin wire view, 1000 monkes : 23.1193 fps
OpenGL image load & free, 256x256 px : 1016.2590 fps
OpenGL image load & free, 512x512 px : 275.9919 fps
OpenGL image load & free, 1024x1024 px : 72.1298 fps
OpenGL image load & free, 2048x2048 px : 17.6585 fps
Raytracing with AO and area light, 4 threads : 6.82 sec
Shadowbuffer light, 8 threads : 13.69 sec

EDIT: Just remembered that Linux was 64-bit. That may account for some of the discrepancy. I wonder how these would stack up against 64-bit Vista…

EDIT 2: Adding the 7900GS under Linux scores:

Benchmark Results, Screen Size 1679 x 997

Overall Score (FPS)
gl : 27.1008 fps
render : 10.38 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 70.2829 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 37.6870 fps
Spin solid view, 1000 monkes : 8.3709 fps
Spin wire view, 1000 monkes : 17.9447 fps
OpenGL image load & free, 256x256 px : 979.5427 fps
OpenGL image load & free, 512x512 px : 262.9856 fps
OpenGL image load & free, 1024x1024 px : 67.7989 fps
OpenGL image load & free, 2048x2048 px : 16.7587 fps
Raytracing with AO and area light, 2 threads : 6.82 sec
Shadowbuffer light, 8 threads : 13.94 sec


C2Q 9550 2.88 GHz, 4 GiB, 8800GTS (320/G82)
Debian 64-bit, Blender 2.47 x64 from blender.org

Benchmark Results, Screen Size 1590 x 1140

Overall Score (FPS)
gl : 44.4455 fps
render : 2.97 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 116.5934 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 68.8369 fps
Spin solid view, 1000 monkes : 14.4971 fps
Spin wire view, 1000 monkes : 32.1316 fps
OpenGL image load & free, 256x256 px : 1739.4368 fps
OpenGL image load & free, 512x512 px : 472.8161 fps
OpenGL image load & free, 1024x1024 px : 101.4941 fps
OpenGL image load & free, 2048x2048 px : 22.6002 fps
Raytracing with AO and area light, 4 threads : 2.36 sec
Shadowbuffer light, 2 threads : 3.58 sec

Wow. I am only on 3 GB and your processor is faster, but I wonder why Linux seems to be sooo much more efficient. I should probably install my 7900GS under Linux and see what numbers I get.

Ultimately I wonder what the best bang for the buck is. Arexma, you are obviously somewhat biased towards the GeForce cards on value. If the 8800GTS is so powerful under Linux, I wonder whether your best bet per dollar (not counting the cost of the OS) is an 8xxx or greater under Linux versus any Quadro card, unless you get into the multi-thousand-dollar Quadros. I guess what I am wondering is what limits you would hit with a GeForce card under Linux that a reasonably similar Quadro card (I mean similar architecture first, then maybe price, assuming 2 to 3 times the cost) wouldn't have a problem with. Say a $150 GeForce card versus a $500 Quadro card.

I know there are optimized Quadro drivers for Maya et al., but I'd like the best solution for Blender. And people always throw out soft modding, but 1) that doesn't really apply in Linux at all as far as I know, and 2) it hasn't really been an effective option since the GeForce 6800. Since then, PCI-E cards have had a chip that identifies the card which you can't hack around, and what hacks I have seen have been minimally effective at best.

Also, if anyone has a geforce and quadro card and a copy of Vista 64 I’d like to see a comparison.

Luckily I only paid $100 for my quadro 1400.

This is really turning into an interesting thread :smiley:

First off, I wrote G82 many times… I meant G84… G84 and G92, I always mix those up ^^

Well, "biased towards GeForce" is an exaggeration. I am an opportunistic performance and bargain bitch ^^

My policy towards hardware is simple and has (almost) never failed me:
Never buy something new.
Upgrade only once a year unless some vital component dies.
Always wait for others to try new hardware.
Understand the function of new hardware features.
Get the best cost-performance ratio.

It's easy, like I stated in some other threads already… ATI, nVidia, Intel, AMD… I don't care. My computer has to be a rock-solid workstation I can rely on, not fainting at a few more polygons or shutting down in summer due to massive overclocking ^^.
I have used AMD processors since the K6-200, but this year I bought a 45nm C2Q… why? Best cost-performance ratio and less power consumption. I had two ATI cards, both died, and they still have trouble with OpenGL, so the choice of the moment is nVidia for me. The 48xx series from ATI is surely faster than current nVidia cards, but they are not that stable and lack performance at high resolutions with high AA and AF. Granted, they have an interesting architecture… in contrast to nVidia's unified shader ALUs, the ATI cards are more comparable to a many-core CPU. It's comparable to what happens when you run a dual-core-optimized program on a quad-core… you get 50/50/50/50% CPU load. However, that's not the point.

For Linux I have not tried it yet, but it's somewhat easy in Windows: there is the NVStrap driver, which hooks in between the card's BIOS and the driver API and overrides the card identifier, so that Windows, and more importantly the graphics card driver, thinks the card is a Quadro. The Quadro cards use the same GPUs as the “mainstream” and “enthusiast” cards… it's like Volkswagen… the Skoda Octavia and the Audi A4 (if I remember right) use almost the same engine, produced by VW, just with different quality.

Bottom line:

The FX3700 uses a G92 with 512 MB of 256-bit RAM and costs 700 EUR (1025 USD ex VAT).
Core: 600 MHz, Mem: 900 MHz

The 8800GTS (G92) uses a G92 with 512 MB of 256-bit RAM and costs 185 EUR (270 USD ex VAT).
Core: 650 MHz, Mem: 970 MHz

The 8800GT uses a G92 with 512 MB of 256-bit RAM and costs 100 EUR (146 USD ex VAT).
Core: 600 MHz, Mem: 900 MHz

The FX370, 570 and 1700 use the G84GL, which is more or less a G84 with some extensions unlocked, but the same chip.
The FX3700 and FX4700 X2 both use the G92, which is exactly the same chip as in the 8800GT and the GTS refresh…

So, especially for OpenGL, you are better off with the non-Quadro.
The Quadro scores where it supports absolutely insane resolutions on multi-monitor CAD workstations…
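
Just to re-run the arithmetic on the prices quoted above (a quick sketch; the figures are the ones from this post, EUR ex VAT):

```python
# Cost comparison of the three G92 boards listed above.
cards = {
    "Quadro FX3700": {"price_eur": 700, "core_mhz": 600, "mem_mhz": 900},
    "8800GTS (G92)": {"price_eur": 185, "core_mhz": 650, "mem_mhz": 970},
    "8800GT":        {"price_eur": 100, "core_mhz": 600, "mem_mhz": 900},
}

baseline = cards["8800GT"]["price_eur"]
for name, c in cards.items():
    ratio = c["price_eur"] / baseline
    print("%-14s %4d EUR  (%.1fx the 8800GT, core %d MHz, mem %d MHz)"
          % (name, c["price_eur"], ratio, c["core_mhz"], c["mem_mhz"]))
```

Same chip, wildly different price.
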
The initial advantage of the Quadro was:

“The Quadro2 offered the caveat of unifying both its framebuffers to appear as one (TwinView, now known as nView), a feat formerly only possible via the InfiniteReality SGI pipeline of the then-current Onyx graphics workstations (which could unify a maximum of 16 pipes to work as one).”

ATM I would look for a used 8800GTS 640 or 320 based on the G84, which should be about 60-80 Euro, if you don't mind the high temperature and power consumption.

Otherwise I would look for an 8800GTS (G92), an 8800GT or a 9800GTX: more than enough power for the Blender viewport, and the cards are new at ~160-200 Euro.

Maybe I am totally wrong and there are other significant benefits of a Quadro for Blender, but I don't see them, and especially none that justify the price.

So can you say something about the difference it makes when running a GF8800 with normal drivers compared to running it with these NVStrap drivers? Does it change anything concerning performance?

Yes sir, I can.
First of all, to point you in the right direction: the NVStrap driver comes with RivaTuner, a highly recommended tool.

Basically the Quadro gains its advantage from the enhanced features used by the driver.
So the GPU is the same; the “normal” driver just doesn't use the Quadro features, while the Quadro driver does.

Installing the Quadro driver with the “normal” video BIOS results in a “no matching hardware found” error.

So you either hook in with the NVStrap driver, which changes the PCI identifier, or get a Quadro BIOS matching your GPU and flash it over the “normal” BIOS, so the card identifies as a Quadro without having to patch the PCI identifier.

Then you can install the Quadro driver, and it uses all the features a Quadro uses, including the application-optimized driver profiles.

However, a softmodded or hardmodded normal card does not reach the performance of a real Quadro, due to some PCB and architectural differences. For a real CAD, post-production, VFX or other machine for commercial or industrial usage I'd recommend the Quadro, but for Blender the performance of the modded “normal” card is close enough to a Quadro; besides, a Quadro gives no real advantage at all in Blender anyway.

The advantage of the NVStrap driver is that you can install Windows twice. In one installation you use NVStrap so the card runs as a Quadro for working, and the other installation you leave as is for gaming, because the Quadro lacks performance for games…
Constantly reflashing the BIOS and reinstalling the drivers isn't really an option.

Last but not least, I have to say the last card I softmodded to a Quadro was my GF6800, because the 8800 satisfies my needs without a mod, and I am not quite sure whether nVidia has built any lockout mechanisms into the 88xx and 98xx series yet to prevent softmodding. But it's either not possible yet or it is already possible… at no point impossible ^^

That’s interesting, I didn’t know that. Thanks for the info.

I guess I’ll keep it like you do; I like my GF8800GTS 640MB and don’t have a need to change anything right now… :slight_smile:

I just added my 7900GS under 64-bit Ubuntu numbers to my post above, though I will add them here too:
Benchmark Results, Screen Size 1679 x 997

Overall Score (FPS)
gl : 27.1008 fps
render : 10.38 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 70.2829 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 37.6870 fps
Spin solid view, 1000 monkes : 8.3709 fps
Spin wire view, 1000 monkes : 17.9447 fps
OpenGL image load & free, 256x256 px : 979.5427 fps
OpenGL image load & free, 512x512 px : 262.9856 fps
OpenGL image load & free, 1024x1024 px : 67.7989 fps
OpenGL image load & free, 2048x2048 px : 16.7587 fps
Raytracing with AO and area light, 2 threads : 6.82 sec
Shadowbuffer light, 8 threads : 13.94 sec

In my case the 7900GS still performs notably below my Quadro card. I have a Quadro FX 570 in my Lenovo laptop, but that is a C2D at around 2 GHz I believe, so it wouldn’t really be apples to apples. Given Arexma’s scores with the 8800, that would certainly be cheaper than almost any Quadro card. If anyone has other cards I’d love to see their results under Windows and/or Linux. At this point, I think if you can find a cheap second-hand Quadro it’s not the worst idea in the world. I only bought mine because I was playing around in Maya and was having viewport issues with my 7900 that the Quadro took care of. I have never heard of any viewport problems with Blender and GeForces, so I am really only interested in speed.

Considering the cheapest 8800GTS on Newegg is about $160, I think if I were buying a new card today my best bet would be a GeForce card running under Linux.

I would LOVE it if someone would disagree with me and provide numbers. Mind you, we are limiting this to Blender performance and results, so I don’t care what benefits you get under Maya, or how an ATI gets huge numbers but currently still has “issues” with Blender.

I would also be open to hearing about any issues running Blender with a GeForce versus a Quadro that aren’t simply user error. I have never heard anyone mention issues with a GeForce and Blender that weren’t user error or one buggy driver.

Now I am even more confused. I just did a fresh install today on my laptop of 32-bit Vista Home Basic and 64-bit Ubuntu Studio. The laptop has 3 GB of RAM, a 7200 RPM drive, a 2.1 GHz C2D processor, and an nVidia Quadro 570M. The results:

On Windows Vista:

Benchmark Results, Screen Size 1680 x 1003

Overall Score (FPS)
gl : 19.7407 fps
render : 11.23 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 59.3547 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 56.2603 fps
Spin solid view, 1000 monkes : 14.6613 fps
Spin wire view, 1000 monkes : 29.8032 fps
OpenGL image load & free, 256x256 px : 129.4174 fps
OpenGL image load & free, 512x512 px : 83.1146 fps
OpenGL image load & free, 1024x1024 px : 19.8179 fps
OpenGL image load & free, 2048x2048 px : 5.0339 fps
Raytracing with AO and area light, 1 threads : 9.46 sec
Shadowbuffer light, 1 threads : 13.00 sec

On Ubuntu Studio 64:

Benchmark Results, Screen Size 1679 x 997

Overall Score (FPS)
gl : 49.4657 fps
render : 8.02 sec

Spin wireframe view, subsurf monkey, 4 subsurf levels : 210.6315 fps
Spin solid view, subsurf monkey, 4 subsurf levels : 107.0658 fps
Spin solid view, 1000 monkes : 19.6616 fps
Spin wire view, 1000 monkes : 41.8965 fps
OpenGL image load & free, 256x256 px : 1121.7116 fps
OpenGL image load & free, 512x512 px : 319.7755 fps
OpenGL image load & free, 1024x1024 px : 70.8256 fps
OpenGL image load & free, 2048x2048 px : 18.2570 fps
Raytracing with AO and area light, 1 threads : 6.33 sec
Shadowbuffer light, 8 threads : 9.71 sec

The screen size is a little different in Linux, I notice, but Linux blows the doors off Vista. Additionally, granted it is somewhat apples to oranges, but in some areas it also beats the 8800GTS in Arexma’s benchmarks. And the 570 is a relatively cheap card. Maybe a cheap Quadro on Linux is the best bang for the buck…
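
Putting rough numbers on "blows the doors off", using the two runs above:

```python
# Speed-up of the Ubuntu run over the Vista run, from the figures posted above.
vista  = {"gl_fps": 19.7407, "render_sec": 11.23}
ubuntu = {"gl_fps": 49.4657, "render_sec": 8.02}

print("Viewport (gl): %.1fx faster" % (ubuntu["gl_fps"] / vista["gl_fps"]))          # ~2.5x
print("Render time  : %.1fx faster" % (vista["render_sec"] / ubuntu["render_sec"]))  # ~1.4x
```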

That exhausts all of my video cards and systems (that are worth anything), so I don’t have any other numbers to add. Apologies to anyone who is tired of hearing about this… :wink:

EDIT: Looking at some specs, it would appear that the Quadro 570M has higher memory bandwidth than the desktop 570. The mobile version uses DDR3 as opposed to the DDR2 on the desktop version. The lowest desktop board with DDR3 is the 3700, which retails for around $800 USD. If anyone has a 570 desktop version I wouldn’t mind seeing some numbers; otherwise I may stand corrected and be back to GeForce on Linux.