Can Blender use an NVIDIA Quadro FX 2700M 48-core CUDA 512MB?


I was thinking about getting a laptop that has a “NVIDIA Quadro FX 2700M 48-core CUDA parallel computing processor 512MB”.

Can Blender even use a video card like this for rendering and processing, or is this card only useful for other 3D/CAD apps?

And if not now, is there going to be a use for a card like this in 2.5?

Thanks in advance.

No, Blender will only make use of the 3D acceleration for the viewport. You won't really get any benefit from it, though, as the difference between Quadro and GeForce cards is mostly a matter of driver and BIOS switches (features being enabled or disabled, etc.).

I'd go for a laptop with more RAM, a faster CPU, a larger screen, and a standard GeForce 9800 card (or higher).

Also, I don't think Blender will be using CUDA cards for a long time. I think the Blender Foundation wants to keep Blender hardware-independent (well, to a certain degree), so having support for CUDA would be bad for ATI users, etc.

That's why they could and should use OpenCL.

There are precisely 0 products currently released using OpenCL, and nothing better than early beta drivers from either ATI or Nvidia. Shouldn’t we wait for an API to prove itself in the market, or at least show some signs of adoption before telling the devs they should use it?

What I meant was: if they EVER use the GPU for rendering, they should use an open system like OpenCL instead of CUDA (which only works on NVIDIA cards).

I had a quick question: can you put a workstation card and a desktop card in the same system and run both, or will there be some kind of driver conflict? I am looking to get a GTX 295 single PCB and, much later, a Quadro FX 3800, and run them at the same time. It would be even greater if I could run PhysX on the Quadro, but I know I am really dreaming.

What I am hoping … maybe expecting … to see is a number of “GPU-based render nodes.” That strikes me as the simplest and most-practical way to begin to integrate what the GPU can do, into what Blender can do.

It’s pretty obvious (a) that GPUs have a whole lot of exploitable power that would be extremely useful in “non-game” situations, but (b) that the GPU does not work like the software renderer does. It’s a completely different sort of beast. Therefore, there will never be a “drop-in replacement” for the software component. But, I think, there definitely could be a set of nodes.

Maybe also, in some cases, a “use hardware acceleration when possible” checkbox for some of the existing render node types…

Hi, I’m new here, so please pardon my dumb questions.

The NVIDIA Tesla card has 240 cores, according to their web site. Does that mean that, if Blender implemented CUDA, you could have the equivalent of a 240-node render farm in a single box? The Tesla goes for around $1,500, but compared to the time and expense of setting up even a poor-man's render farm, that seems like a steal.

And knowing whether that’s possible would certainly make a difference to me now, since I’m shopping for a motherboard.
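To put some rough numbers on the "render farm in a box" idea, here is a back-of-envelope sketch. The Tesla price and core count come from the post above; the farm price and core count are made-up illustrative assumptions, not figures from this thread. As the next reply points out, a stream processor is a very different beast from a CPU core, so this only shows the break-even point on price, not actual performance:

```python
# Back-of-envelope check on the "240-node render farm in a box" idea.
# Tesla figures are from the post above; the farm figures below are
# ASSUMED, illustrative numbers only.

TESLA_PRICE = 1500.0        # USD, quoted in the post
TESLA_STREAM_PROCS = 240    # per NVIDIA's web site

FARM_PRICE = 3200.0         # assumption: 8 cheap nodes at ~$400 each
FARM_CPU_CORES = 32         # assumption: 4 cores per node

usd_per_sp = TESLA_PRICE / TESLA_STREAM_PROCS    # cost per stream processor
usd_per_cpu_core = FARM_PRICE / FARM_CPU_CORES   # cost per CPU core

# For the Tesla to merely match the farm on price/performance, each
# stream processor would only need this fraction of a CPU core's
# throughput -- whether it delivers even that depends entirely on how
# well the renderer maps onto the GPU.
break_even_fraction = usd_per_sp / usd_per_cpu_core

print(f"${usd_per_sp:.2f} per stream processor")
print(f"${usd_per_cpu_core:.2f} per CPU core")
print(f"break-even throughput ratio: {break_even_fraction:.4f}")
```

So with these assumed farm numbers, each stream processor would only need about 6% of a CPU core's throughput for the Tesla to break even on price, but that says nothing about real rendering speed.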

There is no evidence that the Blender Foundation has even the slightest interest in implementing any CUDA acceleration, and more to the point, they have spoken out against any sort of vendor or platform specific API. As was mentioned earlier in this thread, there might be some vague hope for OpenCL acceleration in the future, but if that happens (and it’s really only a remote possibility), it won’t happen for quite some time. Likely not for years.

It’s also worth noting that each of those cores (or stream processors) is a very different beast than a general purpose CPU, and it would be difficult to speculate on what kind of performance differences we’d see. A lot of it would depend on implementation, and that all remains in the dim and hazy future.
