Rendering with GPU

Hi. I’m a computer scientist. I program problem-solving computations, like electrostatic potential calculations and other things, with CUDA on the GPU. Those things are beautiful and pretty fast. Right now I’m getting started in computer graphics, playing around and doing physics simulations with Blender. I’m using Blender 2.49 and an NVIDIA GTX 260 GPU. I would like to do the rendering on the GPU instead of on the CPU. Can someone give me some advice? Thank you.

Blender cannot use the GPU to render, and most likely never will, due to the specialized nature of GPUs.

And any external renderer, like LuxRender or Aqsis?

Unfortunately, the only GPU-based renderer I’m currently aware of is Nvidia’s Gelato.

Considering some of the things on the horizon (Fermi, Fusion, Larrabee) though, I think there’s going to be considerable growth in the high-performance stream computing market, and if OpenCL is reasonably successful, I fully anticipate more renderers (including Blender) taking advantage of the processing power available on GPUs. Still, the future remains a bit hazy; it doesn’t look like anyone’s ready to commit just yet, and I imagine any significant movement in that direction is at least a year out.

Thanks for the info. I can’t wait for OpenCL. It will be a great boon, especially for scientific computing. Now, can I use Gelato with Blender?

RenderAnts - Interactive REYES Rendering on GPU.

http://www.blendertorenderman.org/2009/10/renderants-interactive-reyes-rendering.html#links

It looks awesome. The paper looks good too; I need to read it a little more. But where can I find the files for download?

Specialised nature? So many people are using Blender for different things; someone may even try to implement it for a Summer of Code project.

I don’t think anyone would disagree that OpenCL is the way to go. And I suspect a lot of developers feel the same; as soon as 2.5 leaves the alpha/beta stage there’ll be some work on that too, if it isn’t already under way.

You could maybe contribute further on, since you’re already doing CUDA stuff?

Anyway, regarding GPU rendering, all I’ve seen is papers from SIGGRAPH, but V-Ray seems to have something going on.

http://www.cgarchitect.com/news/SIGGRAPH-2009-CHAOS-GROUP-GPU.shtml

Blender can export to V-Ray, but I suspect the GPU renderer is still heavily under development. Maybe if you talk to them?

He means the specialised nature of the GPU, I think. Until OpenCL arrived, CUDA could only be used with Nvidia hardware, and ATI has their own tech as well.
With OpenCL you can use any GPU, but because OpenCL is still new there aren’t any renderers released as of yet.

BUT! CUDA code and OpenCL code look pretty much the same apart from a few changes, so porting a CUDA renderer to OpenCL wouldn’t be TOO hard (see the sketch below).
And the LuxRender team is interested in looking at GPU computing in the near future.
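To give a feel for how similar the two are, here is a minimal sketch (not taken from any actual renderer) of the same trivial per-pixel pass written for both APIs; the device code differs mainly in the qualifiers and how the thread index is obtained:

```
// Minimal sketch for comparison only: the same gamma-correction pass
// in CUDA and in OpenCL C. Buffer and kernel names are invented here.

// CUDA version: __global__ qualifier, index built from blockIdx/threadIdx.
__global__ void gamma_cuda(float *pixels, int n, float inv_gamma)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pixels[i] = powf(pixels[i], inv_gamma);
}

// OpenCL C version of the same kernel, kept as a host-side string
// (OpenCL compiles kernel source at runtime). Only the qualifiers and
// the index lookup differ from the CUDA version above.
const char *gamma_opencl_src =
    "__kernel void gamma_cl(__global float *pixels, int n, float inv_gamma)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n)\n"
    "        pixels[i] = pow(pixels[i], inv_gamma);\n"
    "}\n";
```

The kernels really are that close; most of the porting effort goes into the host side (contexts, command queues, runtime compilation of the kernel source), which OpenCL makes more verbose than the CUDA runtime API.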

Just a semi-off-topic thought along the same lines: Nvidia could rake in “more” money if they were to develop a really good renderer using CUDA tech that was seamlessly integrated with Blender… there are millions of Blender users out there, and that is a market right there… just a thought, albeit not terribly original.

Indeed, it is just not very cross platform yet.

Millions? Are there really that many? Maybe it was just a figure of speech… Off topic, but how many users do you think there are? 50–200 thousand?

Users? I assume 2–3 million; most of us are just hobbyists or independents (indie) like myself. I don’t know for certain, but I think I remember Ton stating it had been downloaded nearly 4 million times. Not sure, so please correct me if someone knows better.

I would like to work on that, but I have other projects to do. Also, I’m more into scientific computing than computer graphics, so basically I use the GPU to solve physics models. I’m not very well trained in rendering and graphics; I would just like to do the rendering on the GPU so I could make movies faster and add more effects and details.

There has been considerable interest in GPGPU over the past few years. Nvidia has CUDA, AMD has Brook+/Stream. There is also OpenCL, a collective initiative among a whole slew of hardware and software vendors.

But porting whatever current code we have (physics simulator or renderer) will not be a small task. Instead of porting existing code to OpenCL, why not just enhance the existing GLSL render mode? The Blender GLSL view is already on par with the internal renderer for certain materials. Eventually, I see a hybrid renderer, where the bulk of the rendering is done in GLSL, the tough passes (ray tracing) are done on the CPU, and the results are combined to make the final image.

@mpan3: Erwin (from the Bullet library) already did some work with Bullet and OpenCL/CUDA, so a lot of things are already done.

At the moment any GPU acceleration would have to be platform and hardware specific.

I could certainly see someone coding up something like a CUDA AO pass, to give you fast ambient occlusion if you have the right hardware (rough sketch below).

i.e. something like this: specific-purpose hardware acceleration if you have the hardware - http://ssbump-generator.yolasite.com/
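Just to make that concrete, here is a rough sketch of what such a pass could look like. The buffer layout, parameter names and the simple 8-tap depth comparison are all invented for illustration (a proper AO pass against the actual scene geometry would need ray/BVH traversal), and of course it would only run on CUDA-capable hardware:

```
// Rough illustrative sketch only: a screen-space-style occlusion
// estimate, where each pixel compares its depth against a few nearby
// pixels. Buffer names, parameters and the sample pattern are made up.
__global__ void ao_pass(const float *depth, float *ao,
                        int width, int height, float radius, float bias)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    int idx = y * width + x;
    float d = depth[idx];

    // Fixed 8-tap neighbourhood; a production kernel would use a
    // randomised hemisphere of samples instead. radius is in pixels.
    const int offs[8][2] = { {1,0},{-1,0},{0,1},{0,-1},
                             {1,1},{-1,1},{1,-1},{-1,-1} };
    int step = (int)radius;
    float occlusion = 0.0f;

    for (int s = 0; s < 8; ++s) {
        int sx = min(max(x + offs[s][0] * step, 0), width - 1);
        int sy = min(max(y + offs[s][1] * step, 0), height - 1);
        float ds = depth[sy * width + sx];
        // A sample noticeably closer to the camera occludes this pixel.
        if (d - ds > bias)
            occlusion += 1.0f;
    }

    ao[idx] = 1.0f - occlusion / 8.0f;   // 1 = fully open, 0 = occluded
}
```

Launched over a 2D grid (say 16×16 blocks), that is exactly the kind of small, specific-purpose, hardware-gated feature I mean.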

@mpan3 - yeah, there have been some pretty impressive demos of what can be done in real time using Blender GLSL and the game engine. Some enhancements there and I can totally agree with your vision of compositing a fast GLSL pass with other slower/trickier passes.


I don’t think this is too far off the mark. Nvidia bought mental images (the makers of mental ray) in late 2007, so this must be an avenue they wish to explore; imagine running mental ray off a cluster of GPUs instead of a renderfarm. Both V-Ray and mental ray have been demoing GPU-based renderers lately.

As far as Blender is concerned, it’s a question of having a complete Render API so that one can write integrated exporters that work with the big three commercial renderers: V-Ray, PRMan and mental ray. There is a V-Ray script in the works for 2.5x, and MOSAIC is also going to be ported to 2.5x, so PRMan is taken care of, which leaves mental ray.