GPGPU for rendering in Blender

I am not a coder, but a friend who is mentioned a few sites to me. The basic idea is to use the GPU to do work the CPU would normally do, since the GPU is much faster at it.

One part that I thought could be useful was:
“But as we started to learn about all the research efforts under way, it became clear that there were not just a handful of GPGPU applications, but a whole smorgasbord. Physics acceleration, global illumination, protein folding, neural nets and SQL queries can all be accelerated on the GPU.”
found at: ATOMIC

Would it be possible to edit the renderer in Blender to make use of the GPU, or even just something like the fluid simulator?

Another link at Atomic was: http://www.atomicmpc.com.au/article.asp?CIID=59427

There is a site (www.gpgpu.org) that has details specifically on this subject, as well as some tutorials for developers (under the developers section), which include a code example for a fluid dynamics simulation found on the NVIDIA site.

I think it would be cool if Blender had the ability to use the extra processing power of a PCI-E card.

MicWit

Yes, it is possible, but Blender right now is really cross-platform and independent of any particular hardware (mostly).

So the idea is that right now you don’t need a specific video card to use Blender. It would be really, really hard to make the renderer use all the features of all the different video cards, and to have it work on older systems as well.

Of course you’re free to make a version that works on your computer, but the chance that it will make it to the official release is very small.

Well, Blender can use plugin renderers, so in theory you could take the Blender internal code, rewrite it for a specific GPU, and release it as a plugin à la YafRay. Of course you’d need a plugin renderer for each family of cards out there, but there’s no reason I’m aware of why it can’t be done.

It’s interesting. The cross-platform compatibility of Blender is a very good thing.
However, if it worked well, with significant improvements in speed, then access to an inexpensive, screaming-fast render box would be very attractive, even, gasp… if it were a Windows machine.

At first I think it would be cool to just have a renderer for NVIDIA and a renderer for ATI, as I believe that most people with PCI-E cards would have one or the other.

I noticed on one page that OpenGL was mentioned; does it support more than just graphics output? If OpenGL could be used as the interface between the program (in this case Blender) and the card, we should only need the one renderer for all OpenGL cards, shouldn’t we? It would be more cross-platform capable then as well.

MicWit

Even if the GPU were used for just one pass, like the AO pass or something, it would be fantastic.

I also saw ages ago (dunno what it was) a renderer that refined its radiosity/AO renders over a period of time, making them better and better. Anyone know what that was? I’d love to know.

Alltaken

IPR rendering as used in Sunflow
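
The trick behind those progressive renderers is just a running average: every pass renders one more noisy GI/AO sample per pixel and blends it into what is already on screen, so the image keeps refining for as long as you let it run. Here is a rough sketch of the accumulation step as a CUDA-style kernel (made-up names, nothing to do with Sunflow’s actual code, just to show how small the “refining” part is):

```
#include <cuda_runtime.h>

// One pass of a progressive renderer: blend the newest per-pixel sample
// into a running average so the image converges as passes accumulate.
__global__ void accumulate_pass(float3 *accum, const float3 *fresh,
                                int num_pixels, int pass_index)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= num_pixels) return;

    // Incremental mean: avg_n = avg_(n-1) + (x_n - avg_(n-1)) / n
    float inv_n = 1.0f / (float)(pass_index + 1);
    accum[i].x += (fresh[i].x - accum[i].x) * inv_n;
    accum[i].y += (fresh[i].y - accum[i].y) * inv_n;
    accum[i].z += (fresh[i].z - accum[i].z) * inv_n;
}

// Host side, every frame (hypothetical helper names):
//   render_one_sample<<<blocks, threads>>>(fresh, ...);            // one noisy AO/GI pass
//   accumulate_pass<<<blocks, threads>>>(accum, fresh, w * h, pass++);
//   display(accum);                                                // show the current average
```

The renderer never finishes in the usual sense; you just stop it once the noise has averaged out enough.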

This is possible.

NVIDIA Gelato already does this. Blender has a Gelato plugin that’s under construction. Gelato now has free and commercial versions.
http://blenderartists.org/forum/showpost.php?p=615935&postcount=11

I am sure MarioAMB could use help. Do also search the Gelato forums on the NVIDIA website; there’s a Blender thread.

Yeah, but it relies purely on NVIDIA hardware, plus it’s an external solution. An integrated one for any graphics card in Blender would be a boon.

Well, personally I think it is possible to achieve this (rendering on the GPU) using almost any modern GPU, not only NVIDIA’s.

Please take a look at the “RTsquare” project, which is now releasing the free version of its GPU renderer:

http://www.gputech.com/Cms/index.php?option=content&task=view&id=282&Itemid=256

“RTsquare is the only gpu-based renderer that allows you to gain in speed by 10 to 50 compared to other renderers”

"RTSquare will use your GPU to boost your rendering speed. You just have to own a graphic board compatible with Pixel Shaders 2.0 (almost all GPUs available for the past 3 years).

In both free and commercial licenses, you will find improved global illumination, motion blur, area lights, blurry reflections, render elements, a new more intuitive interface, fast shadows and real time camera preview. Your animations will have the same high quality as fixed rendered images.

Philippe Biarnaix, founder of GPU-Tech believes that GPU based applications are the way of the future. Developement of the graphic chipsets for GPU programming by companies like ATI are a sign of this evolution which allow to harness the enormous computing power of GPU’s. Making GPU-based applications is therefore in line with our company’s objective to provide quality products that evolve with technological advances in the IT sector."

Yes, indeed, GPU-based applications (and renderers) are the way of the future.

Also, nVidia is developing new applications using GPU power.
http://news.developer.nvidia.com/2006/12/nvidia_cuda_rev.html

P.S.

http://www.gputech.com/Cms/index.php?option=content&task=blogcategory&id=133&Itemid=266

"The RTSquare offer is composed of three versions:

  • A Standalone version is available and is designed with export plugins for the most important 3D creation studios but also import plugins for standards like 3DSMAX and FBX.

  • A Portfolio of Plugins is also available for the most important 3D creation studios: plugins for 3DSMAX and VIZ are available, plugins for MAYA, Lightwave, XSI… are on going. These plugins allow you to take advantage of the RTSquare rendering technolgy while enabling you to work inside your habitual creation studio.

  • A SDK version (software development kit) enabling the integration of RTSquare rendering technology within your own applications has been developed and is right now on the shelf for any IT developer. Due to the compatibility with DirectX, it is very simple to integrate the RTSquare rendering into your own 3D application."

So, through a plugin or a full integration of this GPU renderer, it would be possible to have GPU rendering with Blender.

Porting the Blender internal renderer to the GPU would require an insane amount of work and technical expertise, due to the complexity of the internal renderer… BUT

porting a photon tracer, such as Indigo, would be much easier, don’t you think? First of all, Indigo is relatively stand-alone and much simpler than the Blender internal renderer; it doesn’t have as many features (such as particles, curves/NURBS, or even shaders), which makes it much easier to work with. While a rasterizer (like Blender internal) involves complex code, Indigo is relatively simple and uses a brute-force approach to render pictures, which makes it the ideal candidate for a GPGPU program.
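
To make that concrete, here is roughly how such a brute-force renderer maps onto the GPU: one thread per pixel, each thread tracing its own samples, with no shared state between pixels. This is only a sketch in CUDA syntax with a stub in place of the real path tracing (the names and the fake “shading” are mine; it has nothing to do with Indigo’s actual code):

```
#include <cuda_runtime.h>

struct Ray { float3 origin, dir; };

// Stub standing in for the real work: intersect the scene, bounce, shade.
// In a real port, this is where the Indigo-style path tracing would live.
__device__ float3 trace_path(Ray ray, unsigned int *rng)
{
    *rng = *rng * 1664525u + 1013904223u;        // tiny LCG, placeholder sampling
    float v = (float)(*rng & 0xFFFF) / 65535.0f;
    return make_float3(v, v, v);                 // placeholder "radiance"
}

// Brute force: one thread per pixel. Every pixel is independent of the others,
// which is exactly why this style of renderer maps so well onto a GPU.
__global__ void render_kernel(float3 *framebuffer, int width, int height, int samples)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    unsigned int rng = y * width + x + 1;        // per-pixel random seed
    float3 sum = make_float3(0.0f, 0.0f, 0.0f);

    for (int s = 0; s < samples; ++s) {
        Ray ray;                                 // stand-in camera ray through (x, y)
        ray.origin = make_float3(0.0f, 0.0f, 0.0f);
        ray.dir    = make_float3((float)x / width - 0.5f,
                                 (float)y / height - 0.5f, 1.0f);
        float3 c = trace_path(ray, &rng);
        sum.x += c.x; sum.y += c.y; sum.z += c.z;
    }

    float inv = 1.0f / (float)samples;
    framebuffer[y * width + x] = make_float3(sum.x * inv, sum.y * inv, sum.z * inv);
}
```

All the hard work hides inside trace_path, but the structure is the point: there is no scanline/tile pipeline to untangle like in Blender internal, just millions of independent per-pixel jobs.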

As for OS/software/hardware/API compatibility issues, why not pick an existing graphics API like OpenGL 2.0? If we stick with ARB extensions and stay away from vendor-specific ones such as ATI_* or NV_*, all should be fine.

Well, I don’t think you are right. Reread this:

" … You just have to own a graphic board compatible with Pixel Shaders 2.0 (almost all GPUs available for the past 3 years)."

Personally, I don’t see any ATI or NVIDIA dependencies here. Gelato depends on NVIDIA hardware, but RTSquare proves that a GPU renderer can be created without depending on proprietary hardware. But also, let’s face reality: the GPU market is shared almost exclusively between ATI and NVIDIA.

Mmmm, PCI-E card power? Integrate Ageia, damn it. I know there are some stupid legalities preventing that, but yhtarwddaswaa

Hmm, it seems AGP cards CAN be used as well (Gelato can use them), but probably with different code, and from what I understand, not as fast.

Really? What is the legal smegal?
Would that card actually help the physics in Blender?

In theory, of course, it is possible to create a binary that runs on ATI hardware as well as NVIDIA hardware. But the reality is, both companies implement OpenGL / DirectX a bit differently, leading to differences in image quality, floating-point precision, and performance bottlenecks. The developers might need to put in a few special routines to handle the different graphics cards. ATI’s drivers are also well known for OpenGL bugs.

For example, a GPU renderer is pretty much guaranteed to use 16-bit floating-point blending, but guess what? This feature isn’t even defined as part of the D3D9c API; it is a hardware implementation choice. That’s why the GeForce 6200 and GeForce 7100 are advertised as full ‘DX9 cards’ without having this important feature.

Yes, the Ageia card is a card dedicated to physics, but it uses its own PhysX API and is not open source. IMO the current market share (and price) of the PhysX card is pathetic; for 200 dollars, you’re better off getting a faster CPU to do the physics calculations.

PCI-E and AGP are simply buses; there is no reason why a hardware renderer would work on PCI-E cards but not AGP. But AGP cards are slowly getting phased out, and all the new cards are coming out as PCI-E only.

Another GPU renderer:
Parthenon Renderer (a GPU-accelerated global illumination renderer)

http://www.bee-www.com/parthenon/index.htm

Um, Indigo?