GPU and software for rendering?

I need some clarification about video hardware!

What's the difference between a GPU and a video card?

Are they the same thing?
And is Blender able to use the full power of these GPUs or video cards?

I mean, I think right now the internal renderer mostly uses CPU power,

but if you have a good video card it helps Blender do faster editing in the viewport.

But where do these GPUs fit in?

For example, on my 32-bit Vista machine I have the Intel G31/G33 chipset, and Blender seems to work a lot faster.

Now, are these G31/G33 chips considered GPUs or just simple video hardware,
not really useful for Blender rendering yet?
Is there any plan to have software that renders with the GPU in Blender?

Thanks and happy 2.5

GPUs are the processors on the video card. In the past the CPU did it all, but specialized processors were made to do the heavy floating-point calculations that many games and 3D content creation applications need nowadays.

Newer mid- to high-end video cards are built around GPUs, and the terms "video card" and "GPU" are nearly interchangeable now.

Blender does not take advantage of the GPU (yet?) and so uses the CPU for rendering. LuxRender has SmallLuxGPU in development, which takes advantage of both the CPU and GPU for rendering through OpenCL, and many commercial applications have taken this route as well. To take advantage of the GPU, special software is usually installed to stream information to it. For nVidia cards it is CUDA and for ATi it is Stream.
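As a rough sketch of what that "special software" layer looks like from a program's point of view, here is a small Python snippet using the PyOpenCL bindings (my own example, assuming pyopencl is installed; nothing here comes from Blender or LuxRender). It just asks the installed drivers which OpenCL platforms and devices they expose, which is the first step before any work can be streamed to a GPU:

import pyopencl as cl

# Each platform corresponds to one vendor driver (nVidia, ATi/AMD, ...);
# each platform exposes one or more devices (GPUs, and sometimes CPUs).
for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name)
        print("    type:", cl.device_type.to_string(device.type))
        print("    compute units:", device.max_compute_units)
        print("    global memory (MB):", device.global_mem_size // (1024 * 1024))

If a card (or an integrated chipset) does not show up in that list, no OpenCL-based renderer can use it, whatever its theoretical speed.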

So there you go. nVidia or ATi mid- to high-range cards (gaming cards) or the professional cards have multi-core GPUs that can help out, as long as the software you're using can take advantage of them. High-bandwidth GDDR3 video RAM is good for buffering the data and streaming it to the GPU cores.

Hope this gives you an idea of what it is all about.

The best analogy to visualize it would be:

mainboard : CPU = videocard : GPU

GPU = Graphics Processing Unit. It is a part of the video card.
The GPU consists of shaders. You can imagine those shaders as tiny CPUs, but instead of being built for integer calculations they are built for floating-point operations, and they are very fast at divisions and multiplications.

That's why GPUs have been used for rendering lately.
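To make that picture a bit more concrete, here is a minimal PyOpenCL sketch (my own illustration, assuming the pyopencl and numpy Python packages are available; the kernel name mul is just made up for the example). The kernel body runs once per array element, so each of those tiny "shader CPUs" ends up doing one multiplication:

import numpy as np
import pyopencl as cl

# Two large float arrays to multiply element by element.
a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()      # picks an OpenCL device, ideally a GPU
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# One work-item per element: every "shader" handles a single multiplication.
program = cl.Program(ctx, """
__kernel void mul(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] * b[i];
}
""").build()

program.mul(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print(np.allclose(result, a * b))   # should print True

A CPU would loop over the million elements a few at a time; the GPU spreads them across its hundreds of shader units, which is exactly the kind of workload a renderer produces.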

Basically, a video card is like a second computer.
Your computer has a mainboard, a processor (CPU) and memory (RAM).
Your video card has its own board, a processor (GPU) and memory (VRAM).

At the end of the day a mainboard with CPU and RAM can do exactly the same as a graphics card, but much, much slower via software rendering, while the video card uses hardware rendering to display graphics.

The Intel chipset you have is basically a GPU, but instead of sitting on a video card it sits on the mainboard, and instead of video memory it shares the RAM of the mainboard. Sometimes Intel GMAs have native memory as well, meaning there is VRAM sitting on the mainboard.

It should be able to work with OpenCL; however, the task ahead would be like pulling a trailer with a wheelchair.

So mostly these new GPUs are sort of a new generation of integrated chips specialized for graphics processing, like the Intel G31/G33 chipset,
and this cuts down on the size of the video card and gives better integration with the CPU by being directly on the motherboard.

It would be nice to see some parallel floating-point math being done on the GPU chip itself;
that would accelerate the speed by a factor of 1000 at least.

I hope Intel comes out with some new design for this soon!

I also hope Blender can come up with some way to interface with these GPUs
and give Blender super fast rendering speed!

happy 2.5

No GPU can handle heavy processing like a full movie render with physics (I mean at 4K or higher real 3D resolution), etc. The CPU is the main processing unit of the PC or Mac; it is the most advanced computational part of it and the most expensive.

The GPU, or Graphics Processing Unit, or simply the video card, is only for fast computation of polygons and matrix mathematics, and it is in no way equal to a CPU; it is only a complement to it, like your eyes are for your brain.

In the future the GPU and CPU will merge together into one single computation chip.

The CPU is always needed; the GPU in the form of a "video card" will become obsolete in the future, but it will still exist as a part of the main processor.

I am not sure what you are talking about.

GPUs are not new. GPUs have been around since the 1970s.
GPUs with shaders were "invented" by nvidia, and the big boom was the NV10 GPU in 1999, on whose basic principles all current GPUs build.

Also, the Intel chipsets are the bottom of the barrel. They are simply on the mainboard for economic reasons. To be clear, Intel GMA is utter crap, not fancy new integrated stuff.

Also, floating-point math has been done extensively on the GPU since 1999; you just had a fixed pipeline where you transformed and lit your geometry, and on the other end pixels came out. Nowadays you have full control over the GPU and its shaders.

With Octane and SmallLuxGPU at the start, I really don't think we need GPU-accelerated rendering in Blender Internal, and now with Brecht gone to Octane I don't see anyone dedicating his brainpower towards it.

And Nunarzk is partly right. I don't see the CPU and GPU merging, as they are in different companies' hands, and Intel's Larrabee got iced. However, the tasks a GPU can handle are very specialized, and the future lies within CPU + GPU + cloud computing to achieve maximum power.

You're right, GPU chips have been around for some time,
but as far as I know they are still using the old sequential calculation
to do floating-point math!

What I'd like to see are new chips with parallel ALU float math,
which would definitely decrease the calculation time,
and hopefully new GPU designs will integrate this higher speed to give much better performance for graphics applications like Blender or other CG/CAD software.

Thanks and happy 2.5

Check ATI-AMD > AMD Fusion then rethink. :wink:

Here http://www.amd.com/us/press-releases/Pages/amd-demonstrates-2010june02.aspx

The CPU does basically everything,
while the GPU is a CPU specialized in doing only graphics-related work.

The CPU has its RAM on the motherboard; the GPU has its own RAM on the graphics card.

In that respect they are not really that much different, and Intel, like AMD, is planning to produce a chip which includes both CPU and GPU in one. I am not sure when that will reach usable levels. I am very curious about the heat production and how they are going to tackle that.

With that, AMD and Intel can beat out NVIDIA, but I am very sure that GPUs will always be there for high-performance tasks.

Very interesting facts about GPUs for the near future.

It's interesting that Intel and others are pushing this technology a lot,
and I hope to see more parallel power come into CG GPUs.

I mean, this can only be good for CG and Blender,
and with new integrated circuits with smaller features at less than 30 nm, and soon
at the level of 20 or 10 nm, it will mean faster, smaller, less power-hungry chips.

As cekuneen said, heat is the major enemy here, but masks made with smaller features at less than 30 nm do reduce power a lot and can even work faster,
so I would not be surprised to see more powerful and faster GPUs very soon!

So I hope these can come out very soon to give more power to Blender!

happy 2.5


Have you heard of the Mars2 motherboard being developed by Asus, with dual GTX 4xx GPUs built right into the motherboard instead of on separate cards? Half the board is devoted to making sure the thing can even get enough power; I imagine many who use it will get a shock when they see the electric bill. :spin:

There's also OpenCL, which is designed to offload some rendering and physics work to the GPU. The hundreds of processors on the GPU can mainly do just one thing at a time, but the advantage is that they can do that one thing really fast.

Way back in the dark ages (I'm an old dude) I did a little work on taking some code and trying to multithread it. One thing I learned was that ray tracing is among a group of computing tasks that are considered "trivially parallel". This refers to the fact that if you have 2 cores you can just split the scene into a left and a right pane and send one to each CPU to render, then merge them back together. The same goes for more cores or a render farm.
But most of the things we want to do with Blender are not so trivially parallel, any kind of simulation in particular. What happens in one part of the scene depends on what happens in other parts of the same scene, so you can't just split them up like you can with ray tracing. The devs have made tremendous progress in overcoming these problems, but it's quite a difficult problem. I suspect you're going to see the awesome power of OpenCL and the new high-end GPUs confined to rendering for quite some time.
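As a toy illustration of that "trivially parallel" property (plain Python multiprocessing, with a made-up shade() stand-in instead of a real ray tracer), you can hand each process its own band of rows precisely because every pixel is computed independently:

from multiprocessing import Pool

WIDTH, HEIGHT = 800, 600

def shade(x, y):
    # Stand-in for a real ray-tracing routine: the colour of pixel (x, y)
    # depends only on that pixel, never on its neighbours.
    return ((x * 255) // WIDTH, (y * 255) // HEIGHT, 128)

def render_rows(band):
    start, stop = band
    return [[shade(x, y) for x in range(WIDTH)] for y in range(start, stop)]

if __name__ == "__main__":
    # Split the image into four horizontal bands, render each band in its
    # own process, then stitch the bands back together in order.
    bands = [(0, 150), (150, 300), (300, 450), (450, 600)]
    with Pool(processes=4) as pool:
        image = [row for band in pool.map(render_rows, bands) for row in band]
    print(len(image), "rows rendered")

A fluid or cloth simulation can't be chopped up this way, because the next step in one band depends on what just happened in the others, which is why those workloads are so much harder to spread over many cores or a GPU.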

First, no one sane buys a Mars2 for GPGPU.
For computing you buy an nvidia Tesla system, as the consumer-segment GF100 cards are choked in their FLOPS while AMD's are not, so in terms of OpenCL AMD is far superior.

Secondly, parallel computing is the whole idea of GPGPU; otherwise the hundreds of streaming processors would be useless.

And finally, AMD Fusion has the huge disadvantage that you have one high-priced piece of hardware where, when you need more of either graphics or CPU power (hardcore compiling or bleeding-edge gaming), you can't upgrade just one part and have to buy the next more powerful version, spending money on something you might not need.

Also, with Tesla you can get a lot more VRAM; 1 or 2 GB is simply not enough when doing more advanced stuff. But I'm sure we will see an increase in memory on new cards as GPU acceleration gains momentum; heck, I wouldn't even be surprised if we see expandable memory slots for graphics cards too…

I would draw an analogy or comparison with those "good old" SGI machines. SGI was long at the very top of computing because there was a dedicated graphics chip tied to the processing work. That made it possible, already in the '90s, to literally see things happening in "real time".
This was overwhelming.


Yeah, overwhelmingly expensive. :smiley:
So is Tesla.

And the strategy nvidia follows really disgusts me.
While I understand that corporate solutions like Tesla are more expensive, due to the GPUs being the best of their batch, especially with the bad yields GF100 has/had, and due to the electronic parts used on the PCBs to ensure 24/7 reliability, I really hate that they choke the consumer cards just to ensure the industry has to buy expensive Tesla cards. And on top of that, consumers get the choice of using OpenCL with faster AMD cards, or CUDA with a GF100 in chains. IMO all it does is drive people away from CUDA even faster, because the average Octane user is not going to buy a Tesla blade to get nice performance when he can have the same performance with OpenCL and a consumer AMD card.
Apple and MS combined haven't been as evil lately as nvidia. :smiley:
That said, I got a Fermi because I needed some of its features for work. So in the end you don't really have a choice. -.-

For rendering with Blender and Lux/OpenCL in the future?
A box with an AMD hexacore and two or three ATI cards: big bang, small buck.