Blender, render with gpu, or die!

For businesses: if you really need a feature and you’re serious about it, consider funding work on that feature, beyond buying the oddball book and DVD.

/Nathan

Yes, this is true, but it still demonstrates the principle. Besides, these games get damn good-looking results without ray tracing, and I’m not claiming that you will be able to do full ray tracing in real time. I’m just showing how much faster the GPU is than the CPU.

And to show some actual numbers:
In 2007 the fastest quad cores were only putting out around 30 gigaflops, whereas high-end GPUs put out 500-1000 gigaflops.
A single ATI HD4870 puts out just a little over 1 teraflop, and a single 9900GTX from NVIDIA is rumored to put out somewhere around 1.5 teraflops.

And just because I feel like bragging a little: my graphics cards put out 1008 gigaflops.

Right, for the most part your thread has some validity; GPU rendering would be nice. Here’s where I have a problem.

However, in professional 3D use, when there is a technological breakthrough it is time to move on. No one wants to leave the business just because their tools are getting too old and uncompetitive.

Yeah, therein lies a problem. Not all of us Blenderheads are actual businesses; some of us actually do this for FUN. I’m not bashing on you or anything, but really, not all of us are interested in making it a business. It CAN be a hobby. It is for me, at least. Not only that, but I DON’T have a budget for a computer with a halfway decent GPU in it. I’m running an ATI Radeon Xpress 200M card, integrated… So actually, according to my benchmarks, my render time would actually SLOW DOWN compared to the CPU, as my CPU is an AMD Turion x64 processor. That, and I can render-farm on my local network with an Intel x64 dual core.

So yeah, there are my thoughts on it… What about an OPTION for it? Kinda like 3ds Max, where you can select between D3D and OGL? Just a suggestion.

'Sides, isn’t the D3D API closed source? That would mean we would have to start paying for Blender, because they would have to license DirectX. Correct me if I’m wrong there… But I thought that was why some of us unfortunate ATI users are still stuck with UBER slow OGL refresh rates, because that is the only UI Blender can use…

Blempis, we always welcome new developers who want to make Blender better! So welcome.

But if you are not a dev, you’d better make a good design proposal for the rendering pipeline you imagine and post it to
www.blenderstorm.org :wink:

Definitely. You shouldn’t be telling someone to die, and certainly not our beloved Blender. You could make us angry! :slight_smile:

He’s not telling you to die. It’s an expression. He’s saying go with evolution or become extinct. While not the best choice of words, it carries a certain weight and grabs your attention.

Like “snooze, ya lose”, “Billionaire or Bust”, etc.

It’s not as simple as saying GPUs are faster at maths. GPUs are faster for certain SIMD tasks they are designed for (which currently is still scanline/rasterization rendering, not ray tracing). Ray tracing has totally different requirements, and current GPUs can have problems with it, for example because they don’t handle branches in code well, or because they lack the big, fast caches that CPUs have. The future certainly will become interesting, though. :slight_smile:
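
To make the branch problem concrete, here is a rough CUDA-style sketch (the kernel name, data and sizes are made up for illustration, not anything from Blender). Threads in the same warp that take different sides of the “if” get serialized, which is exactly what the data-dependent branching in a ray tracer does all the time:

```
// Sketch of warp divergence: when some threads in a warp take the "hit"
// path and others take the "miss" path, the two paths run one after the
// other instead of in parallel. A branchy ray tracer hits this constantly.
__global__ void shade(const float *hitDist, float *color, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;

    if (hitDist[i] < 0.0f) {
        // miss: cheap path
        color[i] = 0.0f;
    } else {
        // hit: expensive path -- while these threads work, the "miss"
        // threads in the same warp sit idle, and vice versa
        float c = 0.0f;
        for (int k = 0; k < 64; ++k)
            c += expf(-hitDist[i] * k * 0.01f);
        color[i] = c;
    }
}
```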

Nobody is doubting the power of the GPU, it’s a matter of cost vs. benefit. Porting large sections of a render engine to CUDA would be cool, but very time-consuming. It would also be limited to a certain number of graphics cards. In addition, CUDA may not be the right “horse” to bet on. On the other hand, if a group of people want to port a Blender-compatible render engine to GPU, go for it! I just don’t see the core Blender team doing this any time soon.

I hear that. Not just because its future as a standard is rather uncertain, but because alternative, standard paths are open and closer to reality. This may or may not be feasible, but if/when the GLSL previewer is completed, maybe it could be expanded into a second stage that delivers a thorough mixed CPU/GPU rendering solution.

Maybe faster for certain tasks that chip is optimised for.

So is that a good way to address scalability?

Not sure if GPUs fit those criteria.

Heh,

I’m not too savvy about all the 3D terms used here for rendering etc., so I won’t take part in the debate. I think, however, that ultimately CPU power prevails… While we’re at it, would you like one of these for your rendering?

On June 26, 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene supercomputer. Designed to run continuously at 1 PFLOPS (petaFLOPS), it can be configured to reach speeds in excess of 3 PFLOPS. Furthermore, it is at least seven times more energy efficient than any other supercomputer, accomplished by using many small, low-power chips connected through five specialized networks. Four 850 MHz PowerPC 450 processors are integrated on each Blue Gene/P chip. The 1-PFLOPS Blue Gene/P configuration is a 294,912-processor, 72-rack system harnessed to a high-speed, optical network. Blue Gene/P can be scaled to an 884,736-processor, 216-rack cluster to achieve 3-PFLOPS performance. A standard Blue Gene/P configuration will house 4,096 processors per rack.

…good luck rendering your high-res movie on the GPU on a render farm… doesn’t work? Of course, render farms don’t have any NVIDIA/ATI cards installed :wink:

…good luck rendering your high-res movie on the GPU on a render farm… doesn’t work? Of course, render farms don’t have any NVIDIA/ATI cards installed

True, true, but then surely I’m rendering in real time at 60 fps! Who needs a render farm?
…someday. :wink:

Which would you rather buy? More CPUs or more GPUs? If GPUs can do what CPUs can do (and better than they can), why would render farms be stuck using only CPUs?

Cost would be the concern, but if the GPU does the number crunching, who says the CPU has to be high-end? Spend more here, spend less there. It COULD work out; I’m not saying it will. :slight_smile:

By the way, some helpful links for GPU coding:



And nVidia’s CS on CPU vs. GPU:

This might be a task I’m interested in taking up; it’ll really come down to figuring out whether I can wrap my head around both CUDA and Blender’s code. :slight_smile:

-Daniel

P.S. I think I mentioned earlier that it’s limited to the 8800 and up? I was told I was wrong by someone who knows it far better than I do, so I retract that statement. If they’re correct, this should work on most nVidia cards with the latest drivers (but the lack of cores on old cards severely limits the benefits :slight_smile: ).

Wow, if you would even try to take it up, you would be my hero.

AFAIK, CUDA is proprietary to Nvidia, so you should probably check the CUDA license to see if it is compatible with the licensing of Blender. AMD/ATI has a similar SDK called CTM (Close To Metal), which is (AFAIK) proprietary to their cards. It might be worthwhile investigating BrookGPU, which is completely free (BSD license) and hardware-agnostic. One of the cool things about BrookGPU is that it can also generate straight C code for situations where no GPU is available.

Good luck!

I’ve been looking for the license on CUDA. For the most part nVidia has been kind toward the open source movement, at least in my opinion, so I’m not too worried about this.

Theoretically… even if the CUDA kernel came out under a different license, all it would take is rewriting Blender’s render engine to allow external kernel use, and then distributing the CUDA kernel separately, right? (Kinda like Yafray, except that there it’s not for license issues, right?)
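
Purely as a hypothetical sketch (none of these names are real Blender or Yafray API; I’m just imagining how the split could look), the engine would only need to know about one C entry point, and the separately-distributed module would hide all the CUDA behind it:

```
// Hypothetical "external render kernel" module -- nothing here is real
// Blender API. It only sketches how a separately-shipped CUDA piece could
// plug into an engine that knows just this one C entry point (looked up
// with dlopen()/dlsym() or similar).
#include <cuda_runtime.h>

extern "C" {
int external_render_tile(float *rgba, int w, int h);
}

__global__ void fillTile(float *rgba, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h)
        return;
    int i = (y * w + x) * 4;
    // placeholder shading: a simple gradient instead of a real renderer
    rgba[i + 0] = x / (float)w;
    rgba[i + 1] = y / (float)h;
    rgba[i + 2] = 0.5f;
    rgba[i + 3] = 1.0f;
}

extern "C" int external_render_tile(float *rgba, int w, int h)
{
    float *d_rgba;
    size_t bytes = (size_t)w * h * 4 * sizeof(float);
    if (cudaMalloc((void **)&d_rgba, bytes) != cudaSuccess)
        return -1;                       // no usable CUDA device, etc.

    dim3 block(16, 16);
    dim3 grid((w + 15) / 16, (h + 15) / 16);
    fillTile<<<grid, block>>>(d_rgba, w, h);

    // copy the finished tile back for the CPU side of the pipeline
    cudaMemcpy(rgba, d_rgba, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_rgba);
    return 0;
}
```

If the host side only ever sees that one function, the GPU module really could be swapped or shipped under its own license, which is the whole point of the Yafray comparison.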

Thanks for the BrookGPU thing, though. I’ll keep it in mind.

BrookGPU seems like the more logical choice atm, seeing as CUDA is far from an open-source release.
But currently OpenGL seems to be much less efficient for GPGPU than DX9.
Still, this technology will change things :slight_smile:

Actually, ray tracing maps well to the GPU because you can trace many rays simultaneously, whereas a CPU can only work on a few at a time.
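
In CUDA terms the parallelism argument looks roughly like this (a toy sketch only; the pinhole camera and the hard-coded sphere test are stand-ins, not anything from Blender): launch one thread per primary ray and let the card keep thousands of them in flight at once.

```
// Sketch: one thread per primary ray. A CPU core walks pixels a few at a
// time; here every pixel's ray is its own thread.
__device__ bool hitSphere(float ox, float oy, float oz,
                          float dx, float dy, float dz)
{
    // unit sphere centred at (0, 0, -3); very rough hit test
    float cx = ox, cy = oy, cz = oz + 3.0f;   // ray origin minus sphere centre
    float b = cx * dx + cy * dy + cz * dz;
    float c = cx * cx + cy * cy + cz * cz - 1.0f;
    return b * b - c >= 0.0f;                 // discriminant of the quadratic
}

__global__ void primaryRays(unsigned char *image, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    // build a ray through this pixel (pinhole camera at the origin)
    float dx = (x - width * 0.5f) / width;
    float dy = (y - height * 0.5f) / height;
    float dz = -1.0f;
    float len = sqrtf(dx * dx + dy * dy + dz * dz);

    bool hit = hitSphere(0.0f, 0.0f, 0.0f, dx / len, dy / len, dz / len);
    image[y * width + x] = hit ? 255 : 0;
}
```

Of course a real tracer needs acceleration structures and secondary rays, which is exactly where the branching and cache issues mentioned above come back in.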

What I envision here … for whatever it may or may not be worth … is a set of render nodes that are “hardware based.” In other words, when you put one of these puppies into your render, you are asking for those specific operations (whatever they are) to be performed by the GPU.

You are accepting … indeed, you are asking for … “a loss in quality” of some kind, in exchange for speed. Furthermore, you might not be asking for all of the speed that your particular GPU might be capable of delivering under other circumstances. You are not necessarily asking for real time output. You simply want the GPU to produce the result of that particular stage.

Perhaps these render-nodes are “plug-ins” of some kind.

I would very much like to see render-nodes of this type start to appear in Blender, because I feel that, once this ball gets rolling, it will quickly start to roll very fast. I suspect there is a very large demand for this within the Blender community.

The hardware rendering capabilities of your equipment should not be limited “just” to GameBlender. Hardware can be very useful for much more than just “real-time” rendering. I would like to be able to put many GPU-based stages “up-stream” in the pipeline, reserving CPU-based ops for “down-stream” finishing touches.

I think that many of you are saying, so to speak, “well, if we can’t do every single thing that a top-flight platform like CUDA can do, then why should we bother to do anything?” And I politely think that this viewpoint is very wrong. We need to think in terms of providing options that allow users who maybe have less computing power at their disposal (but, say, a fairly decent graphics card) to leverage whatever they do have.

A render-pipeline segment consisting of one or more adjacent GPU-accelerated nodes would, taken together, comprise (in effect) an OpenGL “program” to be executed per frame, and any given render pipeline might therefore contain one or more such segments.
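
For what it’s worth, a single node’s GPU stage could be as small as this (a hypothetical brightness node; it is written as a CUDA kernel only to keep the sketch concrete, even though the per-frame OpenGL/GLSL route described above would look different):

```
// Hypothetical GPU stage for one compositor-style node: a brightness
// multiply over an RGBA float image. Names and layout are invented for
// illustration, not taken from Blender.
__global__ void brightnessNode(const float *in, float *out,
                               int numPixels, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels)
        return;

    int base = i * 4;
    out[base + 0] = in[base + 0] * factor;
    out[base + 1] = in[base + 1] * factor;
    out[base + 2] = in[base + 2] * factor;
    out[base + 3] = in[base + 3];   // leave alpha untouched
}
```

The win would come from chaining several such node kernels back-to-back on the card, so the image only comes back to the CPU for the down-stream finishing touches mentioned above.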

Congratulations… you’ve resurrected a thread that’s four months old. That’s quite a feat! Perhaps you could help me raise an unstoppable army of the dead.