Blender, render with GPU, or die!

Blender developers have decided to stay on the old track: CPU rendering instead of the much faster GPU. The reasons have been fairly obvious, since they want Blender to be as multi-platform as possible. There is a whole range of graphics cards out there, and it would be unwise to make Blender render on all of their GPUs. Why? Well, we want our developers to work on other things too, not just that.

However, in professional 3D work, when there is a technological breakthrough it is time to move on. No one wants to go out of business just because their tools have become old and uncompetitive.

The good news is that there is no need to make hard decisions, and no need to forget all the hobbyists, yet some work has to be done for speed, meaning rendering via the GPU. Read this and you will wonder no more why:

http://www.forbes.com/businesswire/feeds/businesswire/2007/11/19/businesswire20071118005069r1.html

Still not convinced? OK, then compare good old CPU rendering to a GPU renderer using as many as four graphics cards. You should be worried if you are fine with spending months instead of hours on a single rendering task.

Now that AMD has combined the CPU and the graphics card, it is reasonable to make that kind of system work with Blender. Otherwise, in the future Blender would be just a complex toy with no real use. Of course, we don’t like that kind of scenario. All the people developing Blender really want to make it a cool and handy tool. Even so, let’s face the fact that for that purpose we should not ignore heavy users. Developing a lot of expert-level features just for beginners is not reasonable. So Blender really is supposed to be for professionals as well as for hobbyists.

Star Weaver wondered whether GPU rendering is capable of doing all the required tasks: “I don’t see how they’re going to be able to help with ray tracing, caustics, radiosity, AO/GI…”
For a single frame this really might be a problem, since it would require a lot of changes to Blender’s rendering pipeline. However, for animations GPU rendering is really essential. Parallel computing can be used much like a render farm: the problems you mentioned should not apply, and your computer would render several frames at the same time.
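To make the render-farm idea concrete, here is a minimal sketch (nothing that exists in Blender itself) of farming animation frames out to several local worker processes, each one invoking Blender’s command-line renderer in background mode. The scene filename and the number of workers are placeholders.

```python
# Sketch only: distribute animation frames over local worker processes,
# each running Blender in background mode on a single frame.
import subprocess
from concurrent.futures import ProcessPoolExecutor

BLEND_FILE = "scene.blend"   # placeholder scene file
FRAMES = range(1, 101)       # frames 1..100
WORKERS = 4                  # e.g. one worker per core or GPU

def render_frame(frame):
    # -b runs Blender without a UI, -f renders one frame of the animation
    subprocess.run(["blender", "-b", BLEND_FILE, "-f", str(frame)], check=True)
    return frame

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=WORKERS) as pool:
        for done in pool.map(render_frame, FRAMES):
            print("frame", done, "finished")
```

The same pattern scales from one machine with several cores to a whole farm; only the way the processes are launched changes.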

It is clear that commercial software companies take full advantage of the latest technology, or at least of the technology their customers are willing to buy. A community-developed program like Blender is multi-platform, which is good, but for professional use speed is essential. No one can afford to lose their customers and pay more for electricity. Note that this is not complaining. The new technology has arrived, and now it is time to make good decisions. Basically, with the choice between GPU and CPU rendering we choose Blender’s users and its purpose. Why bother developing expert-level new features for beginners if it is not intended to be an expert-level tool? It surely is great for now, and I’d love to keep it that way. Keeping things unchanged in this matter leads to the opposite result.

That article didn’t really look like it had anything to do with anything but realtime 3d.

Unless GPUs are going to be accessible as an alternate general-purpose processor, I don’t see how they’re going to be able to help with ray tracing, caustics, radiosity, AO/GI, SSS, and all the stuff that actually slows down rendering a lot. If all you’re looking for is basic lighting and buffer shadows, with diff and spec shaders implemented in some per-pixel shader thing, maybe, but I don’t see much more than that. And when my scenes only contain that stuff, rendering is already super fast. …

Otherwise Blender is just a complex toy with no real use.

Nice attitude. Point it elsewhere.

@ blempis:

Oooh, vinegar,
this will certainly be a more
effective approach
than honey

Referring to some kind of AMD marketing-department mumbo jumbo is not really appropriate.
You should know this:

  • the only decent GPU rendering system on the market is Gelato, and there is already a Blender plugin for it
  • Blender SVN already has a number of new features that were discussed and introduced just this year at SIGGRAPH 2007
  • saying that Blender is just a complex toy with no real use is very rude, mainly to the coders
  • you are not forced to use Blender; go pick up and buy some commercial application that is already using your super-hyper GPU rendering technology :-/

He might have some points though. For previsualization jobs, realtime shading (GLSL et al.) is invaluable. No matter how fast the BI gets, XSI for example renders a 20-second sequence in OpenGL with antialiasing, shadows and environment reflections at about 15 fps.

Then there is the other side of the coin, and that’s general-purpose computing on the GPU. Nvidia has CUDA, and AMD/ATI has its own solutions.
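For what that general-purpose side looks like in practice, here is a minimal, hedged sketch: a trivial CUDA kernel driven from Python, assuming an NVIDIA card and the PyCUDA binding are available. The kernel just scales a buffer of values, standing in for a per-pixel computation; it is an illustration of the idea, not anything Blender ships.

```python
# Sketch only: a tiny CUDA kernel launched from Python via PyCUDA.
import numpy as np
import pycuda.autoinit                    # creates a CUDA context on import
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float *buf, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    buf[i] *= s;                          /* stand-in for per-pixel shading */
}
""")
scale = mod.get_function("scale")

buf = np.linspace(0.0, 1.0, 256).astype(np.float32)   # pretend pixel values
scale(cuda.InOut(buf), np.float32(0.5),
      block=(256, 1, 1), grid=(1, 1))                 # 256 threads, one block
print(buf[:4])
```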

It wouldn’t be wise to ignore these trends in the long run.

Why not do both? :spin:

It’s very hard to tell what someone’s attitude is online. He would have made more of an impression without the last comment, though.

Locked because this thread is Flame-bait.

The topic is not a bad one (one that has been developed and discussed a lot), but the manner in which it was posted was asking for more off-topic replies than on-topic ones.

As Alltaken mentioned, this was a bit of flame bait. Well, my intentions were honest, as I just wanted to point out that we can’t afford to ignore technological development without consequences. A quad core and four graphics cards combined with GPU rendering is a huge opportunity. Someone is going to use it. Whoever does not, and stays with CPU rendering, has to offer something really remarkable to compensate for the loss of speed. What would that be? No, there is no such thing(*. Therefore it is essential to emphasize, in every available way, how crucial it is to have this GPU option if possible.

(* Even if there were a CPU renderer that remarkable, its developers would do everything to make it run faster in order to stay competitive. Why wouldn’t they consider the GPU?

Have faith that it will get there.

Blender is free and open-source software, coded entirely by volunteers. This means only requests can be made, not demands or threats. The fastest way to get something into Blender is to code it in yourself. OpenGL rendering has had some big advances in Blender recently.

Alltaken

It seems that way, but so far the integration of the GLSL project is still at a standstill, for reasons already posted in other threads.

Having some sort of GPU renderer would be a good idea. I’m not a coder, but I imagine that if you used OpenGL commands, rather than the graphics card manufacturers’ specific APIs, then you should be golden for multi-platform use (except maybe Vista).

Why not have it as a SOC project?

And on the subject of ‘toys’, I just had a look at my friend’s copy of Maya 2008. It has a ‘Brand New Feature’ he was raving about, called (I think) “Smooth Mesh Workflow”: the ability to rapidly see updates to a low-poly mesh in a special ‘smoothed’ mode, which is much faster and more adaptable than the old Maya way of doing subdivision surface modelling. He was showing me how this ‘new’ feature works, and I looked at his screen, blinked, and said to him:

“That’s just like how Blender does it.”

He stared at the screen in puzzlement and then eventually said, “Hey, yeah, it is!”

He then said that they had finally fixed the boolean tools. I didn’t say anything to that!

So please, if you are going to bash Blender and put down the devs, at least be knowledgeable about what you are saying.

There should be no bashing; it’s just that some tools, no matter how great they are or how much work a developer has put into them and posted patches for, get bypassed anyway.

Inter had named layers for the longest time, but they still got bypassed for whatever reason.

So we come back to GPU rendering in its two forms: one is the pixel-shader approach, aka GLSL and the SOC project, and the other is using the GPU just for computational math, offloading the render algorithms to the video card.

Both of which need a full-time coder to implement and maintain.

Can these be done? No, of course not! The only full-time coders who are paid seem to be the Peach team.

:wink: Ah, what joy to have your heart crushed :smiley: SIGGRAPH is awesome, but tech demos are just tech demos.
And Gelato was not really that fast either.

I don’t think so. Sorry. GPU renderers are a maintenance nightmare. Do you want to maintain realtime support for every video card, or target some junky average running crummy APIs? Besides, the GPU realtime stuff is fast, but it is not the same as raytraced rendered graphics.

Now, for the game engine and baked stuff, yes, it’s a good idea.

I, too, urgently need the GPU to be available to me as an option for rendering.

I should be able to choose speed over quality. I need that badly.

All this talk of GPU accelerated rendering interests me. However, I’ve searched and can’t find a lot of evidence that GPU acceleration offers any benefit to “non-realtime” rendering. The most prominent example of GPU accelerated photorealistic rendering is Gelato from nVidia, and most comments about that suggest it’s actually slower than other CPU only RenderMan solutions. My concern would be, if nVidia themselves can’t make it work, what chance has anyone else?

So, does anyone have any links to empirical evidence of the advantages of bringing the GPU into the equation for non-realtime rendering?

Cheers

Paul Gregory

Here’s my take on the matter… here’s my business need… And let’s be agreeable here: I’m not slamming Blender; it just doesn’t quite fit my needs, and I know it could. Furthermore, I’m willing to keep betting that Blender will continue to advance faster than anyone else.

I’m doing video: two- to four-minute pieces for museum kiosk displays. It does not have to fool the eye to the extent of appearing “this is real.” This is not Pixar. It does have to accurately and convincingly portray the intended subject matter, and it has to be believable. It also has to satisfy the eye of a gamer, and the expectations set by games in the minds of the museum board members. They see graphics like this “popping out in real time.” They know it can be done. So do I.

I do everything in layers, so there are times when I do ray-trace a particular effect. But I literally do not have time to use it for too much; I have to use the GPU. I’ve found some very creative ways to do rendering with the GPU using GameBlender. But you know, I’m not using GameBlender for its intended purpose (games). I’m using it to get to the GPU.

And I can’t keep on doing that. Time is money. Money is food (and digital goodies!).

So, I do honestly understand the sentiment, “Blender must use the GPU or die!” It really does come down to that, at least to satisfy my needs (which I naturally think are quite valid). I ought to be able to, in one easy step, use the power that is built into my computer … and which Blender in fact can use. It’s not a question of “can this be done in Blender.”

All this talk of GPU accelerated rendering interests me. However, I’ve searched and can’t find a lot of evidence that GPU acceleration offers any benefit to “non-realtime” rendering. The most prominent example of GPU accelerated photorealistic rendering is Gelato from nVidia, and most comments about that suggest it’s actually slower than other CPU only RenderMan solutions. My concern would be, if nVidia themselves can’t make it work, what chance has anyone else?
These are good points. If I remember correctly, the modo developers expressed exactly the same opinion on their forums, in a thread where users demanded GPU-accelerated rendering.

It seems like GPUs (and things like CUDA) aren’t ready yet. For example, they only support single-precision floating-point numbers.
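That single-precision limitation is easy to demonstrate even on the CPU with NumPy: 32-bit floats stop resolving whole-number steps at 2**24, while 64-bit floats still do.

```python
# Sketch only: why single precision can bite. Above 2**24, consecutive
# integers are no longer exactly representable in a 32-bit float.
import numpy as np

a32 = np.float32(2**24)
print(a32 + np.float32(1.0) == a32)   # True: the +1 is lost in 32-bit floats

a64 = np.float64(2**24)
print(a64 + np.float64(1.0) == a64)   # False: 64-bit floats still see it
```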

BTW, Nvidia has just bought mental images. Hmm… interesting…


But what really would be nice is if Blender viewports supported shaders (I think this is almost done already thanks to the SOC project?) and shadow mapping! This would be great for preview renders, and I think for some projects that don’t require more realistic raytrace-based lighting and shadows it could also be enough for final renders. If you take current realtime technology and don’t mind a frame taking a few seconds to render, you can already achieve quite nice quality.

Eh, it has all been said before. One focus you can try: support the SOC GLSL code project in some way to promote it into CVS.
Other ideas, like offloading computations to the GPU, have no code for them yet. So just try and get the GLSL stuff in.

It’s like Hollywood. No studio cares until they actually see it; then they finally decide, “wow, we need that ASAP.”

Allow me to add that there really is no point in doing this. That’s why we have external renderers, isn’t it?

Blender’s development resources are stretched thin as it is, so why invest them in something that isn’t going to benefit the community as a whole, given that there is no standard, cross-platform SDK for it right now? I’d rather see resources used on interface and tool development, and leave GPU rendering to nVidia for now.

Just my $0.02

sundialsvc4: I don’t know if it’s complete/usable, but this might be worth trying – download a GLSL preview version, turn on antialiasing, and use the “render this window” button (the one with the mountains, at the far right in a 3D window).