Blender, render with GPU, or die!

If it doesn’t fit your needs, you’d better find software that does; just because Blender is open source doesn’t mean someone will code in every feature you want. And, more likely than not, Blender has what you need, if you’re willing to do a little bit of work.

It does not have to fool the eye to the extent of appearing “this is real.” This is not Pixar. It does have to accurately and convincingly portray the intended subject matter, and it has to be believable. It also has to satisfy the eye of a gamer, and the expectations set by games in the minds of the museum-board members. They see graphics like this “popping out in real time.” They know it can be done. So do I… I have to use the GPU. I’ve found some very creative ways to do rendering with the GPU using GameBlender. But you know, I’m not using GameBlender for its intended purpose (games). I’m using it to get to the GPU.
Ummm… the game engine’s intended purpose is anything to do with realtime graphics; Blender itself is meant to do animations and renderings. It sounds like you want a user to interact with this, and for what you’re describing, GameBlender should work just fine even without this all-powerful GPU.

Has anyone noticed that nVidia bought Mental Images?
I wonder if that’s a hint at how easy it is to establish GPU rendering in production environments :wink:

More like: if they didn’t, then Autodesk would have.

No, the user does not “interact with this.” As I said, I’m cobbling up things just to get to the GPU.

And realistically, I’m going to have to solve this problem sooner-or-sooner. My needs definitely favor rendering-speed over rendering-quality in the general sense. I’m certainly open to suggestions that I may well have overlooked. I’m not here to slam Blender, and in expressing that it does not meet my needs (when I think it could and should), I don’t consider that to be a slam at all.

It’s obvious that you are not slamming Blender; however, it does seem that Blender doesn’t currently meet your needs, and it may therefore be a wise business choice to evaluate what you can do about that, if anything.

Keep in mind that the issue of adding new functionality is a complex one and finding a way to drive development in the direction that you want, even if you are willing to write the code yourself, is not always clear.

It’s a tough call.

Cheers,
Briggs

This probably needs its own thread…

Unless I’m missing something, this appears to be a Good Thing™, especially as it keeps Mental Images out of the hands of a potential monopolist (Autodesk). Just think: if they can make mental ray run on a GPU, they’ll sell HUGE amounts of hardware/software bundles. They may yet be the new SGI (with smarter financial moves).

What I think is that Blender should use the graphics card more to speed up the realtime display within Blender and to make the graphics approximate the end result more closely. The way the lights work, for example: I suppose they could be much closer to the end result if Blender made more use of GPU capabilities. Or having reflections shown, and so on… like TrueSpace does it.

Blender can do anything you want it to do.

If it can’t - then you have to consider this:

Blender is an open-source, community-effort project, meaning the developers develop what they think will be useful to them. If you can’t “survive” or “make a living” with it today because you think it’s not with the times, you and everyone else feeling this way are VERY VERY welcome to make an effort in coding, just like the coders have done so far. This is by far the fastest path to success in Blender, and yes, therefore Blender can become anything you want.

Blender WILL get there:

The rendering engine and other engines have been among Blender’s hottest topics, and because of this the coders have made a HUGE effort to improve the way Blender interfaces with other renderers. They are working to ensure that it will be as easy as possible for other coders (e.g. new ones… perhaps like yourself) to implement, say, commercial renderers or plugins for them, exporters, etc. You just can’t expect open-source coders to implement render support for commercial products themselves, however, as this isn’t really in keeping with Blender’s GPL open-source nature and intentions.

It isn’t all that:

Some of you say that you can’t make a living without GPU rendering. Well, maybe YOU can’t, but many of us can, and we do. As already mentioned in this thread, GPU rendering isn’t always what you need or want.

Did you know that some of the most professional outfits in the business don’t even use GPU-accelerated renderers at all? Did you know, furthermore, that they rely less on global illumination, raytracing and other CPU-expensive tasks than you may ever witness or understand? I’ll explain:

While it’s absolutely true that in order to GET PAID we must stay competitive and deliver top-notch artwork (otherwise our competitors will “take home the bacon”), it is pointless to continuously blame the tools and the means that get us there. In most cases WE ARE THE TOOLS as well, so by improving our lighting work, shading and texturing we can achieve AMAZING stuff. Furthermore, most of the cool-looking effects you see in movies aren’t really a fancy render park with lots of GPUs; they are done “smartly” by professional compositors, with a lot of afterwork to make it all “shine.”

When you’re on a tight budget and deadline, you have to rely on yourself. If you think it’s the render engine… consider this: most of the time it’s not; it’s usually your SKILLS. Take it from me… I’ve been in the business a while and still am. I am not all that super-skilled yet, even though I get paid… but I keep trying instead of whining about things that are missing.

If it’s an issue, we’ll just purchase the “expensive” software needed; after all, losing customers is even MORE expensive. But alas… it’s usually the skills that were lacking every time, and not so much the software. A hard truth, but the truth nevertheless…

Hey kids, for future note: anyone who mentions professional film or Pixar in the same breath as “me,” “my,” or other personal pronouns should have their facts straight. Here is an insider pic of what Pixar uses to render. If that looks like your bedroom or office, congrats. Otherwise, you have no idea how to compare.

Secondly, Pixar does not use GPUs to render; they use Intel CPUs. GPUs are highly proprietary, and the driver software is very different for each brand and subject to massive change every six months. OpenGL is our chosen common denominator. If you can write flavors of the OpenGL drivers to drive GPUs, DO IT AND SHARE IT AS OPEN SOURCE. Otherwise, stop with the ridiculous comparisons.

GPU rendering is a good thing, and can already be useful to an extent now.

Seeing some of the GLSL shaders bandied around on this forum, even effects like subsurface scattering are doable at pretty good frame rates on a decent graphics card.
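To give a flavour of what those demos do, here is a minimal sketch of one classic cheap trick: “wrap” diffuse lighting, which lets light bleed past the terminator so skin-like materials read as if light scatters under the surface. The shader source, uniform names and the PyOpenGL helper are my own illustrative assumptions, not code from any demo in this thread.

```python
# Sketch: "wrap lighting", a cheap GLSL fake for subsurface scattering.
# Assumes PyOpenGL and a current GL context; written against the
# 2007-era GLSL built-ins (gl_Vertex, gl_LightSource, etc.).
from OpenGL.GL import GL_VERTEX_SHADER, GL_FRAGMENT_SHADER
from OpenGL.GL.shaders import compileShader, compileProgram

VERT_SRC = """
varying vec3 normal;
varying vec3 light_dir;
void main() {
    normal = gl_NormalMatrix * gl_Normal;
    light_dir = gl_LightSource[0].position.xyz;  // directional light assumed
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
"""

FRAG_SRC = """
varying vec3 normal;
varying vec3 light_dir;
uniform vec3 base_color;  // e.g. a skin tone like vec3(1.0, 0.75, 0.65)
uniform float wrap;       // 0.0 = plain Lambert, ~0.5 = soft scattered look
void main() {
    vec3 n = normalize(normal);
    vec3 l = normalize(light_dir);
    // Instead of clamping N.L at zero, let the light "wrap" past the
    // terminator; the soft falloff reads as light scattered under the skin.
    float diffuse = max((dot(n, l) + wrap) / (1.0 + wrap), 0.0);
    gl_FragColor = vec4(base_color * diffuse, 1.0);
}
"""

def build_fake_sss_program():
    """Compile the shader pair; a GL context must be current."""
    return compileProgram(compileShader(VERT_SRC, GL_VERTEX_SHADER),
                          compileShader(FRAG_SRC, GL_FRAGMENT_SHADER))
```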

If Blender had the ability to render to a texture (I hear someone has this working on Windows), then you could open up some pretty good shadowing options.
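For reference, render-to-texture at the OpenGL level looks roughly like the following (PyOpenGL, using the modern core names; in 2007 the same thing goes through the EXT_framebuffer_object entry points such as glGenFramebuffersEXT). Sizes and names are placeholder assumptions on my part:

```python
# Sketch: render-to-texture via a framebuffer object in PyOpenGL.
# A GL context must be current. Core GL names are used here; the
# 2007 equivalents are the EXT_framebuffer_object functions.
from OpenGL.GL import *

def make_render_target(width=512, height=512):
    # Colour texture the offscreen pass will render into.
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, None)

    # Depth renderbuffer so the offscreen pass gets proper z-testing.
    depth = glGenRenderbuffers(1)
    glBindRenderbuffer(GL_RENDERBUFFER, depth)
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24,
                          width, height)

    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0)
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth)
    assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

    glBindFramebuffer(GL_FRAMEBUFFER, 0)  # back to the window
    return fbo, tex

# Usage: bind the FBO, draw the scene (e.g. from the light's point of
# view for shadow work), unbind, then sample `tex` like any texture.
```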

Further, some of the GLSL post-processing stuff is looking very good right now. With a bit of extra work, that could lead to all sorts of realtime post-processing possibilities.
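As an example of what such a post pass can look like: the scene is first rendered into a texture (as in the FBO sketch above), then a full-screen quad is drawn with a fragment shader along these lines. The uniform names and the vignette trick are just illustrative:

```python
# Sketch: a GLSL post-processing pass over the rendered frame.
# Drawn on a full-screen quad that samples the offscreen texture.
POST_FRAG_SRC = """
uniform sampler2D scene;  // the frame rendered to texture
uniform float exposure;   // simple brightness control

void main() {
    vec2 uv = gl_TexCoord[0].st;
    vec3 color = texture2D(scene, uv).rgb * exposure;

    // Cheap vignette: darken toward the corners.
    float d = distance(uv, vec2(0.5, 0.5));
    color *= 1.0 - 0.6 * d * d;

    gl_FragColor = vec4(color, 1.0);
}
"""
```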

There are three “problems” with this right now:

1) To make use of it you have to know how to write and use GLSL shaders… if your business depends on this, you can learn!

2) This is perfectly workable if your aim is “near-realtime rendering”; right now, turning on all the bells and whistles will eat a realtime frame rate pretty quickly… although you could still render antialiased at high def hundreds of times faster than doing it “properly.”

3) You may need to do much more initial setup than you’re used to. Radiosity effects or AO can be baked offline and stored as a separate texture pass, and you may need to prepare lots of environment maps to get good reflections. If you know what you’re doing, this initial outlay on setup will soon be made up by the speed of rendering the results. You may also need to get much better at maths to write refraction shaders, etc. (see the sketch after this list).
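On that last point, the “maths” of a refraction shader is less scary than it sounds, because GLSL gives you refract() as a built-in; the real work is the setup around it (the prebaked environment map). A minimal sketch, with illustrative names and an assumed index-of-refraction ratio:

```python
# Sketch: the core of a refraction shader against a prebaked
# environment cubemap. GLSL's built-in refract() does Snell's law;
# eta is the ratio of refraction indices (e.g. 1.0/1.33 for air->water).
REFRACT_FRAG_SRC = """
varying vec3 normal;      // eye-space normal from the vertex shader
varying vec3 view_dir;    // eye-space direction from eye to surface
uniform samplerCube env;  // the prebaked environment map
uniform float eta;        // index-of-refraction ratio, assumed here

void main() {
    vec3 n = normalize(normal);
    vec3 v = normalize(view_dir);
    // Bend the view ray through the surface, then look the bent ray
    // up in the baked environment instead of raytracing anything.
    vec3 r = refract(v, n, eta);
    gl_FragColor = textureCube(env, r);
}
"""
```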

Concluding:
I think that there are many parts of this process that could be “eased” by the development team, but as others have mentioned, if you put the effort in yourself you can already get better results with GPU rendering than many in this thread seem to think.

At the very least, the outlay on learning will make the first project tough, but once learned, every future project can be “ripped through.”

Right now you may need to composite some passes rendered on the CPU with others rendered on the GPU: shadows, for example, or raytraced reflections if you absolutely need them. Does this stop it being of any benefit right now? Hell no!
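As a concrete example of that mixed workflow: assuming both passes have been saved out as same-sized images, the compositing step itself can be as small as this (PIL/Pillow; the filenames are made up):

```python
# Sketch: laying a CPU-raytraced shadow pass over a GPU beauty pass.
# Assumes both passes were saved as same-sized images; the filenames
# are placeholders. Old PIL used plain `import Image`.
from PIL import Image, ImageChops

beauty = Image.open("frame_0001_gpu_beauty.png").convert("RGB")
shadow = Image.open("frame_0001_cpu_shadow.png").convert("RGB")

# Multiply darkens the beauty pass wherever the shadow pass is dark,
# the classic way to composite a shadow pass.
final = ImageChops.multiply(beauty, shadow)
final.save("frame_0001_comp.png")
```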

You may need to write your own shaders to get a fast GPU-rendered depth pass… or you may do that on the CPU, for example.
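A GPU depth pass really is only a few lines of GLSL. A sketch, assuming you normalize by a caller-supplied far distance (the uniform name is my own):

```python
# Sketch: a minimal GLSL pair for a GPU depth pass. Writes eye-space
# depth, normalized by a caller-supplied far distance, to the color
# channels. The uniform name and normalization are assumptions.
DEPTH_VERT_SRC = """
varying float eye_depth;
void main() {
    vec4 eye_pos = gl_ModelViewMatrix * gl_Vertex;
    eye_depth = -eye_pos.z;  // distance in front of the camera
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
"""

DEPTH_FRAG_SRC = """
varying float eye_depth;
uniform float far_dist;  // depth that should map to white
void main() {
    float d = clamp(eye_depth / far_dist, 0.0, 1.0);
    gl_FragColor = vec4(d, d, d, 1.0);
}
"""
```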

I’m pretty sure that the initial outlay in setup will easily be recovered by the fast turnaround in rendering.
On my next project I aim to prove that (researching right now WRT realtime).
As for big studios not using this technique: it has only come into its own relatively recently, and by the end of the next year or two I think it’ll have been seen a lot more!

It’ll certainly revolutionize the smaller companies, though, who are at the “sharp end.”

Just my thoughts, for what they’re worth.

I bet there are also a lot more cabs with computers hooked up in them than what that image is showing.

To think they let visitors look when the cabs have no locked doors on the front or back is crazy, though. 1 row = about 351 computers, btw. No doubt some are multi-core. I’d expect they have several more rows, some full of hard drives.

I don’t really get people comparing graphics cards to general-purpose CPUs. Try running Blender on a GPU. In fact, try rendering anything on a GPU without a basic all-purpose CPU. Yup, you won’t be able to.

Joo mean, $0.02. No?:wink:

FYI, the Blender GUI is OpenGL :stuck_out_tongue:

Stop bîtching! This is why so many outside developers don’t want to help put GLSL or plain GPU computation offloading into Blender. Too many politics. Too much crying.

Someone, please just sit down and make an effort to add it in, and see where you get. It might be easy for you or not. I myself want it but don’t want to try anymore.

Other FYI: Pixar and all of Hollywood use those huge render farms to render MEGA-HIGH-RES images, higher than Blu-ray. It’s also called future-proofing: when the tech catches up, they can re-release at that larger format far more easily and completely.

So true, they do use those for a single frame. But for small projects that even pro shops spit out at HD, we wouldn’t need stuff like that yet for the most part. Render farm, yes. Render city, no. Would be nice :D. But hell, I can’t even get Blender to max out my CPU for rendering.

The thing that drives me most towards Blender, and has for quite some time now, is the fact that it isn’t a “one size fits all” system. That it does have the benefit of an international developer-community that is building “what it needs.” The fact that many developers are busy meeting many different “needs” is favorably reflected in the product.

I’m happy to see photos of Pixar’s render-farm. They do incredible stuff with it. I’m not Pixar. I’m doing video, and it doesn’t even have to be “photo-realistic.” What it does need to be is, well, “frames per a-few-seconds.” Not “a few minutes.”

Notice that I am not asking for “the same thing” faster. I’m not asking for “the same thing” that the software renderer does. I simply want the hardware to work harder for me… Take the information in my Blend-file and make the hardware do as much of it as the hardware can, without a separate and cumbersome export. “Game quality” is more than good enough for my purposes.

I am watching with very intense interest the improvements that seem to be active in the “3D preview” feature, hoping that this might be part of the solution that I need, because I do use that feature to produce animatics. Being fluent in Python and so forth, I’m watching the various Python bindings … Python-Ogre, Python-OpenGL … but always being checked by the notion that “I should just be able to push a button and get this…” I’ve found a lot of GameBlender “tricks,” but they all do feel like “tricks.”

You and anyone else in this thread are 100% free, if you feel that urge, to code
a GPU renderer yourself or to hire a team of master programmers to code it.

Actually, the only GPU-accelerated renderer is Gelato; not even the commercial (paid) 3D apps have one. The whole community would thank you for such help.

P.S.: The more badly you need it, the more you’ll be willing to pay to have it done soon, and the faster the whole community will thank you!

“professional film” … “me” … “my” … “I” … Pixar

And I thought the six I was linking together were neato.

Go to Hell, Pixar, with your shiny amazing toys.

Interesting…

While stumbling about the Internet, I “stumbled upon” the GLSL Shaders feature (Google Summer of Code 2007) and it looks like this is a whole lot of the thing that I am looking for.

e.g. http://blenderartists.org/forum/showthread.php?t=99584

http://wiki.blender.org/index.php/User:Maike/SummerOfCode2007

I get the impression … and it’s not quite clear to me … that a very substantial effort is being made to bring GPU-acceleration into the rendering pipeline as a node-type; that is, not to replace the existing CPU-based rendering but to augment it. Which would of course be the best arrangement.

What other stuff have I “not yet stumbled upon?” :rolleyes:

Edit…

Specifically what I was looking at was this:

http://www.k-3d.org/wiki/Google_Summer_of_Code_2007#Near-Realtime_Rendering_Engine_Using_GPU

:rolleyes: Uhh, am I, like, totally in the wrong product here? :rolleyes: Looks like it. :slight_smile: My bad… But I’m glad that I stumbled across it.

It looks rather like you could do a lot of this sort of stuff with … Python. In Blender, that is. :rolleyes:

I do see a lot of interesting stuff happening with the preview, to the point of a video being on YouTube in June. All of which is obviously using the GPU quite heavily… and not intending to replace the CPU-based pipeline but to augment it. Quite bewildering…