GPU rendering is a good thing, and it can already be useful to an extent today.
Seeing some of the GLSL shaders bandied around on this forum, even effects like subsurface scattering are doable at pretty good frame rates on a decent graphics card.
If Blender had the ability to render to a texture (I hear someone has this working on Windows), that would open up some pretty good shadowing options.
Further, some of the GLSL post-processing stuff is looking very good right now.
With a bit of extra work, that could lead to all sorts of realtime post-processing possibilities.
There are three “problems” with this right now:
1) To make use of it you have to know how to write and use GLSL shaders… if your business depends on this, you can learn!
2) This is perfectly workable if your aim is “near realtime rendering”, but right now turning on all the bells and whistles will eat a realtime frame rate pretty quickly… although you could still render antialiased at high definition hundreds of times faster than doing it “properly”.
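To give a sense of what learning GLSL actually involves, here's a minimal sketch of a per-pixel diffuse fragment shader. The uniform and varying names (`baseColor`, `normal`, `lightDir`) are my own illustrative choices, not anything Blender hands you automatically:

```glsl
// Minimal fragment shader sketch: per-pixel Lambert diffuse.
uniform vec3 baseColor;   // flat surface colour, set by the host application
varying vec3 normal;      // interpolated surface normal from the vertex shader
varying vec3 lightDir;    // direction to the light, also from the vertex shader

void main() {
    // Standard Lambert term: clamp the cosine of the angle between
    // the normal and the light direction to zero for back-facing light.
    float diffuse = max(dot(normalize(normal), normalize(lightDir)), 0.0);
    gl_FragColor = vec4(baseColor * diffuse, 1.0);
}
```

That's about as simple as a useful shader gets; the point is that the learning curve is real but not enormous.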
3) You may need to do much more initial setup than you’re used to. Radiosity effects or AO can be baked offline and stored as a separate texture pass, and you may need to prepare lots of environment maps to get good reflections. If you know what you’re doing, this initial outlay on setup will soon be made up by the speed of rendering the results. You may also need to get much better at maths to write refraction shaders, etc…
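As a sketch of the baked-AO idea: you bake the occlusion offline into a texture, and at runtime the shader just multiplies it into the diffuse result, which costs almost nothing per frame. The sampler names here are mine, and I'm assuming the AO was baked into the same UV layout as the colour map:

```glsl
// Fragment shader sketch: modulate a diffuse map by an AO pass baked offline.
uniform sampler2D diffuseMap;  // ordinary colour texture
uniform sampler2D aoMap;       // greyscale AO, baked offline, same UVs
varying vec2 uv;               // texture coordinates from the vertex shader

void main() {
    vec3 base = texture2D(diffuseMap, uv).rgb;
    float ao  = texture2D(aoMap, uv).r;   // occlusion term in the red channel
    gl_FragColor = vec4(base * ao, 1.0);  // darken occluded areas
}
```

One texture fetch and a multiply buys you an effect that would otherwise need raytracing every frame.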
Concluding:
I think that there are many parts of this process that could be “eased” by the development team, but as others have mentioned, if you put the effort in yourself you can already get better results with GPU rendering than many in this thread seem to think.
At the very least, the outlay on learning will make the first project tough, but once learned, every future project can be “ripped through”.
Right now you may need to composite some passes rendered on the CPU with others rendered on the GPU… shadows, for example, or raytraced reflections if you absolutely need them. Does this stop it being of any benefit right now? Hell no!
You may need to write your own shader to get a fast GPU-rendered depth pass, or you may do that on the CPU, for example.
I’m pretty sure that the initial outlay in setup will easily be recovered by the fast turnaround in rendering.
With my next project I aim to prove that (researching realtime right now).
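A GPU depth pass is a good example of a shader you can knock out quickly. Something along these lines would do it; the `farClip` uniform is a value you'd set yourself from your camera settings:

```glsl
// --- vertex shader sketch: compute eye-space depth, normalized to 0..1 ---
uniform float farClip;   // your camera's far clip distance (set by the host)
varying float depth;

void main() {
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    depth = -eyePos.z / farClip;   // eye space looks down -Z, so negate
    gl_Position = ftransform();    // fixed-function-equivalent transform
}

// --- fragment shader sketch: write the depth as a greyscale colour ---
varying float depth;

void main() {
    gl_FragColor = vec4(vec3(depth), 1.0);
}
```

Render the scene once with this pair bound and you have a depth pass ready for compositing or post effects.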
As for big studios not using this technique: it has only come into its own relatively recently, and by the end of the next year or two I think it’ll have been seen a lot more!
It’ll certainly revolutionize things for the smaller companies who are at the “sharp end”, though.
just my thoughts for what they’re worth