Blender Render + GPU Acceleration

…u mad?

No, not mad.

They just do not know that they do not know.

You’re right. First of all, YafaRay is in many ways not ready for animations; the first reason is the lack of raytraced motion blur. Implementing raytraced motion blur is going to be hard, at least in YafaRay; it means writing and rewriting code all over the place. There are other reasons, though. Many GI algorithms will have to be tuned to work consistently across animations, and the YafaRay scene description spec does not support transformation matrices. A huge task.

About GPU rendering, I’m sceptical and enthusiastic at the same time. With common hardware configurations, for instance desktop computers, GPU rendering is not going to deliver the boost some people are touting, especially with complex scenes. You will need workstation-grade hardware all around to get significant gains.

On the other hand, it does have interesting capabilities. The first and most important is to make raytracers run like game engines, for real-time purposes. The question many raytracer developers are asking themselves at this point is whether they want real-time or animation capabilities, and what the opportunity cost is when they make a choice in either direction.

With current hardware specs, people are writing different raytracers for each purpose rather than one raytracer that works across the board, which is rather silly, because it means you have to branch your software into two engines that have to work as a single one. Not good.

Nobody knows what the future will bring. AMD is going to release a GPU on the CPU die. Multicore CPUs will keep on growing. Larrabee is dead, but maybe someone will give it a second chance. The raytracing field is becoming increasingly complex, and any decision carries an increasingly expensive opportunity cost.

I really liked the FurryBall videos. Not everyone needs raytracing. It would be nice if Blender had better OpenGL/GLSL rendering suitable for final output. Such a renderer would probably also be much easier to write than something like YafaRay, since the programmer wouldn’t have to handle a lot of things (like rasterization) himself. But I am not really an OpenGL programmer, so I don’t really know. :slight_smile:

What the Blender Internal renderer REALLY needs is to get sorted out on a more basic level. Yeah, BI is great for basic animations without ray-tracing, but even then it still has a lot of problems.

I still like the idea of separating it out so that it can be worked on as a GSoC project in its own right. From what I’ve heard (though I’m not a coder), a large part of the problem with working on it is that the code is such a mess.

Once it can do half the stuff that YafaRay can do (caustics, non-distorted image mapping, etc.), then we can think about accelerating it. Putting GPU acceleration on the existing code is like sticking a rocket engine onto the back of a garbage scow.

I get the point that rttwlr is trying to make. For what it’s worth, I enjoy the Blender Internal renderer; I think great work can be done with it, but I really miss GI. In my opinion it is important to highlight the positive sides of this great tool that we have for free, but what has made it a great tool so far is the good communication between the developers and the community. I hope the rendering engine gets more attention for the next release, because the modeling, texturing and animation in Blender are awesome to me: simple, fast and intuitive. Cheers to the development team and the community!

Well said. I wasn’t suggesting at any point that GPU acceleration should be added to the existing BI code.

Having done many commercial animations, I’ve faced too many times the situation where you have to explain to a client what rendering is and why it takes so long to get results. And who can blame them, when they see their kids play Gears of War at home while I’m struggling with a simple logo animation? Yes, the reason games look as good as they do has to do with prebaked lighting, the lack of raytracing and all that optimization, but still, as FurryBall, the ‘DX11 renderer’, shows, the realtime methods used by game engines can really be useful in production. Less ‘physical accuracy’, more faking.

As we all know, seeing MLT/BiDir-based indirect lighting produce breathtakingly pretty dispersive caustics is definitely better than sex. But of course, when you have to deliver 25-30 frames per second, you are likely to seek a faster solution. And I’m still not convinced that so-called biased rendering is the best way to get results faster.

It’s always nice to leave your workstation rendering an animation through the night, just to find an unusable, flickering animation in the morning. So my intention was not to diss BI, but to point out that this kind of tool may not be the best way to go. mental ray and VRay are good, but they do have their own problems. I’m still amazed that after all these years, something as common as glossy reflections is so slow to calculate.

One can produce amazing results with just about every renderer out there, but that doesn’t mean the renderer is necessarily really good. It just says that the artist has skillZ and is able to use his tools efficiently. Of course the GPU won’t solve everything, and there are things like memory limits that can be a huge problem. But it’s interesting territory, and who knows what the future holds.

Who said AO is the best way to fake GI? http://www.youtube.com/watch?v=Z9u8EdFbmiI

I like your style, rttwlr…

That’s a great idea. I’ve pushed for that before in other threads. The Blender game engine can do some amazing stuff (parallax mapping, god rays, etc.), but you have to be an OpenGL programmer to get at it. If we could enable it simply for the viewport, we’d have pretty much instantaneous renders that look amazing. Every time I’ve suggested this I’ve been shouted at in other threads. Nobody looks. They all just want a physically accurate renderer (even though we have a few already).

The biggest laugh from the “physically accurate only” crowd is that they are crowing about the abilities of the Mitsuba renderer. I bet a lot of them have no idea that the wonderful-looking (and nearly instantaneous) preview render you get at the beginning is all done with OpenGL!

Do you guys know what biased opinions and a biased style of argumentation are?

Just checking …

Because I notice that every one of you compares GPU rendering to its application in character animation.

What about accepting the GPU for what it is and what it is not intended for? That could save some embarrassing comments and judgments.

Maybe a visit to SIGGRAPH or other 3D solution presentations could help clarify what is going on in the industry, and not only on your home PC.

A lot of the people who want a physically accurate renderer in Blender are ignoring the fact that Blender is an animation programme at heart. Yes, arch-viz people need a physically accurate renderer for stills, but they completely ignore the fact that we also need a way of doing quick animations. They just don’t want to know. If you point this out to them and try to suggest other approaches that don’t take 30 hours per frame to render, all they come out with is “… but it’s not physically accurate” and “… in a few years, Moore’s law will sort out your animation problems, so stop trying to stand in the way of progress”. None of this solves the problems of the people who need something quick, dirty and cheaty that will get their animation out the door in under 20 years’ time. For animations, cheating is brilliant, sexy and very smart.

The really annoying thing is that we already have Lux Render, which right now will give you realistic results. It takes a ridiculous amount of time to render, but it looks stunning when it does. It’s an amazing piece of kit… but even with the promised GPU speedup, I don’t think it’s going to be that suited to animations.

Both things you say are true. I was directing my statement more towards those who are against GPU rendering or GI in general.

But I am not sure whether Blender is only for animation anymore.

However, I do not think that many want unbiased rendering, as opposed to simply an improved material and render system. Ton, as far as I know from talking with him, is very much in favor of upgrading the render module if somebody could do it.

Just because some who are new to 3D, or do not grasp it fully, want the features they read about and maybe repost their wishes, those who are more educated in the area do not need to be alienated. I am pretty sure the community that truly wants unbiased rendering in Blender is rather small.

As annoying as some of the unbiased evangelists can get, just as annoying are all those anti-GI and anti-GPU fighters.

VRay, MentalRay and the like are also used for animation, and they offer GI as well. So I cannot understand it when people pull out the line that GI is not usable and we do not need it, when the industry has already been using it for a long time.

GI can mean many things, from basic color bleeding to caustics. I am very sure the first is pretty usable, while for the last one you would have to be insane to consider it for an animation.

I think we are on common ground here: a flexible render pipeline which can turn specific features on and off, and thus produce faster or slower renderings depending on the options you select.

If you guys knew what the word ‘unbiased’ meant in terms of rendering, I wonder whether you would go on so much about it.

Yes.

Good article about the subject:
http://www.thearender.com/cms/index.php?option=com_content&view=category&layout=blog&id=12&Itemid=38

That’s rather short, but here is another one: http://www.cs.caltech.edu/~keenan/bias.pdf. In the context of most modern ray tracers, it is a pretty misguided way of looking at them to try to label them biased or unbiased. You just have to change which render mode you are using, and the same render engine can go from biased to unbiased. Lux and YafaRay both offer photon mapping, bidirectional path tracing, direct lighting and path tracing to some degree; some methods are more developed than others depending on the render engine, but they are there. I have never used them, but I am sure VRay and mental ray would have a similar list of integrators.
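To make the distinction concrete, here is a minimal sketch of what ‘bias’ means for a Monte Carlo renderer (plain Python with made-up numbers; nothing here is engine-specific, and radiance_sample is just a toy stand-in for a light-path sample). An unbiased estimator is noisy on any single run but correct on average; clamping bright samples, a common biased trick for killing fireflies, trades that correctness for much less noise:

```python
import random

def radiance_sample():
    # Toy stand-in for one light-path sample: mostly dim,
    # occasionally a very bright "firefly" (e.g. a caustic path).
    return 100.0 if random.random() < 0.01 else 0.5

def unbiased_estimate(n):
    # Plain Monte Carlo average: noisy, but its expected value is
    # the true answer, and the error shrinks as n grows.
    return sum(radiance_sample() for _ in range(n)) / n

def biased_estimate(n, clamp=2.0):
    # Clamping each sample removes fireflies (far less frame-to-frame
    # flicker in an animation), but the average now converges to a
    # darker value than the truth, no matter how large n gets.
    return sum(min(radiance_sample(), clamp) for _ in range(n)) / n

# True expected value here: 0.01 * 100 + 0.99 * 0.5 = 1.495
print(unbiased_estimate(100000))  # about 1.495, varies run to run
print(biased_estimate(100000))    # about 0.515, consistently too dark
```

That controlled, consistent error is all ‘biased’ means in this context; it is not a synonym for low quality.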

But as far as BI is concerned, there are not many people working on it. So I think that for the short and medium term, learning to love what you’ve got is the best way to go.

I am not an animator, but in terms of animation, shouldn’t you guys be whining more about motion blur and micro-poly displacement? Unless you like your movies looking like in-game cinematics.

There’s more to GPU computing than rendering; it can chew through your fluid sims and particles at a rate your CPU can never match. Besides that, I think the interactivity you get with most of the lightweight OpenCL-based renderers is nice (even if the objects are only the Stanford Buddha or bunnies :P).

Why would you ever want to render motion blur? That’s… awkward and backwards to me. I wish renderers would totally leave motion blur out of the rendering; it’s a post-process effect I can control myself in compositing.

You CAN do animation in YafaRay. Pretty nice ones, if your name is matray.
http://www.vimeo.com/9863121

@Tyrant
I wouldn’t really back that up. Octane still can’t render particles, hair, etc., so it can’t completely replace BI.

From the article linked by rttwlr:

… The quality comes from other elements though; materials, textures, geometric complexity and of course user skills and artistic talent…

This. The bestest, fastest renderer in the world won’t make my crappy models and inept texturing look one bit better.

That is only true for the simplest of cases.

Aermartin,

As with DoF, there are ways to cheat with motion blur as well, but depending on the realism one is looking for, it will not work in all cases. Take a look at how motion blur via the vector pass is computed and you can see the problem. The same goes for DoF.
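For anyone who has not looked, here is a rough sketch of gather-style vector blur (illustrative Python/NumPy, not Blender’s actual compositor code; the function name and step count are mine). Each output pixel averages samples fetched along its own 2D screen-space velocity from the finished frame, which is exactly where it breaks down: anything occluded during the exposure is simply not in the image to be revealed.

```python
import numpy as np

def vector_blur(image, velocity, steps=8):
    """Blur a finished frame along a per-pixel 2D velocity pass.

    image:    (H, W, 3) float array, the sharp rendered frame
    velocity: (H, W, 2) float array, per-pixel screen-space motion
              in pixels (the kind of data a Vector/Speed pass holds)
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(image)
    for i in range(steps):
        t = i / (steps - 1) - 0.5  # sample offsets from -0.5 to +0.5 of the vector
        sx = np.clip((xs + velocity[..., 0] * t).astype(int), 0, w - 1)
        sy = np.clip((ys + velocity[..., 1] * t).astype(int), 0, h - 1)
        out += image[sy, sx]       # gather from the already-rendered 2D image
    return out / steps

# The limitation in one sentence: every sample comes from the finished 2D
# frame, so changing visibility, occluded geometry and rotation inside a
# pixel are lost; true 3D motion blur samples the scene at several times
# within the shutter interval instead.
```

The Z-pass DoF cheat fails for the same reason, which is the parallel drawn above.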

I feel having both is still good.

Agentmilo,

who says Octane is there to replace Blender Internal? Where is that written?