What rendering improvements are needed?

Love the current shading with all its tricks and hacks, but the shading recode would be the first improvement to go for, imho.

Point-based approximate color bleeding would be cool.

One thing I noticed (not really a bug): I had a material with mirror properties, and I could see another object in the reflection. Later on I added a post-processing effect, focus blur. The object got blurred, but its mirror image in the reflective surface of the other object was crystal clear (perfectly in focus, as renders are).

That’s weird, and probably impossible to solve, as focus is a post-process effect. Or can you get the alpha channel of an object and rework its path in reflective surfaces via raytracing?
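For anyone wondering why the reflection stays sharp: a post-process focus blur typically derives each pixel’s blur radius from the depth buffer, and a mirror pixel stores the mirror’s own depth, not the depth of the object it reflects. Here is a toy 1-D sketch in Python (not actual Blender code; the image, depths, and `scale` factor are made up for illustration):

```python
def blur_radius(depth, focus, scale=2.0):
    # circle of confusion grows with distance from the focal plane
    return int(round(scale * abs(depth - focus)))

def depth_of_field(image, depth, focus):
    out = []
    for i, d in enumerate(depth):
        r = blur_radius(d, focus)
        lo, hi = max(0, i - r), min(len(image), i + r + 1)
        out.append(sum(image[lo:hi]) / (hi - lo))  # box blur of radius r
    return out

# pixel 1 is a distant bright object; pixel 3 is a mirror at the focal
# distance that happens to *show* that distant object
image = [0.0, 1.0, 0.0, 1.0, 0.0]
depth = [2.0, 10.0, 2.0, 2.0, 2.0]  # the mirror pixel stores the mirror's own depth
result = depth_of_field(image, depth, focus=2.0)
# the object itself (result[1]) gets blurred; its reflection (result[3]) stays sharp
```

Because the blur only ever looks at the stored depth, fixing this would indeed require the renderer to track the reflected ray’s path length, which is raytracing-time information, not post-process information.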

Anyway, it’s not a physically correct renderer, I know, but it’s small stuff like that to mimic.

I think that we need a quick and dirty approach to some kind of GI for animation purposes. It doesn’t need to be really physically accurate or correct. Lux Render and Yafaray will do that nicely right now and by the time the Render API is complete we will have a variety of different high-quality path-tracers, photon mappers and ray-tracers to choose from. Don’t bother with getting that physically correct stuff into BI. I don’t think the BF can really muster enough resources to beat the likes of V-Ray. The focus of the BI should be a FAST method of faking all that crap that focuses on getting quick results for home animation enthusiasts. People who don’t have renderfarms and have to get render times down to a few minutes per frame on one or two machines.

  1. Either clean up what was started with the Render Branch and add in what is missing to get it up to scratch.

  2. Dump it and go for something based on Renderman. This would include some way of creating and editing RIB files in Blender. The reason I suggest this is that it is used by a lot of firms in professional production, and having Blender able to work with RIB files and RM shaders would automatically make Blender a quasi-professional tool. Also, there are lots of existing implementations of it, and someone tried it already (Q-Dune), so it shouldn’t take Albert Einstein to get it up and running in Blender. Renderman is proven to be of industry quality and is an industry standard. It’s not going to be some weird hybrid thing that might or might not work or even see the light of day (like the render branch :frowning: )

GPU rendering or GPU+CPU acceleration, similar to the LuxRender method. You will say “use LuxRender”, but LuxRender is not 100% compatible with the Blender workflow; you cannot use layers, for example. It would also be nice to render directly with the GLSL shading (effects and filters included).

EDIT:

Also IBL or HDRI shadows.

I think two things could greatly improve images quality.

Shader/material:
we should definitely go for a BXDF implementation instead of the existing one. That alone would be a big step forward in quality, even without true GI.
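For what a BXDF implementation buys you in practice, here is a toy sketch in Python (not Blender code): a physically based diffuse BRDF is just albedo/π, and that normalization guarantees the surface never reflects more energy than it receives, which ad-hoc shading models don’t enforce. The Monte Carlo check below assumes uniform hemisphere sampling and an arbitrary example albedo of 0.8:

```python
import math
import random

def lambert_brdf(albedo):
    # physically based diffuse BRDF: the 1/pi normalization guarantees
    # the surface never reflects more energy than it receives
    return albedo / math.pi

def hemisphere_reflectance(albedo, n=100_000, seed=1):
    # Monte Carlo integral of brdf * cos(theta) over the hemisphere,
    # with uniform hemisphere sampling (pdf = 1 / (2*pi))
    rnd = random.Random(seed)
    f = lambert_brdf(albedo)
    total = 0.0
    for _ in range(n):
        cos_t = rnd.random()  # for uniform hemisphere sampling, cos(theta) ~ U[0, 1]
        total += f * cos_t * (2.0 * math.pi)  # divide by the pdf
    return total / n

r = hemisphere_reflectance(0.8)
# r converges to the albedo (0.8): energy conservation holds by construction
```

A whole BXDF system is of course far more than this one function, but the point stands: once materials are built from normalized BXDFs, any GI algorithm you bolt on later gets believable energy behaviour for free.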

Renderer algorithm.
An algorithm for true global illumination should really be added to Blender internal. The Render25 branch irradiance cache is nice, but so far too limited to a few cases.

Path tracing (+irradiance cache): can do everything except difficult light conditions and caustics
Bidir tracing with/without MLT: can do everything, especially difficult scenes, though can take a long time to render
Regular photon mapping: can do everything, but is constrained by memory, which in practice makes it not that great for detailed scenes / difficult lighting conditions
Progressive photon mapping: is roughly on par with MLT, better at caustics but worse at difficult conditions
Point based GI: only diffuse-diffuse, and some very glossy specular, but great at detailed scenes
Light cuts: mostly good for diffuse-diffuse, though can also be extended to do some specular, no caustics as far as I know
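The “difficult light conditions” caveat in the list above comes down to sampling variance. A toy sketch in Python (not Blender code; the “sky patch” scene and all numbers are made up for illustration) shows why naive path tracing gets noisy when most of the light comes from a small source:

```python
import random

def estimate_lighting(patch_fraction, n, seed=1):
    # Toy direct-lighting estimate: the "sky" is black except for a bright
    # patch covering `patch_fraction` of the sample domain, scaled so the
    # true answer is always 1.0.  Naive uniform sampling rarely hits a
    # small light, which is exactly the difficult-lighting noise problem.
    rnd = random.Random(seed)
    radiance = 1.0 / patch_fraction
    hits = sum(1 for _ in range(n) if rnd.random() < patch_fraction)
    return hits * radiance / n

def noise(patch_fraction, runs=30, n=2000):
    # standard deviation of the estimate across independent runs
    vals = [estimate_lighting(patch_fraction, n, seed=s) for s in range(runs)]
    mean = sum(vals) / runs
    return (sum((v - mean) ** 2 for v in vals) / runs) ** 0.5

big, small = noise(0.5), noise(0.005)
# the small light gives a far noisier estimate for the same sample count
```

Both estimators are unbiased, but the small light’s variance is enormous, which is why techniques like bidirectional tracing, MLT, or photon mapping exist: they spend their samples where the light actually is.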

This seems to show that right now, if you want the ultimate GI solution that can light highly complex scenes with caustics and all GI-based features, you’ll have to wait a while, as the fast solutions may not be able to completely pull it off.

So in other words you might have a quick and dirty GI solution for animation and a slow-rendering, but high quality GI for stills?

Well, at least you shouldn’t follow the buzzwords like lightcuts (useless because it doesn’t add full GI or even better reflections), Renderman (that’s for people who get an orgasm when they hear “Pixar”), MLT / unbiased rendering (that’s for more people having an orgasm, and there’s already LuxRender for that), full dependence on point based methods (not better than photon mapping / final gathering or something like VRay’s light cache), etc.

Like others, I think Brecht’s shader/material refactor should be finished first, because that’s what actually defines the look of the renders, be it realistic or not. After that, support for micropolygon displacement, because these days it’s all about high poly models (think Blender sculpt mode, ZBrush, Mudbox, Sculptris) and they should be renderable quickly without falling back to hacks like normal mapping or render times that are way out of line.

After that, indirect lighting. The real deal, please. Together with caustics, because that’s not just a buzzword, it actually makes scenes look even better when using those sexy new dielectric materials from the upgraded material system.

If you can find a developer who’s able to implement that, so not a developer who’s just starting out on it, you’ll have my money.

Render calculations made only on the displayed (and interacting) meshes or effects. There’s a lot of time to be saved there.

Hey guys, you are missing one point:

the Blender internal renderer could use LuxRays for its raytracing. That’s an OSS library for raytracing only, and it will soon be OpenCL accelerated, instead of using the old Blender internal raytracing code.

I would say that at first it would be good to have the render branch integrated into trunk; for animation, the speed vs. quality is quite good.

Then,
1. Integration of a new shading system would be a real plus.
2. For the algorithms, I have a preference for bidirectional path tracing (with GPU support using LuxRays, as aermatin points out?).
3. Wouldn’t it be possible to finish the Lightcuts integration that is already partially done, from the last GSoC?

Yes, BxDF would be really nice. Also, not as much of a rendering engine feature, but I heard something about shader nodes…?

I can tell you why I usually choose the BI over external options: it’s fast when using buffer shadows for animations. It’s a PITA to get materials and lighting set up right for a good clean look, and I like that it’s kind of a “hybrid” system allowing as much raytracing as you’d like to add. But once you have it set up for an acceptable look, it just gets the job done PDQ.

+1 for this… a node-based BxDF setup would be a great thing.

What about taking the API dev to a next level to make blender entirely API-based for output. Since rendering and shaders are so closely inter-related, why not move shading/materials to a plug-in-able system that would allow multiple shader options, keeping the current “hack” system while adding a modern shader setup alongside (and where others could also plug in, like Renderman shaders or NPR materials systems?)

What about moving the BI to an “external” engine, maybe even “spun off” into a separate BF development effort? It would then plug in to the API and work as an option alongside other engine choices, any of which integrates into the node compositor and layer setup.

Too complicated? Not feasible? IDK enuf about this to say, so I’m posing the question…

well, we’re talking about Blender internal; of course, going down this route, Blender could use Indigo, or Yafaray, or Aqsis… but what’s the point of that?

I think physically correct materials are the way to go, but not physically correct renderer imho.
There is no point in converting blender internal in yet another damned slow unbiased renderer, almost impossible to tweak for artistic needs.

Exactly. BI needs to be FAST. If you use an unbiased approach you are (a) duplicating loads of work which has already been done, and done better, in other renderers, and (b) locking out all the animators who want to render their own little home-made Pixar short.

If you want to render your Arch-Viz stuff or other still images, then you are really spoiled for choice - LuxRender, Yafaray, Indigo, V-Ray etc. If you are an animator who is looking to render their short in a decent amount of time and you want some amount of quality, then you are stuffed. Here we have a pretty good animation application, but no way to get your images out to an avi file in any kind of quality. That is silly.

You need to either carry on what has been developed so far or break and go with something established. Those are your two choices.

Why is it that whenever we talk about realism, you talk about giving up artistic abilities and influence, and assume render times as slow as systems like LuxRender?

Maybe you guys should work with VRay.

I set up a blurred material and it renders as fast as Blender can render a non-blurred material. Make Blender render it blurred and smooth, and it will take longer than any GI system I know.

Just because it is GI does not mean it is slow, or that much slower.

Also, in Blender, when you use the irradiance cache, it can take a long time for it to be built before you can render.

In every good GI system there are ways to set it up more for speed or more for realism, as you prefer.

But VRay’s raytrace core is light years faster than what Blender can put on the table.

BTW, VRay can also produce stunningly realistic renderings, I would say pretty close to what Maxwell can do.
Modern biased engines can be quite accurate and are getting close.


After much thought, I also align myself with those who think this way:

BXDF material system
Raytrace speed up
Irradiance cache finish
Micro Polygons also with correct shadow calculation

Possibly adding the photon mapper and FG by Farsthary, unless the way it is coded makes it impossible or unsuitable to use.


And then from this standpoint we can see where we are and what else we can do.
I am not sure how easy for example LuxRays can be integrated.
How hard would it be to make a true node-based material generator, in contrast to the current material mixer?

Well, but I’m not excluding stills at all!
I suspect the majority of Blender users are producing stills, and what they need are features to render better stills, like true global illumination, micropolygons, caustics, etc. Of course this should not exclude animation or speed.

Actually, I think everything in the recent render25 branch has been developed with this in mind: making animations with speed as a priority, for Durian’s needs.

Hmmm… a little too much maybe???

Yes, speed is something that’s missing from the BI renderer nowadays. The renderer is good, but slow compared with other internal renderers out there (e.g. Cinema 4D: their internal renderer is fast and has many features that serve both the abstract artists (“GI, YAFARAY and LUX can go to H***!!”) and the realistic artists (“if it doesn’t look like YAFARAY, VRAY or LUX, just DIE!”); for an example see http://www.youtube.com/watch?v=ZifwQ9BJ6Kk).

Now, this will probably sound pretty much like madness, but what about a full rewrite of Blender internal? Yeah, it’s not a small project, but nobody will have a gun pointed at the developers’ heads if this is done in a separate branch. With all due respect to Ton, Brecht and Co., the internal renderer is starting to show its age, and a recode from zero, using techniques that are hard to implement in the present engine, would probably be a good idea. Plus, with all the experience gained, the final result would be better suited to being a “jack of all trades” fast render engine. And for developers it would probably be easier to follow a clean implementation than to build on a base that (as some people have stated; I don’t know if it’s true or not) very few and brave people dare to approach.

Food for thought, nothing more than that.

BI should be able to render toony materials as well as perfectly physically correct pictures.
Blender is already good at toony materials, now what we need is real GI.

Shader recode, GPU render, Progressive photon mapping. Drool…

Maybe we could get this, when rendering:
It draws a highly pixelated version first, then draws a better version of it, and so on until it gets the final image.
This way we could see our shaders at work instantly, and fix things we don’t like, without having to wait forever staring at a black screen…
I think XSI has that, but I don’t quite remember…
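The coarse-to-fine preview described above can be written as a small refinement loop. This is a toy sketch in Python (not Blender code; `shade` is a hypothetical stand-in for the real per-pixel shading work, and the pass sizes are arbitrary):

```python
def shade(x, y):
    # hypothetical stand-in for an expensive per-pixel shader
    return (x * 7 + y * 13) % 256

def progressive_render(width, height, passes=(8, 4, 2, 1)):
    # each pass shades one sample per block and fills the block with it,
    # so a coarse, pixelated preview appears quickly and is refined in place
    image = [[0] * width for _ in range(height)]
    for block in passes:
        for y in range(0, height, block):
            for x in range(0, width, block):
                v = shade(x, y)
                for yy in range(y, min(y + block, height)):
                    for xx in range(x, min(x + block, width)):
                        image[yy][xx] = v
        yield block, [row[:] for row in image]  # snapshot for display

frames = list(progressive_render(16, 16))
# frames[0] is the blocky 8x8 preview; the last frame is the exact final render
```

In a real implementation each intermediate frame would be pushed to the viewport, so you see a rough result after the first pass and can cancel early if the shading looks wrong.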

I actually agree with the full rewrite…

free mind

we already had a progressive render engine, but it was ditched for whatever reason. I found it quite usable, because it acted like Hypershot and quickly gave some approximation of your final rendering.