just curious what specific renderer improvements you feel are wanted/needed to the Blender Internal engine.
> Finish implementing AA for reflections; those are not clean.
I am not sure if this is shader-model related. For example, Blender's glossy materials are grainy, and when smooth, very slow. This might also have something to do with the raytracer code. In Yafaray the glossy material renders significantly faster and also produces very good results.
That type of material would be nice to have, since it is so important for brushed and not-fully-glossy surfaces, which have very fine blurred reflections.
Because of Blender's slow render speed in that area, these materials are difficult to render cleanly and quickly, which often degrades Blender renderings. Those blurred highlight and reflection touches on a surface are often what create the realistic feeling in GI work.
This is important because nearly every object in real life is somewhat reflective, and many surfaces show the color of the environment in the reflection, adding a color element that is also needed for realism.
I am not sure if it would make sense to switch to the Yafaray shader model. While easy to use, it also limits the tricks you can do with it.
In Blender I can easily place extra lights and create fake but visually needed specular reflections where I want them.
> Photon Mapping with Final Gathering
> Irradiance Cache with Light Cache
Photon Mapping might not be the most modern system but I had quite good experience with PM and FG with MentalRay in Maya.
The Irradiance Cache with Light Cache combo I mainly know from V-Ray inside Max and Rhino. It renders pretty fast, but I do not know if there are any public papers on Light Cache, which seems to be a Chaos Group in-house product.
I find the color bleeding that the current Render Branch can deliver is not bad, and very good for coloring the diffuse illumination for animations and some stills, but caustics are not possible with it.
The current IBL approach also seems to be a bit short on color definition, feeling a little flat, but that could be me.
I asked Brecht for his opinion regarding GI and caustics, and he replied:
Well, there’s not that many algorithms (that are actually useful):
Path tracing (+irradiance cache): can do everything except difficult light conditions and caustics
Bidir tracing with/without MLT: can do everything, especially difficult scenes, though can take a long time to render
Regular photon mapping: can do everything, but is constrained by memory, which in practice makes it not that great for detailed scenes / difficult lighting conditions
Progressive photon mapping: is roughly on par with MLT, better at caustics but worse at difficult conditions
Point based GI: only diffuse-diffuse, and some very glossy specular, but great at detailed scenes
Light cuts: mostly good for diffuse-diffuse, though can also be extended to do some specular, no caustics as far as I know
With a bit more material system tweaking these should fit in quite well. I’m not a fan of lightcuts, it’s messy to integrate with arbitrary materials/lights, and the extra complexity doesn’t really buy you that much.
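To make the trade-offs above a bit more concrete, here is a toy illustration of the Monte Carlo machinery behind path tracing (not Blender code; the scene model and all constants are made up). Every bounce scales the path throughput by the surface albedo, and Russian roulette terminates long paths without introducing bias. In a "furnace" setup, where a sampled direction escapes to a constant environment with probability p, the analytic answer is albedo * p * L_env / (1 - albedo * (1 - p)), and the estimator converges to it:

```python
import random

def furnace_radiance(rng, albedo=0.5, p_escape=0.5, l_env=1.0):
    """One-sample path-traced estimate of outgoing radiance in a toy
    'furnace' scene: every bounce hits an identical diffuse surface, and a
    sampled direction reaches the constant environment with p_escape."""
    throughput = 1.0
    while True:
        throughput *= albedo                 # diffuse surface absorbs (1 - albedo)
        if rng.random() < p_escape:
            return throughput * l_env        # path escaped to the environment
        # Russian roulette: kill long paths, reweight survivors to stay unbiased
        q = min(1.0, max(throughput, 0.05))
        if rng.random() >= q:
            return 0.0
        throughput /= q

rng = random.Random(1)
estimate = sum(furnace_radiance(rng) for _ in range(200_000)) / 200_000
# analytic value: 0.5 * 0.5 * 1.0 / (1 - 0.5 * 0.5) = 1/3
```

The noise that people complain about in glossy renders is exactly the variance of this kind of estimator; the algorithms in the list above mostly differ in how they trade that variance against memory and bias.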
I’m not sure the best way to describe this… but better temporal consistency. Lots of times my animations have areas that flicker a little bit. Anti-aliasing helps but doesn’t eliminate it. It happens most often in small or distant geometry.
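That flicker on small or distant geometry comes from subpixel features being hit or missed by a handful of samples whose positions change from frame to frame. A rough sketch of why more (stratified) samples stabilize it, purely illustrative and not how BI's sampler actually works:

```python
import random

def pixel_coverage(rng, feature_lo, feature_hi, n_samples):
    """Estimate how much of a 1-D pixel a thin feature covers, using one
    jittered sample per stratum. With few samples the estimate jumps
    between frames (flicker); more strata pin it down."""
    hits = 0
    for i in range(n_samples):
        x = (i + rng.random()) / n_samples   # stratified jitter across the pixel
        if feature_lo <= x < feature_hi:
            hits += 1
    return hits / n_samples

rng = random.Random(7)
coverage = pixel_coverage(rng, 0.2, 0.5, 256)   # true coverage is 0.3
```

With 256 strata the estimate is pinned to within a couple of strata of the true coverage no matter how the jitter falls, which is why cranking up AA samples reduces, but never fully eliminates, the flicker.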
I am not sure I can agree with this so much. Of course, each technique has its pros and cons.
But I am not sure that just some material tweaks will do it.
I cannot prove that, since I am not a coder and have no understanding of the internals.
But most major render systems seem to be successful with what they do.
I personally would be very positively surprised if just some tweaks could elevate Blender.
I can see caustics being generated by just a photon lamp where needed and the rest being done with a different system.
But would it have the same look? The same smoothness and color definition?
Can it have it?
I am not sure that Blender's AA is simply the best in that area.
I know that flicker issue from way back with MentalRay, and different AA approaches fixed it.
You might want to file a bug report if you can isolate an example that causes that.
I believe that the shading recode is all that's needed. Once we have the base shading pipeline done (a BXDF-based pipeline), GI and other shading algorithms will come in due time. Using spec/diffuse shaders is way too old and way too hard to get right compared to a BXDF pipeline,
as long as we have modular shading basics and, like 2.5 in general, allow for and cater to future development, not the can of worms that is the current renderer.
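What a BXDF-based pipeline means in practice is that every material implements the same small interface (evaluate, importance-sample, pdf) and the integrator only ever talks to that interface. A hypothetical sketch of a Lambertian BRDF in that style (this is not Blender's actual API, just the shape of the idea):

```python
import math
import random

class LambertBRDF:
    """Minimal example of the eval/sample/pdf interface a BXDF-based
    pipeline is built on (a sketch, not Blender's actual shading API)."""

    def __init__(self, albedo):
        self.albedo = albedo

    def eval(self, wi, wo):
        # Lambertian reflectance is constant over the hemisphere
        return self.albedo / math.pi

    def sample(self, rng):
        # cosine-weighted hemisphere direction (local frame, +z = normal)
        u1, u2 = rng.random(), rng.random()
        r, phi = math.sqrt(u1), 2.0 * math.pi * u2
        return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

    def pdf(self, wi):
        # matches the cosine-weighted sampling above
        return max(wi[2], 0.0) / math.pi
```

An integrator can then drive any material the same way, which is exactly why new GI algorithms "come in due time" once the interface exists: they don't need to know anything about individual shaders.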
This is more of a bug too… but if one material reflects another material with "only shadow" checked, then the "only shadow" material appears black in the reflection.
That’s the only thing that annoys me D= other than that… GPU rendering?
In my opinion the Blender Internal renderer should stay non-physically-correct, for all those neat artistic tricks it can produce, and should be as fast as possible for use in animations.
But it also needs some fast and working GI algorithm. It needn't be fancy stuff like MLT; I think something approximate will be fine and get the job done. In animations no one will spot that it isn't really correct, and who really cares? (OK, excluding those CGI geeks.)
For all that fancy, physically correct, breathtaking goodness we have LuxRender, which is approaching with OpenCL and Blender integration at a crazy pace. Bravo!
IMHO the most essential features:
On-demand loading/unloading of geometry
Not sure how Blender handles this today, but a number of renderers have implemented this for better memory efficiency and RenderMan-like performance.
Loading geometry from disk when it’s needed and discarding it from memory when it’s not.
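One simple way such on-demand loading might look is an LRU cache keyed by mesh name, with a loader callback and an eviction budget. The names and the item-count budget below are made up for illustration; a real renderer would track bytes and handle concurrency:

```python
from collections import OrderedDict

class GeometryCache:
    """Toy LRU cache for on-demand geometry: meshes are loaded from disk on
    first access and the least-recently-used ones are discarded when the
    budget is exceeded (a sketch, not how Blender works today)."""

    def __init__(self, loader, max_items):
        self.loader = loader          # e.g. reads a mesh file from disk
        self.max_items = max_items    # stand-in for a real memory budget
        self.cache = OrderedDict()
        self.loads = 0                # number of disk loads, for bookkeeping

    def get(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)     # mark as recently used
            return self.cache[name]
        mesh = self.loader(name)
        self.loads += 1
        self.cache[name] = mesh
        if len(self.cache) > self.max_items:
            self.cache.popitem(last=False)   # evict least-recently-used mesh
        return mesh
```

The renderer would call `get()` whenever a ray enters an object's bounding box, so only the geometry actually hit during a pass ever resides in memory.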
Stochastic 3D motion blur and depth of field
Not sure if this is even worth mentioning, since you'll probably have to rewrite the whole thing from the ground up :) But it's a really essential feature nowadays.
There are two common routes to this: one is the REYES-style architecture (RenderMan), and the other is raytracing. If the raytraced approach is taken, you should consider optimizing it from the start with this:
And it would be wise to read this thread and the interview:
This is the first time in the history of CG that a raytracer is threatening RenderMan for the throne as a new potential industry standard: Arnold. It has been in development for a long time and is used in-house at Sony Imageworks, among others. It's a brute-force raytracer; it's funny that it can compete performance-wise with REYES, but it does.
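In the raytraced route, "stochastic" just means each camera ray gets a jittered shutter time (motion blur) and a jittered lens position (depth of field). A hedged sketch with made-up parameter values:

```python
import math
import random

def camera_samples(rng, n, aperture_radius=0.1):
    """Per-pixel samples for stochastic motion blur and depth of field:
    each ray gets a stratified shutter time and a uniform lens position
    (illustrative values; not tied to any real renderer's conventions)."""
    samples = []
    for i in range(n):
        t = (i + rng.random()) / n             # stratified time in [0, 1)
        while True:                            # rejection-sample the unit disk
            u = 2.0 * rng.random() - 1.0
            v = 2.0 * rng.random() - 1.0
            if u * u + v * v <= 1.0:
                break
        samples.append((t, u * aperture_radius, v * aperture_radius))
    return samples
```

Each (t, lens_u, lens_v) triple seeds one primary ray: the scene is evaluated at time t and the ray origin is offset across the aperture; averaging the results per pixel produces both blurs at once.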
Shading system rewrite, possibly support for OSL (Open Shading Language)
I know the shading system rewrite is already planned so I probably didn't need to mention this.
However, it might be worth considering OSL:
Export motion vectors compatible with other packages
This is quite important: being able to export Blender's Speed/Vector pass to other compositing apps, like Nuke, Shake, or AE with ReelSmart Motion Blur.
I think the issue is possibly that Blender differs in its XYZ layout from other apps; vector maps usually contain data in X and Y.
This should at least be user-controllable, i.e. the user can specify what goes where, e.g. X goes to R and Z goes to G.
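A minimal sketch of such a user-controllable remap. It assumes Blender's Speed pass stores four floats per pixel (x/y displacement toward the previous frame, then toward the next) and that the target app wants X in R and a sign-flipped Y in G; both the default indices and the flip are assumptions, which is precisely why they need to be user options:

```python
def remap_vector_pass(pixels, x_index=2, y_index=3, flip_y=True):
    """Convert a 4-component Speed pass pixel list
    [(dx_prev, dy_prev, dx_next, dy_next), ...] into (R, G) motion
    vectors for another app. Which component goes to which channel,
    and whether Y is flipped, is what should be user-configurable."""
    out = []
    for px in pixels:
        r = px[x_index]
        g = -px[y_index] if flip_y else px[y_index]
        out.append((r, g))
    return out
```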
Render-time displacement/subdivision/micropolygon tessellation
Brecht implemented something similar in the Render branch, I hear? Not sure of the exact details of the current implementation, though.
You get certain things for “free” with micropolygon tessellation, such as level of detail (i.e. X amount of triangles per pixel) and displacement.
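The "X triangles per pixel" criterion boils down to dicing each edge until its projected length is at or below the shading rate. A pinhole-camera sketch with made-up parameters (real dicers also clamp by view frustum, displacement bounds, and so on):

```python
import math

def dicing_segments(edge_len_world, distance, focal_px, shading_rate=1.0):
    """How many segments to dice an edge into so each micropolygon edge
    projects to at most `shading_rate` pixels, under a simple pinhole
    projection (an illustration, not any renderer's actual dicer)."""
    edge_px = edge_len_world * focal_px / distance   # projected length in pixels
    return max(1, math.ceil(edge_px / shading_rate))
```

For example, a 1-unit edge seen from 10 units with a 1000-pixel focal length projects to 100 px and is diced into 100 segments at shading rate 1; move it ten times farther away and 10 segments suffice, which is where the "free" level of detail comes from.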
Support for OFX plugins in the compositor
There are tons of good OFX plugins for all the pro compositing software out there, like Frischluft Lenscare and ReelSmart Motion Blur, and The Foundry's offerings, like Keylight.
It's an open standard, so it shouldn't conflict with the GPL?
The new OSS compositor Ramen has already added support for this.
Doesn't it feel like task-specific renderers are starting to pop up? I mean, Freestyle is way more complex for doing NPR renders than BI, and I like that. The integration looks good.
Lux and POV and Yafa are all better at photorealism.
In general, that's something I think BI can work on: integration with 3rd-party renderers, to get them into the BI workflow. With that I mean mainly feeding back alpha channels, IndexOB, and render passes/layers to compositing nodes after rendering.
The render view in Blender should maybe also get a more “general” feeling, with rulers (width/height) and maybe an eyedropper. (It shows RGB values when you hover, but can you copy them?)
I don't even know if it's possible, but somehow make it so 3rd-party renderers reside in Blender's render view, with their GUI inside Blender, or with Blender's general GUI but the 3rd-party render engine running underneath.
Most renderers can be controlled from the command line, so it might work. I want less application jumping and a more easily maintainable workflow.
Maybe BI should check whether it could utilize LuxRays somehow. It's open and OpenCL-accelerated… for Blender Internal raytracing.
Ah +10 loafmag
The Ramen noodle guy wrapped Blender's focus node into an OFX plugin on his blog.
As for the new shader recode (BXDF): when it's due, isn't it time to look at OSL and see if there's anything there that can be used/followed?
There are a lot of nice requests here, but who is going to code them? Especially now that Brecht no longer works at the institute. It's a time-consuming task to maintain and develop a render engine, and I'm not sure that's doable with the current workforce. I'd rather see more effort being spent on making external render engines integrate as well as possible in Blender…
Real GI and speed, speed, speed, speed and speed!
I'm guessing it has something to do with LetterRip's initiative, which would possibly allow more paid developers, so it's actually possible to do now, if it becomes an actual business, I guess.
Edit: I agree about the external-engine focus, though. Rewriting/improving the renderer is a huge task, and all those resources could be spent on improving other areas of Blender while focusing on a rock-solid render API. Perhaps assisting Whiterabbit on MOSAIC for RenderMan integration (or Matt Ebb on his upcoming exporter). Or, when Arnold is released commercially, focusing on a solid exporter to Arnold.
But either way, if it’s possible to get new paid developers it would be nice to modernize the internal render.
For my survey - a lot of responses wanted work done on ‘rendering’. Slightly more specific were mentions of ‘GI’.
Haven’t had time to go through the entire thing yet.
Even if I decide not to do it, I think the information still might be of value to the BI.
It’s nice to have a discussion about the BI renderer. Maybe the foundation should hire the Equinox 3D programmer. A fast and easy GI solution like in Equinox 3D would be enough to bring the BI to a higher level.
Better support for features like hair, which don't support some effects and have some serious quality problems (for example, the specular on hair is completely random and buggy).
Also, for fur: if you can do surface diffuse (using normals from the surface), you should do the same for specular (using tangent specular, which is buggy, together with surface diffuse is not good).
A shading-variables expose node, where you can use some basic rendering variables in a node setup (like L, V, and so on).
A matrix expose node; it could be useful for transformations between spaces.
As it is now, doing shading forces you into extreme tricks and hacks that are hard to maintain later.
And I agree about motion blur and depth of field.
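To illustrate what an expose node providing L, V and N would enable: users could build specular models directly in nodes. A Blinn-Phong sketch of the math such a node graph would compute (vector names as in the post; the implementation itself is hypothetical):

```python
import math

def _normalize(a):
    n = math.sqrt(sum(c * c for c in a))
    return tuple(c / n for c in a)

def blinn_phong_specular(N, L, V, shininess=32.0):
    """Specular term built from exposed shading variables: H is the half
    vector between the light direction (L) and view direction (V), and
    the highlight peaks where H aligns with the surface normal (N)."""
    H = _normalize(tuple(l + v for l, v in zip(L, V)))
    n_dot_h = max(sum(n * h for n, h in zip(N, H)), 0.0)
    return n_dot_h ** shininess
```

With L, V, N and a few math nodes exposed, variations like anisotropic or tangent-space specular become node setups instead of the "extreme tricks and hacks" mentioned above.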