Hi devs. Are there plans for new features like rendering without Progressive Refine? In Cycles, Progressive Refine is unchecked by default because plain tiled rendering is faster than rendering with it checked. LuxRender is great, but it has no tile parameters like Cycles, V-Ray, YafaRay and so on. That matters when rendering on the CPU rather than the GPU. It would be cool if it were implemented as in Cycles! What do you think, guys?
Bidirectional rendering doesn’t work very well with tiled rendering because light paths are traced from both the camera and the lights. While the camera paths can be assigned to specific pixels, there’s no way to predict where the light-generated paths will end up.
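To make that concrete, here is a toy sketch (my own illustration, not LuxRender code) of why light-traced paths defeat tile locality: a light path connects back to the camera at whatever pixel the connection happens to project to, so its contributions are scattered over the whole film rather than confined to the tile a worker was assigned.

```python
import random

random.seed(1)

WIDTH, HEIGHT = 64, 64
TILE = (0, 0, 16, 16)  # x, y, w, h of the tile a worker was assigned

def splat_pixel():
    """Stand-in for projecting a light-path vertex onto the film via a
    camera connection; any pixel of the image is a possible target."""
    return random.randrange(WIDTH), random.randrange(HEIGHT)

def in_tile(px, py, tile):
    x, y, w, h = tile
    return x <= px < x + w and y <= py < y + h

hits = [splat_pixel() for _ in range(1000)]
inside = sum(in_tile(px, py, TILE) for px, py in hits)
print(f"{inside}/1000 light splats landed in the worker's tile")
# Roughly 1/16 of the splats land in this tile; the rest belong to
# other tiles, so every tile worker ends up writing all over the film.
```

So a tile scheduler either has to let every worker write to the whole framebuffer or throw away most of the light-path work, which is the core of the problem.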
Even RenderMan doesn’t do bidirectional tile rendering (and that’s about as pro as it gets).
I have read that some of the pro solutions like RenderMan and Arnold actually have a bidirectional integrator, but it comes at the cost of somewhat limited shading flexibility.
Fortunately, we have not hit a wall in improving the sampling for unidirectional tracing; a lot of research is still taking place in areas like adaptive sampling.
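For readers unfamiliar with the idea, here is a minimal adaptive-sampling sketch (a toy of mine, not LuxRender's actual sampler): after a first pass, extra samples go to pixels whose sample variance is above a threshold, instead of refining the whole image uniformly.

```python
import statistics

def adaptive_budget(pixel_samples, extra_total, threshold=0.01):
    """pixel_samples: {pixel: [luminance samples]} from a first pass.
    Returns {pixel: extra sample count}, splitting extra_total evenly
    among pixels whose sample variance exceeds threshold."""
    noisy = [p for p, s in pixel_samples.items()
             if len(s) > 1 and statistics.variance(s) > threshold]
    if not noisy:
        return {}
    per_pixel = extra_total // len(noisy)
    return {p: per_pixel for p in noisy}

first_pass = {
    (0, 0): [0.50, 0.51, 0.49],   # already converged pixel
    (1, 0): [0.10, 0.90, 0.20],   # noisy pixel (e.g. a caustic edge)
}
print(adaptive_budget(first_pass, extra_total=64))
# Only the noisy pixel receives the additional samples.
```

Real implementations use smarter error estimates than raw variance, but the principle is the same: spend rays where the noise is.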
This isn’t actually true. Render engines must load the entire scene into memory before render time, since, given the stochastic nature of path tracing of any type, there is no way to tell where a light or eye path will go. Brian Savery from Radeon ProRender also states that, because of this, the usual memory-savings argument for tiled rendering isn’t really that significant. RenderMan does render in tiles with all of its engines. Take a look here https://vimeo.com/121924770 at about 5:25, where the VCM integrator (a bidirectional path tracer) is used. Basically, tiles give the renderer a smaller area to work on, which slightly lowers the memory footprint and boosts performance somewhat. But whether your tile is small or covers the entire screen, the renderer still has to have access to the full scene information first.
First of all, ray intersection tests are usually hierarchical, refining from a top-level bounding box through lower-level bounding boxes down to primitives. There’s no hard reason you couldn’t defer loading the lower-level data until a ray actually intersects its parent bounds. LuxRender (CPU) has such deferred-loading capabilities, if I recall correctly.
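A sketch of what I mean by deferred loading, assuming a simple two-level hierarchy (this is an illustration; `load_mesh_from_disk` and the 1-D bounds are placeholders, not LuxRender API): top-level bounds stay resident, and an object's heavy geometry is only loaded the first time a ray hits those bounds.

```python
class LazyObject:
    def __init__(self, bounds, loader):
        self.bounds = bounds          # (lo, hi) 1-D bounds, for brevity
        self._loader = loader         # callable doing the expensive load
        self._mesh = None

    def intersect(self, ray_x):
        lo, hi = self.bounds
        if not (lo <= ray_x <= hi):   # cheap top-level test, no load needed
            return None
        if self._mesh is None:        # first hit: pay the load cost now
            self._mesh = self._loader()
        return self._mesh

loads = []
def load_mesh_from_disk():
    loads.append("big_mesh")          # stand-in for real deserialization
    return "triangle data"

obj = LazyObject(bounds=(10, 20), loader=load_mesh_from_disk)
obj.intersect(5)                      # misses the bounds: nothing loaded
obj.intersect(15)                     # hits: mesh loaded, exactly once
obj.intersect(16)                     # hits again: cached, no reload
print(loads)                          # the mesh was only loaded one time
```

Geometry that no ray ever reaches never has to come off disk at all, which is where the memory savings come from.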
Secondly, textures are often hierarchical as well (MIP maps), so you can defer loading texture detail until you know what detail level you need (i.e. at intersection time).
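The level selection itself is just the textbook log2-of-footprint rule (a sketch, not LuxRender internals): a hit whose ray footprint spans many texels only needs a coarse level of the pyramid, so the fine levels, which hold most of the data, can stay on disk.

```python
import math

def mip_level(footprint_texels, max_level):
    """footprint_texels: how many texels the ray's footprint spans.
    Returns the MIP level to sample: 0 = full resolution."""
    if footprint_texels <= 1.0:
        return 0
    return min(max_level, int(math.log2(footprint_texels)))

# A distant hit with a 16-texel footprint only needs level 4 of the
# pyramid, so levels 0-3 (the bulk of the texture) need not be loaded.
print(mip_level(16.0, max_level=10))   # 4
print(mip_level(0.5, max_level=10))    # 0
```

Tile-based texture caches (OpenImageIO-style) take this further by loading only the touched tiles of the chosen level.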
Lastly, while it’s true that rays can go in “any” direction, their evaluation can be deferred as well; bundling similar rays into workloads can improve cache coherence.
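Ray bundling can be sketched like this (a generic technique, not a specific LuxRender feature): instead of intersecting each ray the moment it is spawned, queue rays per scene object and process each queue in one batch, so an object's data is touched once per batch rather than once per ray.

```python
from collections import defaultdict

def bundle_rays(rays, classify):
    """Group deferred rays by the object their top-level bounds test hits."""
    queues = defaultdict(list)
    for ray in rays:
        queues[classify(ray)].append(ray)
    return queues

# Toy classifier: which of three objects a ray (just an x coordinate,
# for brevity) is headed toward.
def classify(ray_x):
    return "obj_a" if ray_x < 10 else "obj_b" if ray_x < 20 else "obj_c"

rays = [3, 25, 14, 7, 18, 21]
queues = bundle_rays(rays, classify)
for obj, batch in sorted(queues.items()):
    print(obj, batch)   # each object's data is loaded once per batch
```

Combined with the deferred loading above, this means even incoherent bidirectional rays can amortize the cost of paging geometry in and out.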
All of these features have a cost in terms of complexity and performance of course, especially on the GPU.
Well, I wasn’t going to step into that complex an argument, since that wasn’t really the point I was trying to make. I was just saying that there is more data involved than just the bucket area, and that bidirectional path tracing can be tiled. Sorry if it was overly generalized.
It is true that if you look at the main source repository, it hasn’t changed since this summer.
But if you look closer, you’ll see that LuxBlend’s last update was only one month ago.
And on the forum, the last reply from the main dev was 2 days ago.
On this thread people are talking about 1.7.
1.6 was released in May 2016, but 1.6 contains two LuxCore versions.
In 1.7, there should be only one; at least, the newest one should be usable for most purposes.
So LuxRender is in a big simplification and clean-up/rearrangement phase.
Nothing fancy to show, and it takes time, but the result will be a simpler workflow and better performance.