LuxRender at it again.

There is some information here: http://www.luxrender.net/wiki/SLG_Material_Settings

Thanks Dade. I’m no material expert, but that sure looks like a good start to me. Hopefully we can
get more builders to keep this exciting SLG project updated on Graphicall!

Check the weekly builds area on the Lux forums too; people tend to keep it more up to date than Graphicall (for both SLG and big Lux):

http://www.luxrender.net/forum/viewforum.php?f=30

For those who want to test SLG3, here are some download links: http://zeealpal.com/downloads.html

I wonder if the Lux developers are aware of any papers or algorithms for implementing this idea: a quasi-bidirectional unidirectional path tracer that improves accuracy by doing a lower-resolution prepass of light rays, with the prepass generating a map to control adaptive sampling of the eye rays.

It seems to me that such a prepass at 1/4 resolution or even less would take only a few seconds to render (blurring could be used quite heavily, since only a map is being generated, thereby reducing the number of samples required). The advantage would be that the map would direct the adaptive samples of the unidirectional path tracer, improving caustics and lighting in occluded areas.
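To make the second half of that concrete, here is a minimal C++ sketch of one way to read "direct adaptive samples" (all names are mine, nothing from the Lux codebase): given the blurred low-res map, redistribute a fixed per-pass sample budget so high-weight pixels get more eye rays while every pixel keeps at least one.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Hypothetical sketch: 'map' is the blurred quarter-res guidance map
    // from the light prepass; return how many eye paths each full-res
    // pixel gets this pass. The budget is only redistributed
    // (approximately, given the clamping and rounding below).
    std::vector<int> allocateSamples(const std::vector<float>& map,
                                     int mapW, int mapH,
                                     int imgW, int imgH, int avgSpp) {
        double total = 0;
        for (float v : map) total += v;
        const float mean = total > 0 ? float(total / map.size()) : 1.0f;

        std::vector<int> spp(imgW * imgH);
        for (int y = 0; y < imgH; ++y)
            for (int x = 0; x < imgW; ++x) {
                // Nearest-neighbour lookup into the low-res map.
                const int mx = x * mapW / imgW, my = y * mapH / imgH;
                float w = map[my * mapW + mx] / mean; // 1.0 = average pixel
                // Clamp so no pixel starves or hogs the budget; a floor of
                // one sample keeps the whole image converging.
                w = std::min(4.0f, std::max(0.25f, w));
                spp[y * imgW + x] = std::max(1, int(std::lround(w * avgSpp)));
            }
        return spp;
    }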

Is this feasible? Perhaps it’s already been worked on, I’m not completely up to date on Lux.

Using a photon map from a light-ray prepass to guide final gathering is implemented in pbrt’s exphotonmap integrator, and I believe in LuxRender too.

There was also a paper about path tracing driven by a photon map (Matt Pharr, one of the authors of pbrt, was probably also one of the authors of the paper). It is something I have tried in the past, but the cost of looking into the photon map at each path vertex was higher than the benefit produced in terms of noise reduction.

IIRC, it’s still in Lux. If you select the exphotonmap integrator and change its mode from direct lighting + final gather over to “path”, you get this.
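From memory, in the scene file that looks something like the following - parameter names recalled from the Lux wiki, so double-check them before relying on this:

    # mode switch recalled from memory; verify against the Lux wiki
    SurfaceIntegrator "exphotonmap"
        "string renderingmode" ["path"]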

Hmm, my ignorance is against me, so correct me if I’m wrong… but it seems we’re not talking about the same thing. The photon map, as far as I understand, is itself used to represent the radiance contribution of the lights, i.e. the photons around the ray hit are analysed for this information, which more often than not can lead to artifacts typical of photon mapping.

My idea of a light path prepass was that the output image would only serve as a controlling map for adaptive sampling of the unidirectional path tracer - the way the path tracer works is not altered, and no photons are analysed. It’s just that more paths are sent in the directions that would otherwise come out noisier.

I might be posing the wrong question. Maybe a light path prepass would in practice not be sufficiently differentiated from the normal eye path tracing to make much difference. In that case, how about an occlusion prepass, mapping areas in the scene that are not directly exposed to lights and are therefore going to be noisier in a unidir path tracing output?

For about 90% of production use I find unidirectional path tracing is the fastest and most reliably accurate option, and easiest to work with. The big problem (leaving caustics aside for the moment) is that in otherwise perfectly smooth renders, there is usually an occluded area that proves nearly impossible to clear of noise. This is where an adaptive sampling map should help. Rather than the user painting the map, surely an occlusion map could easily be generated? With more rays directed in mapped areas, the rays would be distributed more efficiently and the stubborn areas would clear.

Where am I going wrong?

So you just want to sample pixels more often if that pixel can “see” more photons? If so, we already have that, except not using photon mapping to control the sampling distribution. I’m not sure using photon mapping would be beneficial.

Yes, it seems that thread points to the right solution. Of all the usage reports, LD path tracing with a user-driven map offered the most obvious and least problematic advantage.

It can’t be that big of a step now to offer automatic generation of the map - an occlusion prepass. You could get started with nothing more than a 10-sample render of the scene: blur it for smoothness, invert it, and there’s your map.
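As a hypothetical sketch (my own code, not anything in Lux), that whole recipe is only a blur and an inversion of the prepass luminance:

    #include <algorithm>
    #include <vector>

    // Hypothetical sketch: 'lum' holds per-pixel luminance of a ~10-sample
    // clay prepass. Blur to hide the prepass noise, then invert so that
    // dark (occluded) regions receive the extra samples.
    std::vector<float> occlusionMap(const std::vector<float>& lum, int w, int h) {
        std::vector<float> blurred(lum.size());
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float sum = 0.0f; int n = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        const int xx = x + dx, yy = y + dy;
                        if (xx >= 0 && xx < w && yy >= 0 && yy < h) {
                            sum += lum[yy * w + xx]; ++n;
                        }
                    }
                blurred[y * w + x] = sum / n;
            }
        // Invert: the small floor keeps even the brightest pixels sampled
        // occasionally, so no region is ever completely abandoned.
        float maxV = 0.0f;
        for (float v : blurred) maxV = std::max(maxV, v);
        for (float& v : blurred) v = (maxV - v) + 0.05f * (maxV > 0.0f ? maxV : 1.0f);
        return blurred;
    }

In practice you would probably blur harder and work at reduced resolution, but the principle stays the same: blur, invert, sample.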

But that’s assuming you want extra sampling in dark areas. What about instances of glass? Or reflections? They might not be dark, but they still require more samples to clear.

Also, in areas that contain caustics you might need a lot more samples even though those areas initially appear bright, simply because they are easily reached through paths that are not specular in nature.

The thing is, brightness is only one parameter that must be taken into account when formulating a sampling map. There are a number of other variables too, for things like anticipating which areas will simply be noisy by nature and which will need updates once caustics start showing.

Of course, it’s all a matter of development; I merely presented a useful starting point. An occlusion map - perhaps a clay render with one or two bounces - would be better than nothing.

I’m actually very optimistic this feature will come. The method has been proven to work, it’s just that the user must create the map manually at the moment. One more step and we’re there. :)

What about rendering to, say, 10 spp, recording all the individual samples, and basing the sample map on the discrepancy of the sampled values? Then you could blur the map to avoid noise - the more discrepancy, the more sample weight spills to nearby pixels - maybe with some outlier rejection so a single firefly can’t affect the map too much. Granted, this would require enough RAM to store the render target 10 times for a full-resolution map, but I guess this pre-pass could be done at half res since the map is getting blurred anyway.

This is different from the noise-aware sampling you already have, because that measures noise with respect to neighbouring pixels, rather than measuring each pixel against its own other samples. It would work correctly with different material types and with noise in the texture too, and since it’s just a sampling map, it wouldn’t affect the unbiased nature of the rendering either.
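A rough C++ sketch of the bookkeeping (my own names, nothing from Lux). As a bonus, keeping a running sum and sum of squares per pixel yields the same discrepancy measure without storing the individual samples, so the 10x render-target RAM cost largely disappears:

    #include <algorithm>
    #include <vector>

    // Hypothetical sketch: per-pixel statistics over the ~10 prepass samples.
    // Running sums replace storing the individual sample values.
    struct PixelStats { float sum = 0.0f; float sumSq = 0.0f; int n = 0; };

    // Record one sample, with crude outlier rejection so a single firefly
    // cannot dominate the pixel's variance estimate.
    void addSample(PixelStats& p, float lum) {
        if (p.n > 2) {
            const float mean = p.sum / p.n;
            lum = std::min(lum, 10.0f * mean + 1.0f);  // clip extreme outliers
        }
        p.sum += lum;
        p.sumSq += lum * lum;
        ++p.n;
    }

    // Per-pixel discrepancy: the variance of the pixel's own samples.
    // Blurring the resulting map then spills the weight of
    // high-discrepancy pixels onto their neighbours.
    float discrepancy(const PixelStats& p) {
        if (p.n == 0) return 0.0f;
        const float mean = p.sum / p.n;
        return std::max(0.0f, p.sumSq / p.n - mean * mean);
    }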

What you’re describing is adaptive sampling, which is a well-established (and biased) technique.
http://graphics.ucsd.edu/~henrik/papers/adaptive_sampling/

Hmm. I’m no expert, but how is sampling some pixels more than others biased? As long as all the pixels are sampled at least some of the time, given enough samples it should still arrive at the same result.

A renderer that converges to the correct result is consistent, but not necessarily unbiased:
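To spell the distinction out (my notation: F_N is the estimate after N samples, I the true value):

    \mathbb{E}[F_N] = I \quad \text{for every } N \qquad \text{(unbiased)}
    \Pr\!\left(\lim_{N \to \infty} F_N = I\right) = 1 \qquad \text{(consistent)}

An unbiased estimator is correct on average at any sample count; a consistent one only has to reach the right answer in the limit.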

Adaptive sampling is biased because the paths that it samples are not independent but depend on the result of previous paths.

The term “unbiased” has been misused a lot by marketing departments over the last few years, and in fact most renderers that are advertised as unbiased are actually biased (for example by adaptive sampling, a maximum path length, or terminating paths once their contribution falls below a certain threshold).

Note that “unbiased” also does not mean correct, nor does “biased” mean wrong. If you look at the images at the top of this paper, it is very obvious that the biased algorithms deliver an arguably more correct result than the unbiased ones: http://cs.au.dk/~toshiya/ppm.pdf

This is true. However, there are different adaptive sampling schemes: some are biased, some are not. Also, some forms of bias are very impractical to get around. For example, a hardcoded max depth is required but adds bias; by setting the max depth high enough, the bias can be made negligible. The same goes for using floating-point numbers instead of real numbers.

For me, the main feature of an “unbiased” renderer is that by rendering twice and averaging, you get a better result. This allows for stopping and resuming, and also trivial performance scaling through independent rendering processes.
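For what it’s worth, that merge is nothing more than a sample-count-weighted average; a trivial sketch (my own function, not a Lux API):

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Merge two independent renders of the same scene into one estimate.
    // With an unbiased renderer the merged variance is lower than either
    // input's; the same arithmetic powers stop/resume and render farms.
    std::vector<float> mergeRenders(const std::vector<float>& a, int sppA,
                                    const std::vector<float>& b, int sppB) {
        assert(a.size() == b.size());
        std::vector<float> out(a.size());
        for (std::size_t i = 0; i < a.size(); ++i)
            out[i] = (a[i] * sppA + b[i] * sppB) / float(sppA + sppB);
        return out;
    }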

A renderer does not need to be unbiased for that. QMC is an excellent consistent method for achieving it, and Mr. Keller’s publications do a great job of explaining how to implement it.

Also, if you write your renderer without recursion, you can let it trace an unlimited number of bounces, but at some point the contribution from the extra bounces will get lost in the limited precision of IEEE floating-point numbers.
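A toy demonstration of that self-termination (nothing renderer-specific, just 32-bit float behaviour; the albedo constant is a made-up stand-in for the per-bounce throughput factor):

    #include <cstdio>

    // Iterative bounce loop with no recursion and no max depth. The path
    // throughput shrinks by a fixed factor per bounce; with albedo 0.5 it
    // underflows to exactly zero after 150 bounces in 32-bit floats, so
    // "infinite" depth terminates on its own.
    int main() {
        float throughput = 1.0f;
        const float albedo = 0.5f;  // stand-in for BSDF * cos / pdf per bounce
        int bounce = 0;
        while (throughput > 0.0f && bounce < 100000) {
            throughput *= albedo;
            ++bounce;
        }
        std::printf("throughput underflowed to 0 after %d bounces\n", bounce);
        return 0;
    }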