Reduced render pixel sampling with interpolation

In the case of Magic Lantern’s dual ISO feature, the camera samples alternate sensor rows at one exposure and the rows in between at a different exposure, then blends the two together. There are various interpolation algorithms in Photoshop and other image-manipulation software which basically work by averaging neighbouring pixels, and some (even free) alternatives produce pretty awesome results.

The way I understand it, ray tracing works by tracing virtual rays from the camera into the scene for every pixel we wish to render. Wouldn’t it basically halve the render time if only alternate rows were traced and the rest interpolated? Adapting such an algorithm to use an additional input like a vector, normal or AO pass to detect edges could potentially speed rendering up by a fair amount.
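A minimal sketch of the half-row idea, assuming a hypothetical `trace_row(y)` callback that fully traces one row of RGB pixels, plus an optional per-pixel normal buffer for edge detection (both names and the threshold are made up for illustration):

```python
import numpy as np

def render_half_rows(width, height, trace_row, normals=None, edge_thresh=0.8):
    img = np.zeros((height, width, 3))
    for y in range(0, height, 2):            # trace only the even rows
        img[y] = trace_row(y)
    for y in range(1, height - 1, 2):        # interpolate the odd rows
        img[y] = 0.5 * (img[y - 1] + img[y + 1])
        if normals is not None:
            # dot product of the neighbouring normals; a low value
            # suggests a geometric edge where averaging would smear
            agree = np.sum(normals[y - 1] * normals[y + 1], axis=-1)
            edge = agree < edge_thresh
            if edge.any():
                traced = trace_row(y)        # fall back to real tracing
                img[y][edge] = traced[edge]
    if height % 2 == 0:
        img[-1] = img[-2]                    # last odd row has one neighbour
    return img
```

Wherever the normals disagree the interpolation falls back to real tracing, so the cost only approaches a full render where geometry changes quickly, which is exactly where plain averaging would smear.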

IMO the only way it could work would be if you used truly random seeds and pure brute-force sampling and random walks in each exposure, which would be twice as expensive. Every ray tracer nowadays uses shortcuts like next event estimation and controlled sample-distribution patterns, apart from other tricks like Russian roulette, importance sampling, etc., so in the end something like you are proposing would in fact produce more artifacts and noise. Two bad approximations don’t make for something better than either of them, just for their average.
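To make that last point concrete, here is a quick numerical toy (a 1-D integral standing in for a pixel estimate): averaging two genuinely independent brute-force passes does halve the mean squared error, but at double the cost, while averaging two passes that reuse the same sampling pattern gains nothing over a single pass.

```python
import numpy as np

f = lambda x: np.sin(np.pi * x)   # test integrand; true integral is 2/pi
truth = 2.0 / np.pi
n, trials = 64, 20000
rng = np.random.default_rng(1)

mse_one = mse_indep = mse_corr = 0.0
for _ in range(trials):
    a = f(rng.random(n)).mean()   # one brute-force pass
    b = f(rng.random(n)).mean()   # an independent second pass
    mse_one += (a - truth) ** 2
    mse_indep += (0.5 * (a + b) - truth) ** 2
    mse_corr += (0.5 * (a + a) - truth) ** 2  # same samples reused twice

print("one pass:          ", mse_one / trials)
print("avg of independent:", mse_indep / trials)  # about half the error
print("avg of correlated: ", mse_corr / trials)   # no better than one pass
```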

Another point is that different “exposures” don’t need the same amount of sampling. There is always noise, but you are not always able to see it, because it can be hidden in either bright or dark areas. The same goes for luminance contrast: a tone-mapping shift could reveal high-contrast artifacts that need more sampling work. The sampling requirement is a changing variable.
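A tiny sketch of why that is, using a simple Reinhard-style curve as a stand-in for whatever tone map the renderer actually applies; the numbers are made up, but the effect is the point: the same absolute noise in linear radiance produces very different steps on screen depending on pixel brightness and exposure.

```python
def tonemap(lum, exposure=1.0):
    v = lum * exposure
    return v / (1.0 + v)          # simple Reinhard curve, output in [0, 1)

sigma = 0.01                      # identical absolute noise in linear radiance
for base in (0.02, 0.5, 8.0):     # a dark, a mid-grey and a bright pixel
    for e in (0.25, 1.0, 4.0):    # exposure shifts
        step = abs(tonemap(base + sigma, e) - tonemap(base - sigma, e))
        print(f"lum={base:<4} exposure={e:<4} displayed delta={step:.5f}")
```

Run it and the displayed delta for the dark pixel at high exposure comes out orders of magnitude larger than for the bright pixel, even though the underlying noise is identical.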

What you describe is basically how adaptive sampling works, but without the exposure shift. An adaptive sampling algorithm with an exposure shift would be something interesting to see, as long as the sampling rules keep changing not only across render areas but also in subsequent brighter passes, which is the core of the problem right now. BTW, the only thing that could indirectly “suggest” where to vary sampling, apart from luminance and chrominance contrast, is ray segment length.
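For what it’s worth, a rough skeleton of what such a loop might look like. `sample_tile(x, y, n)` is a hypothetical callback returning the mean and variance of luminance for an n-sample pass over a tile; the tile size, thresholds and exposure list are made-up illustration values. The key detail is that tiles are re-scored on their tone-mapped error at several exposures, so a tile that looks converged at exposure 1.0 can still get refined if a brighter pass would reveal its noise.

```python
import numpy as np

def adaptive_render(width, height, sample_tile, tile=16,
                    exposures=(0.25, 1.0, 4.0), rounds=8, thresh=0.002):
    # Reinhard-style tone map at a given exposure multiplier
    tm = lambda v, e: (v * e) / (1.0 + v * e)
    tx, ty = width // tile, height // tile
    mean = np.zeros((ty, tx))
    var = np.zeros((ty, tx))
    spp = np.zeros((ty, tx), dtype=int)

    # uniform first pass over every tile
    for y in range(ty):
        for x in range(tx):
            mean[y, x], var[y, x] = sample_tile(x * tile, y * tile, 16)
            spp[y, x] = 16

    for _ in range(rounds):
        se = np.sqrt(var / spp)  # standard error of each tile's estimate
        # score each tile by its worst tone-mapped error over all exposures
        score = np.max([np.abs(tm(mean + se, e) - tm(mean - se, e))
                        for e in exposures], axis=0)
        noisy = score > thresh
        if not noisy.any():
            break
        for y, x in zip(*np.nonzero(noisy)):  # refine only the noisy tiles
            m, v = sample_tile(x * tile, y * tile, 32)
            w = spp[y, x] / (spp[y, x] + 32)  # merge running statistics
            mean[y, x] = w * mean[y, x] + (1 - w) * m
            var[y, x] = w * var[y, x] + (1 - w) * v
            spp[y, x] += 32
    return mean, spp
```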