In the case of MagicLantern’s dual ISO feature, the camera samples alternate rows of the sensor at a different exposure, and the differently-exposed rows in between are then blended back together. Photoshop and other image-manipulation software offer various interpolation algorithms that essentially work by averaging neighbouring pixels, and some (even free) alternatives produce pretty awesome results.
The way I understand it, raytracing works by tracing virtual rays from the camera into the scene for every pixel we wish to render. Wouldn’t rendering only alternate rows and interpolating the result basically halve the render time? Adapting such an algorithm to use an additional input like a vector, normal or AO buffer to detect edges could potentially speed rendering up by a fair amount.
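To make the idea concrete, here is a minimal sketch (not from any real renderer; the function name and the `edge_thresh` parameter are made up for illustration) of filling in the odd rows of a half-rendered image by averaging the even rows above and below, while using a normal buffer to avoid interpolating across geometric edges:

```python
import numpy as np

def interpolate_odd_rows(img, normals, edge_thresh=0.9):
    """Fill the odd rows of `img` from the rendered even rows.

    img     : (H, W, 3) float array, only even rows actually rendered
    normals : (H, W, 3) unit-normal buffer for the rendered rows

    Where the normals above and below disagree (dot product below
    `edge_thresh`), we copy the row above instead of averaging, so we
    don't smear colour across an edge.
    """
    out = img.copy()
    h = img.shape[0]
    for y in range(1, h - 1, 2):              # odd rows only
        above, below = img[y - 1], img[y + 1]
        n_dot = np.sum(normals[y - 1] * normals[y + 1], axis=-1)
        smooth = n_dot >= edge_thresh         # True where surfaces agree
        out[y] = np.where(smooth[:, None], (above + below) * 0.5, above)
    if h % 2 == 0:                            # last odd row has no row below
        out[h - 1] = img[h - 2]
    return out
```

On flat surfaces this averages the two neighbouring scanlines; wherever the normals diverge it falls back to duplicating the nearest rendered row, which is the edge-detection refinement the question is asking about.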