Just curious, what glaring weaknesses are you talking about? As far as I know, the algorithm presented is also based on luminance and color contrast. What makes the difference is the ability to cast new samples based on local features instead of using a general threshold or rule for the whole frame, which is a shortcoming we have already identified in the current adaptive algorithm we use in YafaRay, for instance. It is a very interesting paper, although framebuffers and any kind of RAM-stored precomputed structure are frowned upon in some rendering quarters, even more so when you are storing in SGRAM.
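To make the local-vs-global distinction concrete, here is a minimal sketch (all names are hypothetical, not YafaRay's actual API): a global criterion compares neighbor contrast against one fixed threshold for the whole frame, while a locally adaptive criterion scales its threshold by the neighborhood's mean luminance, so dark regions with subtle detail still trigger refinement.

```python
# Hypothetical sketch: global vs. locally adaptive refinement criteria.
# Function names and the constant k are illustrative assumptions.

def luminance(rgb):
    """Rec. 709 luminance from a linear RGB triple."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def needs_refinement_global(pixel, neighbors, threshold=0.05):
    """One threshold for the whole frame: tends to miss low-contrast
    detail in dark regions and oversample noisy bright ones."""
    lum = luminance(pixel)
    return any(abs(lum - luminance(n)) > threshold for n in neighbors)

def needs_refinement_local(pixel, neighbors, k=0.25):
    """Threshold scales with the local mean luminance, so the decision
    adapts to the neighborhood instead of the whole frame."""
    lums = [luminance(n) for n in neighbors] + [luminance(pixel)]
    mean = sum(lums) / len(lums)
    local_threshold = k * max(mean, 1e-4)  # avoid a zero threshold in black areas
    return max(lums) - min(lums) > local_threshold
```

For a dim pixel neighborhood (luminances around 0.01-0.02), the global rule with threshold 0.05 never fires, while the local rule does, because its threshold shrinks with the neighborhood mean. This is only a per-pixel decision sketch; a real sampler would also decide how many extra samples to cast and where.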
One thing to take into account when evaluating these papers is that sampling algorithms are both camera- and scene-dependent. Once your camera starts moving or lighting conditions change, the algorithm starts showing scalability and flexibility issues: frame buffers have to be redrawn, taxing memory bus bandwidth; good lighting conditions could get by with a simple color threshold; more light sources mean quadratic expansion of the frame buffers; and so on.