Has anyone seen or heard of this adaptive sampling paper?

The paper in question.

It’s from 2009, but the algorithm appears to be very robust for typical scenes (including scenes with caustics). What’s also important is that it doesn’t have the glaring weaknesses of algorithms that rely on contrast and color differences to determine where to place samples. On top of that, it should work with regular, unidirectional path tracing.

Also, there’s a Suzanne in one of the test scenes (and the algorithm they have cleans her up just fine).

Would this be practical for Cycles?

Maybe this should be posted on one of the dev channels or something. It’s hard to know whether any of them have tried it.

I’m not sure how adaptive sampling works in Cycles (is it still experimental?),
but LuxRender and RenderMan use something similar. Probably other renderers do too.

I like this very much, but I think Brecht et al. have already seen it and dismissed it for some (good) reason.

Just curious, what glaring weaknesses are you talking about? AFAIK the algorithm presented here is also based on luminance and color contrast. What makes the difference is the ability to cast new samples based on local features instead of using a general threshold or rule for the whole frame, which is a shortcoming we have already identified in the current adaptive algorithm we use in YafaRay, for instance. It is a very interesting paper, although frame buffers and any kind of RAM-stored precomputed structure are frowned upon in some rendering quarters. Even more so when you are storing in SGRAM.
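
For illustration only, here is a minimal sketch of that local-versus-global distinction in Python. The error metric, tile size, and thresholds are placeholders of mine, not anything from the paper or from YafaRay:

```python
import numpy as np

# Sketch of per-tile (local) vs. whole-frame (global) convergence tests.
# Assumes two grayscale accumulation buffers: one from all samples taken
# so far ("full") and one from half of them ("half"), so their difference
# gives a rough noise estimate.

def tile_error(full, half):
    """Noise estimate for a region: relative difference between the
    full-sample buffer and the half-sample buffer, damped in dark areas."""
    return np.mean(np.abs(full - half) / np.sqrt(np.maximum(full, 1e-4)))

def frame_converged(full, half, threshold=0.01):
    # Global rule: one threshold decides for the entire frame at once,
    # so a mostly clean frame can mask a few noisy regions.
    return tile_error(full, half) <= threshold

def unconverged_tiles(full, half, tile_size=32, threshold=0.01):
    # Local rule: each tile is tested on its own, so noisy regions
    # (caustics, soft shadows) keep receiving samples while flat,
    # converged regions stop early.
    h, w = full.shape
    tiles = []
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            sl = (slice(y, y + tile_size), slice(x, x + tile_size))
            if tile_error(full[sl], half[sl]) > threshold:
                tiles.append(sl)
    return tiles
```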

One thing to take into account when evaluating these papers is the fact that sampling algorithms are both camera and scene dependent. Your camera starts moving or lighting conditions change, and then your algorithm starts showing scalability and flexibility issues: frame buffers have to be redrawn, taxing memory bus bandwidth; good lighting conditions could get by with a simple color threshold; more light sources mean quadratic expansion of frame buffers; etc.

On the first page it says “any other error measure could be used as well” — maybe that is Ace’s point?

It seems like Dade from the LuxRender team already implemented part of this (as stated in the video description at https://www.youtube.com/watch?v=P_QmdpnKTW4) back in 2014. So… nothing new under the sun.
Though, looking at the video, I can’t spot any “splitting”.
My understanding is that each tile gets subdivided into two smaller tiles, and the algorithm then checks each half again for further sampling, termination, or further splitting (see the sketch below).
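
To make that concrete, here is a rough sketch of that split/terminate loop as I read it. The two thresholds and the split-along-the-longer-axis choice are my own assumptions, not values from the paper or from Dade’s code:

```python
# Sketch of the tile split/terminate loop described above. The caller
# supplies error(x, y, w, h) -> float and sample(x, y, w, h), which
# adds more samples to the tile; both are assumed interfaces here.

TERMINATE = 0.002  # below this error, the tile is considered converged
SPLIT = 0.02       # between TERMINATE and SPLIT, halve the tile instead

def refine(x, y, w, h, error, sample):
    """Sample a tile until it converges, splitting it in two when its
    averaged error gets low but may hide unevenly distributed noise."""
    while True:
        e = error(x, y, w, h)
        if e < TERMINATE:
            return                      # converged: stop this tile
        if e < SPLIT and max(w, h) > 1:
            # split along the longer axis and recurse on both halves
            if w >= h:
                refine(x, y, w // 2, h, error, sample)
                refine(x + w // 2, y, w - w // 2, h, error, sample)
            else:
                refine(x, y, w, h // 2, error, sample)
                refine(x, y + h // 2, w, h - h // 2, error, sample)
            return
        sample(x, y, w, h)              # still noisy: add more samples
```

The point of the splitting step, as I understand it, is that a large tile’s averaged error can look low while the remaining noise is concentrated in one half; halving the tile lets the noisy half keep sampling while the clean half terminates.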

Btw, I also found this — same year, same level of coolness, etc. Problem is, all these papers usually have the same weak points that make them unsuitable for implementation in an open-source renderer like Cycles. So drool only if some core dev says “Yes! I’ll do it!”