When I look at this, I think, okay, this has the potential to be disruptive technology by making it much easier to just paint on the image and have it match the lighting and shadowing (and possibly making the GIMP even more irrelevant).
View the nine-minute video on their examples page to see a slideshow of photos reverse-engineered and/or edited via the technology. I’d say that for an initial release they did a pretty good job, even though some of their extracted color images, especially in really dark regions, still show hints of shading (the shading information they extracted looks pretty good, though).
I wouldn’t deride them for not having it all the way there in every case, though, because if you’re trying to do better than, say, a high-pass and/or an equalize filter, it can be a tough job to perfect. I’ve even tried to do the same thing for color using a group node made in Genetica to extract color from images (though without also extracting the shading data).
If you’re wondering, that was my attempt to do the same thing in established software. I’m sure Tandent has been able to do it better and much faster, because they built an app from the ground up in compiled code and have actual scientific knowledge about light they can apply.
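For anyone curious what the high-pass style of color extraction mentioned above looks like in practice, here is a minimal sketch in Python with numpy. This is my own rough approximation, not Tandent's method or my Genetica node graph: it treats a blurred copy of the image as a stand-in for the shading layer and divides it out to get a flattened "color" layer. The function names, the box blur (in place of a proper Gaussian), and the default radius are all my own assumptions.

```python
import numpy as np

def box_blur(channel, radius):
    """Blur a 2-D array with a simple separable box filter."""
    size = 2 * radius + 1
    kernel = np.ones(size) / size
    # Pad by edge replication so the output keeps the input's shape.
    padded = np.pad(channel, radius, mode="edge")
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def split_shading(image, radius=15, eps=1e-6):
    """Crudely split an image into 'color' and 'shading' layers.

    Assumes the blurred image approximates slow-varying shading;
    dividing it out leaves a flattened color layer. `image` is a
    float array in [0, 1], shape (H, W) or (H, W, 3).
    """
    if image.ndim == 2:
        shading = box_blur(image, radius)
    else:
        shading = np.stack(
            [box_blur(image[..., c], radius)
             for c in range(image.shape[-1])], axis=-1)
    color = np.clip(image / (shading + eps), 0.0, 2.0)
    return color, shading
```

This is exactly the kind of filter-based shortcut that struggles in dark regions and at hard shadow edges, which is why a purpose-built app with a real physical model of light can do so much better.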
About the app itself, though: do you think this can indeed become disruptive technology, or will it be drowned out once we start seeing plugins for other apps that do the same thing?