Have a look at the video on that page, it’s pretty frickin’ awesome.
Given a low quality video of a static scene (top row) and a few high quality photographs of the scene, our system can automatically produce a variety of video enhancements (bottom row). Enhancements include the transfer of photographic qualities such as high resolution, high dynamic range, and better exposure from photographs to video. The video can also be edited in a variety of ways (e.g., object touchup, object removal) by simply editing a few photographs or video frames.
This would be great in Blender. The good news is that it’s open source, although it’s still at the research stage. Could this be a way to get camera tracking into Blender? It only works on static scenes at the moment (i.e., a moving camera but no moving objects in the scene), though they’re working on that. Anyway, what say you all?