As a curiosity, what do you think the advantages and disadvantages of calculating the full light map for an entire scene would be? Not necessarily in Blender, but in any program. The way light paths are calculated in Blender is “backwards” compared to the real world: rays are cast out from the camera and bounce around until they find a light source. But what if someone took the game-engine model to an extreme? Instead of outputting the general illumination of an object, the program would send out trillions of light rays from each source, in a “forward” manner, and export the result to a file. Not that anyone would bother doing this until quantum-computing-like speeds could be achieved in animation.
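To make the “forward” idea concrete, here is a minimal toy sketch (all names and numbers are my own assumptions, not anything Blender does): photons are emitted from a point light in a 2D box, bounced diffusely, and their energy is deposited into a grid. The resulting grid is a crude camera-independent “light map” of exactly the kind described above.

```python
import math
import random

GRID = 32          # light-map resolution (assumption for this sketch)
PHOTONS = 50_000   # far fewer than the "trillions" discussed above
BOUNCES = 4        # maximum bounces per photon
ALBEDO = 0.7       # fraction of energy surviving each bounce

def trace_photons(seed=0):
    """Forward-trace photons from a central light into a 2D light map."""
    rng = random.Random(seed)
    grid = [[0.0] * GRID for _ in range(GRID)]
    for _ in range(PHOTONS):
        # Emit from a point light at the center of a unit box,
        # in a uniformly random direction.
        x, y = 0.5, 0.5
        ang = rng.uniform(0, 2 * math.pi)
        dx, dy = math.cos(ang), math.sin(ang)
        energy = 1.0 / PHOTONS
        for _ in range(BOUNCES):
            # Distance until the photon hits a wall of the unit box.
            t = min(
                ((1.0 if dx > 0 else 0.0) - x) / dx if dx else math.inf,
                ((1.0 if dy > 0 else 0.0) - y) / dy if dy else math.inf,
            )
            x, y = x + dx * t, y + dy * t
            # Deposit energy into the light-map cell at the hit point.
            i = min(GRID - 1, max(0, int(x * GRID)))
            j = min(GRID - 1, max(0, int(y * GRID)))
            grid[j][i] += energy
            energy *= ALBEDO
            # Diffuse bounce: pick a new random direction, and nudge
            # the photon off the wall so the next intersection is valid.
            ang = rng.uniform(0, 2 * math.pi)
            dx, dy = math.cos(ang), math.sin(ang)
            x = min(max(x, 1e-6), 1 - 1e-6)
            y = min(max(y, 1e-6), 1 - 1e-6)
    return grid

if __name__ == "__main__":
    lm = trace_photons()
    total = sum(sum(row) for row in lm)
    print(f"total deposited energy: {total:.3f}")
```

Note that nothing in the loop knows where the camera is; once the grid is stored, you could “view” it from anywhere, which is the whole appeal of the idea.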
The one big advantage is that you could position the camera anywhere in an already-rendered scene in real time and, within a second or so, see how the shot looks at final quality.
I would compare it to a live-action scene. Nature handles all the light bounces, and the scene progresses as the actors act it out. You can’t directly change what an actor does without CGI or reshooting the take. Similarly here: once the scene is set, you couldn’t alter how it plays out without recalculating or adding CGI (to your CGI).
The disadvantages are time and space. It would take millions of CPU-hours to get the final result, and the output would probably take terabytes, if not petabytes, of space. That would make it impossible for the hobbyist, but only inconvenient for big studios within a few years.
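A rough back-of-envelope supports the petabyte guess. A camera-independent result is effectively a dense light field over the scene’s surfaces (position × incoming direction); every number below is my own assumption, just to get an order of magnitude:

```python
# Back-of-envelope storage estimate for a precomputed "full light map".
# All figures are assumptions for illustration, not measured values.
texels = 8192 * 8192       # light-map texels over all scene surfaces
directions = 64 * 64       # directional bins per texel (view dependence)
bytes_per_sample = 6       # RGB at half-float precision

frame_bytes = texels * directions * bytes_per_sample
frames = 24 * 60 * 5       # a 5-minute animation at 24 fps
total_bytes = frame_bytes * frames

print(f"per frame:  {frame_bytes / 2**40:.1f} TiB")
print(f"whole shot: {total_bytes / 2**50:.1f} PiB")
```

Even with these fairly modest resolutions, a single frame lands in the terabyte range and a short animation in the petabyte range, which is exactly the “inconvenient for big studios” territory.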
Instead of computing it to final render quality (the equivalent of 2,000 Cycles samples), it might be more feasible to stop at 50. That would give the cameraman a general idea of what his scene looks like. Then, when he is happy with how the camera moves, he could send it back to the farm for the traditional render.
This is just hypothetical but I’d be curious to see what you guys can think of as far as pros/cons.