Precompute Scene Light Map

As a curiosity, what do you think the advantages/disadvantages of calculating the full light map for an entire scene would be? Not necessarily in Blender, but in any program. The way light paths are calculated in Blender is “backwards” compared to the real world: rays are cast out from the camera and bounce around until they find a light source. But what if someone took the game-engine model to an extreme? Instead of outputting the general illumination of an object, what if the program sent out trillions of light rays from a source, in a “forward” manner, and exported the result to a file? Not that anyone would bother doing this until quantum-computing-like speeds could be achieved in animation.
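To make the idea concrete, here is a minimal sketch of that “forward” approach in Python, for a deliberately toy scene (one point light above a diffuse floor plane; all names and numbers are made up for illustration): rays are emitted from the light and their energy is splatted into a baked light map.

```python
# A minimal sketch of "forward" light tracing, assuming a single point
# light over a diffuse floor plane (everything here is a toy
# illustration, not any real renderer's API).
import math
import random

RES = 64                      # texels per side of the baked light map
EXTENT = 10.0                 # the map covers [-EXTENT, EXTENT]^2 on the floor
lightmap = [[0.0] * RES for _ in range(RES)]

light_pos = (0.0, 5.0, 0.0)   # point light 5 units above the floor (y = 0)
light_power = 100.0
num_rays = 200_000

def random_direction():
    """Uniform random direction on the unit sphere."""
    y = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - y * y)
    return (r * math.cos(phi), y, r * math.sin(phi))

for _ in range(num_rays):
    d = random_direction()
    if d[1] >= 0.0:
        continue              # ray goes up and never hits the floor
    # Intersect the ray light_pos + t*d with the plane y = 0.
    t = -light_pos[1] / d[1]
    x = light_pos[0] + t * d[0]
    z = light_pos[2] + t * d[2]
    if abs(x) >= EXTENT or abs(z) >= EXTENT:
        continue
    # Splat this ray's share of the light's power into the nearest texel.
    i = int((x + EXTENT) / (2 * EXTENT) * RES)
    j = int((z + EXTENT) / (2 * EXTENT) * RES)
    lightmap[j][i] += light_power / num_rays

# The brightest texel should be directly under the light.
print(max(max(row) for row in lightmap))
```

Note how the camera never appears anywhere in the loop: the baked map is valid for any viewpoint, which is where the real-time camera advantage comes from.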

The one big advantage would be that you could position the camera in an already-rendered scene in real time: move it anywhere you wanted and, within a second or so, see how the shot looks at final quality.

I would compare it to a live-action scene. Nature handles all the light bounces, and the scene progresses as the actors act it out. You can’t directly change what an actor does without CGI or reshooting the take. Similarly here: once the scene is baked, you couldn’t alter how it plays out without recalculating or adding CGI (to your CGI).

The disadvantages are time and space. It would take millions of CPU-hours to get the final result, and probably terabytes, if not petabytes, of space. That would make it impossible for the hobbyist, though within a few years it would be merely inconvenient for big studios.

It might be more feasible, instead of baking to final render quality (the equivalent of 2000 Cycles samples), to bake to only 50. This would give the cameraman a general idea of what his scene looks like. Then, when he is happy with how the camera moves, he could send it back to the farm for the traditional render.

This is just hypothetical, but I’d be curious to see what pros and cons you guys can think of.

Traditionally, lightmaps only store diffuse (that is, view-independent) illumination. If you stored all the light from all directions, you’d have what is commonly referred to as a light field. As you imagined, the storage cost (and potentially the computation time) for this is enormous.
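A quick back-of-the-envelope calculation shows the scaling (every number here is an assumed, made-up resolution, chosen purely for illustration):

```python
# Back-of-the-envelope light-field storage; every number is an assumed
# resolution, chosen only to show how quickly the cost scales.
texels = 8192 * 8192           # spatial resolution of the baked map
directions = 64 * 64           # directional bins per texel (the light-field part)
bytes_per_sample = 3 * 2       # RGB at half-float precision

per_frame = texels * directions * bytes_per_sample
print(f"{per_frame / 2**40:.1f} TiB per frame")             # ~1.5 TiB
print(f"{per_frame * 24 * 60 / 2**50:.1f} PiB per minute")  # ~2.1 PiB at 24 fps
```

Even with aggressive compression knocking a zero or two off those figures, you land squarely in the terabytes-to-petabytes range the question estimates.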

In the long run, most forms of pre-computation are likely to become obsolete, because memory access time improves at a far slower rate than processing power.

Also, not all rendering methods trace rays only backwards. Bidirectional Path Tracing traces rays from the light sources as well and connects those paths to the paths originating from the camera (and to the camera sensor itself). Photon Mapping shoots rays from the lights into the scene and stores the intersections in a cache, which is then sampled similarly to light sources.
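To give a rough idea of Photon Mapping’s two phases, here is a minimal sketch in Python, assuming a toy scene with one point light over a diffuse floor (two simplifications: a flat list instead of the kd-tree a real photon map would use, and plain k-nearest density estimation):

```python
# Minimal sketch of the Photon Mapping idea: shoot photons from the
# light, cache the hit points, then estimate irradiance at any surface
# point by density estimation over the k nearest cached photons.
# Toy assumptions: one point light over a diffuse floor at y = 0,
# and a flat list where a real photon map would use a kd-tree.
import math
import random

photons = []                          # cache of (x, z, power) hits on the floor
light_pos = (0.0, 5.0, 0.0)
light_power = 100.0
num_photons = 100_000

for _ in range(num_photons):
    # Sample a uniform direction on the unit sphere.
    y = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - y * y)
    dx, dz = r * math.cos(phi), r * math.sin(phi)
    if y >= 0.0:
        continue                      # this photon misses the floor
    t = -light_pos[1] / y
    photons.append((light_pos[0] + t * dx,
                    light_pos[2] + t * dz,
                    light_power / num_photons))

def irradiance(px, pz, k=50):
    """Density estimate: total power of the k nearest photons,
    divided by the area of the disc that contains them."""
    nearest = sorted(photons, key=lambda p: (p[0]-px)**2 + (p[1]-pz)**2)[:k]
    radius2 = max((p[0]-px)**2 + (p[1]-pz)**2 for p in nearest)
    return sum(p[2] for p in nearest) / (math.pi * radius2)

# Directly under the light the estimate should approach the analytic
# value light_power / (4 * pi * distance^2) = 100 / (4 * pi * 25) ~ 0.32.
print(irradiance(0.0, 0.0))
```

The cache is built once per scene, much like the precomputation idea above, but it stores a sparse set of points rather than a full light field, which is what keeps the memory cost manageable.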