I fully agree. Once you've experienced the power of AOVs, they become central to the rendering pipeline. To me this is not primarily about GPU usage but about the power of saving data and deferring as much computation as possible to a stage where one can influence it in near real time. At this stage, IMHO, a few shortcuts can and should be taken to put the emphasis on speed. But something I see missing in current solutions outside of studio pipelines is the ability to feed the tweaked parameters back to the renderer to produce optimally sampled, artifact-free images. For example, multiplications outside of the renderer are prone to cause artifacts in combination with transparency. Or the well-known “anti-aliased Z pass” problem.
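To make the “anti-aliased Z pass” problem concrete: nonlinear operations don't commute with the renderer's pixel filter, so applying them in the compositor after anti-aliasing gives a different (wrong) result than applying them per sample inside the renderer. Here is a minimal numerical sketch in Python, with made-up sample values and a hypothetical `fog` function standing in for any nonlinear depth-based effect:

```python
import math

def fog(z, density=0.05):
    """Exponential fog factor; a stand-in for any nonlinear op on depth."""
    return math.exp(-density * z)

# Two sub-pixel samples straddling an object edge:
# foreground at z=1, background at z=100 (illustrative values).
samples = [1.0, 100.0]

# What an anti-aliased Z pass stores: the filtered (averaged) depth.
z_filtered = sum(samples) / len(samples)  # 50.5

# Correct: evaluate fog per sample, then filter (what the renderer would do).
correct = sum(fog(z) for z in samples) / len(samples)

# Broken: filter first, then evaluate fog (what a compositor on the Z pass does).
broken = fog(z_filtered)

print(f"per-sample fog:  {correct:.4f}")  # ~0.4790
print(f"fog on AA'd Z:   {broken:.4f}")   # ~0.0800 -> visible halo on the edge
```

This is exactly why feeding the tweaked parameters back to the renderer matters: only the renderer can evaluate the operation per sample, before filtering, and so avoid the edge artifacts.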
For reference, here are the post and the thread that got me started fantasizing about this a year ago. Back then I wouldn't have thought this possible outside a specialized studio pipeline. But having witnessed the power and usability that Blender's integrated compositor brings, combined with the pace of its development, I now see it as possible.
Can’t wait for the “Blender Shading Language”!