Thank you for the elaborate reply.
Let me react to some of your points:
1, I definitely agree that good integration with a host software is a must. Having a separate studio app just for a renderer is an inconvenience for many, because as soon as you need to change your scene topology-wise, you need to do a round trip back into an asset creation package, make the changes, and pray that they won’t break any setup already done in the rendering app when re-importing.
Furthermore, it’s not only crucial to have some integration, but for that integration to be very tight. That’s one of the pitfalls of Blender; plain exporters don’t allow for deeper integration such as interactive rendering sessions, but since Appleseed is open source, it should not be much of an issue. Interactive rendering is becoming a standard these days.
2, I tried Appleseed some months ago but dropped it quickly. Despite almost a decade of experience with various renderers, such as Mental Ray, V-Ray, Arnold, Cycles, Clarisse and Corona (for which I’ve done the majority of the UI design and feature functionality design), I was not able to get anywhere in the Appleseed studio app. Even a task as basic as setting up image-based lighting proved to be too much for me. I will give it a shot again when I have some time, and I will try to write down all the obstacles I encounter through the eyes of a new user.
3, I have actually very closely witnessed the failure to adapt and subsequent death of Mental Ray, so I understand well what you mean here.
4, Quite a few renderers have already experimented with MLT, BDPT, SPPM and VCM, including V-Ray and Corona, but they all ended up back at good old unidirectional, non-spectral path tracing, for several reasons:
A, Unidirectional path tracing with GI caching for secondary bounces (irradiance/light cache) is still vastly superior in terms of performance. The old worries about flickering are pretty much resolved these days with smart retracing of the secondary GI cache, ray clamping and roughening of specular reflections for cached GI paths.
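To give an idea of what the secondary-bounce cache does, here is a minimal Python sketch (the `IrradianceCache` class and its API are entirely made up for illustration, not how any shipping renderer implements it): instead of path tracing GI at every shading point, irradiance is interpolated from nearby cached samples.

```python
import math

class IrradianceCache:
    """Toy irradiance cache: reuse GI samples within a fixed radius."""

    def __init__(self, max_dist):
        self.samples = []          # list of (position, irradiance) records
        self.max_dist = max_dist   # reuse radius for cached samples

    def insert(self, position, irradiance):
        self.samples.append((position, irradiance))

    def lookup(self, p):
        # Distance-weighted average of cached samples near p; returns
        # None when no sample is close enough (caller must then trace).
        total_w, total_e = 0.0, 0.0
        for pos, e in self.samples:
            d = math.dist(p, pos)
            if d < self.max_dist:
                w = 1.0 - d / self.max_dist
                total_w += w
                total_e += w * e
        return total_e / total_w if total_w > 0.0 else None
```

A real cache also stores normals, uses error metrics and retraces stale records, but the performance win comes from exactly this reuse.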
B, Bidirectional methods tend to require bidirectional shader coherency, which has proved to be way too limiting for the shader flexibility required by the majority of users.
C, Users rely on accurate caustics, or accurate light transport in general, a lot less than programmers tend to think. Even if you clamp ray values to a value as low as 5, most people will not even notice. The reason CG programmers tend to think GI accuracy matters to users is mostly a general dislike of older GI methods, which lost a lot of detail by caching primary bounces as well. But those are a thing of the past.
As long as there are no splotches, blurry shadows, missing contact shadows or an overly significant loss of light energy, people generally don’t care, provided there is some light bounce, color bleeding and defined indirect shadows. This opens up huge opportunities for performance optimizations.
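The clamp I mean is trivial; a Python sketch (the function name and the per-channel approach are mine, renderers differ in where exactly they apply it):

```python
def clamp_radiance(rgb, max_value=5.0):
    """Clamp per-channel path contributions to a maximum value.

    A low cap such as 5 kills fireflies and bright caustic noise at
    the cost of a small, usually unnoticed, loss of light energy.
    """
    return tuple(min(channel, max_value) for channel in rgb)
```

For example, `clamp_radiance((12.0, 0.3, 5.0))` gives `(5.0, 0.3, 5.0)`: only the overly bright channel is touched.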
D, For spectral rendering to be of real benefit, it generally requires the input to be spectral as well. The vast majority of users feed in a bunch of JPEG textures stored around their hard drive. Spectral rendering will rarely make their output look any better, but it will probably still hurt performance. Spectral effects such as glass dispersion can be achieved even in non-spectral rendering modes, simply by branching rays into different wavelength colors on a hit with a refractive surface that has dispersion enabled. Making the entire renderer spectral because of it has not paid off for anyone so far, as far as I know.
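A rough Python sketch of the branching I mean (all names and the band IORs are made up for illustration; a production tracer would pick one band stochastically and reweight rather than tracing all three):

```python
# Three color bands, each with its own index of refraction, so red,
# green and blue refract at slightly different angles (dispersion).
BANDS = [
    ("red",   (1.0, 0.0, 0.0), 1.510),  # lowest IOR: red bends least
    ("green", (0.0, 1.0, 0.0), 1.517),
    ("blue",  (0.0, 0.0, 1.0), 1.528),  # highest IOR: blue bends most
]

def branch_dispersive_ray(throughput):
    """Split one ray into three wavelength-banded child rays.

    Each child carries the parent's throughput filtered by its band
    color; the three filters sum to white, so no energy is invented.
    """
    children = []
    for _name, tint, ior in BANDS:
        child_throughput = tuple(t * c for t, c in zip(throughput, tint))
        children.append((child_throughput, ior))
    return children
```

Each child ray is then refracted with its own IOR, which is all it takes to get visible dispersion out of an RGB renderer.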
E, Advanced light transport methods will definitely help in some rare cases; for example, imagine a scene lit entirely by sunlight reflected from a glass-windowed building across the street. However, users have already learned not to set up their scenes in these ways, or to fake such effects. In the end, the benefits of these methods are often still vastly negated by their drawbacks.
5, AFAIK, referring to render passes as “AOVs” was originally introduced in Arnold and then adopted by others. If you want to simplify Appleseed, names like this are exactly the place to start. A new user unfamiliar with rendering will hardly know what AOV means, and even if he knew that the acronym stands for “Arbitrary Output Variables”, he’d probably assume it is some programming term (due to the word “Variables”), and never associate it with an image output. Calling it something like render passes, render layers or render buffers is significantly more appropriate.
If an experienced Arnold user comes to Appleseed, he will already know what AOVs are; and if he encounters a name like “Render Layers” instead, I am sure the kind of person who knows what AOVs are will be smart enough to translate the term.
6, I absolutely understand that it’s impossible for a spare-time open source project to compete with commercial renderers being developed full time by large teams. My main point was that Corona and V-Ray are generally a much better reference for where a new render engine should be heading in terms of usability than RenderMan or Arnold, which, while employing modern rendering methods, in some aspects still rely on 10-20 year old workflows.