appleseed 1.9 beta preview

Hi all,
We’re nearing another release of the appleseed renderer and we’d really love to get some help in testing out all the new features that have been added to blenderseed in the last few months, including:

OSL shading
Object visibility flags
Scene file export for later rendering (including animation)
Denoising
Render stamps
AOVs

and more…

Documentation: https://readthedocs.org/projects/appleseed-blenderseed/

Any issues can be reported here: https://github.com/appleseedhq/blenderseed/issues

And finally if you have any questions, suggestions or just want to chat with the appleseed devs and users, join us on Discord at https://discord.gg/Vcu5A7h

Strange question regarding OSL.

Blender has an OSL flag which turns ALL Cycles nodes into OSL scripts that produce the same output, with proper connections between the nodes (an input -> output tree).

Does that mean a Cycles node tree (a full material) could be transferred into appleseed for rendering “as is”, in the form of one huge OSL script, or as a bunch of OSL scripts with connections?

Theoretically… is this possible?

Appleseed would be able to use some of Blender’s OSL files, but not directly (it has no access to Cycles materials). You’d have to copy them to the blenderseed shader folder in the appleseed package. Most of the utility shaders should work (nothing with closures), but the Blender OSL nodes are missing a lot of the metadata we use to dynamically create our nodes, so they’ll probably look pretty weird.

Cycles basically uses the same hardcoded nodes in both modes, whereas we dynamically create our nodes at startup.

Latest preview:

I am a bit curious what the purpose of appleseed is.

It seems to be built around the old REYES RenderMan and Mental Ray expectations of what a renderer should be. Such renderers, with no focus on usability and simplified PBR workflows and a heavy focus on technical complexity, are already dying out. I am not really sure the end of the second decade of the 21st century is the right point in time for such a renderer to be nearing release.

REYES and Mental Ray were not path tracers; appleseed is (while also offering other methods, of course). appleseed supports PBR workflows with the Disney BSDF, denoising will come, and there is OSL support and light tree sampling. It even had random walk SSS before Cycles did. So I don’t really know what you mean by that.

I mean a focus on usability over technical complexity. Just a week or so ago, someone posted a video here in which the 3Delight CTO was reflecting that they have almost no users these days because they missed the train of trends, which are shifting towards usability.

@rawalanche I think you haven’t tried Appleseed recently. Appleseed is not much different in usability from Cycles in Blender and delivers great image quality. It has some things Cycles does not provide, e.g. SPPM for complex lighting scenarios, OSL support without incurring a speed penalty, spectral rendering, and native plugins for Maya and 3ds Max.
The goal is to have a full open source alternative to VFX production render engines like Arnold, suitable for small studios or freelancers.

I did actually. Appleseed is not a renderer you just pick up and use; it’s a renderer to struggle with, even with the documentation open right on a second monitor. The experience is incomparable to, for example, Corona Renderer, which is so simple you can pick it up and use it even without a manual. And that’s where all renderers are heading.

For example, you should not need to learn a set of features named with arbitrary, cryptic acronyms that work in tandem just to get something as essential as image-based lighting going.

I completely agree that Corona is a good example of ease of use, and artist-friendliness is important. Good documentation is an issue with most open source projects: it’s boring work compared to implementing new features, and therefore it’s not easy to find unpaid contributors for it.
I have been using appleseed for about two months and haven’t had much trouble. Granted, as an amateur I don’t have the demands and strictures of a professional project to fulfill, so I won’t opine about its suitability in a professional setting.
If you use Corona, then most probably together with 3ds Max. For appleseed with Max, you can load an HDR via a Max Bitmap into the Environment Map slot and use it for basic image-based lighting.

Can you be more specific on this? Our aim is to make appleseed as user-friendly as possible, so we definitely need to hear from users.

Although, to be fair, even at its worst appleseed is far easier to use than REYES or Mental Ray.

The preview build above does have OSL and denoising in it, BTW.

Hi @rawalanche, appleseed founder here.

Thanks for your comments.

You’re right that appleseed still uses too much jargon. It makes picking it up more painful than necessary and is the source of much confusion. I believe the reason is that appleseed has historically been impossibly hard to use because there were no good plugins, or in most cases no plugins at all. The few users were thus either technical artists from the team, or technically-oriented users with a strong tolerance for jargon and a lot of patience, or artists closely in touch with the development team (via Slack, at the time) and thus able to ask for clarification in real time as they went.

Things have improved a lot since the early days. For the past couple of years we’ve been putting a ton of effort into writing high quality plugins for Blender, Maya and 3ds Max. They are far from perfect, but they allow regular users to pick up appleseed and start exploring how it works and what it offers, and appleseed’s flaws now come to light.

We are actively improving the situation by simplifying workflows, replacing technical terms (“Mean Free Path”) with more intuitive ones (“SSS Depth”), etc. It’s an ongoing process and we still have a ton of work to do, but things are improving daily. User feedback is an essential step in this process as it’s the starting point for most of our efforts on usability, so we highly appreciate it.

I smiled when you compared appleseed to mental ray; here’s why: I was a rendering R&D engineer on mental ray. The core tech was good, but like appleseed it was shaped not by regular users but by the very specific needs of TDs at high end studios such as BUF (Fight Club, Matrix, etc.). It also had a lot of historical baggage, and it was difficult to improve it and grow it in new directions. I believe this is one of the reasons why it failed to lead (or even really follow) the switch from rasterization to path tracing, for example.

appleseed is a lot closer to Cycles or Arnold than it is to mental ray. At its core it’s a state-of-the-art, physically-based path tracer. It implements the modern workflow expected by artists: single-pass rendering, progressive rendering, interactive rendering, etc. PBR is absolutely its bread and butter.

appleseed provides an interesting set of features. It’s pretty much the only renderer I know of that can work in either RGB mode or pure spectral mode. It can do unbiased rendering, but it offers many settings and switches to progressively and selectively reduce noise and render times at the cost of bias and correctness. It offers fully programmable shading (via OSL), fast and high quality motion blur, state-of-the-art reflection models (Disney BRDF, GGX microfacet model, physically-based metal, plastic and glass, etc.), state-of-the-art subsurface scattering (Normalized Diffusion or Random Walk), AOVs, denoising…

appleseed also implements advanced light transport methods such as SPPM (i.e. modern photon mapping). We are working on BDPT (Bidirectional Path Tracing) and will follow up with VCM (Vertex Connection and Merging). Sorry for all the jargon, but in a nutshell these light transport methods make it possible to render scenes that are difficult or impossible to render with plain unidirectional path tracers such as Arnold or Cycles.
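For the curious, here is a heavily simplified sketch of the core idea behind BDPT in C++-flavored pseudocode. All types and helpers here (Scene, Vertex, Color, random_walk, mis_weight and friends) are hypothetical illustrations rather than appleseed’s actual API, and the real algorithm is considerably more involved:

```cpp
#include <vector>

// Hypothetical types, for illustration only: a Vertex stores one surface
// hit plus the path throughput accumulated up to that point.
Color estimate_radiance_bdpt(const Scene& scene, Sampler& sampler)
{
    // A unidirectional path tracer only performs the first of these two
    // random walks and hopes to eventually reach a light. BDPT also walks
    // from a light source...
    const std::vector<Vertex> eye_path   = random_walk(scene, camera_ray(scene, sampler), sampler);
    const std::vector<Vertex> light_path = random_walk(scene, light_ray(scene, sampler), sampler);

    // ...then tries to connect every eye vertex to every light vertex with
    // a shadow ray. Lighting that is nearly impossible to find from the
    // camera side (e.g. light squeezing through a small gap) is found from
    // the light side instead.
    Color radiance = Color::black();
    for (const Vertex& e : eye_path)
        for (const Vertex& l : light_path)
            if (mutually_visible(scene, e, l))
                radiance += throughput(e) * throughput(l)
                          * geometry_term(e, l)  // distance and cosine falloff
                          * bsdf_terms(e, l)     // BSDFs at both connection endpoints
                          * mis_weight(eye_path, light_path, e, l);  // combine the overlapping estimators
    return radiance;
}
```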

This unique feature set allows appleseed to be used on very different projects. For instance, we’re working on a new animated short film with a unique visual style (our first one was Fetch: https://vimeo.com/92172277); at the same time, a major company is currently using appleseed for simulation and visualization in a CAD context. You can see a glimpse of the visualization features offered by appleseed in this video: https://vimeo.com/263532331.

As I’ve said earlier, we still have a ton of work to do to make appleseed easier to pick up and easier to use. We’re obviously nowhere near what Corona offers, but we are determined to push in the direction of usability and productivity. appleseed is a fully open source project developed by volunteers in their free time. This means that we have limited development resources, and that sometimes the interests of users and developers are not quite aligned. We have to live with this.

Our goal has always been to offer state-of-the-art rendering technology in a fully open source form. Despite all its failings, I believe that appleseed is accomplishing exactly that.

@Franz:

Hi,

thank you for the elaborate reply.

Let me react to some of your points:

1. I definitely agree that good integration with a host application is a must. Having a separate studio app just for a renderer is an inconvenience for many, because as soon as you need to change some of your scene topology-wise, you need to do a round trip back into an asset creation package, make the changes, and pray they won’t break any setup already done in the rendering app when re-importing.

Furthermore, it’s not only crucial to have some integration, but for that integration to be very tight. That’s one of the pitfalls of Blender: plain exporters don’t allow for deeper integration such as interactive rendering sessions. But since appleseed is open source, that should not be much of an issue. Interactive rendering is becoming a standard these days.

2. I tried Appleseed some months ago but dropped it quickly because, despite almost a decade of experience with various renderers, like Mental Ray, V-Ray, Arnold, Cycles, Clarisse and Corona (on which I’ve done a majority of the UI design and feature functionality design), I was not able to get anywhere in the Appleseed studio app. Even such a task as setting up image-based lighting proved to be too much for me. I will give it another shot when I have some time, and I will try to write down all the obstacles I encounter through the eyes of a new user.

3. I have actually very closely witnessed the failure to adapt, and subsequent death, of Mental Ray, so I understand well what you mean here :)

4. There have already been quite a few renderers that experimented with MLT, BDPT, SPPM and VCM, including V-Ray and Corona, but they all ended up back at good old unidirectional, non-spectral path tracing, for several reasons:

A. Unidirectional path tracing with GI caching for secondary bounces (irradiance/light cache) is still vastly superior in terms of performance. The old worries about flickering are pretty much resolved these days with smart retracing of the secondary GI cache, ray clamping, and roughening of specular reflections for cached GI paths.
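For readers unfamiliar with the technique, a minimal, self-contained sketch of the secondary-bounce irradiance caching idea (illustrative only; record placement, validity radii and interpolation are far more sophisticated in real caches such as V-Ray’s or Corona’s):

```cpp
#include <cmath>
#include <vector>

// A record caches the diffuse indirect illumination (irradiance) computed
// at one point, valid within some radius around it.
struct Vec3 { float x, y, z; };
struct IrradianceRecord { Vec3 position; Vec3 irradiance; float radius; };

struct IrradianceCache
{
    std::vector<IrradianceRecord> records;

    // At a secondary diffuse bounce, reuse a nearby cached record instead
    // of tracing a whole hemisphere of GI rays. On a miss, the caller
    // computes the irradiance the expensive way and inserts a new record.
    bool lookup(const Vec3& p, Vec3& out) const
    {
        for (const IrradianceRecord& r : records)
        {
            const float dx = p.x - r.position.x;
            const float dy = p.y - r.position.y;
            const float dz = p.z - r.position.z;
            if (std::sqrt(dx * dx + dy * dy + dz * dz) < r.radius)
            {
                out = r.irradiance;  // real caches blend several records
                return true;
            }
        }
        return false;
    }
};
```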

B. Bidirectional methods tend to require bidirectional shader coherency, which has proved to be way too limiting for the shader flexibility required by the majority of users.

C. Users rely on accurate caustics, or for that matter even accurate light transport, a lot less than programmers tend to think. Even if you clamp ray values to a value as low as 5, most people won’t even notice. The reason CG programmers tend to think GI accuracy matters to users is mostly the general dislike of older GI methods, which lost a lot of detail by caching primary bounces as well. But those are a thing of the past.

As long as there are no splotches, blurry shadows, missing contact shadows or too significant a loss of light energy, people generally don’t care, provided there is some light bounce, color bleeding and defined indirect shadows. This opens up huge opportunities for performance optimization.

D. For spectral rendering to be of real benefit, it generally requires the input to be spectral as well. The vast majority of users feed in a bunch of JPEG textures stored around their hard drive. Spectral rendering will rarely make their output look any better, but it will probably still harm performance. Spectral effects such as glass dispersion can be achieved even in non-spectral rendering modes, simply by branching rays into different wavelength colors on hitting a refractive surface with dispersion enabled, as sketched below. As far as I know, making an entire renderer spectral for this has not paid off for anyone so far.
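A minimal sketch of that branching approach, assuming Cauchy’s approximation for the wavelength-dependent IOR (the coefficients below are illustrative values, not measured glass data):

```cpp
#include <array>

// Cauchy's approximation: n(lambda) = A + B / lambda^2 (lambda in micrometers).
float cauchy_ior(const float wavelength_nm, const float A = 1.50f, const float B = 0.005f)
{
    const float lambda_um = wavelength_nm * 0.001f;
    return A + B / (lambda_um * lambda_um);
}

// On hitting a refractive surface with dispersion enabled, an RGB renderer
// can split the ray into one ray per channel, each refracted with its own
// IOR and carrying energy only in its own channel.
struct ChannelRay { float wavelength_nm; float ior; };

std::array<ChannelRay, 3> branch_dispersive_refraction()
{
    // Representative wavelengths for R, G and B. Blue has the highest IOR
    // and bends the most, which is what produces the rainbow fringes.
    return {{
        { 650.0f, cauchy_ior(650.0f) },  // red
        { 550.0f, cauchy_ior(550.0f) },  // green
        { 450.0f, cauchy_ior(450.0f) },  // blue
    }};
}
```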

E. Advanced light transport methods will definitely help in some rare cases; for example, imagine a scene lit entirely by sunlight reflected from a glass-windowed building across the street. However, users have already learned not to set up their scenes in these ways, or to fake these effects. In the end, the benefits of these methods are often still vastly negated by their drawbacks.

5. AFAIK, referring to render passes as “AOVs” was originally introduced in Arnold and then adopted by others. If you want to simplify Appleseed, this kind of naming is exactly the place to start. A new user unfamiliar with rendering will hardly know what AOV means, and even if he knew that the acronym stands for “Arbitrary Output Variables”, he’d probably assume it’s some programming term (due to the word “Variables”) and never associate it with an image output. Calling them something like render passes, render layers or render buffers would be significantly more appropriate.

If an experienced Arnold user comes to Appleseed, he will know what AOVs are; and if he encounters a name like “Render Layers”, I am sure the kind of person who knows what AOVs are will be smart enough to translate the term :)

6. I absolutely understand that it’s impossible for a spare-time open source project to compete with commercial renderers, which are developed full time by large teams. My main point was that Corona and V-Ray are generally a much better reference for where a new render engine should be heading in terms of usability than RenderMan or Arnold, which, while employing modern rendering methods, still in some respects rely on 10-20 year old workflows.

Thanks for the great feedback.

Answering your points, following the same numbering:

1. appleseed.studio is a tool to inspect, debug, tweak and render scenes. It shows, and lets you manipulate, the internal in-memory scene representation used by the renderer. It’s not meant for lookdev or scene assembly; that’s what the DCC plugins are for. Maybe one day appleseed.studio will grow into more than a debugging tool, but for now that’s all it is. Clearly we need to communicate better about this. Note that interactive rendering is already supported (with limitations) by our 3ds Max and Maya plugins. Supporting interactive rendering in Blender is definitely on our roadmap.

2. If you used appleseed.studio, I can understand your frustration. It’s a nightmare to use for scene construction, as you need to create all the individual entities and connect them together. It doesn’t even allow moving objects or lights, nor does it allow renaming entities, and it has no undo/redo. Again, it’s really not meant for this. If you give appleseed another try, you should use one of the plugins. The 3ds Max plugin is the most refined and the easiest to pick up, followed by the Maya and Blender ones.

3. I don’t believe MLT is a useful technique in the context of appleseed, but BDPT in particular is important for some of the visualization projects appleseed is used on. SPPM is just a stepping stone toward VCM, which itself is an interesting “last resort” technique for tricky lighting situations. However, I agree that a good, well-tweaked, well-refined unidirectional path tracer is sufficient in most cases. Regarding spectral rendering: it’s pretty much useless for anything related to VFX and animation, but it’s again a very important topic for the visualization projects appleseed is used on. Profiling has shown us that, due to how it’s implemented, supporting both RGB and spectral rendering side by side doesn’t cost appleseed much performance. Animation/VFX people stick to RGB and don’t pay for the spectral support, so that’s fine.
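For the curious, a self-contained sketch of one way such side-by-side support can be cheap (an assumption about the general approach, not appleseed’s actual code): write the light transport code once, generic over the color type, so the compiler emits a fully specialized RGB path and a fully specialized spectral path.

```cpp
#include <array>
#include <cstddef>

// A color with N bands: N = 3 gives classic RGB, larger N gives a spectrum.
template <std::size_t N>
struct Bands
{
    std::array<float, N> v;

    Bands operator*(const Bands& rhs) const
    {
        Bands result;
        for (std::size_t i = 0; i < N; ++i)
            result.v[i] = v[i] * rhs.v[i];
        return result;
    }
};

using RGB      = Bands<3>;   // three channels
using Spectrum = Bands<31>;  // e.g. 31 bands across the visible range

// The same transport code serves both modes at full speed; RGB renders
// never touch spectral data.
template <typename ColorT>
ColorT attenuate(const ColorT& radiance, const ColorT& transmittance)
{
    return radiance * transmittance;
}
```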

4.A. I’m not too fond of irradiance/light caches. I’m much happier introducing caching-free techniques that let users gradually introduce bias in order to speed up renders. Max Ray Intensity (to kill fireflies) and Roughness Clamping are two effective such techniques, but there are others. We already have the former and are working on the latter.
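To make those two knobs concrete, here is a minimal, self-contained sketch of each (illustrative only, not appleseed’s actual code):

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Max Ray Intensity: cap the radiance a secondary ray may return so that
// rare but extremely bright samples (fireflies) cannot dominate a pixel.
// Scaling all channels uniformly preserves the sample's hue.
Color clamp_ray_intensity(Color c, const float max_intensity /* e.g. 5.0f */)
{
    const float m = std::max({ c.r, c.g, c.b });
    if (m > max_intensity)
    {
        const float s = max_intensity / m;
        c.r *= s; c.g *= s; c.b *= s;
    }
    return c;
}

// Roughness Clamping (path regularization): after the first bounce, never
// let a BSDF be sharper than some minimum roughness. Near-specular chains
// such as caustics become slightly blurrier but render with far less noise.
float clamp_roughness(const float roughness, const int bounce, const float min_indirect_roughness = 0.1f)
{
    return bounce == 0
        ? roughness                                   // keep directly visible reflections sharp
        : std::max(roughness, min_indirect_roughness);
}
```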

4.B. The majority of our shaders work fine in the context of bidirectional techniques. There are a few that don’t, and if you choose to use them you forfeit the right to use bidirectional techniques.

4.C. Completely agree.

4.D. Agree here too. It makes very little sense to use spectral rendering with strictly RGB inputs. However, as I said above, some of the people we’re working with are more than happy to provide us with spectral representations of their materials in order to accurately model how light is reflected in their systems, going as far as requesting that we extend the range of simulated wavelengths beyond the visible spectrum.

4.E. Agree.

5. In fact, the term AOV (“Arbitrary Output Variable”) originates from RenderMan (or PRMan, as it was called at the time), not from Arnold. I believe it’s universally understood by TDs in VFX/animation studios, but you’re completely right that in the context of Blender the term Render Passes would be a lot more appropriate. We’re discussing this with the team on Discord (feel free to join if you’re interested: https://discordapp.com/invite/Vcu5A7h) and the consensus seems to be that Render Passes makes more sense for blenderseed. In 3ds Max we’ve actually called them Render Elements, as per the local nomenclature.

6. I partially agree. Corona and V-Ray are good examples to follow, although I personally find that V-Ray carries a lot of legacy complexity, and that many of its material parameters and rendering settings have unclear behaviors. I’m not an artist, so I’m not sure I would trust myself there. I’m also not exactly clear in what way Arnold and RenderMan still rely on 10-20 year old workflows. Could you elaborate, please?

Nice work jdent02. Can’t wait to try it out!