Well, I didn’t see this announced here yet, and since I was a pretty big fan of Mitsuba back when it was released, I thought I’d share a link to the published Mitsuba 2.0 paper.
I don’t think it’s released yet, but hopefully there will be renewed interest in developing a plugin for Mitsuba afterwards.
Mitsuba has always been more of an academic test platform than a renderer. It could do more than any other renderer, but the previous version was extremely slow and lacked almost everything usually needed for production use.
Mitsuba’s developers have no interest in making it fast or production-focused. It exists as a testing ground for academia, a stable testbed for exploring new light transport methods.
Anyway, being an “academic playground”, it can bring advances in rendering technology that might land in Cycles one day. Who knows, let the professors play…
Thanks for sharing, @RealityFox. I also hope Mitsuba 2 will be accessible from Blender in the future. The first version was a great renderer with lots of methods to choose from.
Intriguing, but I guess most of the mentioned techniques are mainly interesting for scientific research purposes, not for taking everyday renderings to a new level in terms of realism and/or speed. Of course, I still hope some of the algorithms could benefit Cycles as well.
Yeah, hopefully. Some of those next-gen research/academic techniques and algorithms may find their way into production pipelines. After all, even path tracing itself began as academic research. Or didn’t it?
At least for now, there was nothing in the video that explains how the image reconstruction for projector setups could be useful for general CGI and VFX. Does it help with caustics? Does the user have to input the data manually, etc.?
The same goes for what appears to be a demonstration of custom kernel code. Can this code be generated automatically for any scene made in a DCC app, or is it more specific? And what is the visible advantage over traditional engines?