New open source unbiased renderer

dsavi,

the README says you need to run setpath.sh to set the library path manually:

Getting started:
On Linux/OSX, the library path must be manually set: on the command line,
this can be done using the script “setpath.sh”:

    $ . setpath.sh

Please try again, I’m eager to know how it works. :slight_smile:

Also, would you tell me your Linux distro and version, and the version of your graphics drivers, please? I’m on Ubuntu Karmic with the official 185.18.36 Nvidia drivers. Those also provide gl.h and glext.h, which I suspect are too old. I’m getting compile errors like:

include/mitsuba/hw/glgeometry.h:39: error: ‘GLuint64’ does not name a type

Thank you.

I see your point LoafMag, but I’ve also seen a lot of renderers around that exist just to have cubes and spheres rendered out. Of course it’s not my business if a dev wants to spend his time on a new renderer. I’d just prefer to see more effort going in another direction, where Blender suffers more (exporters / importers, finished particles, more procedural animation tools - today we cannot even animate the parameters in the shelf tool - and so on…)

phoenixart,

why do you think you can tell anybody what to do with their free time? It’s not up to anybody except the coder to judge whether any energy is wasted or not, and providing you with the “perfect and fully integrated” render solution for Blender just might not be their main motivation.

/me shakes head

Dude, too much negativity…

And who’s this ‘we’ you claim to speak for anyway?

Wow guys, calm down.
Sanne, have you read my post above? Please do.

I thought I was part of a community where I’ve read, tons of times, about what we’re missing. Was I wrong?

On my laptop this renderer gives me an error when I try to run the *gui.exe. Luxrender and others run fine.

You assume that the people ‘wasting their time’ on this even have the knowledge to implement a modifiable construction history in blender… or have even heard of blender to begin with, since it seems to have nothing at all to do with blender.

It would be nice if we (the Open Source Commissars) could outright declare all non-blender coding tasks a ‘waste of time’ and force these people (with no relation to the blender project and in their spare time) to fix the deficiencies in blender before they can use their leisure time as they see fit.

First, thanks a lot for that; I had just skimmed through the README file.

I’m on Lucid 32 with the 195.36.15 drivers (I’m sure there’s a newer version, but I can’t upgrade them right now). Now that I look, 256.44 is the latest version. o_O I’ve never understood nVidia’s version numbering system.

Rebuilding it now, I’ll see if this works.

Edit: Still doesn’t work :frowning: Same error.

Edit #2: Success! I’ll post some screenshots later.

E: There’s nothing much to see. I rendered the Cornell box at 2k x 2k; on my dual core (Athlon X2 64 5000+) it rendered in 21m 48s with a fair amount of noise (but I didn’t know that the noise could be reduced). The noise was actually nothing compared to what Luxrender would have come up with in that amount of time.
Mitsuba (which is apparently some kind of plant) reads an XML file that describes the render settings, including links to .obj files containing the models and other files containing textures (if any). It doesn’t seem to me that an exporter would be hard to write; however, my Python skills are… lacking. :stuck_out_tongue:
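
In case anyone wants to take a stab at it, here’s a minimal sketch of the scene-writing half of such an exporter. The element and attribute names (“scene”, “integrator”, “shape”, the “path” plugin) are guesses modeled on Mitsuba’s sample files, not the official schema, so check a real scene file before trusting them:

    # Minimal sketch of a Mitsuba-style scene writer. Tag and attribute
    # names here are illustrative guesses, not the official schema.
    from xml.etree import ElementTree as ET

    def write_scene(obj_files, output_path):
        """Write a scene XML that references external .obj meshes."""
        scene = ET.Element("scene")
        # Hypothetical integrator element; actual plugin names may differ.
        ET.SubElement(scene, "integrator", {"type": "path"})
        for obj in obj_files:
            shape = ET.SubElement(scene, "shape", {"type": "obj"})
            ET.SubElement(shape, "string", {"name": "filename", "value": obj})
        ET.ElementTree(scene).write(output_path, encoding="utf-8",
                                    xml_declaration=True)

    write_scene(["cbox.obj"], "cbox.xml")

A Blender exporter would then only need to dump each mesh to .obj (Blender already ships an OBJ exporter) and feed the resulting paths to something like this.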

no, no, not at all, it’s the ‘new renderer’ part, I assure you. :yes:

But back on topic: a more important question to me than anything else is, does it do multi-pass rendering / render passes? :eyebrowlift2:

The site says it uses Collada, so it should be plug and play.

Not that I know what the state of the Collada exporter is, though…

And even more important: how fast is the biased renderer? :smiley:

(can’t run the windows version here, it crashes after loading one of the sample files)

No, I don’t assume that, even if your story is funnier than mine.
It’s strange how, in the open source world, one’s opinion can sometimes sound like a dictator’s imposition; that’s a shame, because this habit makes communication hard or impossible.

Exactly what I was talking about, nothing more, nothing less. I’ve been trying all the latest SVN builds and still couldn’t use the Collada exporter successfully.
Anyway, it wasn’t my intention to go OT or hijack this thread; sorry if it sounded like that.

It isn’t really the message but how you communicate it.

All I hear is “blah, blah, blah…you’re wasting your time because your project doesn’t directly benefit me”.

And filing bug reports that demonstrate the failures?

Another flaw in the open source world is that bugs don’t spontaneously fix themselves.

I wish someone who knew how to write these renderers would instead focus on making a production renderer. Most of the time these unbiased/biased renderers are great at stills, and they make pretty pictures, but they fall flat when it comes to the extensibility and robustness necessary for animation. Fast, accurate motion blur/DOF, stability, speed, ease of extension (shaders), etc.: these are the things I want to see in a renderer. Arnold, from what I’ve heard, has a great implementation of motion blur that almost rivals PRMan. I haven’t used it, and I don’t know if that is the case, but the creator clearly had a target production audience in mind and tailored his renderer to that environment. It also helps that he has Sony to test these features on. Frankly, I’d rather see a renderer that leaves the fancy unbiased features behind and instead focuses on the basics that make a renderer worth it in a real animation environment.

That’s not to say this renderer doesn’t look good, but can you use it for a 4K rendered animation with any reliability? Maybe that’s not his goal, and that’s okay. I just wish that someone in the OSS community would take that challenge up. I thought Lux was it, but so far they seem to be going the same way as all other renderers: great for stills but not very practical for anything else.

Thank you lots, dsavi; now I can investigate. I’d love to test this thing. :slight_smile: Maybe I should just do a manual install of the latest Nvidia drivers and try again. Need to get around to it…

Thanks again!

Well, there is Aqsis, which is a very capable rendering engine, but it’s much too complicated for most users. That’s not to say those users aren’t smart enough, just that it takes a very long time to learn a tool as powerful and flexible as Aqsis.

I mention Aqsis because I’m using it in an animated short at the moment, and it has all of the features and capabilities needed to produce feature films. Granted, it needs a few improvements, speed-wise and feature-wise, but if we don’t support and nurture what we have, we won’t really see much benefit, simply because the developer(s) won’t receive feedback from any production, big or small!

Nope, but seeing an OpenGL preview converge after 10 sec got my libido started.

Anyway, this is not about sampling algorithms alone; a LOT can be achieved through shaders for high realism and artistic freedom. That’s why I really hope that the next material/shader refactor (with node-oriented BRDF assembly and such) will take off after 2.6. There are plenty of examples of materials with sampled BRDFs that are rendered with a scanline algorithm but look like something straight from Indigo.

Hi,

I’m the author of Mitsuba – I just found this thread after investigating where all the traffic on my Mercurial server came from :slight_smile:

To answer a few questions on this thread: Mitsuba is currently a “researchy” renderer. The main focus, at least for my part, is to create something that can render extremely difficult scenes where other techniques fail (due to fireflies, noise, …). I’m thinking of things like interior scenes with complex lighting, specular + highly glossy materials and volumetric interactions. The whole codebase is available in a decentralized repository and I’m happy to work with collaborators, so it is difficult to say how it will actually evolve in the future.

I’ve worked on this renderer for several years now, and although this is of course a “student’s project”, I’m way too invested in it to even think of dropping it. So, it’s definitely here to stay for a while :). I’ve made it available at this point mainly so that other people working in computer graphics can use it as a shared foundation to develop, test and compare rendering techniques. My (perhaps a bit naive) vision is that as a new rendering technique is published (say, at SIGGRAPH), the authors also release a Mitsuba plugin so that others can more easily understand its merits and limitations compared to existing approaches.

While a community like blenderartists.org hasn’t exactly been on my radar so far, I’d be excited if the program was also useful to you. At the moment, things are still very rough regarding documentation, so I apologize if people had problems getting the program to run. An incomplete documentation draft can be found here for those who are interested: http://www.mitsuba-renderer.org/documentation.pdf

One big general obstacle is getting a scene from a program like Blender into the renderer. To minimize the pain of writing tons of exporters for every program in the world, I thought that it might be better to instead create a DAE importer for Mitsuba, since that is a format which (as far as I am aware) most 3D packages can export these days. The “mtsimport” utility included with Mitsuba has only been used with Maya so far, so I wouldn’t be surprised if it totally breaks down when given a DAE file exported from Blender.
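
For anyone scripting this, the intended pipeline boils down to exporting a DAE from the host application and handing it to mtsimport. A minimal sketch, assuming mtsimport takes the input DAE followed by the output scene file (check its usage message to be sure):

    # Rough sketch: export a DAE from the host application, then hand
    # it to mtsimport to produce a Mitsuba scene description.
    # The argument order (input DAE, then output XML) is an assumption.
    import subprocess

    def dae_to_mitsuba(dae_path, xml_path):
        """Convert a COLLADA file into a Mitsuba scene via mtsimport."""
        result = subprocess.run(
            ["mtsimport", dae_path, xml_path],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            # Keep the importer's error output; that is exactly the
            # kind of detail worth including in a bug report.
            raise RuntimeError(result.stderr)
        return xml_path

    dae_to_mitsuba("scene.dae", "scene.xml")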

That also brings me to a request: if you have Blender scenes which cannot be imported, which crash the renderer or otherwise cause problems (e.g. very slow convergence), I’d be grateful if you could send me a copy so that I can go bug-hunting.

In addition to that, I’m very interested in getting access to difficult-to-render scenes whose authors would also permit their use as examples in an academic research publication (with attribution, of course). That way, the work involved in improving the renderer to handle them well would pay off for both parties involved.

I should add that one of the biggest pieces of this renderer is actually not available yet, since it is currently still unpublished research. So if the program doesn’t render your scenes well enough, remember to check back in a few months.

Wenzel

Hi Jakob,
is there any chance that Mitsuba will have a “studio” component to design/assign materials/lights/cameras in a scene, like Kerkythea or Maxwell do?

Any little sneak preview to spare us the suspense? :o