Nice find, I went through the entire site. Nice explanation with images of the “irradiance cache”. I know it’s being used in BI to speed things up, but you can see how the result looks almost painted or smeared; that’s because it takes samples here and there and blends between them.
But is the overture pass etc. just some extra render pass to make it smoother?
Maybe it’s time for BI to become unbiased, even if it means helluva longer render times. Computers evolve.
And for the devs, implementing stuff like global illumination this way is maybe easier, and easier to find papers on, than these speedy half-solutions.
Although I’m very much afraid of it being just another students project that will get abandoned after a few years, it definitely has an awesome set of features!
Nice, but unbiased rendering won’t improve Blender’s ubermaterial. It would still suck as much as it does today.
Wow that looks awesome! Luxrender has some competition. Hope there’s a Blender exporter.
E: How to build on Linux:
sudo apt-get install libxerces-c2-dev build-essential libglewmx1.5-dev
wget https://www.mitsuba-renderer.org/hg/mitsuba/archive/tip.tar.bz2
tar -xf mitsuba (press Tab a couple of times to complete the name)
cd mitsuba (Tab, Tab)
cp config/config-linux.py config.py
./tools/build-sh.py
Since I build Blender myself I’m sure there are more libraries that you’ll need if you don’t have all the libs that I have. It will notify you if you don’t have them, however.
./mtsgui: error while loading shared libraries: libcore.so: cannot open shared object file: No such file or directory
Okay, so I get this error. Apparently I need to do something with my LD_LIBRARY_PATH, but I don’t know what.
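My guess is the build leaves its shared libraries in a dist directory under the source tree (that path is an assumption; check where libcore.so actually ended up) and the loader just can’t find them. Pointing LD_LIBRARY_PATH at that directory in the shell you launch from should do it:

```shell
# Path is a guess: substitute wherever your build put libcore.so
export LD_LIBRARY_PATH="$HOME/mitsuba/dist:$LD_LIBRARY_PATH"
```

Then run ./mtsgui from that same shell; put the export line in your ~/.bashrc if you want it to stick across sessions.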
Looks very promising even though it’s a one-man project (those tend to get abandoned after a while)… the guy is a Ph.D. student at Cornell University, highly interesting…
Looks very promising indeed, one man project and it surpasses Yafaray and Lux in many areas already.
Both unbiased and biased options, has subsurface scattering, volumes, DMC sampling (v-ray uses a similar approach).
The volume and scattering capabilities look impressive:
Add render-time displacement, motion blur and perhaps OSL (Open Shading Language), and we may have a GPL alternative to the Arnold renderer.
Oh, and AOVs would be a must-have as well:
do you guys experience orgasms every time the word ‘unbiased’ is pronounced?
Well, the topic title should really be “New opensource unbiased/biased renderer” because it offers both.
This looks really good.
I have to tip my hat to the developer, hopefully this will really mature into a great rendering package.
It would be good to have a talk with the developer and see where he plans to take this. Perhaps we as an artist community can help him test and improve his rendering engine, and hopefully, as it grows, it will bring a few more developers on board to build features that support Blender’s features, such as hair, and things like an integrated plug-in.
Regardless of your thoughts on the currently available rendering engines, it never hurts to have more.
You guys do know that both Luxrender and Yafaray offer “biased” rendering algorithms as well as “unbiased”? Mitsuba is an impressive project for sure, hopefully he will succeed in building a community around it.
Biased is no good for animations, which is one of the features of Blender.
Most people confuse progressive rendering with unbiased rendering, so plenty of people think Lux is only unbiased and Yafaray only biased. Most don’t know that it depends on whether you use something like photon mapping (biased) or path tracing (unbiased).
People aren’t convinced something is unbiased until they see the whole “fuzzy picture gets clearer with time” effect.
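That effect is just Monte Carlo variance shrinking. Here’s a toy shell/awk sketch (my own illustration, nothing taken from Mitsuba’s code) of an unbiased estimator: the expected value is exact at any sample count, and more samples only reduce the noise, which is the render analogue of the fuzzy image resolving over time:

```shell
# Toy unbiased Monte Carlo estimate of pi: the fraction of random
# points in the unit square that land inside the quarter disc.
for n in 100 10000 1000000; do
  awk -v n="$n" 'BEGIN {
    srand(1)                        # fixed seed for repeatability
    hits = 0
    for (i = 0; i < n; i++) {
      x = rand(); y = rand()
      if (x*x + y*y <= 1) hits++    # point inside the quarter disc
    }
    printf "%8d samples: %f\n", n, 4 * hits / n
  }'
done
```

A biased method like an irradiance cache trades that guarantee for speed: the noise goes away much faster, but the remaining error doesn’t have to vanish no matter how many samples you add.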
Just in case the usual “unbiased is not great for animation” chestnut pops up: I would add that it worked out great for Sony. Cloudy with a Chance of Meatballs was rendered with an unbiased render engine, Arnold. Read on: http://tog.acm.org/resources/RTNews/html/rtnv23n1.html#art3
Let’s not turn this into a Biased vs Unbiased rendering thread
What it boils down to is this: the more open source rendering engines we have available to Blender, the better for all of us. And from what I’ve seen in the past, supporting Blender and its community usually leads to pretty good things for the developer(s), be that financial support from the users or being picked up by major companies.
Someone should ask this German wunderkind whether he could cooperate with Blender.
How do you mean? Create an exporter? I think that’s a task more suitable for a Blender user with some Python skills… he’s better off developing the renderer.
Anyhow, this is just a one-man show atm; it doesn’t have a strong developer base like Lux or Yafaray and could be discontinued at any point… just look at what happened to Sunflow…
I always did wonder what happened to Sunflow, it just seemed to disappear. Of course I never heard of Sunflow until after development stopped.
I suppose you meant to say UNbiased.
If you read the far right corner of the front page, it says it features DMC sampling (Deterministic Monte Carlo), so this particular renderer probably doesn’t suffer from this.
Anyway, as above poster mentioned, other renderers have overcome this a long time ago, see Arnold or V-Ray to mention a few.
This looks completely awesome. Can’t wait until someone figures out a way to export Blender scenes into this.
Yes, another renderer exporter, that’s exactly what we don’t need: another waste of energy while Blender needs other things. Sorry folks, I don’t want to be rough, but there are already too many renderers out there and none of them seems to be really perfect and fully integrated with Blender. It looks like a good exercise for the devs, but why waste time on this stuff?
Phoenix: This one looks pretty solid, well optimized and seems to be well designed. Pretty impressive renderer at such an early stage, if someone wants to spend their energy on making an exporter they’re more than welcome, we’re probably talking about normal users with some python knowledge, it’s not like the core devs are wasting energy here.
Looks like a pretty bright developer so he’s probably made sure it’ll be easily expanded for implementing more features later on, for example micropolygon displacement, motion blur et al.
But that remains to be seen.
Anyway, the Blender internal renderer, as has been stated many times, will require a major rewrite to “modernize” (for example, stochastic rasterization for motion blur and DOF), which is virtually impossible due to time and manpower constraints. This has been brought up on the mailing list and also on these forums before, and it would most likely require a team of programmers working normal 8-hour days for probably at least a year to complete. And even if the BF had a team of paid programmers, there would be other things demanding their attention; it simply isn’t worth the effort to throw all available resources at rewriting the renderer while everything else gathers dust.
And it doesn’t fit as a GSoC project either, due to the huge amount of work required; one summer for one person doesn’t even begin to scratch the surface.
So it doesn’t hurt to look at other alternatives.