TLRcam - Unbiased MLT renderer + improved/improvised MLT

More from the new MLT algorithm
I improved it a little, and it seems to work fairly well. Now all I need is an importer for Blender or some other program, and I will open source it - well, possibly with a copyright so I can prove that I wrote it, something like PBRT has - all except for the new MLT code, which I’d prefer to keep secret a bit longer.

Now tell me, is this an improvement worth pursuing? Or should I go ahead and implement the paper on adaptive multi-dimensional sampling that seems to show such great improvements? (Does anybody know of any renderers that use this?)

http://i42.tinypic.com/fmtc01.png
The image after 420 spp. I’m not going to say how long this took, because it ran on only one core and was rendered while I was using my computer intensively.

http://i43.tinypic.com/1tafyo.png
The new algorithm after 20 spp, at about 10 minutes on a dual core (-Lprob .01-.99, -maxRej 10-1000).

http://i42.tinypic.com/29pvmsj.png
The regular MLT algorithm with 20 spp, at about 10 minutes, with -Lprob .4 and -maxRej 500.

The regular image’s -Lprob and -maxRej were set to what the new algorithm’s values would average out to, given those parameters and that scene. All the MLT is based on Kelemen et al.’s robust mutation strategy paper and a paper on hybrid MLT.
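For anyone wondering what those flags mean: -Lprob is the probability of taking a “large step” (a fresh, independent resample of the whole path in primary sample space) and -maxRej caps how many consecutive rejections the chain will take before a move is forced. Roughly, the mutation loop looks like this - a minimal sketch, not my actual code, with PathSample and its methods as stand-in names:

```java
import java.util.Random;

// Stand-in for a path expressed as a point in primary sample space
// (a vector of uniform random numbers), as in Kelemen et al.
interface PathSample {
    PathSample largeStep(Random rng); // independent resample of every coordinate
    PathSample smallStep(Random rng); // small perturbation of every coordinate
    double luminance();               // scalar importance I(x) of the path
}

class KelemenMutator {
    final double largeStepProb; // -Lprob
    final int maxRejections;    // -maxRej
    final Random rng = new Random();
    int rejections = 0;

    KelemenMutator(double largeStepProb, int maxRejections) {
        this.largeStepProb = largeStepProb;
        this.maxRejections = maxRejections;
    }

    // One Metropolis step. A full implementation would also accumulate the
    // weighted contributions of both the current and the proposed sample.
    PathSample step(PathSample current) {
        PathSample proposed = (rng.nextDouble() < largeStepProb)
                ? current.largeStep(rng)
                : current.smallStep(rng);

        // Symmetric proposals, so acceptance is just the luminance ratio.
        double a = current.luminance() > 0
                ? Math.min(1.0, proposed.luminance() / current.luminance())
                : 1.0;

        // Accept normally, or force a move once -maxRej consecutive
        // rejections pile up (keeps the chain from getting stuck).
        if (rng.nextDouble() < a || ++rejections > maxRejections) {
            rejections = 0;
            return proposed;
        }
        return current;
    }
}
```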
Blog: http://tlrcam.blogspot.com

Looks quite interesting.

I’d be interested in a benchmark with data for samples per pixel per second or something like that, to compare it - or in a render-time comparison with Indigo.

Well, I kind of did something like that already: for a Cornell box with simple MLT (mine had two mesh spheres in it rather than two boxes), I was getting about 1/4 to 1/2 the speed of Indigo. However, because I wrote this in Java, and because I didn’t pay much attention to raw speed optimizations, concentrating primarily on getting higher convergence from fewer samples, it is really slow. Mostly I don’t intend it to be a really competitive renderer (maybe one day), but more something for research + learning.

What I’d be more interested in comparing is the convergence rate per spp for my renderer and Indigo/Lux/Radium.

I’m also very interested in trying out my new algorithm in another renderer and seeing how it fares (I’m thinking Lux/Pane).

Too many renderers. Head go boom.

and I will open source it-

Do. And gather some friends to speed up the progress.

Why not join Farsthary and Broken in bringing GI to Blender Internal?

Too many renderers. Head go boom

I agree with this. It’d be better for Blender to have a scene render on one super renderer supporting everything Blender has (it could be the internal renderer) than to have 10 renderers that each only support a subset of features.

There is no one render engine to rule them all; with so many choices in render engines, the best thing would be to get a render API.

…and a bunch of monkeys to code the “connection” between the renderer and the API.

Whatever happens happens, but I think joining together with the others might be a good idea, to combine programming power/manpower.

This often seems to be the problem with open source applications.

Blender is in deep need of a modern and mature render engine, and this seems to be the hardest area to work on because of the nature of its complexity.

I also agree that instead of having many alternatives, a universal internal engine which also supports all of Blender’s features would make a lot of sense.

Blender is very powerful - it currently just cannot render it out adequately.

http://i44.tinypic.com/2qip8bl.png
-900 spp

http://i40.tinypic.com/muvj45.png
-300 spp, path tracing - the corresponding image is on my blog.

Part of the reason I did not start out trying to edit the internal renderer was that I did not feel I knew enough to make a relevant contribution.
The reason I am not editing the internal engine now is that I intend to use my renderer for research purposes, and it’s nice to have a renderer where you know every tiny detail and can do things easily (Java!).

All that I need now is some way to host the JAR (with experimental features), and some way to host the source (without the experimental features).
By open source I mean that anybody can look at the source - I do not mean CVS/SVN, where others can edit the main source.

As it turns out, Brandeis apparently has hosting available, so once I figure out how to use that…

Ya know, if you were to use one of the major hosting sites like SourceForge, the only people who would have CVS commit privileges would be the people you specifically gave them to. It isn’t like a wiki or anything, where the vulgar horde can change things on a whim.

No problem, man - I was just curious.

When it’s for your own research and testing, you could still build your own Blender version with your modified render engine :wink:

One of the beautiful things about writing your own engine is that you know every tiny detail. I’ve looked at the Blender source code, and quite frankly, it’s a monster. When doing this stuff I prefer the simplest methods possible.
Eventually I wouldn’t mind getting involved in Lux or YafaRay, though.

… Just posting here to say it’s good to see another renderer project… your MLT implementation seems to be quite a nice approach!.. you definitely have talent!.. And good to see your decision to open source part of your work.

And in my opinion, I’d love to see you involved with Lux… I really like their way of working and friendly user feedback! (YafaRay is great too, but I personally like Lux better.)

Keep it up!
tuqueque.

It is a monster - that is why looking into it might be a good thing…

to clean it up (hint hint hint)

I can only marvel at you guys being able to pull this stuff off.
Pure math describing images - amazing!

What about Sunflow? Written in Java, it had a lot of potential and produced very nice results. It needs a couple of additions, like rendering with an alpha channel, fake glass, and even passes, and it would be a great addition.

A biased engine, I know, but it has far more to offer than another noisy, slow, unbiased one that’s great as a personal project but of limited appeal and use otherwise.

Sunflow looked like it was going places - another personal project that bit the dust.

True, and then it died again, as many others have…

That’s why a permanent staff of coders for such matters is so important as well.
See how much Blender has evolved in other areas over time.

Who said Sunflow was dead?

Every once in a while I borrow something from the Sunflow source (usually something trivial, like my bounding box class, or loading HDR images).
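By “bounding box class” I just mean the usual kind of small utility - something like this sketch, which is not Sunflow’s actual code, just the standard axis-aligned box with a slab-based ray test:

```java
// Not Sunflow's actual code - just the standard sort of axis-aligned
// bounding box utility, with the usual slab-based ray intersection test.
class BoundingBox {
    final float[] min = { Float.POSITIVE_INFINITY, Float.POSITIVE_INFINITY, Float.POSITIVE_INFINITY };
    final float[] max = { Float.NEGATIVE_INFINITY, Float.NEGATIVE_INFINITY, Float.NEGATIVE_INFINITY };

    // Grow the box to contain a point.
    void include(float x, float y, float z) {
        float[] p = { x, y, z };
        for (int i = 0; i < 3; i++) {
            if (p[i] < min[i]) min[i] = p[i];
            if (p[i] > max[i]) max[i] = p[i];
        }
    }

    // Slab test: does the ray with origin o and direction d hit the box?
    boolean intersects(float[] o, float[] d) {
        float tNear = Float.NEGATIVE_INFINITY, tFar = Float.POSITIVE_INFINITY;
        for (int i = 0; i < 3; i++) {
            float inv = 1.0f / d[i]; // IEEE infinities handle axis-parallel rays
            float t0 = (min[i] - o[i]) * inv;
            float t1 = (max[i] - o[i]) * inv;
            if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
            tNear = Math.max(tNear, t0);
            tFar = Math.min(tFar, t1);
            if (tNear > tFar) return false; // slabs don't overlap: miss
        }
        return tFar >= 0; // box not entirely behind the ray origin
    }
}
```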

A biased engine, I know, but it has far more to offer than another noisy, slow, unbiased one that’s great as a personal project but of limited appeal and use otherwise.

The technology capable of making unbiased renderers fast enough, and versatile enough, for actual use has only been discovered recently, while many of the rasterizing algorithms are old, so it is likely that unbiased renderers are going to gain major speed and quality increases over traditional renderers in the next couple of years. For example, I have yet to see this paper implemented in the wild, even though it would clearly provide major speed improvements to unbiased renderers:
http://graphics.ucsd.edu/~matthias/Papers/mdas.pdf
Only with people writing their own renderers and doing their own thing will stuff like this ever get implemented in open source programs.
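To give a flavor of it, here’s a toy 1D sketch of the core idea in that paper - spend your sample budget where the integrand appears to vary most. This is nothing like the authors’ actual multidimensional kd-tree algorithm, just the basic adaptive-refinement loop, and all the names here are made up:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Toy 1D adaptive sampler: refine the region with the highest apparent
// contrast first. The actual paper works in the full multidimensional
// sample space with anisotropic reconstruction; this is only the gist.
class AdaptiveSampler {
    interface Integrand { double eval(double x); }

    static class Region {
        final double min, max;   // interval of the sample domain
        final double fMin, fMax; // integrand values at the endpoints
        Region(double min, double max, double fMin, double fMax) {
            this.min = min; this.max = max; this.fMin = fMin; this.fMax = fMax;
        }
        // Crude error estimate: contrast across the region times its size.
        double error() { return Math.abs(fMax - fMin) * (max - min); }
    }

    static double integrate(Integrand f, int budget) {
        PriorityQueue<Region> queue = new PriorityQueue<>(
                Comparator.comparingDouble((Region r) -> -r.error()));
        queue.add(new Region(0.0, 1.0, f.eval(0.0), f.eval(1.0)));

        // Each refinement spends one new sample in the worst region.
        for (int i = 0; i < budget; i++) {
            Region r = queue.poll();
            double mid = 0.5 * (r.min + r.max);
            double fMid = f.eval(mid);
            queue.add(new Region(r.min, mid, r.fMin, fMid));
            queue.add(new Region(mid, r.max, fMid, r.fMax));
        }

        // Trapezoid estimate over the refined regions.
        double sum = 0.0;
        for (Region r : queue) sum += 0.5 * (r.fMin + r.fMax) * (r.max - r.min);
        return sum;
    }
}
```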

You know, all the little renderers out there that become open source and then die are what helps fuel new renderers. Somebody wants to implement an effect in their new renderer but isn’t sure how to do it - so they go to another open source renderer, which lets them implement something new. Each time somebody writes one of these and puts it up, it might have lots of the same old algorithms, but it might also have some new ones that will eventually make it to the coders working on Blender… or on other major renderers. Basically I’m saying that even if I’m not coding Blender directly, I might/will be indirectly helping Blender, or other projects.

BTW, I have a website, and once I figure out whether my JARs are sealed safely, I’ll have the executable up. Once I have finished debugging, I’ll put the source up.

http://i39.tinypic.com/oanber.png
Fixed a bug, found a bug, didn’t fix that one.
1000 spp, 4 hours - tripled the speed. This image is missing reflections from the HDRI map in certain areas, but that has since been fixed.

Quite a speed boost!.. but it makes me wonder if those missing reflections are the cause of that render boost… Are you sure those “missing” reflections are OK?.. I mean, isn’t that a new bug or something?