Cycles, Luxrender, Mitsuba - render engine comparison round 2

Hi guys, some of you may remember my old post that compared Lux, Cycles, Nox, Mitsuba, and Yafaray.
That was a year ago, so here is a new render engine comparison.

So no GPU was used for Cycles?

So Cycles wins.

A comparison rendering Cycles on the CPU makes no sense to me.
For example, my Cycles GPU render is 15-20x faster than the CPU.

It’s a matter of perspective; everyone can render with the CPU, while GPU rendering depends on what hardware we have.

@JoseConseco: Nice comparison, thanks for that!

Could you try with the Non-Progressive Integrator in Cycles? That is still path tracing, but it can reduce noise a lot. See here: http://dingto.org/?p=690 You must use an SVN build for that, though.

Also, did you enable “Multiple Importance Sample” (“Sample as Lamp” in earlier builds) in the World Settings for the HDR in Cycles?

What writers block said.

My question was just out of curiosity.

I have no Nvidia card, so no GPU. Plus it would not be fair to test GPU vs. CPU. It is pure path tracing vs. path tracing, without any tricks like disabled caustics and such. I can do a no-caustics test though, to see if it is any better.

DingTo - I was sure I had enabled MIS for the HDR light, but now it seems it was off. I will re-render. About the non-progressive render: do I leave all samples at 1 and AA at 4? Or do you want me to change something?

@JoseConseco:
The benefit of Non-Progressive is that you can balance the samples better.
128 AA samples with everything else at 1 would work, but is probably slower than 128 Progressive samples.

So try something like 32 AA, then 4 Diffuse/Glossy/Transmission samples. :slight_smile:
The samples are multiplied, so for example 32 AA * 4 Diffuse = 128 samples for diffuse materials.
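The sample arithmetic above can be sketched as a quick back-of-the-envelope calculation (a minimal illustration of how the counts multiply, not actual Cycles code; the function and setting names are made up):

```python
# Illustration of how Non-Progressive (branched) sample counts
# multiply in Cycles: effective samples for a material type =
# AA samples * per-type samples. Names are illustrative only.

def effective_samples(aa_samples, per_type_samples):
    """Effective sample count seen by a material of a given type."""
    return aa_samples * per_type_samples

# DingTo's suggested settings: 32 AA, 4 Diffuse/Glossy/Transmission
per_type = {"diffuse": 4, "glossy": 4, "transmission": 4}
for surface_type, n in per_type.items():
    # Each type effectively gets 32 * 4 = 128 samples
    print(surface_type, effective_samples(32, n))
```

So 32 AA with 4 per-type samples matches the 128 Progressive samples, but lets each sample type be tuned independently.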

Edit: Anyway, keep this as a separate test.
It would be cool to see “Progressive” but with HDR MIS enabled first. :slight_smile:

Well, try the SPPM integrator in LuxRender on a scene with more complex lighting (specular-specular reflection) and geometry, and LuxRender is going to beat them all.

The Sobol QMC sampler of Mitsuba seems quite new.
I don’t know if it makes for a fair comparison of path-tracing methods.
I am sure that jeanphi can explain which settings could improve the LuxRender render.
Maybe it looks so noisy because the LuxRender image consists almost exclusively of the shadowed part of the car.
For a CPU-only comparison, it is a pity to have no Yafaray test, too.
The settings look like he did tiled rendering with Cycles and LuxRender.
But did he do a tiled or a progressive render with Mitsuba?

On the Lux version, did you by chance stop Lux by using haltspp=128? If you use that with lowdiscrepancy, it will actually cause the sampler to “go around again”, producing a 256-sample render. In any case, I’d recommend you use the metropolis sampler instead. While LD will give more Cycles-like behavior, there’s no reason to actually use it in that scene.

Also, does that time include scene loading time? The SQBVH accelerator takes A LONG time to build on a more complex scene like this; you might want to try the normal QBVH accelerator instead.

I also did a test and Mitsuba was cleaner than Cycles…
Why? The answer is simple:

This is because Mitsuba splats every sample onto many neighbouring pixels, while Cycles adds each sample to only one pixel.
This produces smoother results faster in Mitsuba,
and this is how it should work.
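The point about reconstruction filtering can be sketched like this (a toy illustration of splatting a sample through a tent filter; this is a simplified sketch under my own assumptions, not code from either renderer):

```python
# Toy sketch of image reconstruction filtering: a renderer that
# splats each sample onto neighbouring pixels with a filter weight
# vs. one that adds each sample to a single pixel only.

def tent_weight(dx, dy, radius=1.0):
    """Separable tent (triangle) filter weight for offset (dx, dy)."""
    wx = max(0.0, radius - abs(dx))
    wy = max(0.0, radius - abs(dy))
    return wx * wy

def splat(image, weights, x, y, value, radius=1.0):
    """Distribute one sample at image-plane position (x, y)
    over all pixels within the filter radius."""
    h, w = len(image), len(image[0])
    for py in range(h):
        for px in range(w):
            # Pixel centers sit at (px + 0.5, py + 0.5)
            wgt = tent_weight(x - (px + 0.5), y - (py + 0.5), radius)
            if wgt > 0.0:
                image[py][px] += wgt * value
                weights[py][px] += wgt

# One sample landing on a pixel corner is shared by 4 pixels,
# which averages noise across them:
img = [[0.0] * 4 for _ in range(4)]
wts = [[0.0] * 4 for _ in range(4)]
splat(img, wts, 2.0, 2.0, 1.0)
touched = sum(1 for row in wts for w in row if w > 0.0)
print(touched)  # 4
```

With a single-pixel (box, width 1) filter that same sample would land in exactly one pixel, so its noise is not shared with neighbours.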

DingTo - I added a bunch of Cycles renders at the end, but I guess this is not what you expected. A bit too many setting combinations for my taste. That is why I love Mitsuba: just one slider with the sample count.

zeauro - all renders were tiled. The Sobol QMC sampler in Mitsuba gives slightly better results than LDS, but not by much. It is not like switching this sampler made the render 2x worse or something. Believe me, it is just one part of Mitsuba's success. Maybe what TS1234 says is the real secret, who knows.

J_the_Ninja - well, I tried metropolis and QBVH. Not much changed. I stopped it at 13 minutes this time.

TS1234 - maybe if you implemented this in Cycles yourself, we could see the speed boost in Cycles too.

Can someone render this on GPU?

Interesting comparison. Could you perhaps publish the full images for all three?

If you want to check that it’s the filtering over many neighbouring pixels, with LuxRender you can try increasing the filter xwidth and ywidth and see if that improves things. You can also try the LuxRender lowdiscrepancy sampler with 128 samples per pixel and compare; it might be better than metropolis.
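For reference, the filter widths mentioned above live in the PixelFilter directive of the LuxRender scene file (LuxRender uses a PBRT-style scene format; treat the exact parameter names below as a sketch, as they may vary between versions):

```
# Widen the reconstruction filter so each sample is splatted over
# more neighbouring pixels:
PixelFilter "mitchell" "float xwidth" [3.0] "float ywidth" [3.0]
```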

That was actually the original test; he changed it to MLT from my suggestion above. The result looked about the same, but took 18 minutes instead of 13.