Nice renders and speedups. Are you sure you used the same spp numbers? The E-Cycles one actually looks cleaner; it's most visible in the reflection.
Can’t wait to try that on my brand new 2070
@bliblubli I use Windows 10. @janbauer The sample numbers are correct.
This is just a very simple scene, mainly to see the noise difference.
I plan to test a more complex one to better see the performance difference in heavy scenes.
Well, it depends on the processor you have. The builds only improve OSL rendering, which doesn't apply if you use a GPU. In my test, rendering with or without the CPU didn't make a big difference. The problem is that when the GPU becomes this fast, the CPU may take over part of the job and at first make things faster, but then the GPU has to wait for the last CPU tiles to finish rendering, which kills the overall render time.
And it consumes much more energy and creates more heat and noise. So I just leave it on GPU only.
Already, with a Vega 64 + 1080 Ti together (I took a patch from Brecht to get OpenCL and CUDA working together) and those speedups, it's so damn fast that the BVH build time is often the biggest part of the render.
I could get CPU rendering 20% faster by making the builds with LLVM, if customers ask for it.
What do you mean? It is compatible with the denoiser of course, and also with the AO trick I added for 2.8 (available in the buildbots). So if you use AO (about 2x faster) and half the spp (also 2x faster) with the denoiser, it should be about 2x2x2 = 8x faster; let's say 7x to be safe. I didn't include those tricks in the table above, to make clearer what the optimizations alone bring.
The denoising process itself is already fast enough, at least in my work.
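The back-of-the-envelope speedup math above is just multiplication of independent factors; a minimal sketch (the labels and the 2x figures are the rough numbers quoted in the post, not measurements):

```python
# Rough combined-speedup estimate. Each factor is an approximate 2x figure
# from the post above, not a measured value.
from math import prod

factors = {
    "E-Cycles build optimizations": 2.0,
    "AO trick (simplified bounces)": 2.0,
    "half spp + denoiser": 2.0,
}

ideal = prod(factors.values())        # 2 * 2 * 2 = 8x
conservative = ideal * 7 / 8          # knock a bit off, as the post does

print(f"ideal: {ideal:.0f}x, conservative: ~{conservative:.0f}x")
```

Of course the factors are not truly independent (e.g. the denoiser works harder at lower spp), which is exactly why rounding 8x down to ~7x is sensible.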
Not really. It's listed as an unbiased renderer, and I'm guessing that's because it can be set up to be visually unbiased and fairly mathematically unbiased (Russian roulette is said to be mathematically biased but visually unbiased, IIRC, and is out of user control).
In practical terms, it would usually be visually biased due to limited GI bounces, the light threshold setting, AO hacks, MIS, light portals, caustic rays, etc. Pretty much any setting we use to make it go faster introduces slight bias, some more visible than others. Turning off caustics (without even fake compensation) is a sure way to make the image darker, because all glossy rays are terminated for lighting purposes (everything has Fresnel, right? That's the downside).
Considering the ridiculous number of samples required for caustics to converge (no blur allowed, I guess), using Blender in a truly unbiased way appears nearly impossible, at least if physically plausible (Fresnel -> diffuse/glossy) materials are used. I guess you could do an unbiased Cornell box using diffuse-only materials.
So in theory, I think Blender can be very unbiased, but nobody would use it that way in practice.
Another way to tell whether an integrator converges is to subtract one image from the other and paint positive pixels in a different color than negative ones:
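Something like this signed-difference visualization can be sketched in a few lines of NumPy (the function name and the red/blue normalization are my own choices, not from the thread):

```python
# Sketch of the signed-difference visualization: red where image A is
# brighter than image B, blue where it is darker.
import numpy as np

def signed_diff_image(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Return an RGB image: red channel for positive residual, blue for negative."""
    diff = a.astype(np.float64) - b.astype(np.float64)
    # Collapse RGB inputs to a single luminance-ish channel.
    if diff.ndim == 3:
        diff = diff.mean(axis=-1)
    scale = np.abs(diff).max() or 1.0   # avoid division by zero on identical images
    norm = diff / scale                 # now in [-1, 1]
    out = np.zeros(diff.shape + (3,))
    out[..., 0] = np.clip(norm, 0.0, 1.0)    # red: A > B
    out[..., 2] = np.clip(-norm, 0.0, 1.0)   # blue: A < B
    return out
```

If both renders are unbiased, the red and blue pixels should be scattered randomly (pure noise); a systematic patch of one color, like a consistently darker wall, points to bias in at least one of the images.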
There are no caustics in this scene and the geometry is very simple, so it should be possible to render in an unbiased way. I don't know the render settings used, so all I can say is that at least one of these two images is biased.
I admit that both images are pretty and I could not tell the difference by looking at either of them on its own. It seems that E-Cycles used somewhat shorter light paths, given that the direct light is brighter and the back of the wall is darker.
// bookworm note: Multiple importance sampling is an unbiased technique (as long as it combines unbiased estimators). Russian roulette is a somewhat risky way to get rid of path-length bias, because any hard limit on path length makes the integrator biased, while roulette terminates paths randomly and reweights the survivors.
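The difference between a hard path-length cap and Russian roulette can be shown with a toy Monte Carlo experiment (my own sketch, nothing to do with Cycles internals): estimating the geometric series sum of r^k = 1/(1-r). A hard cap always underestimates; roulette terminates randomly but divides the surviving throughput by the survival probability, so it stays correct on average.

```python
# Toy estimator for sum_{k>=0} R^k = 1/(1-R), standing in for a light path
# whose throughput shrinks by R per bounce.
import random

R = 0.5               # per-bounce throughput factor
EXACT = 1 / (1 - R)   # = 2.0

def hard_cap_estimate(max_depth: int) -> float:
    # Deterministic truncation: always below EXACT, i.e. biased dark.
    return sum(R**k for k in range(max_depth))

def roulette_estimate(rng: random.Random, p: float = 0.5) -> float:
    # One random-walk sample: keep "bouncing" with probability p and
    # divide the throughput by p to compensate for the killed paths.
    total, weight = 0.0, 1.0
    while True:
        total += weight
        if rng.random() >= p:   # path terminated by roulette
            return total
        weight *= R / p

rng = random.Random(0)
n = 200_000
rr_mean = sum(roulette_estimate(rng) for _ in range(n)) / n
print(f"exact: {EXACT}, hard cap (depth 3): {hard_cap_estimate(3)}, "
      f"roulette mean: {rr_mean:.3f}")
```

The capped estimate is stuck at 1.75 no matter how many samples you take, while the roulette mean converges to 2.0; the "risky" part is that individual roulette samples have higher variance, i.e. more noise per sample.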
This is pretty much a worst-case test for ray tracing: lots of glass, caustics, reflections from caustics, volumetric materials, and many small light sources.
Again, GPU only (CPU has no performance improvements over the normal Blender builds).
Windows 10
Nvidia GTX 980 with 4GB
900 Samples
Normal Cycles: Time: 40:04.81 Peak Memory: 643.69M
The speedup is amazing.
E-Cycles indeed looks a lot cleaner at the same sample values, especially on the glass.
Cycles is always biased. As soon as a render engine traces shadow rays, it is actively looking for light sources and is therefore biased. That doesn't mean it's bad in any way; it's actually smart.
Interesting guesses, but false. If I used shorter paths, it would indeed be biased; that's what my AO patch, which will be in the final 2.8, does. The only way to know is to take the course.
Cleaner, faster, what more could you ask for? Even faster! The new build for RTX brings up to 12% on top. @Komposthaufen, you are welcome to install it and report.
No OptiX; it's just that Nvidia fixed CUDA 10, so it now supports older cards with Cycles, and I compiled with it. The 1080 Ti is 6 to 12% faster with it. Sadly, it does the contrary on older generations, so I'll continue to provide both the CUDA 9.1 and 10.0 kernels.
I don't want to prove anything.
For me, it's as simple as this: in a 24h cycle, the bright part is day and the dark part is night. A visual difference while the same values are used (especially within one and the same engine) generally means something is wrong.
The last example (ayreon2) shows it even more.
Anyone trying to sell the same “tech” for any other render engine should expect its developers to be ready for a war.
Do you mean it's bad because you get less noise for the same sample count, so it changes the image? Try 2.78 and the latest 2.78 and compare images of the same render: you can get similar differences due to Brecht's changes to the Russian roulette. Or do you think Cycles should keep its noise level and noise pattern forever so that those image comparisons stay black?