I guess he meant something more like: perfection doesn’t exist, so make something people like, biased or not.
Would it be possible to support RTX cards?
This article by Christopher Nichols says it all.
It is and I will
Thank you @bliblubli for giving me this free year of updates.
I have done a few tests and here are the results.
These images were rendered exclusively on the GPU.
GPU: Nvidia GTX 980
Normal Cycles: Samples: 576 Time: 07:03.21
E-Cycles: Samples: 576 Time: 04:29.48
Normal Cycles: Samples: 5041 Time: 05:42.32
E-Cycles: Samples: 5041 Time: 03:09.41
Nice renders and speedups. Are you sure you used the same spp numbers? The E-Cycles one actually looks cleaner; it’s most visible in the reflection.
Can’t wait to try that on my brand new 2070
Nice to see it works for you. Which version did you use (Windows/Linux/Linux_w)?
RTX Version available now on the product page
@bliblubli I use windows 10
@janbauer the sample numbers are correct
This is just a very simple scene, mainly to show the noise difference.
I plan to test a more complex one to better see the performance difference in heavy scenes.
Zotac 2070 mini on windows 10:
Master bmw: 91s
Your build: 45s
Pretty good speedup. Is this GPU only? What if I used GPU+CPU, or CPU only…?
And what about Blender’s denoiser?
This is a bit confusing. Isn’t Cycles unbiased?
Well, it depends on the processor you have. The builds only improve OSL rendering on the CPU, which doesn’t apply if you render on a GPU. In my tests, it didn’t make a big difference with or without the CPU. The problem is that when the GPU becomes this fast, the CPU may take over part of the job and at first make the render faster, but then the GPU has to wait for the last CPU tiles to finish, which kills the overall render time.
And it consumes much more energy and creates more heat and noise, so I just leave it on GPU only.
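For reference, a minimal config sketch for forcing GPU-only rendering from Blender’s Python console, assuming the 2.80 API (property paths differ slightly in 2.79):

```python
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'   # or 'OPENCL' for AMD cards
prefs.get_devices()                  # populate the device list
for dev in prefs.devices:
    dev.use = (dev.type != 'CPU')    # enable the GPUs, leave the CPU idle
bpy.context.scene.cycles.device = 'GPU'
```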
Already, with a Vega 64 and a 1080 Ti together (I took a patch from Brecht to get OpenCL and CUDA working together) and these speedups, it’s so fast that the BVH build time is often the biggest part of the render.
I could make CPU rendering about 20% faster by compiling the builds with LLVM, if customers ask for it.
What do you mean? It is compatible with the denoiser, of course, and also with the AO trick I added for 2.8 (available in the buildbots). So if you use AO (about 2× faster) and half the spp with the denoiser (also 2× faster), it should be about 2×2×2 = 8× faster; let’s say 7× to be safe. I didn’t use those tricks in the table above, to make it clearer what the optimizations themselves bring.
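As a rough config sketch of the AO-plus-denoiser combo described above (assuming the 2.80 buildbot Python API; exact property names may differ between versions):

```python
import bpy

scene = bpy.context.scene
scene.cycles.samples = scene.cycles.samples // 2    # half the spp; the denoiser cleans up
scene.render.use_simplify = True                    # AO bounces live under Simplify
scene.cycles.ao_bounces_render = 2                  # replace deep GI bounces with AO
bpy.context.view_layer.cycles.use_denoising = True  # enable the built-in denoiser
```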
The denoising process itself is already fast enough, at least in my work.
Not really. It’s listed as an unbiased renderer, and I’m guessing that’s because it can be set up to be visually unbiased and fairly mathematically unbiased (Russian roulette is said to be mathematically biased but visually unbiased, IIRC, and it is out of the user’s control).
In practical terms, it will usually be visually biased due to limited GI bounces, the light threshold setting, AO hacks, MIS, light portals, caustic ray settings, etc. Pretty much any setting we use to make it faster introduces slight bias, some more visible than others. Turning off caustics (without even fake compensation) is a sure way to make the image darker, because all glossy rays are then terminated for lighting (everything has Fresnel, right? That’s the downside).
Considering the ridiculous number of samples required for caustics to converge (no filter blur allowed, I guess), using Blender in a truly unbiased way appears nearly impossible, at least if physically plausible (Fresnel → diffuse/glossy) materials are used. I guess you could do an unbiased Cornell box using diffuse-only materials.
So in theory, I think Blender can be very unbiased, but nobody would use it that way in practice.
Another way to tell if an integrator converges is to subtract one image from the other and paint positive pixels in a different color than negative ones:
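The subtract-and-colorize check described above can be sketched in a few lines of NumPy (the function name `diff_map` and the grayscale inputs are just assumptions for the example):

```python
import numpy as np

def diff_map(img_a, img_b):
    """Visualize the residual between two grayscale renders:
    red where img_a > img_b, blue where img_a < img_b, black where equal."""
    d = img_a.astype(np.float64) - img_b.astype(np.float64)
    out = np.zeros(d.shape + (3,))
    out[..., 0] = np.clip(d, 0.0, None)   # positive residual -> red channel
    out[..., 2] = np.clip(-d, 0.0, None)  # negative residual -> blue channel
    return out
```

If both renders converge to the same image, the map fades to black as samples increase; a persistent one-sided tint hints at bias in one of the two.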
There are no caustics in this scene and the geometry is very simple, so this scene should be possible to render in an unbiased way. I don’t know about the render settings used so all I can say is that at least one of these two images is biased.
I admit that both images are pretty and I could not tell the difference by looking at either of them alone. It seems that E-Cycles used somewhat shorter light paths, given that the direct light is brighter and the back of the wall is darker.
// bookworm note: Multiple importance sampling is an unbiased technique (as long as it combines unbiased estimators). Russian roulette is a somewhat risky way to get rid of path-length bias, because any hard limit on path length makes the integrator biased, whereas random termination with reweighting does not.
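The note above can be illustrated with a tiny Monte Carlo sketch (the function name and numbers are made up for the example): Russian roulette terminates paths at random but reweights the survivors by the inverse survival probability, so the expected value matches the infinite-bounce answer, unlike a hard path-length cap.

```python
import random

def rr_estimate(albedo=0.5, p_survive=0.5, n_samples=200_000, seed=1):
    """Monte Carlo estimate of L = 1 + albedo * L (true value 1/(1-albedo))
    using Russian roulette: paths die at random, but each survival is
    reweighted by 1/p_survive, keeping the estimator unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        weight = 1.0
        while True:
            total += weight                   # contribution at this bounce
            if rng.random() >= p_survive:
                break                         # path killed: adds variance, not bias
            weight *= albedo / p_survive      # reweight to compensate the kills
    return total / n_samples

# With albedo = 0.5 the true value is 1 / (1 - 0.5) = 2.0.
```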
Here is a more complex scene.
This is pretty much a worst-case test for raytracing: lots of glass, caustics, reflections of caustics, volumetric materials, and many small light sources.
Again, GPU only (CPU has no performance improvements over the normal Blender builds).
Nvidia GTX 980 with 4GB
Normal Cycles: Time: 40:04.81 Peak Memory: 643.69M
E-Cycles: Time: 25:13.23 Peak Memory: 655.70M
The speedup is amazing.
E-Cycles indeed looks a lot cleaner at the same sample counts, especially on the glass.
Cycles is always biased. As soon as a render engine traces shadow rays, it is actively looking for light sources and is therefore biased. That doesn’t mean it’s bad in any way; it’s actually smart.
Interesting guesses, but false. If I used shorter paths, it would indeed be biased; that’s what my AO patch, which will be in the final 2.8, does. The only way to know is to do the course.
Cleaner, faster, what more could you ask for? Even faster! The new RTX build brings up to 12% on top. @Komposthaufen, you are welcome to install it and report back.