Cycles Development Updates

Maybe this could be interesting as a way to push Cycles caustics rendering forward while maintaining rendering speed?

9 Likes

From the paper, it sounds like it's an unbiased method. It would be good to have better caustic rendering in Cycles. It was always a shame the MLT patch didn't make the cut.

8 Likes

Yeah, I also still regret that. :cry:

1 Like

Lukas Stockner was also not quite as experienced with rendering algorithms (and with the design of Cycles as a whole) as he is now.

As for this new form of Monte Carlo, it looks quite promising: it seems it will work well with both OIDN and animation (as it does not appear to create low-frequency noise patterns in other areas). The same would be true for working well with adaptive sampling.

3 Likes

Rather than being specific to caustics rendering, this paper looks like a slightly better sampling algorithm in general, doesn't it?

2 Likes

Yes. Although I can’t judge the technical side of the paper, the overall impression of rendering efficiency is promising. I hope Brecht / Lukas see possibilities for a Cycles implementation.

3 Likes

Caustics aside, the glass looks a lot cleaner.

1 Like

Look at their test results here
https://langevin-mcmc.s3.us-east-2.amazonaws.com/interactive-viewer/index.html

Looks like everything is a lot cleaner :wink:

5 Likes

These types of papers are always exciting! Of course, implementing them in a production renderer is always more difficult than it seems; Lukas' MLT patch is evidence of that. Without going too deep into it, the method does seem somewhat similar in nature to MLT, so it could possibly have the same downsides that MLT did: longer learning time for simple scenes, more inconsistent noise in animation, and sudden "jumps" in cleaning up image artifacts. Just guessing, but it's a bit early to assume this is the silver bullet, is all I'm saying.
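To make the "same downsides as MLT" point concrete: MLT-family methods mutate the current sample rather than drawing independent ones, so consecutive samples are correlated, which is exactly where the blotchy noise and frame-to-frame jumps come from. A minimal 1D Metropolis sketch of that idea (toy target function and step size, nothing from the paper or Cycles):

```python
import random

def metropolis_sample(f, x0, n_steps, step=0.1, seed=0):
    """Toy 1D Metropolis sampler: mutate the current sample and accept
    with probability min(1, f(x')/f(x)). The returned samples are
    correlated, which is the root of MLT-style 'blotchy' noise."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)   # small mutation
        fx_new = f(x_new)
        ratio = fx_new / fx if fx > 0 else 1.0
        if rng.random() < min(1.0, ratio):
            x, fx = x_new, fx_new              # accept the mutation
        samples.append(x)                       # else keep the old sample
    return samples
```

Because rejected mutations repeat the previous sample, the chain lingers in bright regions and "jumps" when it finally escapes, which is the behavior described above.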

That said, I think there are a few techniques that are "battle-proven" in professional renderers that could benefit Cycles. Path guiding in particular complements standard path tracing quite well, as do fancy next event estimation techniques. From a user-experience standpoint, I particularly like the implementation in RenderMan with the PxrUnified integrator: it basically acts as a standard path tracer, but you can turn on manifold-walk NEE and specify which lights and surfaces to focus caustics on. https://rmanwiki.pixar.com/display/REN23/PxrUnified
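For anyone unfamiliar with next event estimation: instead of waiting for the random walk to stumble into a light, the tracer adds an explicit light sample at every path vertex. A deliberately tiny, deterministic sketch of that structure (all parameters are illustrative; a real implementation needs geometry, visibility tests, and MIS weights):

```python
def path_trace_with_nee(max_depth, albedo, light_radiance, light_pdf):
    """Toy sketch of next event estimation (NEE) in a path tracer:
    a direct-light sample is added at each bounce, weighted by the
    path throughput. Purely illustrative, not Cycles or RenderMan code."""
    radiance = 0.0
    throughput = 1.0
    for _ in range(max_depth):
        # NEE: explicit direct-light contribution at this path vertex.
        radiance += throughput * albedo * light_radiance / light_pdf
        # Continue the path: attenuate by the (toy, diffuse-only) BSDF.
        throughput *= albedo
    return radiance
```

The appeal of a PxrUnified-style UX is that this machinery stays invisible: the user only chooses which lights and surfaces get the extra caustic treatment.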

5 Likes

Actually, looking at those renders, I didn't like the noise quality. Of course they're better, cleaner images. But as you say, when it comes to being "battle-proven", it looks like the even noise of simple path tracing is more reliable. And I'm also thinking about denoisers.
Does anyone know if OIDN (or OptiX) would work out of the box, or whether it would have to be re-trained for such an MLT-ish noise pattern?

That's difficult to say, especially because the training data for both is not publicly available.

OIDN (and I believe OptiX) work fine on LuxCore with Path/MLT and Bidir/MLT, so the same should carry over to Cycles.

When it comes to neural networks, "should" often doesn't hold. Neural networks can have very unexpected knowledge gaps and may fail miserably. The only way to know whether it works is to try it out.

2 Likes

Even beyond the actual data, we don't know a whole lot, and they refused to be coaxed when I tried asking earlier this year :slight_smile: https://github.com/OpenImageDenoise/oidn/issues/51

  • We don’t know what renderers were used to train OIDN so far
  • We don’t know how many scenes were used or the content of those scenes (did they contain subsurface scattering, volumetrics, motion blur, transparency, caustics, hair, etc. etc.?)
  • We don’t know the sample counts or sampling patterns used when generating the training scenes (e.g. is denoising quality affected by using PMJ vs. Sobol?)

That it works so well in Blender so far is pretty amazing.

1 Like

Is the image viewer broken in the Blender 2.91 Windows builds? I am not able to view any images in the Texture Paint panel, UV editor, or image viewer. I know it is under heavy development. Still :grinning:

I have created an open dataset for Cycles and provide it on request, which has happened 4-5 times (due to many changes this is currently on hold, though). However, I don’t know whether Intel or Nvidia requested it and are actually using it for training.

I was reading this about Cycles + displacement memory optimization link

Just realized that Blender was computing all objects in my scene even while they were hidden; they weren't excluded from the render.
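For anyone hitting the same surprise: viewport visibility and render visibility are separate per-object flags in Blender (`hide_viewport` vs. `hide_render`), so hiding something in the viewport alone does not exclude it from the render. A toy sketch of that distinction with stand-in classes (not actual bpy code):

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    """Stand-in for a Blender object with its two independent
    visibility flags (mirroring Object.hide_viewport / hide_render)."""
    name: str
    hide_viewport: bool = False  # hidden in the 3D viewport
    hide_render: bool = False    # excluded from the final render

def sync_render_visibility(objects):
    # Hiding in the viewport does NOT set hide_render, so hidden
    # objects are still meshed/displaced at render time unless you
    # disable them for render explicitly.
    for obj in objects:
        if obj.hide_viewport:
            obj.hide_render = True
    return objects
```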

Any news on whether this will be fixed in future development?

If not, which render engine do you recommend using that allows microdisplacement to be rendered without memory problems?

Thx in advance

If you can afford it, try the Corona renderer. Since version 5, IIRC, it has a 2.5D displacement which optimizes speed and memory usage.

1 Like

Pretty much every other render engine does microdisplacement better than Cycles, but I've been using Octane lately and it's insane how fast and memory-friendly it is.

5 Likes

Adaptive Subdivision already does some optimization based on the camera view, using a lower dicing rate for objects that are far away or outside of the view.
It's probably per-object, so if your object is huge, like the moon in your example, that sure isn't enough. But splitting the moon into several objects could potentially help already.
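The idea of camera-adaptive dicing can be sketched in a few lines. This is a toy model, with made-up names and scaling, not Cycles' actual dicing code: the point is just that a single rate per object is computed from camera distance and visibility, which is why one huge object gets one coarse-or-fine rate for everything:

```python
def dicing_rate(base_rate, camera_distance, offscreen_scale=4.0, in_view=True):
    """Toy camera-adaptive dicing rate (illustrative only).
    A higher rate means coarser dicing, i.e. fewer micropolygons:
    distant objects get a proportionally higher (coarser) rate, and
    objects outside the view frustum are coarsened further."""
    rate = base_rate * max(camera_distance, 1.0)
    if not in_view:
        rate *= offscreen_scale  # extra coarsening outside the view
    return rate
```

With one rate per object, a huge mesh spanning near and far regions is stuck with a compromise; splitting it lets each piece get its own distance-appropriate rate, which is the workaround suggested above.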

Cycles really needs better memory management, with texture tile caching and smarter displacement for GPU rendering. It's crazy to run out of VRAM so quickly even with a 12 GB video card.
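To illustrate what tile caching buys: instead of keeping every texture tile resident, a renderer can keep a bounded, recently-used set in memory and stream the rest from disk on demand. A toy LRU sketch of that idea (illustrative only, not Cycles code):

```python
from collections import OrderedDict

class TileCache:
    """Toy LRU texture tile cache: memory use is bounded by max_tiles
    instead of growing with the total texture size."""
    def __init__(self, max_tiles):
        self.max_tiles = max_tiles
        self._tiles = OrderedDict()  # key -> tile, oldest first

    def fetch(self, key, load_tile):
        if key in self._tiles:
            self._tiles.move_to_end(key)       # mark as recently used
            return self._tiles[key]
        tile = load_tile(key)                  # e.g. read one tile from disk
        self._tiles[key] = tile
        if len(self._tiles) > self.max_tiles:
            self._tiles.popitem(last=False)    # evict least recently used
        return tile
```

The trade-off is extra latency on cache misses, but VRAM stays bounded regardless of how much texture data the scene references.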

5 Likes