Cycles_path_guiding tests

Another couple of renders (4 minute equal time)

Vanilla Cycles (same settings as above)

With Path Guiding and Many Light Sampling enabled.

9 Likes

It stopped snowing! :smiley:

7 Likes

I watched the Blender Conference talk about path guiding. As far as I understood, it stores the incoming light distribution for each pixel. Did anyone measure how much more memory is used for it at the moment?

Mem and peak mem for the cottage scene with vanilla Cycles read as 50M and 80M respectively in the render window.

Seems to be the same if you have path guiding and MLS enabled.

2 Likes

That’s a lot better than expected! Thanks for the info.

1 Like

@DeepBlender The incoming light distribution is not stored per pixel. It is stored in a 5D field across the whole scene. The field is represented using an adaptive spatial subdivision (3D) and stores a spherical representation (2D) for each subdivision region.
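To make the 5D idea concrete, here is a toy Python sketch (my own illustration, not OpenPGL's actual data structures): a hash-grid spatial subdivision (3D) whose occupied cells each hold a small directional histogram (2D), so storage grows with the regions of the scene that paths actually visit rather than with pixel count.

```python
# Toy guiding "field": 3D spatial cells x 2D directional histograms = 5D.
import math
from collections import defaultdict

class GuidingField:
    def __init__(self, cell_size=1.0, theta_bins=8, phi_bins=16):
        self.cell_size = cell_size
        self.theta_bins = theta_bins
        self.phi_bins = phi_bins
        # One directional histogram per *occupied* spatial cell.
        self.cells = defaultdict(
            lambda: [[0.0] * phi_bins for _ in range(theta_bins)])

    def _cell_key(self, pos):
        return tuple(int(math.floor(c / self.cell_size)) for c in pos)

    def _dir_bin(self, direction):
        x, y, z = direction
        theta = math.acos(max(-1.0, min(1.0, z)))    # polar angle in [0, pi]
        phi = math.atan2(y, x) % (2.0 * math.pi)     # azimuth in [0, 2*pi)
        t = min(int(theta / math.pi * self.theta_bins), self.theta_bins - 1)
        p = min(int(phi / (2 * math.pi) * self.phi_bins), self.phi_bins - 1)
        return t, p

    def record(self, pos, direction, radiance):
        """Splat one path sample: radiance arriving at pos from direction."""
        t, p = self._dir_bin(direction)
        self.cells[self._cell_key(pos)][t][p] += radiance

field = GuidingField()
field.record((0.2, 0.3, 0.1), (0.0, 0.0, 1.0), 5.0)
field.record((0.4, 0.1, 0.3), (0.0, 0.0, 1.0), 2.0)  # lands in the same cell
field.record((9.0, 9.0, 9.0), (1.0, 0.0, 0.0), 1.0)  # a different cell
print(len(field.cells))  # 2 occupied cells
```

This is also why the memory overhead measured above stays modest: only visited regions allocate storage, and a real implementation additionally adapts the subdivision to where the light field varies.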

4 Likes

Thanks a lot for the clarification!
Does that mean the memory requirement depends on the geometry, materials and the lights in the scene?

Another test - not quite as dramatic - but PG + MLS does have lower noise, especially in the foreground. The light from the laser seems to be a bit more prominent on the ground too - and there are fewer fireflies.

Vanilla Cycles - 4 minute render

PG + MLS - 4 minute render

4 Likes

@Ace_Dragon and @Thesonofhendrix, these settings are just for debugging and usually should NOT be changed by the user/artist.
The quadtree representation can be used to compare the fitting process against the other two VMM-based models. E.g., at the moment the quadtree fitting is much simpler and more robust in extreme cases (crazy caustics), which we are currently trying to fix. BUT the quadtrees can produce worse results than the VMM models in normal use cases. Especially in volumes, they only work robustly for isotropic media.

Also, using a guiding probability > 0.5 may work in your extreme caustic examples, but it will most likely increase the noise level in all other scenes.

This is the reason why these settings are not directly visible to the user and might be removed in the future. They are dangerous if you do not know 100% what you are doing ;).
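For readers wondering why a high guiding probability is dangerous, here is a minimal one-sample-mixture sketch (illustrative names, not Cycles' internals): the direction comes either from the learned guiding distribution or from the BSDF, and the estimator divides by the mixture PDF, so the result stays unbiased for any probability.

```python
import random

def sample_direction(guiding_prob, sample_guide, sample_bsdf,
                     pdf_guide, pdf_bsdf):
    """One-sample mixture of a guided sampler and a BSDF sampler."""
    if random.random() < guiding_prob:
        d = sample_guide()
    else:
        d = sample_bsdf()
    # Mixture PDF. If guiding_prob is too high and the guide has not
    # learned the scene well, pdf_guide is tiny exactly where the BSDF
    # would have sampled, 1/pdf blows up, and variance (noise) increases.
    pdf = guiding_prob * pdf_guide(d) + (1.0 - guiding_prob) * pdf_bsdf(d)
    return d, pdf
```

With `guiding_prob = 0.5` the BSDF term always keeps the mixture PDF from collapsing, which is one intuition for why that default is robust across scenes.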

6 Likes

@moony, when you post equal-time tests it would also help if you posted the number of samples rendered at these time budgets. Performance can often be optimized :wink:

1 Like

I would - but that information seems to disappear once the render completes.

Exciting progress in the sampling department!
Yet as I do more and more animations, I feel the only thing that would really change the game would be temporal denoising. (For stills, OIDN can really work miracles, especially when doing a single-pass denoise in the compositor.)
A wish: some kind of AviSynth/VapourSynth filter in the NLE for final export.

1 Like

Thanks for the information. I guess when it's all production-ready there will be no need for such advanced settings anyway, because it will be tuned to be as robust as possible for all scenarios?

I think there's a tick box in the View Layer properties that burns in all the data like sample count and such. And in the Output properties tab there's the metadata section, where you can burn all sorts of things into the image, like render time, memory use, frame, etc. Which is pretty useful for testing.
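For anyone scripting these comparison renders, the same burn-in options can also be toggled from Blender's Python console. This is a sketch using `bpy` property names (only runnable inside Blender, and the note text is just an example):

```python
import bpy

rd = bpy.context.scene.render
rd.use_stamp = True              # burn metadata into rendered images
rd.use_stamp_render_time = True  # render time
rd.use_stamp_memory = True       # peak memory use
rd.use_stamp_frame = True        # frame number
rd.use_stamp_note = True
rd.stamp_note_text = "PG + MLS test"  # hypothetical label for this test
```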

1 Like

The physics of this is well over my head and I don't fully understand it, but I want to put out an idea that occurred to me in case it's useful and hasn't been thought of. When the path tracer renders the image, if in the background/internally it rendered at a higher resolution, would that result in more useful light paths and caustics being found? And could that data then be brought back down to the lower resolution that the user is rendering at, to get a better image result? I know people will say "why not just render at a higher resolution in the first place", but I'm wondering if there's anything to be gained memory-wise, or if it would be a nice feature that people might use, sort of like super-sampled anti-aliasing?

It's probably a bad idea, but I wanted to put it out here just in case.

The path tracer already samples at the continuum level, AFAIK. The resolution only comes into play once you turn that into the exact values of pixels on screen. There isn't really a "resolution" as such to the underlying rendering algorithm.

(Though I suppose there is a resolution to the approximate light sampling distribution)

2 Likes

As it happens, this is a segment of a scene still rendering, now at 337 samples, and just glossy (well, a bit more than just glossy, but for these purposes), and it is no better at all than without path guiding.
Screenshot_2022-11-06_16-18-09

Path Guiding in Cycles has not yet been enabled for glossy rays (which was mentioned specifically when the first testing build was released). In general, things are working as they should, since it is not actually worse than with OpenPGL disabled.

3 Likes

@Ace_Dragon is right: at the moment, PG is not enabled on materials with non-diffuse components.
If it is enabled, the amount of guiding also depends on the guiding probability (which is 0.5) and the sampling weight of the diffuse component.
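As a back-of-the-envelope reading of that (my interpretation of the post, not Cycles source), the chance that a given bounce is actually guided is roughly the guiding probability times the diffuse component's sampling weight:

```python
def effective_guiding_probability(guiding_prob, diffuse_weight):
    """Rough fraction of bounces that end up guided on a mixed material."""
    return guiding_prob * diffuse_weight

# Pure diffuse material: half of all bounces are guided.
print(effective_guiding_probability(0.5, 1.0))  # 0.5
# 30% diffuse / 70% glossy mix: only ~15% of bounces are guided,
# which is why mostly-glossy scenes see little benefit right now.
print(effective_guiding_probability(0.5, 0.3))  # 0.15
```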

@Roken could you post the full image of the shot?

1 Like

I didn’t save it (it’s an old scene that I use to explore new features, or for testing).

But at least you and Ace_Dragon have clarified things for me, so thank you both.