# Thread: Bidirectional Pathtracing and Caustics

1. Originally Posted by Evan Avast
Is there a way to "teach the renderer" to calculate where the image is converging to? For example, with each passing sample the image gets closer and closer to a specific point, kind of like Y getting closer to 0 as X tends toward infinity or minus infinity in the function Y = 1/X. As far as I know (which is really not that far, I have very limited knowledge of these topics), what Cycles does with each new sample is like increasing the value of X in that function. However, we are able to notice the asymptotic pattern in this function, so we know that at a hypothetical point at infinity the value of Y will be 0. Can a computer learn to notice these kinds of convergent asymptotic patterns and predict, for example, how the final render would look after an infinite number of samples (which would be the closest thing to perfection we can achieve)? Of course, the more samples you allow the renderer to actually calculate, the more realistic the predicted final image will be.
Yes, but it's really complex to do, and most likely outside the scope of Blender development. Here is a link from some researchers at Disney who built a neural net to do this prediction based on a noisy image: https://www.disneyresearch.com/publi...ing-denoising/

2. Originally Posted by Evan Avast
Is there a way to "teach the renderer" to calculate where the image is converging to? For example, with each passing sample the image gets closer and closer to a specific point, kind of like Y getting closer to 0 as X tends toward infinity or minus infinity in the function Y = 1/X. […]
It's 1/sqrt(X), so even worse. It's not that easy to predict, because of the noisy convergence: one pixel will always miss some path that its neighbour doesn't, and that difference is basically the source of the noise. But you can do some temporal and spatial filtering (comparing values to the last frames and to the neighbouring pixels) and smooth the noise out. You can also remove outliers/fireflies. That's basically what the Cycles denoiser does. Disney took it a step further and trained a neural network to do the denoising.
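The 1/sqrt(X) behaviour is easy to demonstrate with a quick Monte Carlo sketch. This is illustrative Python, not Cycles code; the "pixel" is just a uniform random variable with a known true value:

```python
import math
import random

random.seed(0)

def sample():
    # stand-in for one path-traced sample of a pixel whose true value is 0.5
    return random.random()

true_value = 0.5
for n in (100, 10_000, 1_000_000):
    estimate = sum(sample() for _ in range(n)) / n
    error = abs(estimate - true_value)
    # the error of the running mean shrinks roughly like 1/sqrt(N)
    print(f"N={n:>9}  estimate={estimate:.4f}  "
          f"error={error:.4f}  1/sqrt(N)={1/math.sqrt(n):.4f}")
```

Each 100x increase in samples only buys about a 10x reduction in error, which is why brute-force sampling converges so slowly.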
But denoising isn't the holy grail. You still have paths that are difficult or impossible to find, and a denoiser can't create information that isn't there. Bidirectional path tracing kind of solves it; Metropolis light transport solves it better. But both are quite difficult to implement, and the Metropolis algorithm is temporally unstable => you get moving blotches in a movie. That's the reason why most movie productions stick to the classic path tracing algorithm.
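A minimal sketch of the outlier/firefly removal mentioned above (illustrative Python, far cruder than the actual Cycles denoiser): compare each pixel to its 3x3 neighbourhood and clamp values that are wildly brighter than the local median.

```python
def remove_fireflies(img, factor=4.0):
    """Replace any interior pixel brighter than `factor` times the median of
    its 3x3 neighbourhood with that median. `img` is a 2D list of floats;
    edge pixels are skipped for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hood = [img[j][i] for j in (y - 1, y, y + 1)
                    for i in (x - 1, x, x + 1) if (i, j) != (x, y)]
            hood.sort()
            median = hood[len(hood) // 2]
            if median > 0 and img[y][x] > factor * median:
                out[y][x] = median  # swap the firefly for a plausible value
    return out

# a mostly flat patch with one firefly in the centre
img = [[0.5] * 5 for _ in range(5)]
img[2][2] = 100.0
print(remove_fireflies(img)[2][2])  # the 100.0 spike becomes 0.5
```

Note the limitation the post points out: this can only suppress bad samples; it cannot invent a caustic the sampler never found.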

3. Originally Posted by Evan Avast
The biggest problem would be that the majority of the rays emitted would never hit the camera, so they wouldn't affect the final render in any way.
There are visibility rays connecting eye-ray hits with light-path hits. The problem is that, first, the whole scene needs to be sampled for what are usually localised effects; second, you are coupling several sources of Monte Carlo noise together, propagating it from one algorithm into another. That's why bidirectional is even noisier than path tracing.
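The connection step described above can be sketched roughly like this (illustrative Python; all names are made up, and a real bidirectional path tracer also needs multiple-importance weighting, which is omitted here). A vertex on the eye subpath is joined to a vertex on the light subpath, weighted by the geometry term between them, provided a shadow ray finds them mutually visible:

```python
import math

def geometry_term(p_eye, n_eye, p_light, n_light):
    """cos(theta_eye) * cos(theta_light) / distance^2 between two vertices,
    given their positions and unit surface normals."""
    d = [l - e for e, l in zip(p_eye, p_light)]
    dist2 = sum(c * c for c in d)
    dist = math.sqrt(dist2)
    w = [c / dist for c in d]                      # unit direction eye -> light
    cos_e = sum(a * b for a, b in zip(n_eye, w))   # angle at the eye vertex
    cos_l = -sum(a * b for a, b in zip(n_light, w))  # angle at the light vertex
    return max(cos_e, 0.0) * max(cos_l, 0.0) / dist2

def connect(eye_throughput, light_throughput,
            p_eye, n_eye, p_light, n_light, visible=True):
    # in a real renderer, `visible` is the result of tracing a shadow ray
    if not visible:
        return 0.0
    return eye_throughput * light_throughput * geometry_term(
        p_eye, n_eye, p_light, n_light)

# two facing surfaces one unit apart: geometry term = 1*1/1 = 1
print(connect(0.5, 2.0, (0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1)))  # 1.0
```

Both throughputs carry their own Monte Carlo noise, and the product couples them, which is the noise propagation the post describes.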

Is there a way to "teach the renderer" to calculate where the image is converging to? For example, with each passing sample, the image gets closer and closer to a specific point
There is a variation of bidirectional called Metropolis light transport which does more or less what you say. Anyway, the more restricted the signal, the more Monte Carlo noise will appear. In photon mapping and path tracing there are Russian roulette algorithms to discard samples that do not significantly contribute to the final result, for faster render times, but at the same time they are a source of Monte Carlo noise.
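A hedged sketch of that Russian roulette idea (toy Python with made-up albedo and light values, not any real renderer's code): after a few bounces a path is killed with probability 1 - p, and surviving paths are divided by p so the estimator stays unbiased. The random termination itself adds variance, which is the extra Monte Carlo noise referred to above.

```python
import random

def trace_path(throughput=1.0, depth=0, min_depth=3):
    if depth >= min_depth:
        p = min(throughput, 0.95)     # survival probability, tied to throughput
        if random.random() > p:
            return 0.0                # path killed: contributes nothing
        throughput /= p               # compensate so the mean is unchanged
    # stand-in for one more bounce: absorb some energy, maybe hit a light
    throughput *= 0.6                 # assumed surface albedo
    if random.random() < 0.1:         # assumed chance of reaching a light
        return throughput * 5.0       # assumed light intensity
    return trace_path(throughput, depth + 1)

random.seed(1)
n = 200_000
mean = sum(trace_path() for _ in range(n)) / n
print(f"estimated pixel value over {n} paths: {mean:.3f}")
```

Killing dim paths early saves render time without biasing the average, but each kill-or-survive coin flip is one more source of variance in the estimate.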

4. I was just wondering if anyone had an approach to initial settings for SPPM? How can one guesstimate the required number of photons to fire into a scene, and the initial radius? I realize this is decided case by case, but a close approximation would do better than taking a blind shot, going really long on render time, and finding you're far from convergence. Convergence time for some people is really short, so there has to be some method to their madness.

5. You could also give RenderMan a shot. If I remember correctly, it has an integrator that combines bidirectional path tracing with progressive photon mapping: vertex connection and merging (PxrVCM).

I could only get RenderMan working on an Ubuntu partition; I could never get it running under Windows. I would install it, run it maybe once or twice, get a proper render out of it, and then it would lock me out with some license BS, which was weird because it was the free non-commercial license. I fucking hated dual booting, so I went back to Cycles.

For scenes where you want caustics it is a pretty good integrator (PxrVCM), as it will converge faster than unidirectional path tracing, and it's production ready...

P.S. The shaders in RenderMan are really good, worth the pain of learning to use it.

6. RenderMan's caustics weren't that great when I was testing the different renderers. I screwed up the first time by having thin shadows on, but turning them off didn't change much. I have to do more tests, though. Lux was number one, followed by Cycles, then Radeon ProRender.

7. @ajm
As you noted, SPPM settings are scene specific... here's the thread on the Lux forums: SPPM and resume problems, maybe connected. A scene file set up for SPPM is also posted there.

if you'd like to get more help, make a thread there (describe the problem, post a scene file...)

same impression here, nahhh... waiting to see what v22 will bring

8. Originally Posted by burnin
same impression here, nahhh... waiting to see what v22 will bring
That's only if someone picks up supporting Blender, since they have to do it in their own time outside of work. The plugin side, of course. I'm a coding idiot, otherwise I would chip in.

