  1. #21
    Member SterlingRoth · Joined Mar 2006 · Portland, OR · 2,000 posts
    Originally Posted by Evan Avast
    Is there a way to "teach the renderer" to calculate where the image is converging to? For example, with each passing sample, the image gets closer and closer to a specific point, kind of like Y getting closer to 0 as X tends toward positive or negative infinity in the function Y = 1/X. As far as I know (which is really not that far; I have very limited knowledge of these topics), what Cycles does with each new sample is like increasing the value of X in that function. However, we are able to notice the asymptotic pattern in this function, so we know that at a hypothetical point at infinity, the value of Y will be 0. Can a computer learn to notice these kinds of convergent asymptotic patterns and predict, for example, what the final render would look like after an infinite number of samples (which would be the closest thing to perfection we can achieve)? Of course, the more samples you allow the renderer to actually calculate, the more realistic the predicted final image will be.
    Yes, but it's really complex to do, and most likely outside the scope of Blender development. Here is a link to some researchers at Disney who built a neural net to do this prediction from a noisy image: https://www.disneyresearch.com/publi...ing-denoising/
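    To give a feel for the idea, here's a toy Python sketch (everything in it is made up for illustration; it is not how Cycles or Disney's denoiser works): fit the running average of a noisy estimator to the model a + b/sqrt(N), which is the shape Monte Carlo convergence takes, and read off a as the predicted infinite-sample value.
    Code:
    import random, math

    random.seed(1)
    samples, running_means = [], []
    total = 0.0
    for n in range(1, 5001):
        total += random.random()              # noisy sample whose true mean is 0.5
        if n % 100 == 0:                      # record the running average now and then
            samples.append(n)
            running_means.append(total / n)

    # least-squares fit of mean(N) = a + b/sqrt(N); linear in x = 1/sqrt(N)
    xs = [1.0 / math.sqrt(n) for n in samples]
    ys = running_means
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    print("predicted limit:", round(a, 4), "(true value: 0.5)")
    In practice the per-pixel noise is far worse behaved than this clean toy, which is exactly why the learned approach in the paper above beats naive extrapolation.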



  2. #22
    Originally Posted by Evan Avast
    Is there a way to "teach the renderer" to calculate where the image is converging to? [full quote in #21 above]
    It's 1/sqrt(X), so even worse. It's hard to predict because the convergence itself is noisy: one pixel will keep missing a feature that its neighbor finds, and that difference is essentially the source of the noise. But you can do temporal and spatial filtering (comparing values to previous frames and to neighboring pixels) to smooth the noise, and you can remove outliers/fireflies. That's basically what the Cycles denoiser does. Disney took it a step further and trained a neural network to do the denoising.
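    To make that concrete, here's a toy version of those two steps in Python (not the actual Cycles denoiser, which is far more sophisticated; the 3x3 window, the 4x-median firefly threshold and the 50/50 blend are arbitrary knobs chosen for the example):
    Code:
    import statistics

    def denoise(img):
        # img: 2D list of luminance values; returns a filtered copy
        h, w = len(img), len(img[0])
        out = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                # gather the 3x3 neighborhood, clamped at the image borders
                nb = [img[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2))]
                med = statistics.median(nb)
                if img[y][x] > 4.0 * med + 1e-6:
                    out[y][x] = med       # firefly: replace the outlier by the median
                else:
                    # spatial smoothing: blend toward the neighborhood median
                    # (a cheap outlier-resistant choice for this toy)
                    out[y][x] = 0.5 * img[y][x] + 0.5 * med
        return out

    noisy = [[0.2, 0.3, 0.2],
             [0.3, 25.0, 0.3],            # one firefly in the middle
             [0.2, 0.3, 0.2]]
    print(denoise(noisy))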
    But denoising isn't the holy grail. There are still light paths that are difficult or impossible to find, and a denoiser can't create information that isn't there. Bidirectional path tracing partly solves this; Metropolis light transport solves it better. But both are quite difficult to implement, and the Metropolis algorithm is temporally unstable, so you get moving blotches in an animation. That's why most film productions stick with the classic path-tracing algorithm.
    Last edited by Tobi95; 06-Dec-17 at 12:47.



  3. #23
    Donating Member Alvaro · Joined Mar 2002 · Catalonia, Spain · 1,886 posts
    Originally Posted by Evan Avast
    The biggest problem would be that the majority of the rays emitted would never hit the camera, so they wouldn't affect the final render in any way.
    There are visibility rays connecting eye-ray hits with light-path hits. The problem is, first, that the whole scene has to be sampled for what are usually localised effects, and second, that you are coupling several sources of Monte Carlo noise together, propagating noise from one algorithm into another. That's why bidirectional is even noisier than path tracing.
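    A quick toy demonstration of that coupling: multiply two independent noisy estimates with mean 1, which is roughly what happens when a light sub-path estimate is connected to an eye sub-path estimate, and the product comes out noisier than either factor (this is a made-up estimator, not renderer code):
    Code:
    import random, statistics

    random.seed(2)
    def estimate():                       # noisy estimator with mean 1.0
        return random.uniform(0.5, 1.5)

    eye   = [estimate() for _ in range(100000)]
    light = [estimate() for _ in range(100000)]
    both  = [a * b for a, b in zip(eye, light)]   # coupled estimate

    print("variance of one estimator:", round(statistics.variance(eye), 4))   # ~0.083
    print("variance of the product:  ", round(statistics.variance(both), 4))  # ~0.174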


    Originally Posted by Evan Avast
    Is there a way to "teach the renderer" to calculate where the image is converging to? For example, with each passing sample, the image gets closer and closer to a specific point
    There is a variation of bidirectional path tracing called Metropolis light transport which does more or less what you describe. Even so, the more restricted the signal, the more Monte Carlo noise will appear. In photon mapping and path tracing there are Russian roulette algorithms that discard samples which don't contribute significantly to the final result, for faster render times, but at the same time they are themselves a source of Monte Carlo noise.
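    For reference, a minimal sketch of how Russian roulette termination usually looks in a path tracer; the 0.05 survival floor, the 70% per-bounce albedo and the 3 guaranteed bounces are invented numbers for the example, not any particular renderer's defaults:
    Code:
    import random

    def russian_roulette(throughput, bounce, min_bounces=3):
        # returns the (possibly boosted) throughput, or None if the path dies
        if bounce < min_bounces:
            return throughput                 # never kill very short paths
        p_survive = min(1.0, max(throughput, 0.05))
        if random.random() >= p_survive:
            return None                       # path terminated: this is the noise source
        return throughput / p_survive         # boost survivors so the mean is unchanged

    throughput, bounce = 1.0, 0               # trace one toy path
    while throughput is not None:
        print("bounce %d: throughput %.3f" % (bounce, throughput))
        throughput = russian_roulette(throughput * 0.7, bounce)  # each bounce absorbs 30%
        bounce += 1
    The division by the survival probability is what keeps the estimator unbiased; the random killing itself is the extra Monte Carlo noise mentioned above.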



  4. #24
    Member ajm · Joined Jul 2010 · Lincoln, NE · 1,305 posts
    I was just wondering if anyone has an approach to initial settings for SPPM? How can one guesstimate the required number of photons to fire into a scene, and the initial radius? I realize this is scene-by-scene, but a close first approximation would beat a blind shot that burns a long render only to reveal you're far from convergence. Convergence time for some people is really short, so there has to be some method to their madness.
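    For example, would something like the following rule of thumb be a sane starting point: set the initial gather radius to a small fraction of the scene's bounding-box diagonal, then size the photon budget so an average hit point can expect a handful of photons inside that radius? The radius_fraction and target_photons_per_point knobs below are just my guesses, not anything taken from LuxRender:
    Code:
    import math

    def sppm_initial_guess(bbox_min, bbox_max,
                           radius_fraction=0.005, target_photons_per_point=10):
        # the scene diagonal gives a resolution-independent length scale
        diag = math.dist(bbox_min, bbox_max)
        r0 = radius_fraction * diag
        # coarse photon budget: spread photons over roughly the bbox surface
        # area and ask for ~target photons per disc of radius r0
        dx, dy, dz = (b - a for a, b in zip(bbox_min, bbox_max))
        surface_area = 2 * (dx * dy + dy * dz + dx * dz)
        photons = int(target_photons_per_point * surface_area / (math.pi * r0 ** 2))
        return r0, photons

    r0, photons = sppm_initial_guess((0, 0, 0), (4, 3, 2))
    print("initial radius ~ %.4f, photons per pass ~ %d" % (r0, photons))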



  5. #25
    Member tyrant monkey · Joined Oct 2007 · Windhoek, Namibia · 5,216 posts
    You could also give RenderMan a shot; if I remember correctly it has an integrator, PxrVCM (vertex connection and merging), that combines bidirectional path tracing with progressive photon mapping.

    I could only get RenderMan working on an Ubuntu partition; I could never get it running under Windows. I would install it, run it maybe once or twice and get a proper render out of it, and then it would lock me out with some license BS, which was weird because it was the free non-commercial license. I fucking hated dual booting, so I went back to Cycles.

    For scenes where you want caustics, PxrVCM is a pretty good integrator, as it converges faster than unidirectional path tracing, and it's production ready...

    P.S. The shaders in RenderMan are really good; worth the pain of learning to use it.
    Last edited by tyrant monkey; 07-Dec-17 at 08:05.
    I have 500 bad drawings in me before the good ones. And I have 304 to go, here they are.



  6. #26
    Member ajm · Joined Jul 2010 · Lincoln, NE · 1,305 posts
    RenderMan's caustics weren't that great when I was testing the different renderers. I screwed up the first time by having thin shadows on, but turning them off didn't change much. I have more tests to do, though. Lux was number one, followed by Cycles, then Radeon ProRender.



  7. #27
    Member burnin · Joined Sep 2012 · 2,841 posts
    @ajm
    As you noted, SPPM settings are scene-specific... here's the relevant thread on the Lux forums: SPPM and resume problems, maybe connected. A scene file set up for SPPM is posted there as well.

    If you'd like more help, make a thread there (describe the problem, post a scene file...)


    about PRman (& VCM)
    same impression here, nahhh... waiting to see what v22 will bring
    Last edited by burnin; 07-Dec-17 at 09:43.



  8. #28
    Member ajm's Avatar
    Join Date
    Jul 2010
    Location
    Lincoln, NE
    Posts
    1,305
    Originally Posted by burnin
    about PRman (& VCM)
    same impression here, nahhh... waiting to see what v22 will bring
    That's assuming someone picks up supporting Blender again, since they have to do it on their own time, outside of work. The plugin side, of course. I'm a coding idiot, otherwise I would chip in.


