Page 12 of 13
Results 221 to 240 of 245
  1. #221
    Member
    Join Date
    Oct 2011
    Location
    Stockholm
    Posts
    219
    I see what you mean. The first advantage you have shown pretty well: you don't have to think about the number of samples per scene anymore. You can max it out for every scene and rely on the adaptive sampling to render the image down to the accepted amount of noise. So your adaptive sampling settings can be saved in your startup scene and they would work for every scene you make, without exploding render times.

    The second one maybe doesn't apply to your scenes, but it will use fewer samples for tiles that are less noisy, so it can only be faster. Even if it's only 5% faster, it can be a big time saver for animations.
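    The behaviour described above (max out the sample count and let adaptive sampling stop early) can be sketched in a few lines. This is a toy illustration, not the patch's actual code; all names and thresholds here are made up:

```python
import random

def render_tile(target_error=0.01, step=16, max_samples=1024):
    """Toy adaptive loop: keep adding batches of samples until the estimated
    noise (standard error of the mean) drops below target_error, instead of
    always rendering a fixed sample count."""
    total, total_sq, n = 0.0, 0.0, 0
    while n < max_samples:
        for _ in range(step):        # render one batch of samples
            s = random.random()      # stand-in for one path-traced sample
            total += s
            total_sq += s * s
            n += 1
        mean = total / n
        var = max(total_sq / n - mean * mean, 0.0)
        if (var / n) ** 0.5 < target_error:   # converged: stop early
            break
    return mean, n

mean, used = render_tile()
# `used` ends up below max_samples whenever the tile converges early
```

    A noisier "scene" (higher variance per sample) would simply run closer to max_samples, which is the whole point: quiet tiles stop early, hard tiles keep going.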



  2. #222
    Originally Posted by tompov View Post
    I've made many tests with complex and simple scenes, but it seems that I'm missing something!?
    I can't see an advantage in the Adaptive Sampling method.
    Here is a comparison between AS and no AS based on a simple scene at equal render times.
    As you see, I can produce equal graininess in equal render times without AS.

    Attachment 490969

    So, what am I doing wrong?

    Greetings
    Tom
    IMHO, if you want to see the advantages of AS, try this in a production scene, an interior for example. I'm not sure if this AS has hair support, but if it has, try a scene with lots of hair; you could try the Gooseberry test scene, for example, and see if this makes that scene faster.

    But you will hardly see any benefit in simple exterior scenes; adaptive sampling benefits complex scenes with lots of noise, I think.

    Cheers.
    Last edited by juang3d; 11-Jul-17 at 11:08. Reason: Comment



  3. #223
    Member jar091's Avatar
    Join Date
    May 2017
    Location
    Ostrava, Czech Republic
    Posts
    8
    Originally Posted by tompov View Post
    Hm, yes, I can't see an advantage, because the only thing that matters is render time in relation to graininess.
    Perhaps a bad example scene... but not the only one.
    Tom
    Hi, I think your values for Norm and Step are too big. Could you try smaller values (for example: N=0.001, S=10)?

    Here I added some information about how it works: http://blender.it4i.cz/research/adap...ndered-pixels/

    I hope it helps with understanding :-)
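    If I read the suggestion above right, the check behind N (Norm) and S (Step) is roughly: every S samples, measure how much the tile has changed since the last checkpoint, and stop once that change drops below N. A toy sketch (the exact norm the patch uses may differ; see the linked page; function and parameter names are mine):

```python
import numpy as np

def converged(prev, curr, norm_threshold=0.001):
    """Compare the tile as it looked at the previous checkpoint with how it
    looks now; if the average per-pixel change is tiny, stop sampling."""
    return float(np.abs(curr - prev).mean()) < norm_threshold

# The renderer would call this every `step` samples:
tile = np.full((8, 8), 0.5)
print(converged(tile, tile + 0.0005))   # tiny change  -> True, stop
print(converged(tile, tile + 0.01))     # large change -> False, keep going
```

    This also shows why a too-large N stops tiles while they are still noisy, and a too-large S wastes samples between checks.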



  4. #224
    Member jar091's Avatar
    Join Date
    May 2017
    Location
    Ostrava, Czech Republic
    Posts
    8
    Originally Posted by Ace Dragon View Post
    For the next version of the patch, there was a good idea posted here about mapping the norm values along a logarithmic curve (similar to filmic) or even a simple sRGB curve.

    I have tried it on a difficult interior scene, and right now it has the pitfalls of basing the norm/error values on raw (un-mapped) color data. Those pitfalls are that brighter, easy-to-light areas take far longer to finish than they should (it can take a minute or two of further rendering after convergence is reached) and that darker/shadowed regions are left with a fair amount of noise.

    To further clarify the issue with bright areas: attached is a simple setup that really illustrates the problem and why the norm value re-mapping is needed (the tiles converge quickly, but they don't actually stop until a while later).
    Attachment 490813

    You seem to have a good base, now let's turn it into something spectacular.
    Thank you for the tips. I will try it :-)
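    The remapping idea in the quote can be illustrated in a few lines: measure the change between checkpoints after pushing the values through a display curve (sRGB here), so the same linear-space change counts for more in shadows than in bright areas. A toy sketch with made-up function names, not the patch's code:

```python
import numpy as np

def srgb(x):
    """Approximate sRGB transfer curve (the remapping suggested above)."""
    x = np.clip(x, 0.0, None)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

def error(prev, curr, remap=True):
    """Mean absolute change between checkpoints, optionally measured in
    display space instead of raw linear radiance."""
    if remap:
        prev, curr = srgb(prev), srgb(curr)
    return float(np.abs(curr - prev).mean())

# The same linear-space change of 0.01 matters more in a dark region
# than in a bright one once it is measured after the curve:
dark = (np.full(4, 0.02), np.full(4, 0.03))
bright = (np.full(4, 2.0), np.full(4, 2.01))
print(error(*dark) > error(*bright))   # True with remapping
```

    With `remap=False` both changes score the same 0.01, which is exactly the behaviour being complained about: bright areas keep refining long after they look done, while shadows stop too early.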



  5. #225
    Member
    Join Date
    Oct 2012
    Location
    UK, England
    Posts
    542
    Hey Milan,

    Some really nice other filtering work has just been released (aimed at realtime), but it could work well with Cycles at very low sample rates. This first paper uses only 1 SPP but still has impressive results (not perfect by any means, but what do we expect from 1 SPP); with 32 or 64 SPP I think this could be very interesting, as it also has support for animation.

    https://www.cs.dartmouth.edu/~wjaros...17towards.html

    Images comparisons: https://www.cs.dartmouth.edu/~wjaros...omparison.html

    Example:
    mara17towards-teaser.png

    There is also source code provided

    This Paper is yet to be released but looks very interesting:

    Spatiotemporal Variance-Guided Filtering: Real-time Reconstruction for Path Traced Global Illumination
    http://cwyman.org/papers.html#

    Image:



  6. #226
    Member lsscpp's Avatar
    Join Date
    May 2006
    Location
    Firenze
    Posts
    2,057
    The technique doesn't look impressive to me. A lot of smudging, and the shadows are gone.
    https://www.cs.dartmouth.edu/~wjaros...omparison.html
    The pictures on this page are heavily biased. I think there's more potential in Lukas' work for stills and hi-res renders. The smoothness of the blurring is really good though.
    Everything's relative. Even saying "Everything's relative".



  7. #227
    Member
    Join Date
    Oct 2012
    Location
    UK, England
    Posts
    542
    Originally Posted by lsscpp View Post
    The technique doesn't look impressive to me. A lot of smudging, and the shadows are gone.
    https://www.cs.dartmouth.edu/~wjaros...omparison.html
    The pictures on this page are heavily biased. I think there's more potential in Lukas' work for stills and hi-res renders. The smoothness of the blurring is really good though.
    You have to keep in mind these are using only one sample per pixel, mate. With higher samples it should work well. For example, the first paper measured its tests against LWR (which Lukas built his system on), using the authors' implementation, and the new technique was seen as superior.

    But these could also be used for quick viewport renders too, as they are realtime-based.
    Last edited by 3DLuver; 12-Jul-17 at 12:00.



  8. #228
    Where can I find a version with scrambling distance? All current versions don't have this feature anymore.



  9. #229
    Member
    Join Date
    Oct 2011
    Location
    Stockholm
    Posts
    219
    Windows Builds from 3dLuver have it: https://blenderartists.org/forum/sho...=1#post3215057



  10. #230
    Member Mobiledeveloper's Avatar
    Join Date
    Dec 2016
    Location
    Poland
    Posts
    115
    Originally Posted by 3DLuver View Post
    Hey Milan,

    Some really nice other filtering work has just been released (aimed at realtime), but it could work well with Cycles at very low sample rates. This first paper uses only 1 SPP but still has impressive results (not perfect by any means, but what do we expect from 1 SPP); with 32 or 64 SPP I think this could be very interesting, as it also has support for animation.

    https://www.cs.dartmouth.edu/~wjaros...17towards.html

    Images comparisons: https://www.cs.dartmouth.edu/~wjaros...omparison.html

    Example:
    ....

    There is also source code provided

    This Paper is yet to be released but looks very interesting:

    Spatiotemporal Variance-Guided Filtering: Real-time Reconstruction for Path Traced Global Illumination
    http://cwyman.org/papers.html#

    Image:
    This is awesome.
    I am learning English so sorry for any mistakes.
    my animations: https://www.youtube.com/channel/UCbV...Y9ExjzjaIprrAA
    my addon: https://blenderartists.org/forum/sho...-or-virtualdub



  11. #231
    Member Ace Dragon's Avatar
    Join Date
    Feb 2006
    Location
    Wichita Kansas (USA)
    Posts
    28,360
    Those results from the new filtering look spectacular (so we could perhaps render images with just a quarter of the needed samples vs. half).

    It even seems to work very well with reflections and avoids the unprocessed pixels one might get with the current technique in Blender. I'm all for implementing it as the next step for denoising, as long as it can be scaled up and down as well.
    Sweet Dragon dreams, lovely Dragon kisses, gorgeous Dragon hugs. How sweet would life be to romp with Dragons, teasing you with their fire and you being in their games, perhaps they can even turn you into one as well.
    Adventures in Cycles; My official sketchbook



  12. #232
    Originally Posted by chafouin View Post
    Windows Builds from 3dLuver have it: https://blenderartists.org/forum/sho...=1#post3215057
    Thanks so much, will scrambling distance be in the official releases?



  13. #233
    Member lsscpp's Avatar
    Join Date
    May 2006
    Location
    Firenze
    Posts
    2,057
    Originally Posted by 3DLuver View Post
    You have to keep in mind these are using only one sample per pixel, mate. With higher samples it should work well. For example, the first paper measured its tests against LWR (which Lukas built his system on), using the authors' implementation, and the new technique was seen as superior.

    But these could also be used for quick viewport renders too, as they are realtime-based.
    I'll believe you when I see tests with more than 1 sample, then, and can take a look at what this method can do. I'm a bit skeptical, because I've seen many of these "miraculous" papers rapidly go into oblivion due to the fact that they don't scale well with production work.
    This one, for example, is explicitly aimed at realtime visualization and may be bound to a kind of bias that won't work for normal rendering. Or maybe it will. We'll see. So far the result looks poor to me, almost like an OpenGL version.
    Everything's relative. Even saying "Everything's relative".



  14. #234
    Member Ace Dragon's Avatar
    Join Date
    Feb 2006
    Location
    Wichita Kansas (USA)
    Posts
    28,360
    Looking at it again, I am coming to an agreement with Lsscpp about the shadows. It does an amazing job with primary shadows, but secondary shadows get destroyed during the filtering process.

    Too bad, really; the current Cycles denoising uses a form of NLM filtering, and the new technique is clearly superior in other areas such as preserving textures and handling really bright areas (perhaps it would be possible to merge some of the nicer bits into the current denoising code, but I don't know).
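    For anyone who hasn't met NLM (non-local means): the core idea is that a neighbouring patch contributes to a pixel in proportion to how similar it looks, not how close it sits. A textbook sketch of the weighting, not Cycles' actual implementation:

```python
import numpy as np

def nlm_weight(patch_p, patch_q, h=0.1):
    """Weight of patch q when denoising the pixel at the centre of patch p:
    similar patches get weight near 1, dissimilar ones near 0."""
    d2 = np.mean((patch_p - patch_q) ** 2)   # mean squared patch distance
    return float(np.exp(-d2 / (h * h)))

p = np.full((3, 3), 0.5)
print(nlm_weight(p, p))         # identical patches -> 1.0
print(nlm_weight(p, p + 0.3))   # very different patch -> near 0
```

    The denoised pixel is then the weighted average of the centre pixels of all candidate patches; the filtering strength h controls how forgiving the similarity test is.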
    Last edited by Ace Dragon; 14-Jul-17 at 16:01.
    Sweet Dragon dreams, lovely Dragon kisses, gorgeous Dragon hugs. How sweet would life be to romp with Dragons, teasing you with their fire and you being in their games, perhaps they can even turn you into one as well.
    Adventures in Cycles; My official sketchbook



  15. #235
    Member
    Join Date
    Sep 2012
    Posts
    2,777
    Ran through... so if I understand correctly, it's intended for a GI cache?
    Last edited by burnin; 14-Jul-17 at 15:46.



  16. #236
    Member
    Join Date
    Oct 2012
    Location
    UK, England
    Posts
    542
    The paper I was waiting for is out: http://research.nvidia.com/sites/def...f_preprint.pdf

    Very cool.



  17. #237
    Member Mobiledeveloper's Avatar
    Join Date
    Dec 2016
    Location
    Poland
    Posts
    115
    Has anybody ever seen this?

    https://youtu.be/ND96G9UZxxA?t=30s

    This is an awesome demo of how fast ray tracing can be on a mobile phone (if I understand correctly) in comparison to Cycles (PowerVR 6XT GR6500 mobile GPU ray tracing demos vs. an Nvidia GeForce GTX 980 Ti).
    I am learning English so sorry for any mistakes.
    my animations: https://www.youtube.com/channel/UCbV...Y9ExjzjaIprrAA
    my addon: https://blenderartists.org/forum/sho...-or-virtualdub



  18. #238
    Originally Posted by Mobiledeveloper View Post
    Has anybody ever seen this?

    https://youtu.be/ND96G9UZxxA?t=30s

    This is an awesome demo of how fast ray tracing can be on a mobile phone (if I understand correctly) in comparison to Cycles (PowerVR 6XT GR6500 mobile GPU ray tracing demos vs. an Nvidia GeForce GTX 980 Ti).
    Everything from Imagination is vaporware.



  19. #239
    Hello,
    I would like to ask when this will be integrated for GPU? Thanks.
    Blender + 3DS Max, GTX 1070 + RX 480



  20. #240
    Jar091:
    I just read how your implementation works. OMG, it's so simple!!!! And clever. I originally thought it was somehow comparing to neighbouring pixels, which would be plain nonsense to do with only the pixel color and not all the other maps like normals etc., so I didn't pay attention to what you did. This, however, is really great.
    The main advantage of this type of algorithm is that it doesn't depend on tiling, other maps, refractions etc.
    - so it's very solid in various situations.

    Hope this gets into master soon...

    One thing about the logarithmic mapping - I always save my animations in 16-bit depth to be able to do nice color corrections, which can bring out some noise. But I think people in production should understand this and bring the image close to the final result already in Cycles, so this shouldn't matter so much...

    I also saw the patch thread on the Blender projects tracker. I see there are issues with caustics, plus maybe too-early checks, which can stop a tile from rendering while there is still noise.
    I thought of possible solutions:
    - check the border between tiles: if there is a clear line, re-send the problematic tile to the renderer and mix the results
    - don't allow users to go so low; enable the first check only after the first n samples.
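    The first suggestion could be sketched like this (a toy illustration with made-up names, not a proposed implementation): compare the adjoining pixel columns of two neighbouring tiles, and if the jump is large, the tile probably stopped while still noisy and should be re-sent:

```python
import numpy as np

def seam_score(left_tile, right_tile):
    """Mean absolute difference between the adjoining pixel columns of two
    horizontally neighbouring tiles; a large value suggests a visible seam."""
    return float(np.abs(left_tile[:, -1] - right_tile[:, 0]).mean())

smooth = seam_score(np.full((4, 4), 0.5), np.full((4, 4), 0.5))
seamy = seam_score(np.full((4, 4), 0.5), np.full((4, 4), 0.7))
print(seamy > smooth)   # True: there is a visible line between the tiles
```

    The second suggestion is just a guard: even if the norm check passes, don't let a tile stop before some minimum sample count.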

    btw, greetings from Brno!
    Last edited by pildanovak; 04-Sep-17 at 02:30.



