  1. #1741
    Waifu2X experiments on final images for reference
    https://blenderartists.org/forum/sho...se-experiments



  2. #1742
    Originally Posted by cpurender:
    You don't need perfection. It's possible to start from any point and improve gradually.

    By diverse I meant mixing up as many kinds of shapes and shaders as possible. The main problem is render time if we want high quality. That's why my only concern is power consumption: I pay €0.30 per kWh.

    Final images alone don't help much; I have tested this case with Waifu2X and some others.
    You need all the passes (also for experimental purposes), and the NNs won't generate the final image; they just denoise some of the passes instead. Disney denoises the diffuse and specular illumination.

    From that I gather that it's a deep multi-layer network.
    Sure, that can work, but it's not what I'm after; I'm only after denoising (not even feature detection).
    I wouldn't need to know what a circle or an apple is in order to denoise it (or all previously released Disney movies).

    Therefore the final high-quality image is extremely important to me; my NN won't train without it, and coding it wouldn't make sense either. I'm training random tiles against that target. I know some shader areas are more prone to noise than others, but so be it. For example, the classroom scene at 30 samples is noisy overall; it's that kind of noise I'm trying to tackle, not the BMW scene, which is almost clean.

    To clear up a misconception: with methods like these the result is never 100% realistic; it's about acceptable improvements.
    Just like the current built-in denoiser, because the other option would be rendering for a week to get only 250 frames.

    There is a good chance, though, that working on individual shader passes will work as well, if the final-image approach works. It might even be easier, but that's not my current goal.
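
    To make that concrete: a minimal sketch of such a random-tile training loop, assuming PyTorch (the tiny network, the stand-in images and the hyperparameters are all placeholders, not the actual setup described above):

        import torch
        import torch.nn as nn

        # Toy convolutional denoiser; a placeholder architecture.
        model = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)

        # Stand-ins for one 720p frame pair: low-sample input, high-sample target.
        noisy_img = torch.rand(3, 720, 1280)
        clean_img = torch.rand(3, 720, 1280)

        def random_tiles(noisy, clean, tile=64, batch=16):
            # Crop matching random tiles from the noisy and clean frames.
            _, h, w = noisy.shape
            ys = torch.randint(0, h - tile, (batch,))
            xs = torch.randint(0, w - tile, (batch,))
            nb = torch.stack([noisy[:, y:y + tile, x:x + tile] for y, x in zip(ys, xs)])
            cb = torch.stack([clean[:, y:y + tile, x:x + tile] for y, x in zip(ys, xs)])
            return nb, cb

        for step in range(1000):
            nb, cb = random_tiles(noisy_img, clean_img)
            loss = nn.functional.l1_loss(model(nb), cb)  # train tiles against the target
            opt.zero_grad()
            loss.backward()
            opt.step()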



  3. #1743
    Originally Posted by Razorblade:
    I'm only after denoising (not even feature detection)
    Isn't that what we all want here?

    The final image contains too little information; that's why we need the passes.
    I am confident that the Disney approach alone would improve the classroom case too.
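
    For reference, the recombination that this diffuse/specular split implies looks roughly like this; a Python sketch where `denoise` stands in for whatever per-pass network is used, and the passes are random stand-ins:

        import numpy as np

        def denoise(img):
            # Placeholder for the per-pass neural network (identity here).
            return img

        def reconstruct(diffuse_illum, specular_illum, albedo):
            # Denoise the illumination separately from the texture detail,
            # then re-apply the albedo, which is essentially noise-free.
            return albedo * denoise(diffuse_illum) + denoise(specular_illum)

        h, w = 720, 1280  # stand-in passes for one 720p frame
        final = reconstruct(np.random.rand(h, w, 3),
                            np.random.rand(h, w, 3),
                            np.random.rand(h, w, 3))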



  4. #1744
    Yes, it's just that there are different ways / methods to describe and find noise.
    There is specific info in the other layers, but if we can see it in the end result, a neural net should be able to detect it too.
    Or it should at least be able to apply a noise-level-based blur to similar pixels, with the help of some precalculated stats.
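
    A crude Python sketch of that idea, using the local variance of the luminance as the "precalculated stats" (the filter sizes and strength are arbitrary placeholders):

        import numpy as np
        from scipy.ndimage import gaussian_filter, uniform_filter

        def variance_guided_blur(img, sigma=2.0, strength=4.0):
            # Estimate the local noise level from the local luminance variance.
            lum = img.mean(axis=2)
            mean = uniform_filter(lum, size=5)
            var = uniform_filter(lum ** 2, size=5) - mean ** 2
            noise = np.clip(var * strength, 0.0, 1.0)[..., None]
            # Blend toward a blurred copy where the noise estimate is high.
            blurred = np.stack([gaussian_filter(img[..., c], sigma)
                                for c in range(img.shape[2])], axis=-1)
            return img * (1.0 - noise) + blurred * noise

        out = variance_guided_blur(np.random.rand(720, 1280, 3))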



  5. #1745
    Originally Posted by Razorblade:
    Yes, it's just that there are different ways / methods to describe and find noise.
    There is specific info in the other layers, but if we can see it in the end result, a neural net should be able to detect it too.
    Or it should at least be able to apply a noise-level-based blur to similar pixels, with the help of some precalculated stats.
    It is all about your quality requirements.
    I am pretty sure a big enough neural network can detect even more kinds of noise than we can, just by looking at the final image. But do you have a quantum computer with petabytes of RAM and maybe 100 years of patience to wait for the training?
    We need the passes to cut down the complexity. More than that, we may need better (new kinds of) passes to surpass Disney's results; I have seen some interesting options in other software, such as Substance Painter.

    FYI, on training time: training AlphaGo Zero on a desktop PC with a single GTX 1080 Ti would take at least 50 years to reach the results DeepMind has claimed.



  6. #1746
    @cpurender: in his earlier trials Razorblade seemed convinced a small network could do it,
    not a behemoth network built on Google's hardware. AlphaGo Zero's neural network was trained using TensorFlow with 64 GPU workers and 19 CPU parameter servers, not a single GTX 1080 as you describe. I think Razorblade was one of the first here at BA to do something with neural networks. Apparently he makes a living out of AI / robotic vision, so maybe the guy deserves some credit.

    Anyway, you each have different ways to get there, and both of you want a few animation movies:
    CpuRender wants EXR data of all shader layers.
    Razorblade only wants low-res and high-res PNGs.

    Maybe it's a good idea to start with the classroom scene; Razorblade's earlier work used it too.

    So if someone with pretty good hardware is reading this thread, feel free to generate some data these guys can work on.



    Notice the classroom was also rendered in Eevee (maybe that's the future of rendering?)



  7. #1747
    Originally Posted by Geographic:
    Not a behemoth network built on Google's hardware. AlphaGo Zero's neural network was trained using TensorFlow with 64 GPU workers and 19 CPU parameter servers, not a single GTX 1080 as you describe.
    I was talking about the reinforcement learning training powered by Google's TPUs.

    Originally Posted by Geographic:
    I think Razorblade was one of the first here at BA to do something with neural networks. Apparently he makes a living out of AI / robotic vision, so maybe the guy deserves some credit.
    Did I discredit him in any way?

    Originally Posted by Geographic:
    in his earlier trials Razorblade seemed convinced a small network could do it...
    CpuRender wants EXR data of all shader layers.
    Razorblade only wants low-res and high-res PNGs.
    That makes it even more interesting to try out both and compare.

    Originally Posted by Geographic:
    Notice the classroom was also rendered in Eevee (maybe that's the future of rendering?)
    Things will change with the NNs.



  8. #1748
    Any comparison with AlphaGo (Zero) in the context of a denoiser makes absolutely no sense. It uses reinforcement learning, while all current denoiser approaches rely on supervised learning, which is understood a lot better and requires a lot less training.

    Originally Posted by Geographic:
    Notice the classroom was also rendered in Eevee (maybe that's the future of rendering?)
    The future of rendering is more likely to be actual path tracing with dedicated acceleration. I expect it will even be used for real-time rendering, in combination with e.g. SVGF. We are still a few years away from that working in real time, and in order to achieve high-end results, other, more computationally expensive techniques will most likely be needed.
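
    For context, the spatial core of SVGF is an edge-avoiding à-trous wavelet filter. Here is a stripped-down, grayscale sketch in NumPy, guided only by a depth pass (real SVGF adds normals, temporal accumulation and variance-driven weights):

        import numpy as np

        def atrous_filter(color, depth, steps=4, sigma_c=0.1, sigma_d=0.1):
            # color, depth: (H, W) float arrays; returns a filtered copy of color.
            out = color.copy()
            for i in range(steps):
                step = 1 << i  # dilate the 3x3 neighborhood each iteration
                acc = np.zeros_like(out)
                wsum = np.zeros_like(out)
                for dy in (-step, 0, step):
                    for dx in (-step, 0, step):
                        c_sh = np.roll(out, (dy, dx), axis=(0, 1))
                        d_sh = np.roll(depth, (dy, dx), axis=(0, 1))
                        # Weights fall off across color and depth edges, so the
                        # blur does not leak over geometry boundaries.
                        w = np.exp(-(out - c_sh) ** 2 / sigma_c
                                   - (depth - d_sh) ** 2 / sigma_d)
                        acc += w * c_sh
                        wsum += w
                out = acc / np.maximum(wsum, 1e-8)
            return out

        filtered = atrous_filter(np.random.rand(256, 256), np.random.rand(256, 256))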



  9. #1749
    Originally Posted by Dantus:
    Any comparison with AlphaGo (Zero) in the context of a denoiser makes absolutely no sense.
    I was referring to training time vs. quality.

    Originally Posted by Dantus:
    supervised learning
    You can achieve more quality with longer training. And reinforcement learning would also work in this case; it's just cheaper to do it supervised at the moment.



  10. #1750
    Originally Posted by cpurender:
    I was referring to training time vs. quality.
    There is nothing to compare there. If you train a simple supervised model for that amount of time, you are just wasting a tremendous amount of computation, because such a model usually plateaus much sooner and then stops improving. There is even a risk of overfitting. That's why the comparison does not make sense.

    Originally Posted by cpurender:
    You can achieve more quality with longer training. And reinforcement learning would also work in this case; it's just cheaper to do it supervised at the moment.
    After a certain amount of time you reach the maximum quality, and further training does not help anymore or may even produce worse results due to overfitting.
    Of course reinforcement learning could be used for denoising, but that's not what we are talking about! And as such, it is pointless to make comparisons with it.
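
    That plateau is exactly what early stopping guards against in practice; a generic Python sketch, where `train_epoch` and `val_loss` are hypothetical callbacks supplied by the training code:

        def train_with_early_stopping(train_epoch, val_loss, patience=10, max_epochs=1000):
            # Stop once validation loss hasn't improved for `patience` epochs;
            # past that point, more training mainly risks overfitting.
            best = float("inf")
            bad_epochs = 0
            for epoch in range(max_epochs):
                train_epoch()
                loss = val_loss()
                if loss < best:
                    best, bad_epochs = loss, 0
                else:
                    bad_epochs += 1
                    if bad_epochs >= patience:
                        break
            return best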



  11. #1751
    Originally Posted by Dantus:
    There is nothing to compare there. If you train a simple supervised model for that amount of time, you are just wasting a tremendous amount of computation, because such a model usually plateaus much sooner and then stops improving. There is even a risk of overfitting. That's why the comparison does not make sense.

    After a certain amount of time you reach the maximum quality, and further training does not help anymore or may even produce worse results due to overfitting.
    Of course reinforcement learning could be used for denoising, but that's not what we are talking about! And as such, it is pointless to make comparisons with it.
    I wasn't limiting the network's size or architecture. Anyway, let's wrap this up and wait for the first implementation.



  12. #1752
    burnin:
    NVIDIA recently released Iray 1.4 for Rhino with integrated real-time denoising.



    Looks like a great helper. Is this feature also being considered for Cycles?



  13. #1753
    Their denoiser seems to improve over time; that's quite nice.
    It also seems they blur depth of field pretty quickly, using distance as a factor for the blur.
    If that were combined with adaptive rendering... wow.



  14. #1754
    lsscpp:
    Wow, that in-preview denoising is sooo lovely!



  15. #1755
    Because of some problems at work I had an unproductive workday (a day of waiting),
    so I had time to play with my neural nets and could spend quite some time on this.
    My coding toolbox has expanded to the point that I can now use an animation rendered as I described earlier.
    So if someone could render a high-quality animation of the classroom scene, like the Eevee demo, in PNG format, I'd be grateful.
    My own hardware would take months to render something like that in Cycles.
    I could use the renderings to train a neural net.



  16. #1756
    burnin:
    Can you be more specific about:
    - what resolution, minimum sampling, clamping, and color mapping, and at least how many frames are needed?
    - would a couple of different scenes, or renders from different engines, help too?
    - did you get any already?



  17. #1757
    Originally Posted by burnin:
    NVIDIA recently released Iray 1.4 for Rhino with integrated real-time denoising.
    Looks like a great helper. Is this feature also being considered for Cycles?
    Indeed, very impressive.



  18. #1758
    @burnin:
    It would be great to have something like the classroom scene,
    at 1280x720 or so, with normal, simple render settings; it depends a bit on how you render (CPU/GPU), and as for GPU I wouldn't know.
    No clamping, and the current denoiser disabled.
    For CPU: 32x32 tiles, normal path tracing with 100 and 300 samples as the input material (or a comparable GPU setup).
    The training target should be a high sample rate, 1000 or more;
    that high setting depends on the hardware you have, though; for me, rendering like that would take months.
    The higher the better, I think.

    I think 70 frames will do as a minimum, though 250 would allow me to do more testing of time-based denoising.
    Preferably saved as PNG files rather than a compressed AVI file (compression adds noise).

    The scene could be the classroom; maybe later some more scenes, for validation, to test the neural net against different situations.
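
    For anyone offering to render this, roughly what those settings look like as a Blender Python script; a sketch against the 2.79 API, with the output paths and frame range as placeholders:

        import bpy

        scene = bpy.context.scene
        scene.render.resolution_x = 1280
        scene.render.resolution_y = 720
        scene.render.resolution_percentage = 100
        scene.render.image_settings.file_format = 'PNG'
        scene.render.tile_x = scene.render.tile_y = 32   # CPU-friendly tiles
        scene.cycles.sample_clamp_direct = 0.0           # no clamping
        scene.cycles.sample_clamp_indirect = 0.0
        for layer in scene.render.layers:
            layer.cycles.use_denoising = False           # current denoiser disabled
        scene.frame_start, scene.frame_end = 1, 250

        # 100 and 300 samples as noisy inputs, 1000 as the clean training target.
        for samples in (100, 300, 1000):
            scene.cycles.samples = samples
            scene.render.filepath = "//render_%d_samples/frame_" % samples
            bpy.ops.render.render(animation=True)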

    Well, so far I don't have any movie material to work on. The code is still evolving a bit slowly; I'm the kind of programmer who tends to think long about algorithms and then at some point starts coding. I have several types of neural nets, and I'm thinking of mixing them together with some classic image kernel operations.



  19. #1759
    Ace Dragon:
    If the denoiser is going to improve via the neural network / machine learning route, perhaps it would be best if it made use of the existing Cycles render pass data as a guide to help it produce the right result (or at least get very close to how the image would look with tons of samples).

    That could reduce the amount of training needed and ameliorate the concern about overfitting (because the result would also need to fit the information in the passes).
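
    In code, using the passes that way usually just means stacking them with the noisy color as extra input channels; a minimal sketch, again assuming PyTorch (the pass set and the tiny network are placeholders):

        import torch
        import torch.nn as nn

        def make_input(color, normal, albedo, depth):
            # color/normal/albedo: (3, H, W); depth: (1, H, W).
            # The auxiliary passes give the net geometry and texture hints it
            # would otherwise have to guess from the noisy pixels alone.
            return torch.cat([color, normal, albedo, depth], dim=0)  # (10, H, W)

        net = nn.Sequential(
            nn.Conv2d(10, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

        h, w = 270, 480  # stand-in tile size
        x = make_input(torch.rand(3, h, w), torch.rand(3, h, w),
                       torch.rand(3, h, w), torch.rand(1, h, w))
        denoised = net(x.unsqueeze(0))  # add a batch dimension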



  20. #1760
    burnin:
    @Razorblade

    OK, I have made some small changes to the classroom scene (mostly to shaders, to reduce fireflies).
    Note: there's no difference in image quality at the same number of samples on CPU or GPU.

    Q: Static or animated noise seed?

    --------------------------------------------------------------------------------------------------

    @ 1000 samples, unbiased (no clamping) - variations:

    1a) lights ON, blinds ON
    1b) lights ON, blinds OFF
    2a) lights OFF, blinds ON
    2b) lights OFF, blinds OFF



    An animation previz (145 frames @ 25 fps): https://i.imgur.com/6Q4ldL2.mp4

    --------------------------------------------------------------------------------------------------

    If there are no other comments or notes except for the noise seed question, pick a variation and I'll deliver ASAP.


