  1. #1701
    Member Ace Dragon's Avatar
    Join Date
    Feb 2006
    Location
    Wichita Kansas (USA)
    Posts
    28,344
    Originally Posted by Razorblade View Post
    Then those details are, I think, too close to real noise (i.e. at some point it becomes a matrix of x% object, y% transparent). Even with advanced GIMP denoise plugins this would become problematic.

    I think you made a great-looking file there, but in this case denoising it will always give you some artifacts in the details.
    Therefore denoising this one isn't practical. For lots of other scenes it can work, but not this one; it's much like the problems with denoising hair, but this one might be even more complex than hair.
    Those types of details should be less of a problem for the Cycles denoiser than for the GIMP plugins, as they should show up as details in the various maps that are generated for its use.

    You can improve their detection by lowering the feature weight (not the neighbor weight); you might want to use a low value.
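    For reference, the same thing from Blender's Python API might look like this; a minimal sketch assuming the 2.79-era property names, where "Feature Strength" is the feature weight and "Strength" the neighbor weight:

    Code:
    import bpy

    # Assumed 2.79-era names on the active render layer; verify in your build.
    rl = bpy.context.scene.render.layers.active
    rl.cycles.use_denoising = True
    rl.cycles.denoising_feature_strength = 0.1  # low value so fine details are
                                                # less likely to be smoothed away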
    Sweet Dragon dreams, lovely Dragon kisses, gorgeous Dragon hugs. How sweet would life be to romp with Dragons, teasing you with their fire and you being in their games, perhaps they can even turn you into one as well.
    Adventures in Cycles; My official sketchbook



  2. #1702
    Member
    Join Date
    Oct 2012
    Location
    UK, England
    Posts
    542
    New experimental build. Lukas has done some great work on networking and Cycles (it still has issues, which is understandable after creating such a great patch that even works with AWS).

    Not tested yet, but it should work in theory. Details of how to use network rendering are on this page: https://developer.blender.org/D2808

    I think when I did this build it had Milan's adaptive sampling, micro-jitter scrambling, and some other things I can't remember. Ignore the adaptive contrast and ID settings, as those belong to another build not included with this one; I just copied the .py files over so I don't have to keep editing them. (Weird: I can't edit .py files in VS at all, it just corrupts them with wrong spacing every time, so I edit .py files in Blender's text editor instead. That gets annoying, so I try to avoid it.)

    Build from master today with the latest .diffs from above: https://mega.nz/#!Us5Uja4C!qQcBY5eID...dMXhKBGxTXkmck

    People with local network machines, try this out. If this works even OK, it's a big step forward.



  3. #1703
    Member theoldghost's Avatar
    Join Date
    Jun 2008
    Location
    U.S.A. The Southeast section of Virginia
    Posts
    1,690
    Lukas, you and your denoiser are the greatest thing since sliced bread for animating on a home computer. And I'm speaking of an interior lit by area lamps, with glass and gloss. Anyone who has ever attempted that knows noise was simply unavoidable. Not everyone is prepared to do the Pixar two-days-per-frame routine. I always drew the line at ten minutes per frame for some shots and still had to accept a certain amount of noise. That was with an Intel i7 6th gen, recently.

    After running a 250-frame test using the default motion blur and 300 passes, I did not have any flickering. I will however use 400 passes for the final renders, given the denoiser as it is now. Oh, and I might add I was using the default settings on the denoising, noting, as Ace has mentioned, that it had a hard time picking up small details. But that aside, this IMHO is a game changer for hobbyists who animate, and no doubt for small studios doing archviz walk-throughs. I hope you keep working to improve this guy. 2.79 rules because of this feature.




  4. #1704
    Originally Posted by 3DLuver View Post
    New experimental build. Lukas has done some great work on networking and Cycles (it still has issues, which is understandable after creating such a great patch that even works with AWS).

    Not tested yet, but it should work in theory. Details of how to use network rendering are on this page: https://developer.blender.org/D2808

    I think when I did this build it had Milan's adaptive sampling, micro-jitter scrambling, and some other things I can't remember. Ignore the adaptive contrast and ID settings, as those belong to another build not included with this one; I just copied the .py files over so I don't have to keep editing them. (Weird: I can't edit .py files in VS at all, it just corrupts them with wrong spacing every time, so I edit .py files in Blender's text editor instead. That gets annoying, so I try to avoid it.)

    Build from master today with the latest .diffs from above: https://mega.nz/#!Us5Uja4C!qQcBY5eID...dMXhKBGxTXkmck

    People with local network machines, try this out. If this works even OK, it's a big step forward.
    Many thanks 3DLuver,

    Great build.
    It's around 8% faster than my old one.



  5. #1705
    Is there any news on denoiser support for texture baking? That would be really awesome and useful!



  6. #1706
    Member Yura Zenkovsky's Avatar
    Join Date
    Jun 2015
    Location
    Belarus
    Posts
    11
    Originally Posted by 3DLuver View Post
    Build from master today with the latest .diffs from above: https://mega.nz/#!Us5Uja4C!qQcBY5eID...dMXhKBGxTXkmck
    The file you are trying to download is no longer available.
    Hi man, thanks for your work! Could you please check the link?



  7. #1707
    Member Odilkhan Yakubov's Avatar
    Join Date
    Oct 2012
    Location
    Tashkent, Uzbekistan
    Posts
    352
    +1, yeah, I agree with you! Would be awesome!



  8. #1708
    Is there any plan to support denoising on multilayer EXRs, like RenderMan does?
    https://chameleonscales.wordpress.com
    As a wise man once said : "The Kangaroo always repaints his screwdriver when the chameleons play the piano." Live by these words, only then will you find true happiness.



  9. #1709
    I'm not entirely sure about this, but I made an observation on a rendered animation with denoising.
    I didn't animate the seed setting, and with 100 samples the result was reasonable for non-glossy materials.
    For the glossy materials I noticed it looked as if my camera's CCD chip was not calibrated:
    a certain pattern stayed static over the movie.
    That made me wonder: could such a light/dark pattern simply be subtracted?
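    (If the fixed seed is the cause, one common workaround is to drive the seed from the frame number so the residual pattern changes every frame. A minimal sketch using Blender's Python API, equivalent to typing "#frame" into the Seed field:)

    Code:
    import bpy

    # Add a driver on Cycles' sampling seed so it changes on every frame.
    fcu = bpy.context.scene.driver_add("cycles.seed")
    fcu.driver.expression = "frame"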



  10. #1710
    Originally Posted by ChameleonScales View Post
    Is there any plan to support denoising on multilayer EXRs, like RenderMan does?
    Yes. This would be a huge feature for Cycles, because it can reduce render time by a large margin.
    On the other hand, it requires a lot of precomputation on the devs' side if they use the latest techniques, which only benefit GPUs and other accelerators, not CPUs.



  11. #1711
    Originally Posted by cpurender View Post
    Yes. This would be a huge feature for Cycles, because it can reduce render time by a large margin.
    On the other hand, it requires a lot of precomputation on the devs' side if they use the latest techniques, which only benefit GPUs and other accelerators, not CPUs.
    The only reason it does that is that the denoiser is not integrated into RenderMan itself, so it can't run while the image is being rendered.

    If you enable experimental mode in Cycles, you can have the render save the data passes the denoiser is using, and you'll see that it's more or less the same kind of info that RenderMan uses.
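    For reference, enabling that from Python in a 2.79-era build would look roughly like this; a sketch only, since the exact name of the "Keep denoising data" property may differ between builds:

    Code:
    import bpy

    scene = bpy.context.scene
    scene.cycles.feature_set = 'EXPERIMENTAL'  # experimental feature set
    rl = scene.render.layers.active
    rl.cycles.use_denoising = True
    rl.cycles.denoising_store_passes = True    # assumed name; shown as
                                               # "Keep denoising data" in the UI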



  12. #1712
    Hello,
    I would like to use Blender's denoising tool on an already-rendered image, given that it was rendered with Blender. (The reason being that I'd like to use the resumable render via Blender's command line, and once I think my image is precise enough, use the denoiser on it.)
    The first versions of the denoiser had such an ability, with the option "Keep denoising data" (https://wiki.blender.org/index.php/U..._Documentation).
    It seems Lukas Stockner simplified it afterwards to make the integration into the main branch easier. Unfortunately, I can't find any build of that version.
    Would anybody have one, preferably for Windows? (I think it would take me a long time to build it myself.)
    (I also asked this on Stack Exchange: https://blender.stackexchange.com/qu...rendered-image)



  13. #1713
    Originally Posted by jdent02 View Post
    The only reason it does that is that the denoiser is not integrated into RenderMan itself, so it can't run while the image is being rendered.

    If you enable experimental mode in Cycles, you can have the render save the data passes the denoiser is using, and you'll see that it's more or less the same kind of info that RenderMan uses.
    I meant this one: http://cvc.ucsb.edu/graphics/Papers/...PCN_LowRes.pdf

    I bet it's easier to implement than the current Cycles denoiser and gives much better results. It just requires a lot of precomputation from the developers.



  14. #1714
    Member
    Join Date
    Sep 2012
    Posts
    2,772
    IIRC, lukasstockner97 explained (somewhere in this thread) that this is the first stage: the first iteration available in an official Blender release; otherwise there would be none.

    @vida_vida
    experimental-build-blender-2.78-90ba99d-win64-vc14.zip (107.97MB @ sendspace)



  15. #1715
    Originally Posted by cpurender View Post
    I meant this one: http://cvc.ucsb.edu/graphics/Papers/...PCN_LowRes.pdf

    I bet it's easier to implement than the current Cycles denoiser and gives much better results. It just requires a lot of precomputation from the developers.
    Implementing a neural network denoiser is indeed orders of magnitude easier. If you use a framework like Tensorflow, the implementation is a piece of cake for someone with a little bit of experience. It would require that Tensorflow be shipped with Cycles and Blender in order to work properly, though.
    The question now is: why are they not using a neural network? The answer is that you need quite a lot of data for training, validation and testing. For the training, you need to find good hyperparameters, meaning you need to train several times with slightly different parameters. This takes a huge amount of time and computing power, as well as a lot of patience. If something goes wrong in the final neural network, you need to take a completely different approach to find the cause of the issue. Even for experienced coders, it takes quite some time to get used to this new kind of process.

    Neural networks are a cool thing, but getting used to them is not trivial, and creating a production-ready solution is a huge project.
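    For a sense of scale, a toy direct-prediction denoiser in Keras can be this short; a sketch only, it is not the kernel-predicting (KPCN) architecture from the paper, and the pass layout (noisy RGB plus a few feature channels) is made up for illustration:

    Code:
    import tensorflow as tf

    def build_denoiser(rgb=3, feats=7):
        # Input: noisy RGB plus auxiliary feature passes (normals, albedo, depth).
        inp = tf.keras.Input(shape=(None, None, rgb + feats))
        x = inp
        for _ in range(5):
            x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
        out = tf.keras.layers.Conv2D(rgb, 3, padding="same")(x)  # predict clean RGB
        return tf.keras.Model(inp, out)

    model = build_denoiser()
    model.compile(optimizer="adam", loss="mae")
    # model.fit(noisy_tiles, clean_tiles, epochs=..., validation_data=...)

    The hard part is everything around this: the datasets, the hyperparameter search, and validating the result.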



  16. #1716
    Member Marc Driftmeyer's Avatar
    Join Date
    Sep 2014
    Location
    Pacific Northwest WA
    Posts
    45
    Originally Posted by Dantus View Post
    Implementing a neural network denoiser is indeed orders of magnitude easier. If you use a framework like Tensorflow, the implementation is a piece of cake for someone with a little bit of experience. It would require that Tensorflow be shipped with Cycles and Blender in order to work properly, though.
    The question now is: why are they not using a neural network? The answer is that you need quite a lot of data for training, validation and testing. For the training, you need to find good hyperparameters, meaning you need to train several times with slightly different parameters. This takes a huge amount of time and computing power, as well as a lot of patience. If something goes wrong in the final neural network, you need to take a completely different approach to find the cause of the issue. Even for experienced coders, it takes quite some time to get used to this new kind of process.

    Neural networks are a cool thing, but getting used to them is not trivial, and creating a production-ready solution is a huge project.
    Who cares if it's shipped with Blender? In a world where tens of terabytes, not hundreds of gigabytes, will soon be the de facto norm, no one should blink at adding those libraries to the package.

    Oh, and redesigning Cycles and more of the subsystems with AI in mind will eventually need to take place. Bite the bullet now, or drown in bullets later.
    Last edited by Marc Driftmeyer; 20-Oct-17 at 17:06.



  17. #1717
    Originally Posted by Dantus View Post
    Neural networks are a cool thing, but getting used to them is not trivial, and creating a production-ready solution is a huge project.
    If you are new to this and want to create your own solution, it will surely take some time, but if you just copy the setup from the paper, it could have been done yesterday. Data preparation and training will add to the developers' electric bills.

    I am also new to this field and have just finished some initial tutorials. I can guarantee that 2018 will be the year of neural networks; it would be cool if Blender caught up early. No matter if it's denoising, rendering, compositing or even modeling, you can boost* everything with them.
    Right now my Xeons are faster than any single GPU I can buy, but by 2019 we will have new accelerators* and rendering on CPUs is going to be history.

    * By boosting the modeling process I mean features like a more intelligent Boolean modifier, general workflow, and so on.
    * Those accelerators have already made their first appearance in some of the latest mobile phones (Google Pixel 2/XL, Huawei Mate 10 Pro) and will come to desktops very soon.
    Last edited by cpurender; 20-Oct-17 at 17:39.



  18. #1718
    Originally Posted by Marc Driftmeyer View Post
    Who cares if it's shipped with Blender? In a world where tens of terabytes, not hundreds of gigabytes, will soon be the de facto norm, no one should blink at adding those libraries to the package.

    Oh, and redesigning Cycles and more of the subsystems with AI in mind will eventually need to take place. Bite the bullet now, or drown in bullets later.
    The reason I mentioned it is not the size, but the necessity to maintain it and to find a way to embed it properly. Also, the developers would need to find a way to share all the data, so that the training can be easily replicated and properly understood.


    Originally Posted by cpurender View Post
    If you are new to this and want to create your own solution, it will surely take some time, but if you just copy the setup from the paper, it could have been done yesterday. Data preparation and training will add to the developers' electric bills.

    I am also new to this field and have just finished some initial tutorials. I can guarantee that 2018 will be the year of neural networks; it would be cool if Blender caught up early. No matter if it's denoising, rendering, compositing or even modeling, you can boost* everything with them.
    Right now my Xeons are faster than any single GPU I can buy, but by 2019 we will have new accelerators* and rendering on CPUs is going to be history.

    * By boosting the modeling process I mean features like a more intelligent Boolean modifier, general workflow, and so on.
    * Those accelerators have already made their first appearance in some of the latest mobile phones (Google Pixel 2/XL, Huawei Mate 10 Pro) and will come to desktops very soon.
    I don't believe the electric bill is the issue. The primary issue is time, because you need a lot of it to prepare the data, to train and test the models, to tweak the hyperparameters, to adjust your architecture, ... It is a lot of work!
    Among the examples you mentioned, the denoiser is clearly the simplest one, by far. But even for that, there are numerous experiments that have to be made. E.g. does it handle transparencies, volumetric shapes, caustics, ... nicely? Can results be improved if it is trained with different inputs? Does it still scale like that?
    Does it work in animations, or does it produce flickering? How does the architecture need to be changed to handle that?

    I don't see a reason why 2018 should be the year of neural networks. They are already used in a huge number of applications we use on a daily basis, so they clearly already had their breakthrough. This does not mean they just work out of the box. It is still a tremendous amount of work to create production-ready neural networks.



  19. #1719
    @Dantus, I actually did some neural denoising on Cycles data, and there are some wrong assumptions about it.

    - First, training a neural network is about repetition, e.g. having a tile of a bad render and a tile of a good render.
    - Thinking in RGB is wrong; most of the noise is in luma, so it's better to use HSL to get at the general noise
    (because the origin of noise in Blender is unequal bounces toward the camera, and 'random' strong light-source refractions mostly affect luma). See the sketch below these lists.
    - Try to make it easy for the neural net; don't take the raw approach of Google's AlphaGo Zero (it's a milestone, sure, but not required).
    - Even a few images offer a lot of neural-network training data.
    - With around 1000 epochs (training passes over a dataset), a reasonable network gets around 98% correct scores.
    - Good networks that are up to their task (i.e. good designs) require far less computation time
    (some of my other networks usually reach 99.6% scores in under 5 minutes, on a CPU; I don't have CUDA hardware).

    - Smaller networks could learn where to blur without knowing the 3D environment (VSE);
    they are also easier and faster to train and to build.
    - A huge network could take more input data (for example face-normal data / glossiness data).
    - Training indeed takes time, but you can automate that with code as well (e.g. genetic-style code to find the best parameters).
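    For illustration, extracting luma and cutting aligned tile pairs from a noisy render and a high-sample reference could look like this; a sketch where images are float numpy arrays and the helper names are made up:

    Code:
    import numpy as np

    def rgb_to_luma(img):
        # Rec. 709 luma weights; most of the render noise lives in this channel.
        return img @ np.array([0.2126, 0.7152, 0.0722])

    def tile_pairs(noisy, clean, size=32, stride=32):
        # Yield aligned (bad render, good render) tile pairs for training.
        h, w = noisy.shape[:2]
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                yield noisy[y:y + size, x:x + size], clean[y:y + size, x:x + size]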

    A common idea is that neural networks have to be huge and need lots of layers to do anything, but that is not the problem here.
    E.g. if one wants to know there is a "dog" in the picture, one indeed needs a multi-layer network for that,
    but the working of those layers is very different from what would be required here.
    The task of blending/sharpening is easier. Besides the raw data, a normally blurred data tile can also be given up front: simply train the network to find the best pixel balance based on all the available data plus some precalculated data (blend/soften/blur/other tile data). Feeding it such precalculated data means that math does not have to be re-derived inside a deep multi-layer model. A sketch of that input layout follows below.
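    A sketch of that input layout for a single-channel (luma) tile, with cheap precalculated variants stacked as extra inputs so the net only learns the per-pixel balance:

    Code:
    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def feature_stack(tile):
        # tile: 2D luma tile. Stack the raw tile with precalculated blur/soften
        # variants; the network then only picks the best per-pixel balance
        # instead of re-deriving the filtering math in deep layers.
        return np.stack([
            tile,
            gaussian_filter(tile, sigma=1.0),
            gaussian_filter(tile, sigma=2.0),
            median_filter(tile, size=3),
        ], axis=-1)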

    Until a few months ago I was under the assumption that a 2D network was not really required,
    and theoretically it should still be possible without one, but after my experiences I think 2D networks will learn this trick faster.
    Since it is not really about feature detection but noise elimination, such a network wouldn't require lots of hidden layers.
    Training still requires a little bit of horsepower, but after a day or a week one should get good results for most pictures
    (not much weirder than me rendering something for 4 days).

    In my free time (of which I don't have a lot) I still code neural networks; I admit that coding for Blender got a bit forgotten (sorry for that).
    Currently I'm working on a promising neural net that could be useful in trading and prediction, for all kinds of time-based data.
    But it might just as well be workable for denoising film (animation) data, as this NN essentially works on patterns over time.
    This new network is not 2D (tile-based) but linear; still, there is nothing against training it with a low-res film and a high-res film to validate against (e.g. take the raw low-res pixels of frames 1,2,3,4,5,6,7... and use them to train against the same pixel in frame 4 of a high-res version of the same movie). (For me, rendering even a low-res movie already takes a lot of time; I've never rendered high-res.) A sketch of how such temporal training pairs could be built follows below.
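    A sketch of building such temporal training pairs, assuming both renders use the same resolution (low vs. high sample counts) and aligned frames:

    Code:
    import numpy as np

    def temporal_pairs(low_frames, high_frames, window=7):
        # low_frames: per-frame noisy renders; high_frames: aligned references.
        # Learn to predict the center frame from its temporal neighborhood.
        half = window // 2
        for t in range(half, len(low_frames) - half):
            stack = np.stack(low_frames[t - half:t + half + 1])  # (window, H, W, C)
            yield stack, high_frames[t]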

    Well, first I'll rewrite the core of this new (already working) NN; maybe (it could be a few months) I'll put out a call here for a high-quality and a low-quality animation. But I wouldn't be surprised if a next update to the denoiser code turned out a lot better anyway, as it currently ignores past finished frames, which also contain data; and I think we will likely see that sooner than me finishing my NN.
    Though I like coding NNs, real life demands most of my time, and at work I cannot always combine it either; eventually I will make such a network (pretty sure), just for the fun of coding and for a deeper understanding of smaller-sized neural nets.
    Last edited by Razorblade; 20-Oct-17 at 18:36.



  20. #1720
    Oh, and a fun fact: neural networks are old; the first artificial-neuron model was published by McCulloch and Pitts in 1943, during WW2.
    It took about 70 years for them to explode into what we see today (playing the game of Go), which is quite fascinating.
    Never heard of it? Follow this guy: https://www.youtube.com/watch?v=h3l4qz76JhQ


