  1. #1721
    Member
    Join Date
    Sep 2012
    Posts
    2,684
    Yup, mostly it's checking, debugging, growing slowly... but that's R&D, evolution...
    Use the momentum to overcome inertia. This should be pitched as one of the EU research projects - AI computing for health (aiding medical simulations, RT graphics, modeling, designing prosthetics - 'simulating bone structures' and printing them in a similar manner) - since health has the highest social and common value... open source for a global win.



  2. #1722
    Originally Posted by Razorblade View Post
    @Dantus, actually I did some neural denoising on Cycles data, and there are some wrong assumptions about it.
    Just to be sure: Did I write wrong assumptions?



  3. #1723
    Originally Posted by Dantus View Post
    I don't believe that the electric bill is the issue.
    It is. Take AlphaGo as an example: their electricity/hardware bill is huge, and it would be gigantic without Google's TPU accelerators.
    For language processing I refer to DeepL, currently the best machine translator. The company is based in Germany, but they had to run their networks in Iceland because of the power consumption.

    The primary issue is time, because you need a lot of that to prepare the data, to train and test the models, to tweak the hyperparameters, adjust your architecture, ... . It is a lot of work!
    I think the paper from the Disney(?) research team should give us enough details to skip some experiments.

    Among the examples you mentioned, the denoiser is clearly the simplest one, by far. But even for that, there are numerous experiments that have to be made. E.g. does it handle transparencies, volumetric shapes, caustics, ... nicely?
    Can results be improved, if it is trained with different inputs? Does it still scale like that?
    Does it work in animations or is it producing flickering? How does the architecture need to be changed to handle that?
    Have a look at the paper.
    NNs can generalize very well; generating training data is only a matter of power consumption, the rest is training time.
    Depending on your quality requirements, training big networks can take years and consume millions of USD.
    For my own needs (as of 2017), I guess training on 4 GTX 1080 Ti cards for about a week or two should be enough for the denoiser.

    I don't see a reason why 2018 should be the era of neural networks.
    Because of the upcoming accelerators; GPUs aren't fast enough. Even future GPUs and CPUs will have NN accelerators.



  4. #1724
    Member Ace Dragon's Avatar
    Join Date
    Feb 2006
    Location
    Wichita Kansas (USA)
    Posts
    28,252
    On the subject of training a denoising network, Disney tends to have an advantage over the BF in that area because they have a huge amount of content they can run through the system.

    In the paper, their network is based on the output of nearly 1000 shots (and that is without training for volumetric data). I doubt the BF has 1000 totally unique renders they can pull out to ensure the network even works with corner cases (users will have to step in, the challenge being that they will need to provide hundreds of scenes). It would be doable, but not without a massive effort to get a trained network committed to master.
    Sweet Dragon dreams, lovely Dragon kisses, gorgeous Dragon hugs. How sweet would life be to romp with Dragons, teasing you with their fire and you being in their games, perhaps they can even turn you into one as well.
    Adventures in Cycles; My official sketchbook



  5. #1725
    Originally Posted by Ace Dragon View Post
    I doubt the BF has 1000 totally unique renders
    Depends on their budget but the community can surely help.



  6. #1726
    @cpurender, the authors even mention that they only used it for single frames. That means, you can expect quite some flickering when you apply it to animations. That's just one of the limitations.
    Just because it is a neural network does not mean it just works!



  7. #1727
    Originally Posted by Dantus View Post
    @cpurender, the authors even mention that they only used it for single frames. That means, you can expect quite some flickering when you apply it to animations. That's just one of the limitations.
    Just because it is a neural network does not mean it just works!
    That's similar to neural style transfer for video. If flickering is a problem, you need to connect the information between frames to keep the noise/render consistent. This can be achieved by feeding the previous frame as an additional input.
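    A minimal sketch of that idea in PyTorch (the layer sizes and names below are just placeholders, not the architecture from any paper): the network simply gets the previous frame as extra input channels next to the current noisy frame.

        # Minimal sketch: a tiny convolutional denoiser that takes the current noisy
        # frame plus the previous (already denoised) frame as extra input channels.
        # Layer sizes and names are illustrative only.
        import torch
        import torch.nn as nn

        class TemporalDenoiser(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(6, 32, kernel_size=3, padding=1),   # 3 channels current + 3 channels previous
                    nn.ReLU(),
                    nn.Conv2d(32, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.Conv2d(32, 3, kernel_size=3, padding=1),   # denoised RGB
                )

            def forward(self, current_noisy, previous_denoised):
                x = torch.cat([current_noisy, previous_denoised], dim=1)   # stack along channel axis
                return self.net(x)

        # Usage: feed each frame together with the result of the previous one.
        model = TemporalDenoiser()
        prev = torch.zeros(1, 3, 128, 128)     # black frame before the first frame
        frame = torch.rand(1, 3, 128, 128)     # stand-in for a noisy render
        denoised = model(frame, prev)

    Whether that alone removes the flickering depends on the training data, of course.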



  8. #1728
    Originally Posted by cpurender View Post
    That's similar to neural style transfer for video. If flickering is a problem, you need to connect the information between frames to keep the noise/render consistent. This can be achieved by feeding the previous frame as an additional input.
    The main point is that if you want to have a production ready solution, it is still a tremendous amount of work! It is still not trivial!



  9. #1729
    Originally Posted by Dantus View Post
    The main point is that if you want to have a production ready solution, it is still a tremendous amount of work! It is still not trivial!
    I can't estimate the effort at this moment.
    But since we need training data anyway, how about starting to collect high-quality renders soon?
    If nobody else can, I will try to find spare time for this over the Christmas holidays,
    starting with creating a new website for people to submit final renders with all passes.



  10. #1730
    @Dantus, no, not really wrong, but for this kind of noise, training would not need thousands of completed renders.
    Just some tiles: in my previous attempts I tiled an image and took roughly 100 random tiles to train the network from a single image.
    Then, based upon those 100 tiles, I processed all the remaining tiles. Sure, a few more images wouldn't hurt for training, but it's not as if thousands of images would be required.
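    Roughly, that data preparation could look like this (a NumPy sketch; the function and variable names are made up for illustration):

        # Rough sketch of the tiling idea: sample ~100 random tile pairs from a
        # single noisy/clean render pair and use them as training data.
        import numpy as np

        def random_tile_pairs(noisy, clean, tile=32, count=100, seed=0):
            """noisy and clean are HxWx3 float arrays of the same render."""
            rng = np.random.default_rng(seed)
            h, w = noisy.shape[:2]
            pairs = []
            for _ in range(count):
                y = rng.integers(0, h - tile)
                x = rng.integers(0, w - tile)
                pairs.append((noisy[y:y + tile, x:x + tile].copy(),
                              clean[y:y + tile, x:x + tile].copy()))
            return pairs

        # The remaining tiles of the image (or of other images) are then run
        # through whatever network was trained on these pairs.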

    Thousands of images are required for a very deep neural network to find dogs/people and translate that into text, or to describe a photo.
    That's because of the way those networks are built: layer by layer, each trained for a specific feature (e.g. circle, line, arc, color, etc.).
    Each layer initially gives a matching percentage against its original goal, but over hundreds of images (because of neural feedback) some of those layers adjust their goal (e.g. find an ellipse instead of a circle). Since all the deep layers are connected, lots of calculations are required between them, and multiplied by the number of required images that makes such networks heavy on normal hardware (doable, but heavy).

    Denoising isn't such a problem; it's more about optimizing a blur operation.
    A 2D NN or a temporal net for video is a form of convolution operation where x nodes go in and fewer, optimized pixels come out.
    A DNN is naturally very good at... how to explain... areas of sameness (it can find boundaries, like weather pressure maps), and that could go hand in hand with a form of blurring that averages the surrounding pixels which match most closely and ignores the others. (I feel kind of tempted to also try this once without a neural net... maybe later; it's simpler, but less interesting to me.)

    The temporal part (time frames) would rather be trained by averaging over frames, with respect to moving objects: think of the pixel data of the frames stacked upon each other as voxels in a 3D matrix, and find the n pixels that match most closely to average a certain pixel on a certain frame. It would be up to the neural network to pick the proper ones. Well, averaging might be oversimplifying how a neural network could do this.
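    Ignoring object motion, that stacking step could look roughly like this (a NumPy sketch without any NN, just to show the "pick the n closest frames per pixel and average" idea):

        # NumPy sketch of the "stack frames and average the closest matches" idea,
        # ignoring object motion. frames is a TxHxWx3 array of consecutive noisy frames.
        import numpy as np

        def temporal_average(frames, center, n=3):
            ref = frames[center]                              # frame to denoise
            diff = np.abs(frames - ref).sum(axis=-1)          # per-pixel distance to it, TxHxW
            order = np.argsort(diff, axis=0)[:n]              # the n closest frames per pixel
            picked = np.take_along_axis(frames, order[..., None], axis=0)
            return picked.mean(axis=0)                        # average the selected pixels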

    @cpurender, having access to animations in high quality and low quality (with and without an automated seed in rendering) would surely help with what I have in mind; maybe others could use such data as well to train their NN ideas.



  11. #1731
    cpurender, would you be able to store those movies as image series?
    That way I (and others) won't get MPEG or H.264 distortions; preferably just PNG files.
    It would be fun if more people tried it, because then we get some competition.
    Perhaps even the NN communities will join in, taking a standard noise problem as a new NN goal.
    As far as I'm aware there are no such test data sets for neural networks yet.
    Last edited by Razorblade; 21-Oct-17 at 18:03.



  12. #1732
    Member Ace Dragon's Avatar
    Join Date
    Feb 2006
    Location
    Wichita Kansas (USA)
    Posts
    28,252
    Originally Posted by Razorblade View Post
    @Dantus, no, not really wrong, but for this kind of noise, training would not need thousands of completed renders.
    Just some tiles: in my previous attempts I tiled an image and took roughly 100 random tiles to train the network from a single image.
    Then, based upon those 100 tiles, I processed all the remaining tiles. Sure, a few more images wouldn't hurt for training, but it's not as if thousands of images would be required.
    One common trait of neural networks right now is that the network itself tends to be a very naive system.

    For instance, the network knows how to successfully denoise part of an image without losing detail. Now let's make a few small changes; chances are those changes can throw the algorithm off (which results in poor-quality output). It's not a true AI in the sense that it can't make its own assumptions for areas it can't recreate from pieces of the examples. Simply put, all the algorithm does is create an amalgamation of content it has already seen.
    Sweet Dragon dreams, lovely Dragon kisses, gorgeous Dragon hugs. How sweet would life be to romp with Dragons, teasing you with their fire and you being in their games, perhaps they can even turn you into one as well.
    Adventures in Cycles; My official sketchbook



  13. #1733
    Member Ace Dragon's Avatar
    Join Date
    Feb 2006
    Location
    Wichita Kansas (USA)
    Posts
    28,252
    Just wondering, does anyone know what Lukas has been up to lately? I ask because there's been no activity from him anywhere on the developer site or this forum for nearly a month now (the guys at Theory Animation might have an idea, since they hired him to work on features they need).
    Last edited by Ace Dragon; 21-Oct-17 at 20:42.
    Sweet Dragon dreams, lovely Dragon kisses, gorgeous Dragon hugs. How sweet would life be to romp with Dragons, teasing you with their fire and you being in their games, perhaps they can even turn you into one as well.
    Adventures in Cycles; My official sketchbook



  14. #1734
    Originally Posted by Razorblade View Post
    cpurender, would you be able to store those movies as image series?
    That way I (and others) won't get MPEG or H.264 distortions; preferably just PNG files.
    It would be fun if more people tried it, because then we get some competition.
    Perhaps even the NN communities will join in, taking a standard noise problem as a new NN goal.
    As far as I'm aware there are no such test data sets for neural networks yet.
    We need all passes. I only know about the OpenEXR Multilayer file format, which can contain all the passes.
    My initial idea is to collect about 10k multilayer EXR files, ~50 MB each, ~500 GB in total.
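    For what it is worth, reading the passes back out of such a file is straightforward; a small sketch using the OpenEXR/Imath Python bindings (the channel names depend on how the file was saved, so "RenderLayer.Combined.R" below is just a guess):

        # Sketch: list and read the passes stored in a multilayer EXR.
        # Channel names ("RenderLayer.Combined.R", ...) depend on the scene setup.
        import OpenEXR
        import Imath
        import numpy as np

        exr = OpenEXR.InputFile("render_0001.exr")
        header = exr.header()
        print(sorted(header["channels"].keys()))      # shows every stored pass/channel

        dw = header["dataWindow"]
        width = dw.max.x - dw.min.x + 1
        height = dw.max.y - dw.min.y + 1
        float_type = Imath.PixelType(Imath.PixelType.FLOAT)

        def read_channel(name):
            raw = exr.channel(name, float_type)
            return np.frombuffer(raw, dtype=np.float32).reshape(height, width)

        combined_r = read_channel("RenderLayer.Combined.R")   # hypothetical channel name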



  15. #1735
    Originally Posted by Ace Dragon View Post
    For instance, the network knows how to successfully denoise part of an image without losing detail. Now let's make a few small changes; chances are those changes can throw the algorithm off (which results in poor-quality output). It's not a true AI in the sense that it can't make its own assumptions for areas it can't recreate from pieces of the examples. Simply put, all the algorithm does is create an amalgamation of content it has already seen.
    You are wrong about the amalgamation of content; please read the paper above.
    Neural networks can generalize very well, they only need sufficiently diverse training data.
    Don't forget that we are simulating physics here: there is no perfection, and there is also no better solution at the moment.



  16. #1736
    Originally Posted by Razorblade View Post
    @Dantus, no, not really wrong, but for this kind of noise, training would not need thousands of completed renders.
    Just some tiles: in my previous attempts I tiled an image and took roughly 100 random tiles to train the network from a single image.
    Then, based upon those 100 tiles, I processed all the remaining tiles. Sure, a few more images wouldn't hurt for training, but it's not as if thousands of images would be required.

    Thousands of images are required for a very deep neural network to find dogs/people and translate that into text, or to describe a photo.
    That's because of the way those networks are built: layer by layer, each trained for a specific feature (e.g. circle, line, arc, color, etc.).
    Each layer initially gives a matching percentage against its original goal, but over hundreds of images (because of neural feedback) some of those layers adjust their goal (e.g. find an ellipse instead of a circle). Since all the deep layers are connected, lots of calculations are required between them, and multiplied by the number of required images that makes such networks heavy on normal hardware (doable, but heavy).
    The main issue is to get an understanding of what good training data is. In the paper they may not have chosen the right approach, but they clearly ran into limitations which matter for practical usage. They were searching for noisy examples, and maybe it would already be sufficient to add examples with lots of detail but little noise. However, once such limitations are found, it is necessary to get training data which covers those cases.
    The easiest way to find those limitations is by simply testing the network on a large, broad set of noisy images for which the clean version is known. For this you don't need a lot of training data, but you do need lots of examples to find the weaknesses of the neural network(s).
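    A simple way to do that systematically (a sketch, assuming matching noisy/clean image pairs are available as arrays) is to score the denoiser over the whole test set and look at the worst cases:

        # Sketch: score a denoiser over a broad set of noisy/clean pairs and list
        # the weakest results, i.e. the kinds of scenes the network struggles with.
        import numpy as np

        def psnr(a, b, peak=1.0):
            mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
            return 10.0 * np.log10(peak ** 2 / mse)

        def rank_failures(test_cases, denoise):
            """test_cases: list of (name, noisy, clean); denoise: callable on one image."""
            scored = [(psnr(denoise(noisy), clean), name) for name, noisy, clean in test_cases]
            return sorted(scored)   # lowest PSNR first = weakest cases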

    Originally Posted by Razorblade View Post
    The temporal part (time frames) would rather be trained by averaging over frames, with respect to moving objects: think of the pixel data of the frames stacked upon each other as voxels in a 3D matrix, and find the n pixels that match most closely to average a certain pixel on a certain frame. It would be up to the neural network to pick the proper ones. Well, averaging might be oversimplifying how a neural network could do this.
    This is clearly needed for practical purposes, and it is still unknown what a working solution is going to look like. It may be as simple as you describe, but it could also be a lot more complicated. What is very clear to me is that getting training data and examples for this is going to be far more challenging.



  17. #1737
    Originally Posted by Dantus View Post
    This is clearly needed for practical purposes, and it is still unknown what a working solution is going to look like. It may be as simple as you describe, but it could also be a lot more complicated. What is very clear to me is that getting training data and examples for this is going to be far more challenging.
    Well, creating the training data isn't hard; I mean, it's what people normally do with Blender: create rendered animations.
    But my own system isn't really good at rendering; even at low average quality it takes me a day to do 250 frames.
    So for testing a temporal denoiser my PC is not optimal, and I also can't change the processor or GPU of my laptop, so I'm kind of stuck at that quality.
    But some people here do have the hardware for it (remember the BMW test scene scores); that one takes me about 8+ minutes.

    PS: the basic reason I think denoising shouldn't need a deep NN is that neural nets are good at 'domain' guessing:
    train them that 2=5, 3=6, 5=8, and they usually give good estimates for untrained input such as 4=7.
    They're also good at spotting when something is a bit above or below normal, and they're really good at finding relations between multiple related measurements; they learn what weighs more or less into the final solution.
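    (That kind of interpolation is easy to check; a quick sketch with a plain linear fit instead of an actual net:)

        # Quick sketch of the "2=5, 3=6, 5=8 -> what is 4?" kind of generalization,
        # using a plain linear fit rather than a neural network.
        import numpy as np

        x = np.array([2.0, 3.0, 5.0])
        y = np.array([5.0, 6.0, 8.0])
        slope, intercept = np.polyfit(x, y, 1)   # fits y = x + 3
        print(slope * 4.0 + intercept)           # ~7.0 for the untrained input 4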

    Maybe the temporal math would even work on just the single-channel RGB data of a single scan line. Oh, I feel tempted to test that out, but I'm still optimizing the code, and as of today it is in a non-running state because I did some design changes; I'm first building a data-ordering toolset around it (for algorithmic testing), i.e. building my NN toolbox (as I had some time to code today).
    Last edited by Razorblade; 22-Oct-17 at 19:18.



  18. #1738
    Originally Posted by Dantus View Post
    The main issue is to get an understanding of what good training data is.
    Just a draft and a high-quality render, that would be enough.
    Perhaps a few variations, e.g. 50, 100 and 200 samples, to compare against 2000(?) samples (I cannot render the high sample counts myself).
    A few such (short) movies in PNG format (as separate images, so you don't get MPEG compression noise into them).
    Once we have a few such movies covering some situations, the set can later be extended with whatever people think might be difficult for a NN.
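    For whoever has the hardware, a rough bpy sketch of generating such a set (run inside Blender; the output paths and sample counts are just placeholders):

        # Rough sketch: render the same animation at several Cycles sample counts
        # as PNG image sequences. Paths and sample counts are placeholders.
        import bpy

        scene = bpy.context.scene
        scene.render.engine = 'CYCLES'
        scene.render.image_settings.file_format = 'PNG'

        for samples in (50, 100, 200, 2000):
            scene.cycles.samples = samples
            scene.render.filepath = f"//training/{samples:04d}_samples/frame_"
            bpy.ops.render.render(animation=True)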


