Page 89 of 90 (Results 1,761 to 1,780 of 1790)
  1. #1761
    @burnin 145 samples would be nice; I thought 1000 samples would give a noise-free result.
    Maybe you're rendering on GPU and so using larger tiles, and that would require more samples?
    From the above I'd say it would be best to use 1b.
    The others are interesting too, but I think they might be too noisy to serve as a target goal for a neural network.
    (the globe map on the wall is too random)
    Although I'm often surprised by what they "can learn".

    Maybe, if possible, do 145/2 ≈ 72 samples of 1b, and 72 samples of a somewhat improved darker version?
    Perhaps use light portals (not sure if they're used) for the darker versions; if the current denoiser helps on the darker version then enable it on that one, and keep the lighter one as-is, like 1b.

    Oh, also animate the seed value, as the neural network should not learn the static sample distribution; I once spotted that happening.

    PS: my plan would be to write a 2D network (might be a single hidden layer still, or multi), whose first layer would consist of various areas from different frames, e.g.:
    a 9x9 part of frame 4,
    a 7x7 part of frame 3,
    a 5x5 part of frame 2,
    a 3x3 part of frame 1,
    and let it solve the center pixel of frame 1.. as a kind of funnel: let the NN choose the best pixel based upon (likely) future neighbors.
    Though I have some other plans to test as well, the above would probably be the first, after I've finished my GF's kitchen repair/makeover.

    PS: it might sound like a small number of pixels, but I store HSL data (that's times 3), and I'll probably store some more statistical data as input as well, so it quickly adds up. Maybe I'll use even fewer pixels. I don't use Keras or the like; I have my own training methods that work quite well (I'd rather create networks that I can understand and improve than huge 'beasts' of unknown magic, which seems pretty popular these days; I still prefer the old days of elegant code and smart tricks).
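    A sketch of how that funnel input could be assembled (function and variable names are my own, not the author's; single-channel frames for simplicity, and the target pixel is assumed to be far enough from the border):

    ```python
    import numpy as np

    def funnel_input(frames, x, y):
        """Stack multi-scale patches from future frames into one input vector.

        frames[t] is a 2-D luma array; patch sizes shrink toward the frame
        whose center pixel the network has to solve (frame 1).
        """
        sizes = {4: 9, 3: 7, 2: 5, 1: 3}   # frame offset -> patch side length
        parts = []
        for t, side in sizes.items():
            r = side // 2
            patch = frames[t][y - r:y + r + 1, x - r:x + r + 1]
            parts.append(patch.ravel())
        return np.concatenate(parts)        # 81 + 49 + 25 + 9 = 164 values

    frames = {t: np.random.rand(32, 32) for t in (1, 2, 3, 4)}
    vec = funnel_input(frames, x=16, y=16)  # input for solving pixel (16,16) of frame 1
    ```

    The 164-value vector would then be the first layer's input; HSL instead of luma would triple it, as the post notes.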
    Last edited by Razorblade; 09-Nov-17 at 17:56.

  2. #1762
    - Am working with dailies (rendering using GPU+CPU together).
    - Yes, a light portal is added.
    - Clamping was OFF, which is why so much noise is present. Clamping is now ON.
    But it darkens the result, which is why I was assuming raw results are preferred (physically more accurate, as more energy/light is distributed). E.g.


    Would it be useful if the second part (darker version) were further split into 2 parts (40 frames each)? Will this do?

    1. LIGHT
    /Frames 001-080 @ 1000 samples (denoise OFF)

    2. DARK / Frames 066-145

    a) 066-105 @ 1000 samples (denoise ON - *splotches/flickering will occur in animation)

    b) 106-145 @ 2000 samples (denoise OFF - animation will be grainy, but bit less flickery)

    Motion Blur: ON or OFF?
    Last edited by burnin; 10-Nov-17 at 16:45. Reason: spelling

  3. #1763
    The second part, darker, as 2 times 40 frames, is fine...
    In the end I will train the neural net like this:
    I divide the frames into images, from which I take tiles, as an array tile[x,y,frameN]; so even with 40 frames I would still have a lot of tiles to train from.
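    That tile array can be sketched like this (the 16-pixel tile size and the names are my assumption; the 1280x720 resolution is from the specs above):

    ```python
    import numpy as np

    def to_tiles(frames, tile=16):
        """Cut each frame into fixed-size tiles: tile[x, y, frameN, th, tw].

        frames has shape (n_frames, height, width); edges that don't fill a
        whole tile are dropped.
        """
        n, h, w = frames.shape
        cropped = frames[:, :h - h % tile, :w - w % tile]
        split = cropped.reshape(n, h // tile, tile, w // tile, tile)
        return split.transpose(3, 1, 0, 2, 4)

    frames = np.zeros((40, 720, 1280))
    tiles = to_tiles(frames)   # 80 x 45 grid of 16x16 tiles, per 40 frames
    ```

    Even 40 frames at 720p yield 80 × 45 × 40 = 144,000 tiles, which is the point being made: plenty of training examples.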

    Maybe then keep denoise off; flicker behavior is not something I could currently handle. Maybe as input, but not for a validation data set. (You're creating a validation data set, i.e. the goal the neural network has to learn.)

    Motion blur would be OK (in fact I'd rather have it); it doesn't make things complex, and it is how animators usually render.
    (it should be on by default in Blender, I think)

    Also, for the camera motion, can you save the .blend file as well (with your motion)? I'd like to render a low-quality version too; basically that would be the input, while your rendering would be the output to train against.

    I had no time to code; I might need 2 more weekends to rebuild the kitchen, after which I hope to have some time.
    Or maybe I'll have some luck at work; sometimes I can code my neural nets there, as we use them there too (industrial/vision).

  4. #1764
    Hey, sorry for late reply.

    Just confirming that I'm OK with the deal & specs; no other comments from my side.
    Will provide the specified renders & the scene file. Hopefully this week or the next; this month for sure.

    Since it's the same here... lack of time. Busy & hectic nowadays, since everyone's getting ready for the winter festivities during Happy December & then the final holidays.

    Will keep you posted

    bye, bye
    & wish you do good work on renovation

  5. #1765
    Does this work with texture baking? I've noticed that Blender's compositing feature doesn't, and GIMP isn't all that great for denoising.

  6. #1766
    Member theoldghost's Avatar
    EricStoa, no, not at the present time.

    • Denoising cannot be used for baking yet.
    • For animation denoising can be used, however it still requires high sample counts for good results. With low sample counts low frequency (blurry) noise can be visible in animation, even if it is not immediately apparent in still images.

  7. #1767
    Probably neural-network denoising won't work with baking either, since it fits more closely into the compositor, and it will not be directly there. If it works well I'll provide the concept code to Blender development. I usually write C#, but conversion to C++ can be done afterwards (or I can compile against .NET Core, which is platform independent), though I'm not sure they would want non-C++ code inside Blender. To me it takes just too much time to code in C++; it's a great language, but not for experiments. C# fits that better.

  8. #1768
    64 samples, branched, 5 for glossy (or denoise would erase the reflections).
    Sorry, the build: 2017-09-11 10:43, hash 5bd8ac9

  9. #1769
    @Razorblade (& everyone else who finds this valuable)

    Here's everything...

    1 x scene set: LIGHT
    1 x scene set: DARK

    format: 1280x720, 8-bpc RGBA PNG per frame
    Set-1_LIGHT: 01-80 @ 1000 samples
    Set-1_LIGHTdn: 01-20 @1000 samples + denoised
    Set-2_DARK: 80-136 @ 2000 samples
    Set-3_DARKdn: 136-145 @ 2000 samples + denoised

    ... packed inside "classroom.7z" (~380MB)
    (SHA-256: BCA59A741BDF7B97A61F03EE3B12EFB65778DE0F4728A9832B26EEB6C7D79C3C)

    If in doubt about license: continue to use CC-0 (as is used for original "Class room" scene)

    Everyone's free to...
    Last edited by burnin; 28-Nov-17 at 10:43. Reason: extra info added
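    The archive's SHA-256 above can be checked with a few lines (a sketch; the filename "classroom.7z" is taken from the post):

    ```python
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Hash a file in chunks so a ~380MB archive never sits in RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest().upper()

    EXPECTED = "BCA59A741BDF7B97A61F03EE3B12EFB65778DE0F4728A9832B26EEB6C7D79C3C"
    # sha256_of("classroom.7z") == EXPECTED  -> True if the download is intact
    ```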

  10. #1770
    Just an update (of neural net thinking)..

    Today at work I had a very closely related problem to solve.
    In statistics, the problem of 'censored data': having a dataset with some missing samples. How best to fill the gaps?

    If the data's 'vibration' or 'fluctuation' is not too chaotic, a neural network can be trained to estimate it. And it turns out that such a network isn't that complex either, if you're into neural nets. To my surprise, such networks are in fact rather simple: a small regression network, one node in and one node out. The number of possible twisting points depends on the number of hidden nodes.
    Thus, depending on how many curve twists you want, add extra hidden nodes.
    Train the network with the samples you do know, and once trained, ask it for the unknowns.. sounds like magic, but NNs are good at this.

    So, for a Cycles animation where one would like to denoise luma:
    At 25 frames per second a pixel normally won't change too often, but it might be heading towards a slight change. I'm talking about a single pixel here, not as something seen in a 2D relation, but rather its changes over time.

    But 25 samples is a bit short to train from (I think), so let's take 3 seconds. The input pixel(x,y) becomes a non-dimensional luma value:
    pixelLuma[frameNr] is a small data set of 75 values over time, for a single pixel.
    Train the NN with all knowns except the current frame, so as to estimate pixelLuma[currentFrame].
    Maybe the data set can also contain already-solved pixels as well (i.e. the past 3 frames).

    The downside of the approach: you train an NN per pixel per frame (the time adds up).
    Sudden short bright flashes (1 or 2 frames) might be removed; however, that might work against fireflies as well...
    If a change converges slowly it will not be removed...
    Depending on pixel changes, pixels will be more or less 'blurred'.. (fun: regression math is a form of blur resolving)

    Well, I had to code something like it anyway; I will dive deeper into this later. At work the code is promising, but it needs a bit more testing.

    I'm sharing the thought because it's easy to test.
    And here is a regression-NN code sample as well:
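    (The original code sample didn't survive here; below is a minimal sketch of the idea in Python/NumPy, my illustration rather than the author's C# code: a 1-input/1-output regression net with one tanh hidden layer, trained by plain gradient descent on the known frames of a toy 75-frame luma curve, then asked for the missing frame.)

    ```python
    import numpy as np

    # Assumed details: toy sine luma curve, tanh hidden layer, full-batch GD.
    rng = np.random.default_rng(0)

    frames = np.arange(75)                      # 3 seconds at 25 fps
    luma = 0.5 + 0.3 * np.sin(frames / 9.0)     # stand-in for pixelLuma[frameNr]
    missing = 40                                # the frame we want to estimate

    x = frames / 74.0                           # normalize frame number to [0, 1]
    known = frames != missing
    xs, ys = x[known].reshape(-1, 1), luma[known]

    hidden = 8                                  # more hidden nodes, more "twists"
    W1 = rng.normal(0.0, 1.0, (hidden, 1)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, hidden);      b2 = 0.0

    lr = 0.05
    for _ in range(5000):
        h = np.tanh(xs @ W1.T + b1)             # hidden activations, (74, hidden)
        err = h @ W2 + b2 - ys                  # prediction error, (74,)
        # gradients of the mean-squared error
        gW2 = h.T @ err / len(ys); gb2 = err.mean()
        dh = np.outer(err, W2) * (1.0 - h * h)  # backprop through tanh
        gW1 = dh.T @ xs / len(ys); gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    # ask the trained net for the unknown frame
    guess = float(np.tanh(x[missing] * W1[:, 0] + b1) @ W2 + b2)
    ```

    Swapping the toy sine for an actual Cycles pixel history is the experiment the post proposes; whether it holds up against real temporal noise is exactly the open question stated below.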

    *PS: it might well be that a final image would still contain some noise with this reduction method; for Cycles data I don't yet know how well this will work against temporal noise, as things at my work are not identical to Cycles (I'm into computer vision).
    Last edited by Razorblade; 01-Dec-17 at 15:59.

  11. #1771

  12. #1772
    The word you are looking for is "variance"

  13. #1773
    Can we get this? Seems like all renderers got into it: Clarisse... Redshift... Octane...

  14. #1774
    Mmm, since Cycles is MIT licensed it may be possible, but I'm not totally sure; OptiX is a proprietary technology from NVidia... I'm not sure if implementing this would be possible... but others with better knowledge than me could confirm or deny this.

  15. #1775
    BA Crew Fweeb's Avatar
    Originally Posted by juang3d View Post
    mmm since Cycles is MIT license...
    Minor correction: Cycles uses the Apache 2.0 license. It's not *that* different from the MIT license, but it's still best that we're accurately saying what license is used.

  16. #1776
    Originally Posted by Kramon View Post
    Can we get this? Seems like all renderers got into it: Clarisse... Redshift... Octane...
    Razorblade has been talking for pages about exactly that.

  17. #1777
    Thanks Fweeb, I was unsure.

    How do you think this could affect the OptiX implementation?
    Is it even possible?
    Wouldn't it remove AMD from the equation?


  18. #1778
    And let me add a question not directly related to this OptiX thing (which I don't like too much, since it's not open and it's coming from NVidia, so... not too good feelings here...):

    Are there any news on the cross-frame denoising feature?


  19. #1779
    Member theoldghost's Avatar
    Damn good question, juang3d. In one of the Blender Conference videos Lukas said that, while not pretty, it would be done. Given his commitments I took that to mean 2.8; any version in 2.79 before Christmas would indeed be a gift. As you might recall, that is where he said that even in the huge-budget programs it is still somewhat of a hack in the way it's implemented. I'm approaching the animation stage of a project, so you are not alone, blender buddy. Cheers

  20. #1780
    Info from Panos Zompolas @ RedShift forums / 12 December 2017 03:20 AM
    Hello everyone!

    As most of you already know, we've been working hard to get both the NVidia OptiX denoiser (aka "NVidia AI denoiser") as well as Altus Denoiser working with Redshift. Instead of preparing two different posts (NVidia and Altus) with FAQs on the same topic of denoising, we decided to put all relevant information in a single post!

    For those of you that don't even know what this denoising is about, please check out the video we prepared that shows the NVidia denoiser in action. Denoising, in a nutshell, is a post-processing filter that will remove noise ("grain") on images containing ray traced effects like GI, depth of field, area lights, low-gloss reflections, etc.

    If you have any questions or comments, please don't hesitate to let us know below!



    OptiX FAQ

    When will it be ready?

    The denoiser is still being worked on by NVidia. We found a couple of issues and reported them to NVidia who are hard at work fixing them. Once the issues are fixed, it shouldn't take us too long before we can release a test version with it. Considering we're waiting for "final" code from NVidia, we don't have an ETA at the moment. If it doesn't happen sometime this week, it will have to be after the holiday break.

    When we're ready to release this, it will initially come as an "experimental" version. Once things stabilize a bit, the tech will be merged into our regular versions.

    Does this cost anything?

    Nope! NVidia provides this library for free. Redshift will most likely embed it in its installer.

    Does it need a special GPU, like Volta?

    We have run the denoiser here on older GPUs like Maxwell and Pascal (i.e. GTX970, GTX1070, TitanX, etc). Please note that the denoiser requires quite a bit of VRAM (especially with higher resolutions) so we recommend running it on 8GB GPUs or higher.

    How well does it work?

    Pretty well! :-) It has not been trained with Redshift data yet (NVidia trained it with 15,000 iRay images!) so there do exist cases where it incorrectly thinks the noise is scene detail and doesn't smooth it out. Also there are other cases where, if the geometry is too complicated (like hair), it doesn't know what to do because it hasn't been trained with such data yet.

    As you can probably guess, the solution to the above issues is that we'll need to train it with Redshift data! We will very likely need the help of the community for this. There will be a separate post about it.

    Even with this limitation, the denoiser works really well for preview purposes! It's really nice to be able to see noise from GI, area lights or depth of field disappear within a matter of seconds!

    How complete is the Redshift integration currently?

    We're currently working on giving the denoiser more data than just the (noisy) beauty image, like we've been doing so far. Achieving this means adding AOV support to progressive rendering which some RS users have been asking for independently of the denoiser! This is fairly close to being done so, hopefully, once we feed the NVidia denoiser with the albedo and normal AOVs, it should be able to do an even better job than today with preserving texture or normal detail.

    Will it work only for progressive or for bucket rendering too?

    Both! While we've been showing it running in progressive, we'll also allow it for bucket rendering as well!

    Will this speed all my renders 5-10 times?

    Well... yes and no! :-) If you care about draft/preview renders that don't look completely noisy and crap: yes it will! But remember that all these "deep learning" AI systems need to be trained! Unless the denoiser is trained with *many* examples of every single possible rendering scenario, there will always be the possibility for it not knowing what to do! For example, it might make the image a bit too blurry (introducing a weird soft "Monet" spiral effect) or it might incorrectly think that the noise is actual scene detail and it shouldn't touch it.

    This should be expected: denoisers have to create visually appealing images from images that don't have enough information! And they do a really good job considering the inputs. But they can't always do miracles! Also, often times, the quality of the final result depends on the quality of the input. I.e. if you give it a super-noisy scene, don't expect a perfect result!

    Altus FAQ

    Ok so if you now have the NVidia denoiser, why are you bothering with Altus?

    The NVidia denoiser is based on AI routines. While this is the new frontier for certain types of algorithms, it might not be always consistent! My personal gut-feeling is that the NVidia denoiser will be unbeatable for interactive manipulation of scenes but it might take some time before it's ready for final-frame production results. It just needs more training and the tech is too new - it needs to go through a few more iterations!

    Altus, on the other hand, is based on tried and tested denoising algorithms that are used in the industry today (like the Pixar denoiser). Plus some results we've seen from Altus have actually out-performed many denoisers you have seen integrated in competing renderers. So it made sense to help integrate it to Redshift.

    So... is this going to be free for Redshift users?

    Sadly... no. This is not an OEM/licensing deal we have with Innobright. There will be a special version/price for Redshift users (lower price, obviously). We don't have the specifics just yet but we'll keep you posted.

    Can this be used for interactive rendering?

    It's doubtful. A 1080p image takes a good few seconds to process (with some AOVs) so it's not really suitable for real-time editing.

    I heard I need to render two images with it! Doesn't that make things slower in the end?

    That might actually *not* be the case in the end! Or rather: there might be extra options that do not require this. Watch this space! :-)

    How complete is the integration today?

    We have a rudimentary system up and running but we need to do quite a bit of housekeeping and polish. For example, have an automatic way for generating the necessary AOVs for denoising without too much user input, determine good UIs for the various options, etc. We will probably release a "rough around the edges" version (along with the NVidia denoiser) as an "experimental" version and get some feedback. We'll then polish/adjust things depending on that feedback.

