Experimental 2.77 Cycles Denoising build

Hey, sorry for the late reply.

Just confirming that I'm OK with the deal & specs; no other comments from my side.
Will provide the specified renders & the scene file. Hopefully this week or the next - this month for sure :slight_smile:

Same here… lack of time. It's busy & hectic nowadays, since everyone's getting ready for the winter festivities in December & then the end-of-year holidays.

Will keep you posted :wink:

Bye bye,
& good luck with the renovation work

Does this work with texture baking? I've noticed that Blender's compositing feature doesn't, and GIMP isn't all that great for denoising.

EricStoa, no, not at the present time.

  • Denoising cannot be used for baking yet.
  • For animation, denoising can be used; however, it still requires high sample counts for good results. With low sample counts, low-frequency (blurry) noise can be visible in animation, even if it is not immediately apparent in still images.

Probably neural network denoising won't work with baking either, since it fits more naturally into the compositor, though it won't be directly there. If it works well, I'll provide a proof-of-concept to Blender development. I usually write C#, but a conversion to C++ can then be done (or I can compile for .NET Core, which is platform independent)… though I'm not sure they would want non-C++ code inside Blender. To me it just takes too much time to code in C++; it's a great language, but not for experiments. C# fits that better.

64 samples, branched, 5 for glossy (or the denoiser would erase the reflections)



Sorry, the build is 2017-09-11 10:43, hash 5bd8ac9

@Razorblade (& everyone else who finds this valuable)

Here’s everything…
preview: https://i.imgur.com/5IfQGi5.mp4

1 x scene set: LIGHT
1 x scene set: DARK

RECAP/OUTPUT
format: 1280x720, 8 bpc RGBA .png per frame
Set-1_LIGHT: 01-80 @ 1000 samples
Set-1_LIGHTdn: 01-20 @ 1000 samples + denoised
Set-2_DARK: 80-136 @ 2000 samples
Set-3_DARKdn: 136-145 @ 2000 samples + denoised

… packed inside “classroom.7z” (~380MB)


(SHA-256: BCA59A741BDF7B97A61F03EE3B12EFB65778DE0F4728A9832B26EEB6C7D79C3C)
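
For anyone who wants to verify the download, a quick sketch (Python, just for illustration; assumes "classroom.7z" sits in the current directory):

```python
# Compute the archive's SHA-256 and compare it with the hash posted above.
import hashlib

h = hashlib.sha256()
with open("classroom.7z", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
        h.update(chunk)
print(h.hexdigest().upper())
```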

PS/ADDENDUM:
If in doubt about the license: continue to use CC0 (as used for the original "Class room" scene)

Everyone’s free to…

Just an update (on the neural net thinking)…

Today at work I had a very closely related problem to solve:
in statistics, the problem of ‘censored data’. It's about having a dataset, but with some missing samples. How do you best fill the gaps?

If the data's ‘vibration’ or ‘fluctuation’ is not too chaotic, a neural network can be trained to estimate it. And it turns out that such a network isn't that complex either, if you're into neural nets. To my surprise, such networks are in fact rather simple: a small regression network, one node in and one node out. The number of possible bends in the fitted curve depends on the number of hidden nodes.
Thus, depending on how many curve bends you want, add extra hidden nodes.
Train the network with the samples you do know, and once trained, ask it for the unknowns… sounds like magic, but NNs are good at this.
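
To make that concrete, here is a minimal sketch (my illustration, in Python/NumPy rather than C# so it's easy to paste and run; all names are made up): a 1-in/1-out regression net with one hidden tanh layer, trained on the known samples of a noisy 1-D signal, then asked for the values inside a gap.

```python
# Minimal 1-in/1-out regression net: one hidden tanh layer, linear output.
# More hidden nodes = more possible bends in the fitted curve.
import numpy as np

rng = np.random.default_rng(0)

def fit_regression_nn(x, y, hidden=8, lr=0.05, epochs=20000):
    """Fit y ~ f(x) with a tiny MLP, plain gradient descent on MSE."""
    w1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    x = x.reshape(-1, 1); y = y.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(x @ w1 + b1)              # hidden activations
        out = h @ w2 + b2                     # linear output node
        err = out - y
        dh = (err @ w2.T) * (1 - h ** 2)      # backprop through tanh
        w2 -= lr * (h.T @ err) / len(x); b2 -= lr * err.mean(0)
        w1 -= lr * (x.T @ dh) / len(x); b1 -= lr * dh.mean(0)
    return lambda q: (np.tanh(q.reshape(-1, 1) @ w1 + b1) @ w2 + b2).ravel()

# Toy "censored" dataset: a smooth curve with a gap in the middle.
t = np.linspace(0, 1, 75)
signal = np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size)
known = np.ones(t.size, bool)
known[30:40] = False                          # the missing samples

f = fit_regression_nn(t[known], signal[known])
print("estimated gap values:", f(t[~known]))
```

Train on the knowns, ask for the unknowns: exactly the gap-filling described above.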

So, consider a Cycles animation where one would like to denoise luma.
At 25 frames per second a pixel normally won't change too often, but it might be heading towards a slight change. I'm talking about a single pixel here, not as something seen in a 2D relation, but rather its changes over time.

But 25 samples is a bit short to train from (I think), so let's take 3 seconds. The input pixel(x, y) becomes a plain one-dimensional luma value:
pixelLuma[frameNr] is a small dataset of 75 frames over time, for a single pixel.
Train the NN with all the knowns except the current frame, so as to estimate pixelLuma[currentFrame].
Maybe the dataset can also contain already-solved pixels as well (e.g. the past 3 frames).

The downside of this approach: training an NN per pixel per frame adds up in time.
Sudden short bright flashes (1 or 2 frames) might be removed; then again, that might work against fireflies as well…
If a change converges slowly it will not be removed…
Depending on how a pixel changes, pixels will be more or less ‘blurred’… (fun: regression math is a way of resolving blur)
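
A hedged sketch of how that per-pixel temporal fit might look (Python again, purely my illustration; `luma` is an assumed float array of shape (frames, height, width), and sklearn's `MLPRegressor` stands in for the small regression net):

```python
# Per-pixel temporal denoise sketch: fit the luma of one pixel over a
# 75-frame window (3 s at 25 fps), training on every frame except the
# current one, then predict the current frame from the fit.
# Everything here is illustrative, not Cycles code.
import numpy as np
from sklearn.neural_network import MLPRegressor

def denoise_pixel_temporal(luma, frame, y, x, window=75):
    lo = max(0, frame - window // 2)
    hi = min(luma.shape[0], lo + window)
    frames = np.arange(lo, hi)
    keep = frames != frame                        # train on the knowns only
    t = (frames[keep] - lo) / window              # normalised time axis
    v = luma[frames[keep], y, x]                  # this pixel's luma history
    net = MLPRegressor(hidden_layer_sizes=(8,),   # ~8 bends in the curve
                       max_iter=5000, random_state=0)
    net.fit(t.reshape(-1, 1), v)
    return float(net.predict([[(frame - lo) / window]])[0])
```

Training a fresh net for every pixel of every frame is of course brutally slow; this is only meant to test whether the temporal fit removes noise without smearing real changes.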

Well, I had to code something like this at work; I will dive deeper into it later. The code is promising, but it needs a bit more testing.

I'm sharing the thought because it's easy to test.
And here is a regression NN code sample as well: https://msdn.microsoft.com/en-us/magazine/mt826350

PS: it might well be (and is likely) that a final image would still contain some noise with this reduction method; as for Cycles data, I don't yet know how well this will work against temporal noise, since things at my work are not identical to Cycles (I'm into computer vision) :wink:

Texture baking denoising: https://blenderartists.org/forum/showthread.php?439254-Bake-Manager-With-Denoiser-Integration-WIP

The word you are looking for is “variance”

Can we get this? Seems like all renderers got into it: Clarisse… Redshift… Octane…

Mmm, since Cycles is MIT-licensed it may be possible, but I'm not totally sure; OptiX is a proprietary technology from Nvidia… I'm not sure if implementing this would be possible… but others with better knowledge than me could confirm or deny this :slight_smile:

Minor correction: Cycles uses the Apache 2.0 license. It's not that different from the MIT license, but it's still best that we accurately state which license is used.

Razorblade has been talking about exactly that for pages.

Thanks Fweeb, I was unsure.

How do you think this could affect the OptiX implementation?
Is it even possible?
Wouldn’t it remove AMD from the equation?

Cheers

And let me add a question not directly related to this OptiX thing (which I don't like too much, since it's not open and it's coming from Nvidia, so… not too good feelings here…)

Is there any news on the cross-frame denoising feature?

Cheers.

Damn good question, jung3d. In one of the Blender Conference videos Lukas said that, while not pretty, it would be done. Given his commitments I took that to mean 2.8; any version in 2.79 before Christmas would indeed be a gift. As you might recall, that is where he said that even in the huge-budget programs it is still somewhat of a hack in the way it's implemented. I'm approaching the animation stage of a project, so you are not alone, Blender buddy. Cheers

Info from Panos Zompolas @ RedShift forums / 12 December 2017 03:20 AM

Hello everyone!

As most of you already know, we’ve been working hard to get both the NVidia OptiX denoiser (aka “NVidia AI denoiser”) as well as Altus Denoiser working with Redshift. Instead of preparing two different posts (NVidia and Altus) with FAQs on the same topic of denoising, we decided to put all relevant information in a single post!

For those of you that don’t even know what this denoising is about, please check out this video we prepared that shows the NVidia denoiser in action: https://www.youtube.com/watch?v=ofcCQdIZAd8. Denoising, in a nutshell, is a post-processing filter that will remove noise (“grain”) on images containing ray traced effects like GI, depth of field, area lights, low-gloss reflections, etc.

If you have any questions or comments, please don’t hesitate to let us know below!

Thanks

-Panos

OptiX FAQ

When will it be ready?

The denoiser is still being worked on by NVidia. We found a couple of issues and reported them to NVidia who are hard at work fixing them. Once the issues are fixed, it shouldn’t take us too long before we can release a test version with it. Considering we’re waiting for “final” code from NVidia, we don’t have an ETA at the moment. If it doesn’t happen sometime this week, it will have to be after the holiday break.

When we’re ready to release this, it will initially come as an “experimental” version. Once things stabilize a bit, the tech will be merged into our regular versions.

Does this cost anything?

Nope! NVidia provides this library for free. Redshift will most likely embed it in its installer.

Does it need a special GPU, like Volta?

We have run the denoiser here on older GPUs like Maxwell and Pascal (i.e. GTX970, GTX1070, TitanX, etc). Please note that the denoiser requires quite a bit of VRAM (especially with higher resolutions) so we recommend running it on 8GB GPUs or higher.

How well does it work?

Pretty well! :slight_smile: It has not been trained with Redshift data yet (NVidia trained it with 15,000 iRay images!) so there do exist cases where it incorrectly thinks the noise is scene detail and doesn’t smooth it out. Also there are other cases where, if the geometry is too complicated (like hair), it doesn’t know what to do because it hasn’t been trained with such data yet.

As you can probably guess, the solution to the above issues is that we’ll need to train it with Redshift data! We will very likely need the help of the community for this. There will be a separate post about it.

Even with this limitation, the denoiser works really well for preview purposes! It’s really nice to be able to see noise from GI, area lights or depth of field disappear within a matter of seconds!

How complete is the Redshift integration currently?

We’re currently working on giving the denoiser more data than just the (noisy) beauty image, like we’ve been doing so far. Achieving this means adding AOV support to progressive rendering which some RS users have been asking for independently of the denoiser! This is fairly close to being done so, hopefully, once we feed the NVidia denoiser with the albedo and normal AOVs, it should be able to do an even better job than today with preserving texture or normal detail.

Will it work only for progressive or for bucket rendering too?

Both! While we've been showing it running in progressive mode, we'll allow it for bucket rendering as well!

Will this speed all my renders 5-10 times?

Well… yes and no! :slight_smile: If you care about draft/preview renders that don’t look completely noisy and crap: yes it will! But remember that all these “deep learning” AI systems need to be trained! Unless the denoiser is trained with many examples of every single possible rendering scenario, there will always be the possibility for it not knowing what to do! For example, it might make the image a bit too blurry (introducing a weird soft “Monet” spiral effect) or it might incorrectly think that the noise is actual scene detail and it shouldn’t touch it.

This should be expected: denoisers have to create visually appealing images from images that don’t have enough information! And they do a really good job considering the inputs. But they can’t always do miracles! Also, often times, the quality of the final result depends on the quality of the input. I.e. if you give it a super-noisy scene, don’t expect a perfect result!

Altus FAQ

Ok so if you now have the NVidia denoiser, why are you bothering with Altus?

The NVidia denoiser is based on AI routines. While this is the new frontier for certain types of algorithms, it might not always be consistent! My personal gut feeling is that the NVidia denoiser will be unbeatable for interactive manipulation of scenes, but it might take some time before it's ready for final-frame production results. It just needs more training and the tech is too new - it needs to go through a few more iterations!

Altus, on the other hand, is based on tried and tested denoising algorithms that are used in the industry today (like the Pixar denoiser). Plus, some results we've seen from Altus have actually outperformed many denoisers you have seen integrated in competing renderers. So it made sense to help integrate it into Redshift.

So… is this going to be free for Redshift users?

Sadly… no. This is not an OEM/licensing deal we have with Innobright. There will be a special version/price for Redshift users (lower price, obviously). We don’t have the specifics just yet but we’ll keep you posted.

Can this be used for interactive rendering?

It’s doubtful. A 1080p image takes a good few seconds to process (with some AOVs) so it’s not really suitable for real-time editing.

I heard I need to render two images with it! Doesn’t that make things slower in the end?

That might actually not be the case in the end! Or rather: there might be extra options that do not require this. Watch this space! :slight_smile:

How complete is the integration today?

We have a rudimentary system up and running but we need to do quite a bit of housekeeping and polish. For example, have an automatic way for generating the necessary AOVs for denoising without too much user input, determine good UIs for the various options, etc. We will probably release a “rough around the edges” version (along with the NVidia denoiser) as an “experimental” version and get some feedback. We’ll then polish/adjust things depending on that feedback.

And for those of us primarily using the CPU (decent CPU, bottom-feeding GPU), I guess Lukas is still the man.

Why is this Redshift/Optix information relevant here?

It seems OptiX is free, but can it be included in Cycles?

Also, again, if it's included in Cycles, how would this affect AMD GPUs?

Nvidia is doing all this for free because it benefits them from a business standpoint; people will always choose Nvidia over AMD if they keep pushing CUDA so hard. On the other hand, if this is TensorFlow and any card can compute it, obviously the new Volta cards will have some advantage with their Tensor cores. But since TensorFlow is an open-source technology, I assume AMD could integrate tensor-oriented hardware in their cards, so maybe OptiX could be used too…

In the end the only thing I know is that I know nothing about all this, so if someone knows something more… please chime in!

cheers.

  • even if it's given away for free, the license gives no one free choice over what to do with it & how; be aware of that

  • CUDA doesn’t work on AMD cards

  • the information is relevant to prevent similar ignorance, speculation, and hype…
    though by the looks of it, many just fly through, get more confused, and keep going all “Suzanne” (ape) whenever they read NVidia & AI denoiser… ah well

  • maybe it's about the computing system as a whole & the further complexity involved, for the benefit of all;
    or maybe our greed, over-secured protection & competitiveness must lead to a war & destroy the diverse & free world that's left