Info from Panos Zompolas @ RedShift forums / 12 December 2017 03:20 AM
Hello everyone!
As most of you already know, we’ve been working hard to get both the NVidia OptiX denoiser (aka “NVidia AI denoiser”) and the Altus denoiser working with Redshift. Instead of preparing two different posts (NVidia and Altus) with FAQs on the same topic of denoising, we decided to put all the relevant information in a single post!
For those of you who don’t know what denoising is about, please check out this video we prepared that shows the NVidia denoiser in action: https://www.youtube.com/watch?v=ofcCQdIZAd8. Denoising, in a nutshell, is a post-processing filter that removes noise (“grain”) from images containing ray-traced effects like GI, depth of field, area lights, low-gloss reflections, etc.
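To make the “post-processing filter” idea concrete, here is a deliberately simple sketch in Python/NumPy. A plain box average is nothing like the NVidia AI denoiser or Altus (both are far more sophisticated), but it illustrates the principle: the renderer finishes, and a filter then smooths the residual sampling noise out of the image.

```python
import numpy as np

def box_denoise(img, radius=1):
    """Toy post-process denoiser: average each pixel with its neighbors.
    Real denoisers are far smarter, but the principle is the same --
    smooth noise out of an already-rendered image."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (2 * radius + 1) ** 2

# A flat grey "render" corrupted with Monte Carlo-style noise:
rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
denoised = box_denoise(noisy, radius=2)

# The filter pulls the image back toward the noise-free signal:
print(np.abs(noisy - clean).mean() > np.abs(denoised - clean).mean())  # True
```

The obvious flaw of this naive approach — it blurs real detail just as happily as it blurs noise — is exactly why the AI and feature-guided denoisers discussed below exist.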
If you have any questions or comments, please don’t hesitate to let us know below!
Thanks
-Panos
OptiX FAQ
When will it be ready?
The denoiser is still being worked on by NVidia. We found a couple of issues and reported them to NVidia who are hard at work fixing them. Once the issues are fixed, it shouldn’t take us too long before we can release a test version with it. Considering we’re waiting for “final” code from NVidia, we don’t have an ETA at the moment. If it doesn’t happen sometime this week, it will have to be after the holiday break.
When we’re ready to release this, it will initially come as an “experimental” version. Once things stabilize a bit, the tech will be merged into our regular versions.
Does this cost anything?
Nope! NVidia provides this library for free. Redshift will most likely embed it in its installer.
Does it need a special GPU, like Volta?
We have run the denoiser here on older GPUs like Maxwell and Pascal (e.g. GTX 970, GTX 1070, TitanX, etc.). Please note that the denoiser requires quite a bit of VRAM (especially at higher resolutions), so we recommend running it on GPUs with 8GB or more.
How well does it work?
Pretty well! It has not been trained with Redshift data yet (NVidia trained it with 15,000 iRay images!), so there are cases where it mistakes noise for scene detail and doesn’t smooth it out. There are also cases where, if the geometry is too complicated (like hair), it doesn’t know what to do because it hasn’t been trained on such data yet.
As you can probably guess, the solution to the above issues is that we’ll need to train it with Redshift data! We will very likely need the help of the community for this. There will be a separate post about it.
Even with this limitation, the denoiser works really well for preview purposes! It’s really nice to be able to see noise from GI, area lights or depth of field disappear within a matter of seconds!
How complete is the Redshift integration currently?
We’re currently working on giving the denoiser more data than just the (noisy) beauty image, which is all we’ve been feeding it so far. Achieving this means adding AOV support to progressive rendering, which some RS users have been asking for independently of the denoiser! This is fairly close to done, so hopefully, once we feed the NVidia denoiser the albedo and normal AOVs, it should do an even better job than today of preserving texture and normal detail.
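One common way an albedo AOV helps a denoiser preserve texture detail is “albedo demodulation”: divide the textures out of the beauty image, denoise the remaining (smooth) lighting signal, then multiply the textures back in. Sharp texture edges survive because they never pass through the filter. The sketch below illustrates that idea only — it is not necessarily how the NVidia denoiser consumes its AOVs, and the simple box blur is a stand-in for a real filter:

```python
import numpy as np

EPS = 1e-4  # avoid division by zero on black albedo

def blur(img, radius=2):
    """Stand-in denoiser: a simple box average."""
    h, w = img.shape
    p = np.pad(img, radius, mode="edge")
    acc = np.zeros_like(img)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            acc += p[dy:dy + h, dx:dx + w]
    return acc / (2 * radius + 1) ** 2

def denoise_with_albedo(beauty, albedo):
    """Divide out texture detail, denoise the smooth lighting signal,
    then restore the texture."""
    irradiance = beauty / (albedo + EPS)   # noisy lighting, texture removed
    return blur(irradiance) * (albedo + EPS)

# Checkerboard texture under uniform lighting, plus render noise:
rng = np.random.default_rng(1)
albedo = np.indices((64, 64)).sum(axis=0) % 2 * 0.8 + 0.1  # 0.1 / 0.9 checker
clean = albedo * 0.7
beauty = clean + rng.normal(0.0, 0.02, albedo.shape)

naive = blur(beauty)                       # blurs the checker edges away
guided = denoise_with_albedo(beauty, albedo)
print(np.abs(guided - clean).mean() < np.abs(naive - clean).mean())  # True
```

The albedo-guided result keeps the checker pattern crisp while still removing the noise, whereas filtering the beauty image directly destroys the texture.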
Will it work only for progressive or for bucket rendering too?
Both! While we’ve so far been showing it running in progressive mode, we’ll allow it for bucket rendering too!
Will this speed up all my renders 5-10 times?
Well… yes and no! If you care about draft/preview renders that don’t look completely noisy and crap: yes, it will! But remember that all these “deep learning” AI systems need to be trained! Unless the denoiser is trained with many examples of every single possible rendering scenario, there will always be the possibility of it not knowing what to do! For example, it might make the image a bit too blurry (introducing a weird soft “Monet” swirl effect), or it might incorrectly decide that the noise is actual scene detail and leave it untouched.
This should be expected: denoisers have to create visually appealing images from images that don’t contain enough information! And they do a really good job considering the inputs. But they can’t always do miracles! Also, the quality of the final result often depends on the quality of the input. For example, if you give it a super-noisy scene, don’t expect a perfect result!
Altus FAQ
Ok so if you now have the NVidia denoiser, why are you bothering with Altus?
The NVidia denoiser is based on AI routines. While this is the new frontier for certain types of algorithms, it might not always be consistent! My personal gut feeling is that the NVidia denoiser will be unbeatable for interactive manipulation of scenes, but it might take some time before it’s ready for final-frame production results. It just needs more training, and the tech is too new - it needs to go through a few more iterations!
Altus, on the other hand, is based on tried-and-tested denoising algorithms that are used in the industry today (like the Pixar denoiser). Plus, some results we’ve seen from Altus have actually out-performed many of the denoisers integrated in competing renderers. So it made sense to help integrate it into Redshift.
So… is this going to be free for Redshift users?
Sadly… no. This is not an OEM/licensing deal we have with Innobright. There will be a special version/price for Redshift users (lower price, obviously). We don’t have the specifics just yet but we’ll keep you posted.
Can this be used for interactive rendering?
It’s doubtful. A 1080p image takes a good few seconds to process (with some AOVs) so it’s not really suitable for real-time editing.
I heard I need to render two images with it! Doesn’t that make things slower in the end?
That might actually not be the case in the end! Or rather: there might be extra options that do not require this. Watch this space!
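For context on why a denoiser would want two renders at all: the usual idea behind dual-buffer denoising is to render the same frame twice with different sample seeds (each at roughly half the samples, so the total cost stays comparable). Where the two buffers disagree, the pixel is noisy and can be smoothed aggressively; where they agree, detail is preserved. The sketch below is a toy illustration of that variance-guided idea under those assumptions, not the actual Altus algorithm:

```python
import numpy as np

def blur(img, radius=2):
    """Simple box average (stand-in for a real reconstruction filter)."""
    h, w = img.shape
    p = np.pad(img, radius, mode="edge")
    acc = np.zeros_like(img)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            acc += p[dy:dy + h, dx:dx + w]
    return acc / (2 * radius + 1) ** 2

def dual_buffer_denoise(buf_a, buf_b, k=100.0):
    """Two renders of the same frame with different seeds: where they
    disagree the pixel is noisy, so lean on a smoothed estimate; where
    they agree, keep the original detail."""
    mean = 0.5 * (buf_a + buf_b)
    var = blur(0.5 * (buf_a - buf_b) ** 2)   # smoothed per-pixel variance
    weight = 1.0 - np.exp(-k * var)          # 0 = keep pixel, 1 = smooth hard
    return (1.0 - weight) * mean + weight * blur(mean)

# A flat grey frame "rendered" twice with independent noise:
rng = np.random.default_rng(2)
a = 0.5 + rng.normal(0.0, 0.1, (64, 64))
b = 0.5 + rng.normal(0.0, 0.1, (64, 64))
out = dual_buffer_denoise(a, b)

# The variance-guided result beats simply averaging the two buffers:
print(np.abs(out - 0.5).mean() < np.abs(0.5 * (a + b) - 0.5).mean())  # True
```

Note that the averaged buffers together contain the same total sample count as one normal render, which is why rendering two images doesn’t necessarily double the cost.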
How complete is the integration today?
We have a rudimentary system up and running, but we need to do quite a bit of housekeeping and polish. For example, we need an automatic way of generating the necessary AOVs for denoising without too much user input, good UIs for the various options, etc. We will probably release a “rough around the edges” version (along with the NVidia denoiser) as an “experimental” version and gather some feedback. We’ll then polish/adjust things depending on that feedback.