DeepDenoiser for Cycles

I’d pay to have this implemented in Blender. I think I’m more excited about this than I am about EEVEE, and that’s saying something.

3 Likes

DeepBlender

I can’t make any promises.

The question is not about promises, but about how you feel in general: when?

Sorry if this sounds silly, but does the training have to be done for each sample count? I mean, could a model trained on 16-sample renders be used to denoise a 32-sample render?

By the way, the last comparison left me speechless! It answered many of my doubts about how the algorithm would handle high-frequency details at low sample counts. I’m truly impressed!

The results are so natural, even if not totally accurate, that only an AI algorithm could produce that kind of magic. I don’t think the Cycles denoiser has anything to be ashamed of here; they are different approaches with different limitations.

I’m really happy this era has come to Blender, and I really believe this project deserves the full support of the Blender Foundation and the attention of the whole Blender community.

Please send it as a commit to: https://developer.blender.org/
I’m sure it can be implemented in 2.8.
Q: Does the user still have to drive this from a script in the text editor, or is it a more automatic process now?
Thanks.

1 Like

Well, this looks amazing! It might have already been said here, but man, 205 posts… :slight_smile: So how and when is this going to be implemented in Blender? :slight_smile: (I wouldn’t mind paying for this.)

Wow, that’s unexpected. Thanks a lot!

Getting a prototype implementation that works on my computer will take at least 4 days. Making that ready for everyone will take at least 4-8 days. Those are full-time days focused on this topic. As I obviously cannot work full time on this project, those numbers are pretty useless. I will definitely take the time to do it, but I simply don’t know when this is going to be.

That’s not a silly question at all. For me, it is one of the most critical ones! All the examples I have posted are based on a model that was trained with 16 and 32 samples only. When I stopped the training, it was still improving, even though extremely slowly.
As you can see in the previous examples, it does a pretty decent job with 128 samples, even though it was never trained with anything close to that sample count.

Thanks a lot!

It is still pretty far from being ready to submit. Right now, many annoying steps inside and outside of Blender are required to get the denoised images. It has to be a very simple and user-friendly process before a commit can be made.

Thanks for the kind words. Your question should also be answered within this post, I hope :slight_smile:

3 Likes

Absolutely amazing work. AI software is going to push 3D technology further and faster than hardware technology in the coming years imo.

1 Like

The comparisons look amazing, great job so far.

You have mentioned before that passes can be used to help guide the denoiser. Have you ever thought about giving it access to the denoising passes developed by Lukas? I bring it up because the information they carry is far more detailed and of higher quality than what you get from the regular compositing passes (for instance, the generic normal pass does not include bumps plugged into the shader node’s normal socket).

1 Like

That’s indeed something I looked into. When I generated the data, I simply took all the passes I could access. At that point, the denoising passes were only available if the denoiser was actually used. As far as I know, this limitation no longer exists in at least some development versions.
I was not aware that bumps are not represented in the normal pass! Thanks for that information.

Right now, I am focusing on getting a complete development pipeline in place. The actual integration into Blender is still missing, as well as a C API for the denoiser. Once that is done, I am going to iterate through every step of the development pipeline again, starting with generating the data for all the passes. I will definitely look very closely at the denoising passes.
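
For reference, here is a minimal bpy sketch of how such pass data could be enabled before rendering. This is not the DeepDenoiser’s actual data pipeline; the property names (for example `denoising_store_passes`) are assumptions based on the 2.8x Python API and may differ between Blender versions.

```python
# Hypothetical sketch: enable regular and denoising data passes in Blender 2.8x
# so a rendered multilayer EXR contains the auxiliary inputs a denoiser can use.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

view_layer = bpy.context.view_layer
# Regular compositing passes (note: the normal pass does not include bumps).
view_layer.use_pass_normal = True
view_layer.use_pass_diffuse_color = True
view_layer.use_pass_z = True

# Denoising data passes (denoising normal, albedo, depth, ...).
view_layer.cycles.denoising_store_passes = True

# A multilayer EXR keeps all passes together in a single file per frame.
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.filepath = '//denoiser_data/frame_'
bpy.ops.render.render(write_still=True)
```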

2 Likes

This looks amazing. How will this work with animation? Can you please show an example?

Thanks.

Right now, it is for single frames only. However, there is a logical way to make it work for animations too; it is just not implemented yet.

If you want to see how this might look, check out the video on this page (jump to 0:52):
http://drz.disneyresearch.com/~jnovak/publications/KPAL/index.html

2 Likes

Thanks. I am personally interested in rendering animation, so I will be waiting for this feature to be implemented in the DeepDenoiser. The video you linked looks very interesting.

Thanks for all your work.

2 Likes

Are you part of this project?

https://blender.community/c/today/T5cbbc

No, I am just a beta tester. I am trying to help a little bit on the coding side, but I don’t know yet how useful that is going to be. It is a really good reference for the DeepDenoiser.

1 Like

So, is it different enough for your project to still be as important?

The DeepDenoiser gets most of my attention, as I want to improve it and deploy it as soon as possible. The OptiX denoiser integration was a great opportunity to check the results of another denoiser. As of now, it does not perform as well as it could, because it is not yet fed the albedo and screen-space normals it expects as auxiliary inputs. I have some code to create the screen-space normals, and that’s what I am trying to help out with. As OptiX and the DeepDenoiser are relatively close from a technical perspective, comparing them directly might give me some insights I have missed so far.
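
This is not the code mentioned above, just a minimal sketch of what converting the world-space Normal pass into camera-space (screen-space) normals might look like; the function name, the (H, W, 3) buffer layout, and the assumption of an unscaled camera are mine.

```python
# Hypothetical sketch: rotate world-space normals into camera space.
import numpy as np

def world_to_camera_normals(normals, camera_matrix_world):
    """normals: (H, W, 3) world-space normals from the Normal pass.
    camera_matrix_world: 4x4 camera object matrix (e.g. camera.matrix_world)."""
    # Only the rotation part matters for direction vectors. Assuming the camera
    # has no scale or shear, the inverse of the rotation is its transpose.
    rotation = np.asarray(camera_matrix_world, dtype=np.float64)[:3, :3]
    world_to_cam = rotation.T

    h, w, _ = normals.shape
    flat = normals.reshape(-1, 3)
    cam_space = flat @ world_to_cam.T

    # Renormalize to guard against filtering and storage error.
    lengths = np.linalg.norm(cam_space, axis=1, keepdims=True)
    cam_space /= np.maximum(lengths, 1e-8)
    return cam_space.reshape(h, w, 3)
```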

5 Likes

Please just make it work with 2.8; there’s no OptiX or other AI denoiser add-on for 2.8 yet…

1 Like

This project looks very impressive. Great work!

OptiX seems to be Nvidia-only, so what about the DeepDenoiser? Will it run on AMD cards? It seems to be the case, but I just wanted to make sure.

1 Like

My goal is certainly that it becomes part of an official 2.8x release. It is not planned to be an addon, because that would have too many practical disadvantages in my opinion.

7 Likes