Cycles noise reduction with VapourSynth

I recently tested noise reduction with the VapourSynth framework.
The software is heavily inspired by Avisynth, has the same plugin power, and is cross-platform.
This is my result:

As you can see, this is Python code. VapourSynth runs on Python.
I think it would be GREAT if it were possible to get a pipe: “blender compositor node” -> VapourSynth -> “blender compositor node”.


This looks pretty neat, I’ll have to give it a try. Is this only for animated noise, or does it work with static images too? Perhaps since it’s all Python it shouldn’t be so hard to make an add-on…

I’m not terribly familiar with Python; how do I use this?

In a word: incredible. I really like that BMW update.

The result is great. Could you make a YouTube tutorial on how to do this?

Absolutely amazing, thank you for sharing!

Neat results!
But the program is highly complicated for non-coding people, so a tutorial would be nice.

Additionally, I would love to see a comparison between the VapourSynth output and the results of the Progressive Animation Render add-on.

That add-on doesn’t do any noise reduction at all (it just renders the animation several times and combines them, à la image stacking), and last I checked it’s a bit buggy too. I plan to rewrite it… eventually.

Vapoursynth looks really sweet!

Please, any clue about how to use the script?

Should it be applied outside Blender, in the folder where the JPG images have been saved?

EDIT / Update

As brothermechanic said, you can use VapourSynth from Gentoo Linux.

For users of Ubuntu and its derivatives: “djcj” (the PPA maintainer) has already solved the problem with “imwri” and the missing filters in the PPA. Thanks, djcj!

sudo add-apt-repository ppa:djcj/vapoursynth
sudo apt-get install vspipe vapoursynth-extra-plugins vapoursynth-editor

It is recommended that you also install “ffmpeg” and “libx264-xxx” (where xxx is the version, depending on your distro).

When you see in the script a line like:

src = core.imwri.Read('camera1/%04d.jpg', firstnum=1, alpha=False)

Remember to replace the path there with the one where your images are stored. For example, if you saved your images from Blender as “png” in “/home/YOUR_USER/render images”, the above line should be:

src = core.imwri.Read('/home/YOUR_USER/render images/%04d.png', firstnum=1, alpha=False)
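For illustration, the `%04d` part is a printf-style pattern that imwri expands into the numbered frame filenames. A hypothetical sketch of that expansion in plain Python (the path is a placeholder):

```python
# Hypothetical illustration of how a printf-style '%04d' pattern maps
# to numbered frame files (imwri does this expansion internally).
pattern = '/home/YOUR_USER/render images/%04d.png'

# firstnum=1 means the sequence starts at 0001.png
frames = [pattern % n for n in range(1, 4)]
print(frames[0])  # /home/YOUR_USER/render images/0001.png
```

So each frame number is zero-padded to four digits, which is exactly how Blender names its rendered image sequence by default.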

From a terminal opened in the directory where you have the script, I use the following command to get the video:

vspipe --y4m script.vpy - | ffmpeg -i pipe: -vcodec libx264 -crf 10 encoded.mp4

This creates a video at 30 fps (later I will explain how to change the frame rate). “crf 10” is the video quality: a lower value means higher quality and a larger file size (see the ffmpeg manual).
In the script in post #51 you need to make some modifications to get the output with the above command; otherwise you need to configure the output path in a line near the end of the script.
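For reference, here is a minimal `script.vpy` sketch that would work with the `vspipe` command above. This is a sketch only: it assumes the imwri plugin is installed, and the path, first frame number and frame rate are placeholders you must adapt (older VapourSynth releases use `vs.get_core()` instead of `vs.core`):

```python
import vapoursynth as vs

core = vs.core  # on older releases: core = vs.get_core()

# Read the numbered frames rendered from Blender (placeholder path)
src = core.imwri.Read('/home/YOUR_USER/render images/%04d.png',
                      firstnum=1, alpha=False)

# Tag the clip with a frame rate so the y4m pipe carries 30 fps
clip = core.std.AssumeFPS(src, fpsnum=30, fpsden=1)

# vspipe streams whatever is set as output to stdout
clip.set_output()
```

Changing `fpsnum`/`fpsden` in `AssumeFPS` is also how you change the frame rate of the resulting video, since the y4m pipe carries the clip’s own fps.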

Script collection:
Scripts Denoise for Blender_Cycles with (4.4 KB)

Remember that some of the filters used only support 8-bit images, so unfortunately with these scripts we are limited to 8-bit images.
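As a hypothetical illustration of what that 8-bit limit means: each channel holds values 0–255 instead of 0–65535, so higher-precision renders get truncated roughly like this:

```python
# Sketch: reducing a 16-bit sample to 8 bits discards the low byte,
# which is the precision lost when a filter chain is 8-bit only.
def to_8bit(v16):
    return v16 >> 8  # keep only the high 8 bits

print(to_8bit(65535))  # 255 (16-bit white -> 8-bit white)
print(to_8bit(255))    # 0   (detail below one 8-bit step is lost)
```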

And thanks to ‘brothermechanic’. All the scripts are basically mods of his scripts.

After spending 12 days rendering a Cycles animation sequence across multiple computers…this looks really interesting.

When originally rendering the animation, do you use the same sampling seed value for all of the frames, do you turn on the random seed value setting, or does it not matter?

You need to animate the seed value with the frame number; otherwise you would not get that type of jittery grain.
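The effect can be sketched with Python's `random` module (an analogy only, not Cycles itself): a fixed seed reproduces the identical noise pattern on every frame, so temporal filters have nothing to average away, while a per-frame seed gives independent noise that does average out.

```python
import random

def noise(seed, n=4):
    """Toy stand-in for a render's noise pattern at a given seed."""
    random.seed(seed)
    return [random.random() for _ in range(n)]

fixed = [noise(0) for frame in range(3)]         # same seed every frame
animated = [noise(frame) for frame in range(3)]  # seed follows the frame

print(fixed[0] == fixed[1])        # True: identical grain, frame to frame
print(animated[0] == animated[1])  # False: jittery grain that can average out
```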

OK, this is a major pain in the ass to set up. I’m getting an error in the vapoursynth editor:

Python exception: No attribute with the name imwri exists. Did you mistype a plugin namespace?

It’s whining about ImageMagick but I can’t figure out what the deal is exactly. I used a build from the PPA.

It seems that it is related to an ImageMagick plugin, and apparently it is not included in the packages from that PPA:

So how do I get it to read an image without imwri? It doesn’t want to compile because it can’t find a “tesseract” command-line OCR package, even though it’s clearly there…

Can the parameters be noodled in the composition editor? That would be really ideal.

@Pesho, I’m on Kubuntu 14.04 and I’m also having a lot of problems compiling. To compile without OCR, use:

./configure --disable-ocr --enable-plugins

Besides, I get the error:

configure: error: imwri plugin requires ImageMagick with quantum depth of 16.

but the ImageMagick documentation says that it is compiled by default with “--with-quantum-depth=16”. So I do not know why the Ubuntu packages do not use depth=16.

Now I’m compiling ImageMagick myself, but without uninstalling the imagemagick packages, because trying to uninstall them creates a lot of dependency problems.

Compiled … But now I get the error:

vspipe: error while loading shared libraries: cannot open shared object file: No such file or directory

that they mention here:
So, trying to fix it…

I got exactly the same issues on Linux Mint 17, which is based on 14.04 (Trusty). I gave up when it brought up the quantum-depth setting of 16, because no such ImageMagick++-dev packages exist.

Even after commenting out the line that uses imwri, it starts complaining about the next line:

No attribute with the name fmtc exists.

So I’ve more or less given up until the OP chimes in on the overly specific build of VapourSynth they’re using.

I compiled and installed imagemagick:

and then I was able to compile VapourSynth. To avoid the errors it gives me, I think I should run it with:

PYTHONPATH=/usr/local/lib/python3.4/site-packages LD_LIBRARY_PATH=/usr/local/lib vspipe

(with Python 3.4 installed)

But I need @brothermechanic to give me some clue about how I should use this script. I understand that it should be applied to the resulting Blender images (0001.jpg, 0002.jpg, …), but I have no idea how.

Did you install vapoursynth-editor? That’s where I get all the errors from. Line 4 should be the one that indicates where the image is, and that’s the one giving me the imwri error. Did you get any fmtc-related errors?