Experimental 2.77 Cycles Denoising build

Okay. I tried using a new folder, cloned the Blender git repository again, checked out the libraries with SVN, and still got the same errors. I disabled warnings-as-errors. When building the solution (“Projektmappe”) I now get around 8000 warnings, and nearly all point to DNA_scene_types.h (lines 271-279).

And there is one error: Error 9207: error LNK1104: cannot open file “…\lib\Debug\bf_bmesh.lib”. L:\release\source\creator\LINK blender
I can build master from source without a problem.

@BeerBaron,

My post: https://blenderartists.org/forum/showthread.php?395313-Experimental-2-77-Cycles-Denoising-build&p=3069914&viewfull=1#post3069914

And your reply:

Quote: Why do you believe that this is going to be the “magic bullet”? The traversal times are much worse than BVH for higher resolutions. For offline rendering, this is not interesting. For realtime rendering this is still way too slow, just like all other solutions. In some vague future where hardware is fast enough to take enough samples for realtime path tracing, acceleration structure build times will likely be negligible, just like for offline rendering today. We also already have raytracing hardware that can do fully dynamic scenes and is 10x more power efficient than software GPU solutions.

Well, unlike you, I actually test this stuff. Funny that for such an opinionated person, you have never shown any work or tests of your own. That’s a deal-breaker for credibility.

Here’s some correspondence between me and Kostas Vardis, who came up with DIRT.

After scanning quickly through your paper, I see you feel there is a weakness with high-frequency details. Have you thought about Adaptive Rendering Based on Weighted Local Regression (paper & code: http://sglab.kaist.ac.kr/WLR/)?

Response from Kostas:

No, this problem occurs in methods which perform fragment-based intersections (MMRT), and this is what we fixed with DIRT. These methods have issues with high-frequency effects since the intersection test is approximate (geometry is discretized into fragments and intersections are performed against this -> check my HPG presentation). Their advantage is that they are more scalable in terms of performance (since you do not require accurate results), as you can cut down the number of samples per ray, essentially providing a trade-off between speed and quality. DIRT provides the same quality as any spatial-based method since it is based on accurate ray-primitive intersection tests, and such tricks cannot be performed there.

Question:

>> Even though this is an offline method, I’m wondering if the depth min/max from your system’s depth buffer could be used for a cut-down version of WLR, maybe even using a very short ray length screen-space AO method to capture high-frequency geometry detail to weight the sampling. As it’s a reusable buffer, could you accumulate over multiple frames into a WLR buffer for adaptive sampling and noise removal? Being able to use multi-frame temporal re-sampling could really speed things up by magnitudes over the current implementation (if I understand your mask system, that is; could temporally re-sampled areas that don’t need recalculation just be added to the mask?).
Maybe; I would have to think about it, as I do not remember the specific details of that paper. Temporal expansion of the depth bounds could be interesting. We only employ the depth bounds for empty-space skipping in the depth domain and hierarchical traversal.

Kostas reply:

In general, there are several things here. I think you are mainly interested in improving convergence. While this is totally acceptable and surely needed, the actual bottleneck in these two papers is not convergence (or construction) but traversal, since: (i) the acceleration structure is not as good as a BVH, (ii) nothing is being done for coherence (there are way too many things that can be applied here), and (iii) my OpenGL implementation must be improved, as I have numerous buffers attached and redundant state changes performed all over the place. As a consequence of (ii) and (iii), my traversal times in DIRT currently seem to be resolution-dependent, while, theoretically, they shouldn’t be. Check the DIRT vs. OptiX figure for the indirect bounces. This is both an implementation and a coherence issue. I am quite positive you can get a 2x speedup in almost everything with a better OpenGL implementation. Convergence would be the next step.

NOW, what I was talking about with this being the magic bullet was future work: GPU manufacturers don’t have to re-engineer their cards for future releases around ray-tracing architectures. Lots of things can be added to the rasterization pipeline of current designs to make this even quicker.

You also ignore the point I made about it being a deferred system (like Pixar’s version): you don’t need to have ALL the scene’s geometry, textures, etc. in memory before starting to trace rays. That is a MASSIVE memory saver, and it is even more pronounced when talking about GPU rendering.

Offline rendering also benefits, as SHOWN in the paper if you read it correctly; it is even tested against Nvidia’s OptiX raytracer. The DIRT system still converges to ground-truth results against OptiX, so yeah, having a scalable realtime approach that can still be ramped up to ground truth is a magic bullet.

Want a quick, decent-looking approximation, or a final render? The system works well either way.

You NEED to wind your neck in, unless you have some work of your own to prove otherwise. Haters always hate.

What did you actually test? I don’t have to test anything to know that it scales worse than a BVH; I just have to read the paper. You didn’t even read the paper properly before writing an email to its author, who then had to explain to you what’s actually in there.

Quote: Funny that for such an opinionated person, you have never shown any work or tests of your own. That’s a deal-breaker for credibility.

I don’t disagree, but then again I don’t care at all whether people on BA question my credibility. Opinions should always be taken with a grain of salt, no matter who utters them. On the other hand, a lot of what I say is factual, so it doesn’t matter who I am; just do your own research if you don’t trust me.

Quote: NOW, what I was talking about with this being the magic bullet was future work: GPU manufacturers don’t have to re-engineer their cards for future releases around ray-tracing architectures. Lots of things can be added to the rasterization pipeline of current designs to make this even quicker.

I see. That’s not really a “magic bullet” though, that’s more of a “pie in the sky”.

Quote: You also ignore the point I made about it being a deferred system (like Pixar’s version): you don’t need to have ALL the scene’s geometry, textures, etc. in memory before starting to trace rays. That is a MASSIVE memory saver, and it is even more pronounced when talking about GPU rendering.

I don’t see how that’s relevant for realtime rendering, so I’ll gladly ignore that.

Quote: Offline rendering also benefits, as SHOWN in the paper if you read it correctly; it is even tested against Nvidia’s OptiX raytracer. The DIRT system still converges to ground-truth results against OptiX, so yeah, having a scalable realtime approach that can still be ramped up to ground truth is a magic bullet.

How does it benefit? It’s slower than a BVH and it doesn’t scale. That’s literally what the author told you personally (it’s also in the paper). Again, I’m not talking about some hypothetical improved version; I’m talking about the actual results.

Quote: You NEED to wind your neck in, unless you have some work of your own to prove otherwise. Haters always hate.

You NEED to get your head out of the clouds. I refuse to apply your “this could be the magic bullet!” reasoning to every paper I consider promising; that doesn’t mean I’m a hater.

For your argument to make sense, lots of future unknowns would have to turn out to be true, and you didn’t even mention those in your original post. Pardon me for not buying into that.

Hi, some late-night commits from Lukas fix some bugs and add post-process denoising.
For example:

https://developer.blender.org/rB0d78ac4199d9c6d8786019710632100ecfd8df58

@Lukasstockner97, where is that magic “Postprocess” button? :slight_smile:

Cheers, mib

Hi,

first of all thank you for this build!
I tried the version from the first post and the results are amazing!

As I understand it, there has already been a lot of progress since that build, and I would like to try the new one: can you tell me where I can find a guide to build it?

Hi rvb3n, start here > https://wiki.blender.org/index.php/Dev:Doc/Building_Blender
First get master building, then you can switch to any branch, including the denoising branch.
For questions, stop by #blendercoders on freenode.net.

Cheers, mib

It is alone in a post-processing panel in the Properties column of the Image Editor.
It is weird that it is not in the Denoising panel together with the denoiser settings, which can be changed between each button press.

Thank you. I could have spent a lifetime trying to find that button :slight_smile:

Perhaps someone has already asked this; I do not remember. In the final denoising project, could it be possible to keep both the original image without denoising and the denoised image in the compositor? That is, to be able to choose the image with or without noise reduction in the Render Layers node in the compositor, and work with both at the same time.

I say yes, from what I remember. Why not add it to the render passes as a clean render layer and an LWR-filtered render layer?

something like this:


Yeah, something like that would be good.

Well, I had asked this for the following reason. In RAW photos there is usually more noticeable noise in dark areas or shadows. That’s why noise filters in RAW photo software usually have options to apply noise reduction more strongly in dark areas and preserve detail in highlight areas. I have seen in interior scenes with Cycles that noise is also more noticeable in dark areas (not talking about fireflies). So I was wondering if, through some luminance key mask between both images, I could achieve something like that (preserve detail in highlights, and strong noise reduction in dark areas).
Anyway, I could try this right now in the compositor; I’ll do it when I have some time.
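
For what it’s worth, here is a rough sketch of that idea as a compositor script in Python. It is only an illustration: the second Render Layers node stands in for wherever the denoised image actually ends up in this branch, and the node names are the standard compositor identifiers, nothing denoising-specific:

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Two inputs: the noisy render and the denoised result. The second
# Render Layers node is a placeholder for the denoised image.
noisy = tree.nodes.new('CompositorNodeRLayers')
denoised = tree.nodes.new('CompositorNodeRLayers')

luma = tree.nodes.new('CompositorNodeRGBToBW')   # luminance key
mix = tree.nodes.new('CompositorNodeMixRGB')
comp = tree.nodes.new('CompositorNodeComposite')

# The luminance of the noisy image drives the mix factor: dark areas
# (Fac near 0) take the denoised input, bright areas (Fac near 1)
# keep the noisy but detailed input.
tree.links.new(noisy.outputs['Image'], luma.inputs['Image'])
tree.links.new(luma.outputs['Val'], mix.inputs['Fac'])
tree.links.new(denoised.outputs['Image'], mix.inputs[1])  # dark -> denoised
tree.links.new(noisy.outputs['Image'], mix.inputs[2])     # bright -> noisy
tree.links.new(mix.outputs['Image'], comp.inputs['Image'])

Running the Val output through a Color Ramp first would let you shape how quickly the blend moves from denoised shadows to untouched highlights.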

It looks like images can now be denoised separately from the render session (meaning you can tweak parameters according to what gives the best result):
https://lists.blender.org/pipermail/bf-blender-cvs/2016-July/088406.html

It looks like the project is going very well so far :slight_smile:

It works fantastically: I render on GPU, switch to CPU and use post-process denoise.
You can change settings and denoise again.
The result only updates after changing the pass in the Image Editor.

Cheers, mib

Still not able to build it on 64-bit Windows.

After the last commits I get different warnings, originating from lines 271-279 of DNA_scene_types.h.

As before, there is the following error:

Error    7585    error LNK1104: cannot open file "..\..\lib\Debug\bf_bmesh.lib"    L:\release4\source\creator\LINK    blender

And this one:

Error    7586    error MSB3073: The command "setlocal"C:\Program Files (x86)\CMake\bin\cmake.exe" -DBUILD_TYPE=Debug -P cmake_install.cmake
if %errorlevel% neq 0 goto :cmEnd
:cmEnd
endlocal & call :cmErrorLevel %errorlevel% & goto :cmDone
:cmErrorLevel
exit /b %1
:cmDone
if %errorlevel% neq 0 goto :VCEnd
:VCEnd" exited with code 1.    C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets    132    5    INSTALL

I can build master without any problems.

Hey Lukas,
My build is still hanging up

Windows 7 x64, MSVC 2013 Express, CUDA Toolkit 7.5, CMake… (aging) Q6600 CPU, 8 GB RAM, EVGA 950 GPU with 368.22 drivers




"D:\MyBuilds\patchtest\make_msvc2013\source\blender\bmesh\bf_bmesh.vcxproj" (default target) (7) ->
(ClCompile target) ->
  D:\MyBuilds\patchtest\blender\source\blender\makesdna\DNA_scene_types.h(271): 
error C2220: warning treated as error - no 'object' file generated 
(D:\MyBuilds\patchtest\blender\source\blender\bmesh\intern\bmesh_marking.c) 
[D:\MyBuilds\patchtest\make_msvc2013\source\blender\bmesh\bf_bmesh.vcxproj]

Trunk, GSoC-2016-uv_tools and GSoC-2016-improved_extrusion (all recent) built OK.

Cheers -n- Happy coding

I was able to build Blender successfully thanks to this guide: http://blog.machinimatrix.org/building-blender/

But now I can’t find much information on building Blender from another branch…

I was pointed to this page: https://git-scm.com/docs/git-checkout but I don’t understand much…

Hi.
To see which branch you are currently on:

git branch

See available branches:

git branch -r

Search for a specific branch (in this case “denoi”):

git branch -r | grep -i denoi

Change to the desired branch:

git checkout soc-2016-cycles_denoising

Update within that branch:

git pull

Edit:
Oh, you’re on Windows. I do not think “grep” works for you. Anyway, everything else should work.
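
That said, on Windows “findstr” should do the same job (the /i switch makes it case-insensitive):

git branch -r | findstr /i denoi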

Thx YAFU! So after git pull I simply build again from Visual Studio?

I do not know on Windows. On Linux I just run “make all”

But try compiling the same way you usually compile Blender from master, while you are inside the soc-2016-cycles_denoising branch.
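
If I had to guess, the cycle on Windows would be something like this (the build folder name is only an example; use wherever CMake generated your .sln):

git checkout soc-2016-cycles_denoising
git pull
cd ..\build_windows
cmake .

and then build the INSTALL target from Visual Studio as you did for master. Running “cmake .” in the build folder just regenerates the project files for the new branch using the cached settings.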

Well, no luck; the process finishes with this message:

========== Build: 20 succeeded, 1 failed, 120 up-to-date, 0 skipped ==========

and if I run the build, Blender works, but I don’t have the LWR filtering checkbox in the Sampling panel… :confused:

Is anyone able to share a working Windows build?

Yes, there have been changes:

https://wiki.blender.org/index.php/User:Lukasstockner97/GSoC_2016/Documentation/Updated_workflow_proposal