@BeerBaron,
My post: https://blenderartists.org/forum/showthread.php?395313-Experimental-2-77-Cycles-Denoising-build&p=3069914&viewfull=1#post3069914
And your reply:
Quote: Why do you believe that this is going to be the “magic bullet”? The traversal times are much worse than BVH for higher resolutions. For offline rendering, this is not interesting. For realtime rendering this is still way too slow, just like all other solutions. In some vague future where hardware is fast enough to take enough samples for realtime path tracing, acceleration structure build times will likely be negligible, just like for offline rendering today. We also already have raytracing hardware that can do fully dynamic scenes and is 10x more power efficient than software GPU solutions.
Well, unlike you, I actually test this stuff. Funny that for such an opinionated person you have never shown any work or tests of your own. That's a deal-breaker for credibility.
Here's some correspondence between me and Kostas Vardis, who came up with DIRT.
After scanning quickly through your paper, I see you feel there is a weakness with high-frequency details. Have you thought about Adaptive Rendering based on Weighted Local Regression (paper & code: http://sglab.kaist.ac.kr/WLR/ )?
Response from Kostas:
No, this problem occurs in methods which perform fragment-based intersections (MMRT), and this is what we fixed with DIRT. These methods have issues with high-frequency effects since the intersection test is approximate (geometry is discretized into fragments and intersections are performed against this -> check my HPG presentation). Their advantage is that they are more scalable in terms of performance (since you do not require accurate results), as you can cut down the number of samples per ray, essentially providing a trade-off between speed and quality. DIRT provides the same quality as any spatial-based method since it is based on accurate ray-primitive intersection tests, and such tricks cannot be performed there.
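To make concrete what Kostas means by the approximate test, here's a toy Python sketch of my own (not DIRT's code; the fixed-step ray march is a stand-in for testing against discretized fragments): an exact ray-primitive test always finds a thin feature, while the sampled test only finds it with enough samples per ray. That's exactly the speed-quality trade-off he describes.

```python
import math

def exact_ray_sphere(origin, direction, center, radius):
    """Exact ray-primitive test: solve the quadratic for a sphere."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

def sampled_ray_sphere(origin, direction, center, radius, steps, t_max=10.0):
    """Approximate test: march the ray in fixed steps (stand-in for a
    fragment-discretized scene); thin features fall between samples."""
    for i in range(steps):
        t = t_max * i / steps
        p = [origin[k] + t * direction[k] for k in range(3)]
        if sum((p[k] - center[k]) ** 2 for k in range(3)) <= radius * radius:
            return t
    return None

# A thin (small-radius) sphere: exact always hits, sampled needs enough steps.
origin, direction = (0, 0, 0), (0, 0, 1)
center, radius = (0, 0, 5.05), 0.01
print(exact_ray_sphere(origin, direction, center, radius))    # hit (t ≈ 5.04)
print(sampled_ray_sphere(origin, direction, center, radius, steps=50))    # None, missed
print(sampled_ray_sphere(origin, direction, center, radius, steps=5000))  # found
```

Cutting `steps` buys speed at the cost of missed high-frequency geometry, which accurate ray-primitive tests like DIRT's can't (and don't need to) do.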
Question:
>> Even though this is an offline method, I'm wondering if the depth min/max from your system's depth buffer could be used for a cut-down version of WLR, maybe even using a very short ray-length screen-space AO method to capture high-frequency geometry detail to weight the sampling. Since it's a reusable buffer, could you accumulate over multiple frames into a WLR buffer for adaptive sampling and noise removal? Being able to use multi-frame temporal re-sampling could speed things up by orders of magnitude over the current implementation (if I understand your mask system, that is: could temporally re-sampled areas that don't need recalculation just be added to the mask?).
Maybe. I would have to think about it, as I do not remember the specific details of that paper. Temporal expansion of the depth bounds could be interesting. We only employ the depth bounds for empty-space skipping in the depth domain and hierarchical traversal.
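For anyone following along, here's a toy sketch (my own illustration with a made-up tile layout, not DIRT's code) of the empty-space skipping Kostas mentions: per-tile depth bounds let you skip any tile whose [zmin, zmax] interval the ray's depth never enters, so intersection work only runs where geometry can actually be.

```python
def trace_with_depth_bounds(ray_depths, tile_bounds):
    """Empty-space skipping in the depth domain: only run intersection
    work where the ray's depth overlaps a tile's [zmin, zmax] interval.
    Returns the indices of tiles that actually need intersection tests."""
    candidates = []
    for i, z in enumerate(ray_depths):
        zmin, zmax = tile_bounds[i]
        if zmin <= z <= zmax:  # ray passes through this tile's occupied depth range
            candidates.append(i)
    return candidates

# Hypothetical screen-space ray crossing 8 tiles, with per-tile depth bounds.
ray_depths = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
tile_bounds = [(0.0, 0.5), (0.0, 0.5), (1.8, 2.6), (1.8, 2.6),
               (9.0, 9.9), (9.0, 9.9), (3.9, 4.2), (3.9, 4.2)]
print(trace_with_depth_bounds(ray_depths, tile_bounds))  # [2, 3, 6]
```

Five of the eight tiles get skipped outright; the "temporal expansion" idea would be about keeping these bounds useful across frames.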
Kostas's reply:
In general, there are several things here. I think you are mainly interested in improving convergence. While this is totally acceptable and surely needed, the actual bottleneck in these two papers is not convergence (or construction) but traversal, since: (i) the acceleration structure is not as good as a BVH, (ii) nothing is being done for coherence (there are way too many things that can be applied here), and (iii) my OpenGL implementation must be improved, as I have numerous buffers attached and redundant state changes performed all over the place. As a consequence of (ii) and (iii), my traversal times in DIRT currently seem to be resolution dependent, while, theoretically, they shouldn't be. Check the DIRT vs OptiX figure for the indirect bounces. This is both an implementation and a coherence issue. I am quite positive you can get a 2x speedup in almost everything with a better OpenGL implementation. Convergence would be the next step.
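On point (ii), one classic coherence trick, sketched here in Python by me (my own illustration, not anything from the DIRT implementation), is to bin rays by direction octant so that a batch of rays traverses similar paths through the acceleration structure and shares memory access patterns:

```python
def direction_octant(d):
    """Pack the sign bits of a direction vector into a 3-bit octant id (0-7)."""
    return int(d[0] < 0) | (int(d[1] < 0) << 1) | (int(d[2] < 0) << 2)

def sort_rays_for_coherence(rays):
    """Group rays by direction octant; coherent batches traverse similar
    paths through the acceleration structure."""
    return sorted(rays, key=lambda r: direction_octant(r["dir"]))

# Hypothetical ray batch: after sorting, same-octant rays are adjacent.
rays = [{"dir": (1, -1, 1)}, {"dir": (-1, 1, 1)}, {"dir": (1, 1, 1)}]
for r in sort_rays_for_coherence(rays):
    print(direction_octant(r["dir"]), r["dir"])
```

For incoherent indirect bounces (the DIRT vs OptiX figure Kostas points at), this kind of binning is exactly the sort of thing "nothing is being done for".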
NOW, what I was talking about with this being the magic bullet was future work: GPU manufacturers don't have to re-engineer their cards around raytracing architectures for future releases. There are lots of things that can be added to the rasterization pipeline of current designs to make this even quicker.
You also ignore the point I made about it being a deferred system (like Pixar's version): you don't need to have ALL the scene's geometry, textures, etc. in memory before starting to trace rays. That's a MASSIVE memory saver, and it's even more pronounced when talking about GPU rendering.
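A toy sketch of that deferred/streaming idea (my own illustration, not Pixar's or DIRT's actual implementation): geometry is consumed a batch at a time, so peak memory is one batch rather than the whole scene.

```python
def stream_geometry_batches(triangle_source, batch_size):
    """Deferred-style streaming: yield geometry a batch at a time instead
    of requiring the whole scene resident before tracing begins. Peak
    memory is one batch, not the full triangle list."""
    batch = []
    for tri in triangle_source:  # triangle_source can be a lazy iterator / disk reader
        batch.append(tri)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# Hypothetical 10-triangle scene streamed in batches of 4.
for batch in stream_geometry_batches(iter(range(10)), 4):
    print(batch)  # [0, 1, 2, 3] then [4, 5, 6, 7] then [8, 9]
```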
Offline rendering also benefits, as SHOWN in the paper if you read it correctly; it's even tested against Nvidia's OptiX raytracer. The DIRT system still converges to ground-truth results against OptiX, so yeah, having a scalable realtime approach that can still be ramped up to ground truth is a magic bullet.
Want a quick, decent-looking approximation, or a final render? The system works well either way.
You NEED to wind your neck in, unless you have some work of your own to prove otherwise. Haters always hate.