Realtime SVO DAG path tracer using precomputed 4D visibility fields / limit surfaces

There are lots of surfaces reflecting direct lights, not geometry. It’s somewhat similar to setting glossy reflections to zero in Cycles. Yes, reflections are easy with path tracing, but that doesn’t necessarily mean they’re cheap once you make them rough and cover your whole scene in them. They cost precious rays that could otherwise be doing diffuse bounces.

My point is not that the renderer is rubbish; in fact it’s probably quite nice. It’s just that the author used his intimate knowledge of path tracing to choose a very convenient scene, and optimized the shaders and lighting very well. That would be admirable if he were an artist. But since he’s trying to sell us on his engine, optimizing his scene that well is not very convincing.

But I’m not going to try to convince you, believe what you will. Again, it’s not that Arauna 2 is bad, it’s that Cycles is probably very close.

There is geometry reflecting in the surfaces; I can see it, even if you can’t. This was never about this new realtime path tracer being better or worse than Cycles, but about the field of realtime engines (including game-engine capabilities). No, you won’t convince me, because I can see geometry reflecting on surfaces, and this scene is in no way optimised for a realtime path tracer. If the scene had been optimised to get the very best out of the renderer, he would have picked an outdoor lighting scene (like in the Brigade videos), because that’s a hell of a lot easier than an indoor multi-light environment with heavy shadowing. Obviously I can’t convince you, but please don’t clog up my thread with your clearly wrong and misunderstood understanding of what the thread was created about in the first place. Cheers

Hi, I am the author of Arauna2. Some quick notes: Arauna2 is using actual reflections using the Phong model, so if you set the specularity to a high exponent, it will approach a mirror. No tricks there, apart from the fact that a single ‘ubershader’ is used (nothing programmable). That’s actually a quite fundamental design principle: Arauna2 (like Brigade2) is constructed specifically with high performance in mind, and sacrifices generality. Rendering must be in-core, and path tracing is in principle limited to one bounce (an unbiased mode is also available in Arauna2, but it’s not the default renderer).
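To illustrate the point about the Phong model approaching a mirror at high exponents (this is a minimal sketch, not Arauna2’s actual code), here is the standard inversion-method importance sampling for a cos^n Phong lobe. As the exponent grows, sampled directions concentrate around the mirror-reflection axis:

```python
import math

def sample_phong_lobe(exponent, u1, u2):
    """Sample a direction from the cos^n Phong lobe, expressed in a local
    frame whose z-axis is the mirror-reflection direction.

    Standard inversion method: cos(alpha) = u1^(1/(n+1)), phi = 2*pi*u2,
    where u1, u2 are uniform random numbers in [0, 1)."""
    cos_a = u1 ** (1.0 / (exponent + 1.0))
    sin_a = math.sqrt(max(0.0, 1.0 - cos_a * cos_a))
    phi = 2.0 * math.pi * u2
    # z-component (cos_a) measures how close the sample is to the
    # perfect-mirror direction: 1.0 means exactly on the axis.
    return (sin_a * math.cos(phi), sin_a * math.sin(phi), cos_a)
```

For a median sample (u1 = 0.5), an exponent of 10 lands about 20 degrees off the reflection axis, while an exponent of 100000 lands essentially on it, which is why cranking the specular exponent turns the Phong lobe into a mirror.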
Regarding the scene: this has been chosen somewhat randomly, and since I didn’t create the scene, I don’t have control over its suitability for path tracing. However, I agree it’s pretty optimal, but mostly because of the low occlusion. There’s also a vid on YouTube that shows a Minecraft scene (about 2M tris if I’m not mistaken); above ground everything is nice, but under the surface, there’s a lot of noise.
Regarding my ambitions: this is mostly an academic thing. I enjoy researching path tracing, combined with engineering, and try to combine the best in terms of algorithms with the best in terms of optimizations. And then I want to apply that to games. But that’s not happening yet, it’s still too slow. But so much fun. :slight_smile:
So… anyway, if you have any questions about the renderer, let me know.

Well, in that case I stand corrected. Slowly clearing noise compresses really poorly on YouTube, so with something like reflections you can’t be sure what you’re looking at. Anyway, it’s a cool renderer and I’d really like to see a comparison against Cycles :slight_smile:

Awesome, Mr Bikker is now a Blender Artists member. Sweet, I don’t have to try to track Jacco down; he managed to find me, lol. Rest assured, Jacco, many questions and ideas I’m working on will have to be discussed shortly. Superb work with Arauna2, and many of your articles for Intel and other papers have been a big part of my learning process. You’re a star, bud; I’ll contact you shortly. Cheers, J

More updates: I had a conversation with Brecht 6-12 months back about how, at some point, a killer algorithm (or killer combination of algorithms) should be able to solve the noise issues with path tracing / stochastic rasterisation. It looks like that point may well be near. Some very cool new papers and code have arrived that look like the magic bullet, with some work.

Layered Light Field Reconstruction for Defocus Blur: a mind-blowing paper

For some reason it looks like I can’t link directly to the paper, so:

Morgan McGuire also has this pukka new motion-blur system, with code in the paper: quality motion blur as a post-process

And this relates to my other thread about using SVOs and DAGs, mixed with new ways to cross the paths of ray tracing / path tracing and rasterisation systems, mixed with OpenSubdiv. It’s very close to what I’m trying, except I’m looking at patch creation with Catmull-Clark patches mixed with real-time triangle-based tessellation directly on the GPU.

The Transvoxel™ Algorithm: it even comes with a little helper code function.
Lengyel, Eric. “Voxel-Based Terrain for Real-Time Virtual Simulations.” PhD diss., University of California, Davis, 2010.