Realtime SVO/DAG path tracer using precomputed 4D visibility fields / limit surfaces

Hi guys, I've started working on something I've been thinking about for a while. For the last 2-3 years I've been reading everything I can get my hands on about ray tracing, path tracing, real-time rendering and subdivision surfaces.

After months and months of letting the problems wash over the back of my mind, I think I might have found a solution, but I need some feedback from coders and devs. As we all know, voxel rendering is stupid fast, but unruly and excessively memory hungry.
Voxels make ray tracing and path tracing very straightforward compared to poly rendering, because it is so much more efficient to intersection-test against simple geometric forms (bounding boxes). I've built a simple GLSL-based path tracer (it only does diffuse and shadows right now, for testing), which I'm now looking to convert into a brand-new style of renderer running on off-the-shelf rasterisation cards.
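In case anyone wants to see why box tests are so cheap, here is a minimal sketch of the standard slab-method ray/AABB test my tracer is built around (written out in C++ rather than my actual GLSL, and all the names are just placeholders):

```cpp
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Standard slab test: intersect the ray against the three pairs of
// axis-aligned planes and keep the overlapping parameter interval.
// Returns true and the entry distance tNear if the ray hits the box.
bool rayAabb(const Vec3& origin, const Vec3& invDir,
             const Vec3& boxMin, const Vec3& boxMax, float& tNear)
{
    float t0 = (boxMin.x - origin.x) * invDir.x;
    float t1 = (boxMax.x - origin.x) * invDir.x;
    float tmin = std::min(t0, t1), tmax = std::max(t0, t1);

    t0 = (boxMin.y - origin.y) * invDir.y;
    t1 = (boxMax.y - origin.y) * invDir.y;
    tmin = std::max(tmin, std::min(t0, t1));
    tmax = std::min(tmax, std::max(t0, t1));

    t0 = (boxMin.z - origin.z) * invDir.z;
    t1 = (boxMax.z - origin.z) * invDir.z;
    tmin = std::max(tmin, std::min(t0, t1));
    tmax = std::min(tmax, std::max(t0, t1));

    tNear = tmin;
    return tmax >= std::max(tmin, 0.0f); // slabs overlap and the box is in front of the ray
}

int main()
{
    Vec3 origin{0, 0, -5}, dir{0, 0, 1};
    Vec3 invDir{1.0f / dir.x, 1.0f / dir.y, 1.0f / dir.z}; // precompute 1/d once per ray
    float t;
    if (rayAabb(origin, invDir, Vec3{-1, -1, -1}, Vec3{1, 1, 1}, t))
        std::printf("hit at t = %f\n", t);
}
```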

The idea is to convert all geometry models to SVOs, then to DAGs. Each voxel contains normal, colour and position data for the bounded voxel block, plus reflection and inter/intra-object visibility data. The renderer is totally voxel based and doesn't use polys in any way. The new thing I'm pushing for is for each voxel to be converted into Catmull-Clark subdivision surfaces from the SVO/DAG.
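The exact node layout is still in flux, but roughly what I mean by every voxel carrying its own shading payload is something like the sketch below (all field names are placeholders, not a final format):

```cpp
#include <cstdint>
#include <vector>

// Per-voxel shading payload. In the DAG this sits in a separate attribute
// array so identical subtrees can still be merged; the octree itself only
// stores topology. (Field names are placeholders, the real layout is TBD.)
struct VoxelAttributes {
    uint32_t packedNormal;     // e.g. octahedral-encoded normal
    uint32_t packedColour;     // RGBA8 albedo
    uint16_t reflectance;      // simple reflectivity term for glossy rays
    uint16_t visibilitySlot;   // index into the precomputed visibility field
};

// One SVO node: an 8-bit mask saying which of the 8 octants exist, plus the
// index of the first child. After DAG compression several parents can point
// at the same child index.
struct SvoNode {
    uint8_t  childMask;
    uint32_t firstChild;
    uint32_t attributeIndex;   // index into the VoxelAttributes array (for leaves)
};

struct SparseVoxelOctree {
    std::vector<SvoNode>         nodes;
    std::vector<VoxelAttributes> attributes;
    float rootOrigin[3];
    float rootSize;
};

int main()
{
    SparseVoxelOctree tree;
    tree.rootOrigin[0] = tree.rootOrigin[1] = tree.rootOrigin[2] = 0.0f;
    tree.rootSize = 1.0f;
    tree.nodes.push_back(SvoNode{0, 0, 0}); // empty root; children added by the builder
}
```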

For this purpose I've been playing with OpenSubdiv (as it works from quad mesh data, which maps nicely onto voxel-like surface volume data), together with 4D visibility fields (not strictly needed, but they speed up ray tracing / path tracing massively for the secondary rays used for soft shadows, reflections, GI and AO).
Hence the main rendering engine is stupid fast, as it never touches a single tri, only voxels. OpenSubdiv's adaptive feature is then used to render the Catmull-Clark patch derived from each voxel, tessellated by screen space on the GPU over a screen-space quad (which fits perfectly with my deferred real-time rendering approach). The poly-rendering nature of cards can still be leveraged: voxels are never rendered to screen, only the triangles tessellated on the fly by the graphics hardware.

This also means things like anti-aliasing are easy, and using Ptex (or megatexturing, which is better suited to real-time) the renderer can evaluate all normal, colour, displacement and position data for the patch, separating the PBR shading model from the voxel engine structure. I need people to give feedback, and ultimately to help, if possible, with the full implementation. Let me know what you think. Cheers

J

Hi, this is not meant as an offence, but I always wonder why people put their knowledge and time into new projects when it would be easier to make Cycles or maybe LuxRender better. Blender really needs advanced render experts.
Brecht is already working on OpenSubdiv integration; maybe you can speak with him.
As a side note, your text is nearly unreadable without paragraphs and more whitespace.

Cheers, mib.

Is this meant to be something like a voxel mode for Cycles, or an external renderer?
I can't give you proper feedback because I'm only a beginner developer (currently developing my first path tracer) and I'm slowly starting to study Blender's code. But since I prefer voxels over polygons, I would love to see this happen.
If you need feedback or help you can contact the guy behind http://atomontage.com/; he knows voxels A-Z and he was also giving tips to mokazon about his (dead?) voxel project.
I would like to help you, but there is still a lot to learn before I can do something like this…

A historic weakness of voxels is that a lot of games and other software that use them end up with either blocky graphics or graphics that look low-resolution in terms of detail (even after using things like the marching cubes algorithm).

How do you plan on rendering highly complex scenes (the type that would otherwise have over 10 million polygons), scenes with many tiny, fine-grained details, without those details being lost in the conversion and without any blockiness? You could argue that you will go the Euclideon route, which is claimed to be the advent of unlimited detail, but their demos mainly involve extremely heavy instancing, reusing the same data thousands of times over.

I will be honest and say that I barely understood anything you wrote. But from what I gather, you are basically trying to make an offline render engine that converts models into voxels, uses those voxels to calculate GI, AO, soft shadows, etc., and then converts the voxels back into polygonal meshes. Is that correct?

If so I have a few questions, mostly regarding the mesh-to-voxels-and-back conversions.

  1. Won’t such a conversion take a long time to “bake”?
  2. Will you not lose much surface detail in the conversion back to a Catmull-Clark subdivision mesh?

Another question would be memory efficiency. Since Pixar’s subdivision technology is GPU-based, will the voxel data also be stored in GPU memory?

Sorry if my questions are stupid, I am not too familiar with how render engines work.

@mib2berlin, yep, you're not wrong. I've spoken with Zalamander about this in the past. I have pretty bad dyslexia, not only in writing but in speech (which means my grammar and spelling are, uhhhhmmm, not great). We both had a laugh when I told him about the time I went to a job interview and answered a question perfectly, word for word, but in reverse, Yoda-style. I try not to overly correct my typing with spell check etc. because it's actually quite helpful for me to see.

@Ace Dragon, yep, that's exactly why I've changed the whole idea from what's the norm; this is the research part of what I was talking about. The idea is to use all the best parts of voxels but not carry their shortcomings into the render.

I'm working on the SVO/DAG being converted, on export from the tool, into OpenSubdiv Catmull-Clark patches. No voxel is ever rendered; each voxel is converted to a patch and rendered with polys over a screen-space-aligned quad.
Hence why I can leverage OpenSubdiv in the rendering: all renderable surfaces go through your normal rasterisation poly pipeline using the tessellation unit of all modern GPUs. That means no blockiness, and anti-aliasing is simple, but all back-end rendering engine components for ray tracing / path tracing work on voxel data, with Ptex/PRT/megatexture data providing the material shading for the screen-space tessellated geometry derived from the voxels.

Try to think about it as two renderers in layers: you have a voxel system that is 99% of the engine, but to display the image each frame, the tessellation unit of the card interprets the voxel data as Catmull-Clark patches, equating to one patch per voxel in the worst case. Think of this in terms of LOD: one voxel block at far distance could contain a huge amount of shading data.
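To make the two-level idea a bit more concrete, here is a very rough outline of what one frame would look like; every stage function here is just a placeholder name for the steps described above, not real engine code:

```cpp
// Very rough per-frame outline of the "two level" idea. All of these stage
// functions are placeholders standing in for the steps described in the
// post; the bodies are deliberately empty.

struct FrameContext { /* camera, voxel DAG, GPU buffers... */ };

void selectVisibleVoxels(FrameContext&)          {} // walk the SVO/DAG, pick visible blocks + LOD
void emitPatchesFromVoxels(FrameContext&)        {} // one Catmull-Clark patch per visible voxel block
void tessellateAndRasterize(FrameContext&)       {} // GPU adaptive tessellation -> deferred G-buffer
void traceSecondaryRaysAgainstDag(FrameContext&) {} // GI/AO/soft shadows traced on voxel data only
void resolveDeferredShading(FrameContext&)       {} // combine G-buffer with the traced lighting

void renderFrame(FrameContext& ctx)
{
    selectVisibleVoxels(ctx);
    emitPatchesFromVoxels(ctx);
    tessellateAndRasterize(ctx);
    traceSecondaryRaysAgainstDag(ctx);
    resolveDeferredShading(ctx);
}

int main()
{
    FrameContext ctx;
    renderFrame(ctx); // one frame of the sketch
}
```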

Then the card can just render the polys like any normal rasterisation engine. I first started playing with the idea in my OpenGL renderer for screen-space object displacement mapping: why subdivide a model in world space, with everything that comes with that, when you can just slap a screen-space quad over the view camera and subdivide the hell out of it for free? The displacement map is applied to the object as normal, but the subdivision needed to support the displacement map is done entirely in screen space.
That's what gave me the idea of separating the voxel rendering methods from rasterisation using the tessellation unit on modern cards.

Also, to answer someone else: this is designed for realtime rendering (game engines etc.), but as part of my thinking it should also support offline rendering. Converting a mesh to 4D visibility fields would be an offline process that takes time, but if you don't need it (for offline rendering), you don't have to do it.
But by creating the 4D maps offline beforehand to help the ray-intersection testing, you can achieve massive speed increases in the secondary-bounce work when rendering in realtime.
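To give one concrete picture of what I mean by a 4D map (the real parameterisation is still up in the air, so treat this purely as an assumption of mine): index visibility by a coarse grid cell for the ray origin plus a quantised direction, and bake a conservative free-flight distance per bin so a secondary ray can skip the start of its DAG traversal.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of a "4D visibility field": visibility indexed by a coarse 3D cell
// for the ray origin plus a quantised 2D direction. For every (cell,
// direction-bin) we bake the guaranteed-empty distance a ray starting in
// that cell can travel in that direction; secondary rays use it to skip the
// first part of the DAG traversal. Sizes and the baking are placeholders.
struct VisibilityField {
    int   gridRes  = 32;    // coarse cells per axis
    int   dirBins  = 16;    // quantised directions per octahedral-map axis
    float cellSize = 1.0f;
    std::vector<float> freeDistance; // gridRes^3 * dirBins^2 entries

    VisibilityField()
        : freeDistance(size_t(gridRes) * gridRes * gridRes * dirBins * dirBins, 0.0f) {}

    // Map a unit direction to a 2D bin via standard octahedral encoding.
    void dirToBin(float dx, float dy, float dz, int& u, int& v) const {
        float a = std::fabs(dx) + std::fabs(dy) + std::fabs(dz);
        float ox = dx / a, oy = dy / a;
        if (dz < 0.0f) { // fold the lower hemisphere
            float tx = (1.0f - std::fabs(oy)) * (ox >= 0 ? 1.0f : -1.0f);
            float ty = (1.0f - std::fabs(ox)) * (oy >= 0 ? 1.0f : -1.0f);
            ox = tx; oy = ty;
        }
        u = std::min(dirBins - 1, int((ox * 0.5f + 0.5f) * dirBins));
        v = std::min(dirBins - 1, int((oy * 0.5f + 0.5f) * dirBins));
    }

    float lookup(float px, float py, float pz, float dx, float dy, float dz) const {
        int cx = std::min(gridRes - 1, std::max(0, int(px / cellSize)));
        int cy = std::min(gridRes - 1, std::max(0, int(py / cellSize)));
        int cz = std::min(gridRes - 1, std::max(0, int(pz / cellSize)));
        int u, v;
        dirToBin(dx, dy, dz, u, v);
        size_t idx = (((size_t(cz) * gridRes + cy) * gridRes + cx) * dirBins + v) * dirBins + u;
        return freeDistance[idx]; // the ray can safely start this far along
    }
};

int main()
{
    VisibilityField field;                       // baking omitted in this sketch
    float skip = field.lookup(5.0f, 2.0f, 7.0f,  // shadow-ray origin
                              0.0f, 1.0f, 0.0f); // direction towards the light
    (void)skip; // advance the DAG traversal start point by 'skip'
}
```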

Your answer to Ace sounds a bit to me like you're trying to re-invent the marching cubes algorithm with Catmull-Clark sub-d?

I've tried to think about it in those terms myself, but it's not that simple. For example, with current OpenSubdiv you still need support in the high-end modelling/export tool to convert the model from quads to sub-d patches with crease control.

The OpenSubdiv realtime section (it doesn't matter whether you use OpenCL, CUDA, OpenMP, OpenGL or DX11) is still just rendering the original model converted to patches; it's more or less the standard case of separating the original data from the transformed data.
I'm just trying to do the same from an underlying voxel rendering system. A patch is a patch: because the patches are math-based, consistent surfaces (which is why OpenSubdiv was created, to match the structure of offline-rendered models in realtime), each patch can be evaluated under a known mathematical scheme. Matching a voxel to a patch in such fixed maths is actually quite simple, relatively speaking.

The main difference from every other current-style realtime renderer is that I'm only using the tessellation features of the realtime card for final frame processing (mixed with your normal MRT buffers within GL). For a really simple picture, think of it like this: a voxel is just a box, and OpenSubdiv does much the same when converting meshes into sub-d patches. If you look carefully at the OpenSubdiv videos showing the kit, what are the cages made of? Guess what: quads. It's so simple in principle I can't believe I didn't think of it sooner.
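To show what I mean by a voxel already looking like a sub-d cage, here is a tiny sketch that turns one voxel block into the 8-vertex / 6-quad cube cage you could feed to a Catmull-Clark refiner (just the cube topology, not actual OpenSubdiv API calls):

```cpp
#include <array>
#include <cstdio>

// A voxel block is already a closed quad mesh: 8 corners and 6 quad faces.
// That cube is exactly the kind of cage a Catmull-Clark refiner expects.
// Per-edge crease weights would control how "boxy" the limit surface stays.
struct QuadCage {
    std::array<std::array<float, 3>, 8> vertices; // cube corners
    std::array<std::array<int, 4>, 6>   faces;    // 6 quads
};

QuadCage voxelToCage(float ox, float oy, float oz, float size)
{
    QuadCage cage;
    for (int i = 0; i < 8; ++i) {
        // corner i: bit 0 = +x, bit 1 = +y, bit 2 = +z
        cage.vertices[i] = { ox + ((i & 1) ? size : 0.0f),
                             oy + ((i & 2) ? size : 0.0f),
                             oz + ((i & 4) ? size : 0.0f) };
    }
    cage.faces = {{ {0, 2, 3, 1},    // -Z face
                    {4, 5, 7, 6},    // +Z face
                    {0, 1, 5, 4},    // -Y face
                    {2, 6, 7, 3},    // +Y face
                    {0, 4, 6, 2},    // -X face
                    {1, 3, 7, 5} }}; // +X face
    return cage;
}

int main()
{
    QuadCage cage = voxelToCage(0.0f, 0.0f, 0.0f, 1.0f);
    std::printf("first quad: %d %d %d %d\n",
                cage.faces[0][0], cage.faces[0][1], cage.faces[0][2], cage.faces[0][3]);
}
```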

Sorry, your mind overtakes your sentences and all the information is lost in an avalanche of words, which even those with knowledge of the topic have a hard time following. I'd not expect much feedback if I were you :smiley:

Anyways. Still sounds to me that you want to match a tessellated polygonal approximation of an iso-surface to a voxel-cloud?

That’s good, but we need to see it in action.

@3DLuver.

Sounds awesome.
Have you tried to build a simple test case yet, to see if it's at all possible?
Maybe you should write a mail to the Cycles mailing list or post on the LuxRender forum to get some of the top dogs to comment on this thread?

I've been trying to modify the conversion process from a tris-based base model (sculpt) to SVO/DAG so it also stores crease data as well as position and normal data. Still early, but it's looking promising. Then I'll have to modify the OpenSubdiv routine that converts the base mesh (cage data) into sub-d patch info for the realtime view-based tessellation system.

It's going to be tough to get done, but I've started contacting the authors of some nice recent white papers dealing with SVO/DAG methods to see if they're willing to help. Only time will tell, but in theory it should be possible and ultimately stupid fast. Matching the unlimited-detail possibilities of voxel structures with the equally unlimited detail (GPU tessellation unit permitting) of sub-d patches should mean scenes that could have the model and shading characteristics of multi-billion-poly scenes.

The fact that the realtime tessellation system with OpenSubdiv uses texture detail (true displacement maps; in theory you could even drop normal maps altogether if needed, or keep them for even more surface detail) means you could have only 2 million tris (subdivided on the GPU in realtime) but carry the modelling detail through textures, displacing the same geometric data as a 40-million-poly sculpt. The system dynamically changes the mesh resolution on the fly to only add tris where needed, based on camera distance and angle.
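The "only add tris where needed" part boils down to picking a tessellation factor per patch edge from its projected size. A minimal sketch of that maths follows (the 8-pixels-per-edge target and the small-angle projection are my own placeholder choices; in practice this would run in a tessellation control shader):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Pick a tessellation level for one patch edge from its size on screen.
// This is the usual "pixels per edge" heuristic: estimate the edge's length
// in pixels and split it so every generated edge is roughly
// targetPixelsPerEdge long. All constants are placeholders.
float edgeTessLevel(float worldEdgeLength,
                    float distanceToCamera,
                    float focalLengthPixels,      // screen height / (2 * tan(fov/2))
                    float targetPixelsPerEdge = 8.0f,
                    float maxLevel = 64.0f)
{
    // Approximate projected length of the edge in pixels (small-angle approximation).
    float pixels = worldEdgeLength * focalLengthPixels / std::max(distanceToCamera, 1e-4f);
    return std::clamp(pixels / targetPixelsPerEdge, 1.0f, maxLevel);
}

int main()
{
    float focal = 1080.0f / (2.0f * std::tan(0.5f * 60.0f * 3.14159265f / 180.0f));
    // The same 1-metre voxel edge asks for far fewer triangles at 50 m than at 2 m.
    std::printf("near: %.1f  far: %.1f\n",
                edgeTessLevel(1.0f, 2.0f, focal),
                edgeTessLevel(1.0f, 50.0f, focal));
}
```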

But because the back end of the engine uses SVOs (blocks to raycast against), getting reflections, glossy surfaces, soft shadows and multi-bounce GI is still super fast: voxel data is passed to patch data for render and shading evaluation.

I’m familiar with SVOs, but what does DAG stand for?

Directed Acyclic Graph! It’s so simple!

Seriously, here is the paper where I heard about SVO DAGs, and I suppose it's the one he's referring to.

Edit: whoops, the paper I linked is not for redistribution, so I'm removing it. Though it's the second result on Google if you search for “Sparse Voxel Octree DAG”. Not my fault if it's there :o

  • link deleted -

Directed acyclic graph (DAG). This paper is what really got my grey matter firing. Great paper. After looking at their system, I think there's even more room for decreasing the size: they find parts of the SVO that are identical and remove the redundant data by pointing every identical area at the first copy. But I don't think they have even looked at adding symmetry to the algorithm. Obviously, as they proved, lots of data is redundant, but also looking for areas that are symmetrically identical could push things even further; at this point I don't know whether they use any symmetry handling in the reduction.
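For anyone wondering what the merging in that paper boils down to, here is a toy sketch of the bottom-up dedup idea (the real implementation works level by level with sorted nodes and a much tighter layout; this only shows "identical subtree = one shared node"):

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

// Toy bottom-up SVO -> DAG reduction: two nodes are merged when their child
// mask and (already-merged) child references are identical.
struct Node {
    uint8_t childMask = 0;       // which octants exist
    std::array<int, 8> child{};  // indices into the node pool (-1 = none)
};

struct Dag {
    std::vector<Node> nodes;
    std::map<std::pair<uint8_t, std::array<int, 8>>, int> unique;

    // Insert a node whose children have already been deduplicated;
    // returns the index of the canonical copy.
    int addUnique(const Node& n) {
        auto key = std::make_pair(n.childMask, n.child);
        auto it = unique.find(key);
        if (it != unique.end()) return it->second; // identical subtree already stored
        nodes.push_back(n);
        int idx = int(nodes.size()) - 1;
        unique.emplace(key, idx);
        return idx;
    }
};

int main()
{
    Dag dag;
    Node leafA;
    leafA.child.fill(-1);
    int a = dag.addUnique(leafA);
    int b = dag.addUnique(leafA); // merged: the same index comes back
    std::printf("a=%d b=%d nodes=%zu\n", a, b, dag.nodes.size());
}
```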

Just a quick update to show things are moving FAST in this area. Jacco Bikker has just posted a video of his new realtime path tracer, Arauna 2, and it looks awesome.



This dude is the daddy when it comes to path tracers. Also, there's a new Shadertoy demo that does realtime path tracing through WebGL, including reflections and GI: https://www.shadertoy.com/view/ldsGWB. The OpenGL path tracer I've been working on is very similar to this code; we both folded the ray marching and the first diffuse bounce into one pass. I've now amended my code to use some nice little speed-ups from his code. Now I've just got to get hold of Jacco to run some of my ideas from above by him. I'll keep people updated as things progress. :slight_smile:
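For anyone wanting to try the same thing, the first diffuse bounce is just a cosine-weighted hemisphere sample around the hit normal; here is a small stand-alone sketch of that sampling step (my own version, not the Shadertoy's actual code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

// Cosine-weighted hemisphere sample around a surface normal: the standard way
// to pick the first diffuse bounce direction in a path tracer. The PDF
// cancels the cosine term, so the bounce contribution reduces to the albedo.
struct Vec3 { float x, y, z; };

Vec3 cosineSampleHemisphere(const Vec3& n, float u1, float u2)
{
    // Build an orthonormal basis around the normal.
    Vec3 t = std::fabs(n.x) > 0.5f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 b{ n.y * t.z - n.z * t.y, n.z * t.x - n.x * t.z, n.x * t.y - n.y * t.x };
    float len = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
    b = { b.x / len, b.y / len, b.z / len };
    Vec3 s{ n.y * b.z - n.z * b.y, n.z * b.x - n.x * b.z, n.x * b.y - n.y * b.x };

    // Sample a disk and project up onto the hemisphere (Malley's method).
    float r = std::sqrt(u1), phi = 6.2831853f * u2;
    float dx = r * std::cos(phi), dy = r * std::sin(phi);
    float dz = std::sqrt(std::max(0.0f, 1.0f - u1));

    return { dx * b.x + dy * s.x + dz * n.x,
             dx * b.y + dy * s.y + dz * n.y,
             dx * b.z + dy * s.z + dz * n.z };
}

int main()
{
    std::mt19937 rng(7);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    Vec3 n{0, 0, 1};
    Vec3 d = cosineSampleHemisphere(n, uni(rng), uni(rng));
    std::printf("bounce dir: %f %f %f\n", d.x, d.y, d.z);
}
```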

Note this is running on two Titans, runs at 1280x800 (so a smallish panel in Blender, not fullscreen), uses game-like polycounts, uses phong for specular rather than actual reflections, has a large number of direct lights and the walls and floors are intentionally kept grayish to reduce the reflected energy, and therefore noise.

All in all this is a very, very well chosen test-scene :wink:

Cycles could probably compete for speed quite well, and it beats this handily in the all-important shading department. The real test would be if he threw out all the lights but one or two and used high-poly assets and textures.

Look closely at the video, my friend; for example, at the weird robot thing with the glass dome cover. Those are realtime path-traced reflections. There's a realtime post-pro system with light flares etc., DOF, IES lights, soft shadows. I would love to think Cycles could compete with this for speed, but I find that very hard to believe.

PS: there's a demo download for CUDA users linked on the YouTube page. Try it out!

Honestly, I can't tell from that video. The author keeps moving the camera, never letting the noise clear up, and then it's heavily compressed by YouTube.
Maybe there's a teensy tiny part that is doing realtime reflections and refractions. Maybe not; with 76 lights in the scene we may well be looking at Phong reflections of many lights. Either way it doesn't matter: 99% of the surfaces are Phong.

Personally I don't have double Titans to do a comparison, but the scene is freely available, if you can export it from Unity to Blender.

Well, then you're blind; there are lots of reflecting surfaces in the scene. Yeah, it's a Phong model, but so is Brigade 3. That's one of the beauties of path tracers: reflections are easy (non-glossy ones are super easy), so why would they not incorporate reflections, lol. I'd suggest not rubbishing something when you're clearly wrong; whether you realise it or not, this is groundbreaking work by Jacco, as usual.