Hi guys, I've started working on something I've been thinking about for a while. For the last 2-3 years I've been reading everything I can get my hands on about ray tracing, path tracing, real-time rendering and subdivision surfaces.
After months and months of letting the problems sit in the back of my mind, I think I might have found a solution, but I need some feedback from coders and devs. As we all know, voxel rendering is stupid fast, but unruly and excessively memory hungry.
Voxels make ray tracing and path tracing very straightforward compared to poly rendering, because intersection testing against simple geometric forms (axis-aligned bounding boxes) is so much more efficient. I've built a simple GLSL-based path tracer (it only does diffuse and shadows right now, for testing) which I'm now looking to turn into a brand new style of renderer running on off-the-shelf rasterisation cards.
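To make the "intersect against bounding boxes" point concrete, here's a minimal sketch of the standard slab test for a ray against an axis-aligned box, the primitive an SVO traversal would call at every node. This is Python pseudocode of the general technique, not the actual GLSL from my tracer; the function name and signature are just illustrative.

```python
def ray_aabb_intersect(origin, direction, box_min, box_max):
    """Slab test: intersect a ray with an axis-aligned bounding box.

    Returns (hit, t_near) where t_near is the entry distance along the ray.
    This is the cheap test that makes voxel octree traversal so fast:
    just a few divides, min/max ops and comparisons per node.
    """
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray parallel to this slab: miss unless origin lies inside it.
            if o < lo or o > hi:
                return False, 0.0
            continue
        t0 = (lo - o) / d
        t1 = (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near = max(t_near, t0)   # latest entry across all slabs
        t_far = min(t_far, t1)     # earliest exit across all slabs
        if t_near > t_far or t_far < 0.0:
            return False, 0.0
    return True, max(t_near, 0.0)
```

In a real GPU tracer the same logic is written branchlessly with vectorised min/max, but the structure is identical.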
The idea is to convert all geometry models to SVOs (sparse voxel octrees), then to DAGs. Each voxel stores normal, colour and position data for its bounded voxel block, plus reflection and inter/intra-object visibility data. The renderer is totally voxel based and doesn't use polys in any way. The new thing I'm pushing for is for each voxel to be converted from the SVO/DAG into Catmull-Clark subdivision surfaces.
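The SVO-to-DAG step is essentially bottom-up deduplication: identical subtrees are collapsed into a single shared node, which is where the big memory savings come from. A toy sketch of that idea (hypothetical `Node` class and function names, leaf payload simplified to a single value rather than the full normal/colour/visibility record):

```python
class Node:
    """An octree node: 8 children for inner nodes, or a leaf payload."""
    __slots__ = ("children", "leaf_value")

    def __init__(self, children=None, leaf_value=None):
        self.children = children      # list of 8 Node-or-None, or None if leaf
        self.leaf_value = leaf_value  # voxel attribute payload at a leaf

def svo_to_dag(root, pool=None):
    """Collapse an SVO into a DAG by merging identical subtrees.

    Works bottom-up: children are canonicalised first, so two inner nodes
    are identical exactly when they point at the same canonical children.
    """
    if pool is None:
        pool = {}
    if root.children is None:
        key = ("leaf", root.leaf_value)
    else:
        canon = tuple(svo_to_dag(c, pool) if c else None for c in root.children)
        root.children = list(canon)
        # Children are already canonical, so identity is a valid key.
        key = ("inner", tuple(id(c) for c in canon))
    if key not in pool:
        pool[key] = root  # first time we see this subtree: it becomes canonical
    return pool[key]
```

After conversion, repeated structure (flat walls, copies of the same object) is stored once and referenced many times, which is what makes DAG voxel scenes tractable in memory.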
For this purpose I've been playing with OpenSubdiv (it works from quad mesh data, which maps nicely onto voxel-like surface volume data). I'm also looking at 4D visibility fields (not strictly needed, but they massively speed up ray tracing/path tracing for the secondary rays used for soft shadows, reflections, GI and AO).
Hence the main rendering engine is stupid fast, as it never touches a single tri, only voxels. OpenSubdiv's adaptive feature is then used to render the Catmull-Clark patches derived from the voxels, subdivided by screen-space size on the GPU via a screen-space quad (which fits perfectly with my deferred real-time rendering engine approach). The poly-rendering nature of graphics cards is still leveraged: voxels are never rendered to screen directly, only the tris tessellated on the fly by the graphics hardware.
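For the screen-space-driven adaptive part, the core decision is just "how many subdivision levels does this patch need so its refined quads land near a target pixel size". A rough CPU-side sketch of that heuristic under a simple pinhole-camera assumption (all names and the target-pixel parameter are illustrative, not OpenSubdiv API):

```python
import math

def subdivision_level(edge_world_len, distance, fov_y, screen_height_px,
                      target_px_per_quad=8.0, max_level=6):
    """Pick a Catmull-Clark subdivision depth from projected patch size.

    Projects a patch edge of length edge_world_len at the given camera
    distance, then chooses how many times to halve it (each subdivision
    level halves edge length) until quads are near target_px_per_quad.
    """
    # Approximate pixels covered by the edge under a pinhole projection.
    px = edge_world_len * screen_height_px / (2.0 * distance * math.tan(fov_y / 2.0))
    # Each level halves the edge, so the needed depth is log2 of the ratio.
    level = math.ceil(math.log2(max(px / target_px_per_quad, 1.0)))
    return min(max(level, 0), max_level)
```

Distant patches come out at level 0 (no refinement) and close-up patches climb toward the cap, which is exactly the behaviour you want feeding a deferred pipeline: tessellation cost tracks on-screen size, not scene complexity.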
This also means things like anti-aliasing are easy, and Ptex (or megatexturing, which is better suited to real-time) can be used to evaluate all the normal, colour, displacement and position data for each patch, separating the PBR shading model from the voxel engine structure. I need people to give feedback, and ultimately to help with the full implementation if possible. Let me know what you think. Cheers