I’ve been thinking a lot about path tracing: its strengths, its weaknesses, and its uses. This would be neither real-time nor fully artifact-free, but I think it might just work. The target would be small animation/rendering projects that want to tackle fairly tricky scenarios but don’t have access to powerful render farms.
My idea involves the SBVH (Spatial Split BVH). This is a BVH that follows the geometry more closely by reducing, and in many cases eliminating, bounding-box overlap. Those tight bounding boxes might be usable as the box data for Voxel Cone Tracing.
This could save memory in the process of creating an environment conducive to both types of light tracing.
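To make the SBVH part concrete, here is a minimal sketch of the core split decision (all function names are my own, not from any particular implementation): at each node you compare the Surface Area Heuristic cost of the best plain object split against a spatial split that clips primitives into both children, accepting some reference duplication in exchange for much tighter, less overlapping child boxes.

```python
def surface_area(box):
    # box = ((minx, miny, minz), (maxx, maxy, maxz))
    dx, dy, dz = (box[1][i] - box[0][i] for i in range(3))
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_cost(left_box, left_n, right_box, right_n, parent_box):
    # Standard SAH: expected cost is proportional to each child's
    # surface area (hit probability) times its primitive count.
    pa = surface_area(parent_box)
    return (surface_area(left_box) * left_n +
            surface_area(right_box) * right_n) / pa

def choose_split(object_split, spatial_split, parent_box):
    # Each split candidate is (left_box, left_n, right_box, right_n).
    # A spatial split duplicates clipped primitive references into both
    # children (higher counts), but its child boxes don't overlap, so it
    # often wins on SAH cost when the object split's children overlap badly.
    c_obj = sah_cost(*object_split, parent_box)
    c_spa = sah_cost(*spatial_split, parent_box)
    return ("spatial", c_spa) if c_spa < c_obj else ("object", c_obj)
```

In practice SBVH builders only consider spatial splits when the object split's child overlap exceeds a small threshold relative to the root's area, which keeps the reference duplication bounded.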
The other half of my idea is to use Voxel Cone Tracing as an aid in sample distribution within the raytracing half, especially for indirect diffuse-like interactions. Direct light would be sampled normally.
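One way the cone results could steer sampling (a sketch under my own assumptions, not an established technique): trace a handful of wide cones over the hemisphere at the shading point, treat their radiance estimates as unnormalized weights, then importance-sample a cone and shoot the actual path-traced ray inside it, dividing by the resulting pdf.

```python
import bisect

def build_cone_cdf(cone_radiance):
    # cone_radiance: one scalar radiance estimate per cone.
    # A small floor keeps every cone sampleable, so directions the cone
    # pass underestimates can still be reached (avoids lost energy).
    peak = max(cone_radiance)
    floor = 0.05 * peak if peak > 0 else 1.0
    weights = [max(r, floor) for r in cone_radiance]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc / total)
    pdfs = [w / total for w in weights]
    return cdf, pdfs

def sample_cone(cdf, pdfs, u):
    # u in [0, 1): invert the discrete CDF to pick a cone index.
    i = bisect.bisect_left(cdf, u)
    return i, pdfs[i]  # caller divides the sample's contribution by pdfs[i]
```

The same weights could also be blended with a BRDF pdf (multiple importance sampling) so the guidance never fights the material model.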
Heterogeneous volumetric objects would use a standard sparse hierarchical voxel structure (think OpenVDB).
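For the volumetric side, here's a toy sketch of a sparse voxel store in the spirit of (though far simpler than) OpenVDB's tree: a hash map of small dense "bricks," so empty space costs nothing. All names here are mine.

```python
BRICK = 8  # 8x8x8 voxels per brick

class SparseVolume:
    def __init__(self, background=0.0):
        self.bricks = {}            # (bx, by, bz) -> flat list of densities
        self.background = background

    def _split(self, x, y, z):
        # Map a global voxel coordinate to (brick key, offset within brick).
        key = (x // BRICK, y // BRICK, z // BRICK)
        off = ((x % BRICK) * BRICK + (y % BRICK)) * BRICK + (z % BRICK)
        return key, off

    def set(self, x, y, z, density):
        key, off = self._split(x, y, z)
        brick = self.bricks.setdefault(key, [self.background] * BRICK ** 3)
        brick[off] = density

    def get(self, x, y, z):
        key, off = self._split(x, y, z)
        brick = self.bricks.get(key)
        return brick[off] if brick else self.background
```

Keeping this on the CPU fits the plan below: the dense bricks stream well from RAM, and only active bricks consume memory.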
Overall process, utilizing both CPU and GPU:
- CPU: build the SBVH.
- GPU: voxel cone tracing (solid geometry only), while the CPU handles volumetrics (for RAM reasons).
- GPU: ray tracing using the cone-guided sample distribution, while the CPU traces photons for caustics (using the cone data to figure out where to emit them).
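The stage overlap above can be sketched as follows (purely illustrative scheduling with placeholder stage functions; no real GPU work, and all names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder stages; real versions would do the actual rendering work.
def build_sbvh(scene):        return ("sbvh", scene)
def voxel_cone_pass(sbvh):    return "cone_data"
def volumetric_pass(scene):   return "volume_data"
def ray_trace(sbvh, cones):   return "beauty"
def photon_pass(cones):       return "caustics"
def combine(*layers):         return layers

def render_frame(scene):
    with ThreadPoolExecutor(max_workers=2) as pool:
        sbvh = build_sbvh(scene)                        # step 1: CPU, serial
        cones = pool.submit(voxel_cone_pass, sbvh)      # step 2: "GPU"
        volumes = pool.submit(volumetric_pass, scene)   # step 2: CPU, overlapped
        cone_data = cones.result()
        rays = pool.submit(ray_trace, sbvh, cone_data)  # step 3: "GPU"
        photons = pool.submit(photon_pass, cone_data)   # step 3: CPU caustics
        return combine(rays.result(), photons.result(), volumes.result())
```

The key dependency is that both step-3 passes consume the cone data, so the voxel cone pass sits on the critical path; the volumetric pass is the only stage free to run the whole time.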
Is this a viable approach, from a technical standpoint? Why or why not? How could this be done better? Thoughts?