Does anyone have tips on returning/calculating an array of partially occluded edges from the current view?
I’m projecting 3D geometry to 2D for export, but when I overlay the projected edges, some that should be partially occluded overlap the projected faces. Fully occluded edges are fine because they aren’t returned as visible edges, but partially occluded edges are returned as visible.
I’ve tried several approaches (using the viewport overlay edges as a mask in a GPU shader, using the 3D z-buffer before projecting, and testing the 2D projection for edge overlaps), but none have been successful.
Using a mask in a GPU shader did sort of work, but I need the edge coordinates to export them to file, so on its own it isn’t a workable solution.
I’m not asking for code but if you have any suggestions or pointers for the best approach, please let me know.
Depends on how fast you need it to be… The most direct and accurate method would be raycasting from the viewport camera to each visible edge’s verts. Build a list of the verts from that initial set that are occluded; any edge that uses an occluded vert is partially occluded.
There might be some way to leverage a BVH tree to do overlap testing, but a BVH tree takes time to build, so it might not be any faster than just raycasting anyway.
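Something like this, roughly (untested sketch; it uses the scene camera’s origin as the viewpoint, so for a free viewport view you’d derive the origin from the region data instead, and `obj` and the epsilon are placeholders):

```python
import bpy

scene = bpy.context.scene
depsgraph = bpy.context.evaluated_depsgraph_get()
obj = bpy.context.active_object  # placeholder: the mesh being exported
cam_origin = scene.camera.matrix_world.translation
mw = obj.matrix_world
eps = 1e-4  # stop the ray just short of the vert so its own faces don't count as hits

occluded = set()
for v in obj.data.vertices:
    v_world = mw @ v.co
    direction = v_world - cam_origin
    dist = direction.length
    # Scene.ray_cast works in world space and tests every object in the view layer
    hit, *_ = scene.ray_cast(depsgraph, cam_origin, direction.normalized(), distance=dist - eps)
    if hit:
        occluded.add(v.index)

# any edge that touches an occluded vert is at least partially occluded
partially_occluded = [e for e in obj.data.edges
                      if e.vertices[0] in occluded or e.vertices[1] in occluded]
```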
I tried a variation of the raycasting, but it had two issues with this problem. The first is that once you identify a partially occluded edge (by raycasting to see whether both verts are visible), you then have to raycast along the edge until you no longer intersect whatever is in front of it. With lots of edges this is very slow, and choosing the distance between raycasts is difficult, as you may miss the precise point where the edge stops being occluded.
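For what it’s worth, once two samples along the edge disagree, the transition between them can be pinned down by bisection rather than by guessing a step size; a rough sketch, where `is_point_visible` stands in for whatever visibility raycast is used:

```python
from mathutils import Vector

def find_transition(p_visible, p_occluded, is_point_visible, iterations=20):
    """Bisect along the edge between a visible point and an occluded one,
    converging on the spot where the edge passes behind the occluder."""
    a, b = Vector(p_visible), Vector(p_occluded)
    for _ in range(iterations):
        mid = a.lerp(b, 0.5)
        if is_point_visible(mid):
            a = mid  # the transition lies between mid and the occluded end
        else:
            b = mid  # the transition lies between the visible end and mid
    return a.lerp(b, 0.5)
```

You still need coarse samples along the edge to find a visible/occluded pair in the first place, though, which is also what it would take to catch the case below.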
The second issue is that some edges are occluded only in the middle, so both end verts remain visible. This makes it very difficult to find partially occluded edges with this technique.
I don’t mind if the calculation takes a little time, so I’ll look into BVH tree overlap testing. Cheers
Ah okay, that’s a much different beast; I thought you were building a list of entirely visible edges. Your shader-based approach is sounding much better right about now. If the only problem was getting the results back into a list of coordinates, there’s probably a clever way to convert the 2D buffer results back into 3D. Using the input list of edges and a bunch of 2D intersection coordinates, it shouldn’t be too difficult to transform the input edges to screen-space coordinates, do a point-to-line test with each intersection point to determine which edge it belongs to, and then project the 2D intersection back onto the edge vector.
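Something like this for the matching step, as a rough sketch (it assumes you have the 3D viewport’s `region` and `rv3d` at hand, and `hit_2d` / `edges` are placeholder inputs):

```python
from bpy_extras.view3d_utils import location_3d_to_region_2d
from mathutils import Vector
from mathutils.geometry import intersect_point_line

def match_hit_to_edge(hit_2d, edges, region, rv3d, tol=1.5):
    """Find which projected edge a 2D hit lies on, then map it back to 3D.
    `edges` is a list of (v1, v2) world-space Vector pairs."""
    for v1, v2 in edges:
        a = location_3d_to_region_2d(region, rv3d, v1)
        b = location_3d_to_region_2d(region, rv3d, v2)
        if a is None or b is None:
            continue  # an endpoint projects behind the view
        closest, t = intersect_point_line(Vector(hit_2d), a, b)
        if 0.0 <= t <= 1.0 and (Vector(hit_2d) - closest).length <= tol:
            # reusing the 2D parameter t on the 3D edge is exact for
            # orthographic views; under perspective it's an approximation
            return (v1, v2), v1.lerp(v2, t)
    return None, None
```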
I agree, this seems like a good direction to go in. The problem, though, is that with GPU shaders you can’t return anything (as far as I know). If you know how, please let me know. As you say, if you could get them back as a list of coordinates, the rest would probably be quite trivial.
That’s true, you can’t access that information directly from the shader, but you can get the GPUTexture’s buffer and read the pixels back on the CPU to recover the 2D coordinates that way.
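Roughly like this, following the offscreen-rendering pattern from the gpu module docs (the actual edge-mask draw pass is elided; `draw_edges()` is a hypothetical stand-in for it):

```python
import gpu

WIDTH, HEIGHT = 512, 512
offscreen = gpu.types.GPUOffScreen(WIDTH, HEIGHT)

with offscreen.bind():
    fb = gpu.state.active_framebuffer_get()
    fb.clear(color=(0.0, 0.0, 0.0, 0.0))
    # draw_edges()  # render the masked edge pass into the offscreen buffer here

    # read RGBA back to the CPU; returns a gpu.types.Buffer
    buf = fb.read_color(0, 0, WIDTH, HEIGHT, 4, 0, 'UBYTE')

offscreen.free()

# collect the 2D pixel coordinates wherever the mask pass wrote something
buf.dimensions = WIDTH * HEIGHT * 4
pixels = list(buf)
coords_2d = [(i % WIDTH, i // WIDTH)
             for i in range(WIDTH * HEIGHT)
             if pixels[i * 4 + 3] > 0]  # any pixel with nonzero alpha
```

From there the 2D coordinates can go straight into the edge-matching step above.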