Not sure which category this should be under, but here seemed as good as any.
Does anyone have any ideas on how to simulate a “point cloud” effect?
What I mean is: I’m currently working on a project that is a fly-through of a scene. That scene is made up of 2 or 3 photogrammetry point clouds that were intentionally poorly done. Each one is captured from a different angle of the same scene, superimposed over the others in space, and then alternately hidden and revealed for a “glitchy reality” effect. So each point cloud shows the scene from one general perspective (say, one is roughly head-on, another is roughly from the side). Because the photogrammetry was “poorly done,” each cloud looks passable from the perspective where most of the photos were taken, but there are huge missing chunks, especially in places that were occluded and never filled in with more pictures. Additionally, because there aren’t enough photos to fully resolve the depth, objects can look contorted in depth. It still reads as “good,” though, because the actual photos are what texture the point clouds. By flashing between several of these clouds, a really interesting “glitch reality” effect is created.
My question is: what if I had a totally CGI scene? How could I replicate the effect of “bad photogrammetry”? How could I capture the color and depth information from a camera at one or more locations and “simulate” the process of building a point cloud?
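For reference, here’s roughly the kind of thing I’m picturing on the scripting side (just a minimal NumPy sketch, not tied to any particular renderer; the function name and parameters are made up, and it assumes you’ve already exported a color image plus a matching depth pass from a simple pinhole camera that looks down -Z, OpenGL-style):

```python
import numpy as np

def depth_to_point_cloud(depth, color, fov_x_deg, cam_to_world):
    """Back-project a rendered depth pass into a colored point cloud.

    depth:        (H, W) per-pixel camera-space depth values
    color:        (H, W, 3) RGB values from the matching color pass
    fov_x_deg:    horizontal field of view of the (assumed pinhole) camera
    cam_to_world: (4, 4) camera-to-world transform matrix
    """
    h, w = depth.shape
    # Focal length in pixels, derived from the horizontal FOV.
    fx = (w / 2.0) / np.tan(np.radians(fov_x_deg) / 2.0)
    fy = fx  # square pixels assumed

    # Pixel grid with the principal point at the image center.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2.0) * depth / fx
    y = -(v - h / 2.0) * depth / fy   # flip so +Y is up
    z = -depth                        # camera looks down -Z (assumption)

    # Homogeneous camera-space points, moved into world space.
    pts_cam = np.stack([x, y, z, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    pts_world = (pts_cam @ cam_to_world.T)[:, :3]
    cols = color.reshape(-1, 3)

    # Drop background pixels (infinite or zero depth).
    valid = np.isfinite(depth.reshape(-1)) & (depth.reshape(-1) > 0)
    return pts_world[valid], cols[valid]
```

Repeating that from a few camera positions and merging the results would give one “scan” per viewpoint, which is basically what each of my real point clouds is.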
Would this be a pure coding endeavor, or is there any conceivable way of doing this with nodes?
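In case the answer is “coding,” I’m guessing the “bad photogrammetry” part could then be faked by degrading each merged cloud, something like this (again only a sketch; the function name, hole counts, and noise scales are all invented, and carving random spherical chunks is a crude stand-in for real occlusion holes):

```python
import numpy as np

def degrade_cloud(points, colors, rng, hole_count=20, hole_radius=0.5,
                  depth_noise=0.02, view_dir=np.array([0.0, 0.0, -1.0])):
    """Fake 'bad photogrammetry' by punching holes and smearing depth.

    points:      (N, 3) world-space positions
    colors:      (N, 3) per-point colors
    hole_count:  number of random missing chunks to carve out
    hole_radius: radius of each missing chunk
    depth_noise: scale of the smear along the capture direction
    """
    keep = np.ones(len(points), dtype=bool)

    # Carve out random spherical chunks, standing in for areas
    # that were occluded or never covered by enough photos.
    for _ in range(hole_count):
        center = points[rng.integers(len(points))]
        keep &= np.linalg.norm(points - center, axis=1) > hole_radius

    pts = points[keep].copy()

    # Smear the surviving points along the capture direction to mimic
    # poorly resolved depth (the "contorted in depth" look).
    pts += view_dir * rng.normal(0.0, depth_noise, size=(len(pts), 1))

    return pts, colors[keep]
```

But if something equivalent can be built with nodes instead, I’d love to hear how people would approach it.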