Simulating a point cloud effect

Not sure which category this should be under, but here seemed as good as any.

Does anyone have any ideas on how to simulate a “point cloud” effect?

What I mean is: I am currently working on a project that is a fly-through of a scene. That scene is made up of 2 or 3 photogrammetry point clouds that were intentionally poorly done. Each is captured from a different angle of the same scene, superimposed over the others in space, and then alternately hidden and revealed for a “glitchy reality” effect. So each point cloud shows the scene from one general perspective (say, one is roughly head-on, the other is roughly from the side).

Because the photogrammetry was “poorly done”, each cloud looks passable from the exact perspective where most of the photos were taken, but there are huge missing chunks, especially in places that were occluded and not filled in with more pictures. Additionally, because there are not enough photos to fully resolve the depth, objects can look contorted in depth. It still looks “good”, though, because the actual photos are what texture the point clouds. Flashing between several of these point clouds creates a really interesting “glitch reality” effect.

My question is: what if I had a totally CGI scene? How could I replicate the effect of “bad photogrammetry”? How could I capture the color and depth information from a camera at one or more locations and “simulate” the process of building a point cloud?
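For what it's worth, the capture step you describe can be sketched outside Blender: render (or fake) a per-pixel depth map plus a color image from one camera, then back-project each pixel into 3D with a pinhole camera model. The sketch below is a minimal numpy illustration under assumed intrinsics (`fx`, `fy`, `cx`, `cy` are made up here), with a zeroed-out region standing in for the missing chunks you get from occlusion:

```python
# Minimal sketch: back-project a synthetic depth + color image into a colored
# point cloud with a pinhole camera model. Pure numpy; no Blender API assumed.
import numpy as np

def depth_to_points(depth, colors, fx, fy, cx, cy):
    """Back-project a depth map (H x W) and per-pixel colors (H x W x 3)
    into an (N, 6) array of xyz + rgb points, skipping zero-depth pixels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    rgb = colors.reshape(-1, 3)
    valid = pts[:, 2] > 0          # drop pixels with no depth (the "holes")
    return np.hstack([pts[valid], rgb[valid]])

# Toy 4x4 depth map with a zero-depth chunk, like a bad photogrammetry hole
depth = np.full((4, 4), 2.0)
depth[:2, :2] = 0.0                # simulated missing region
colors = np.random.rand(4, 4, 3)
cloud = depth_to_points(depth, colors, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(cloud.shape)                 # (12, 6): 16 pixels minus the 4-pixel hole
```

Doing this from two or three camera positions and overlaying the resulting clouds (without ever merging or cleaning them) should give exactly the kind of per-viewpoint gaps you describe.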

Would this be a pure coding endeavor, or is there any conceivable way of doing this with nodes?



Option 1 = Difficult
You could find or create a very cool custom shader; custom shaders have practically unlimited capabilities. To some extent a GLSL shader can be ported to OSL (GLSL, since OpenGL is popular with all developers, while OSL is limited to those doing rendering), but the real downside is that only experts can do this.
P.S. Perhaps not so difficult, since I see that they use a point texture in transparent surfaces.

Option 2 = Not Bad
Render only the vertices of the mesh (not the surfaces); however, this requires you to pump the detail up really high. For example, a cube has only 8 vertices, and that is not what you want; you would prefer a cube with 100 or so vertices.
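To make the densification idea concrete, here is a hedged numpy sketch (not Blender code; in Blender you would use a Subdivision Surface modifier instead) that turns a unit cube's 6 faces into an n x n grid of points per face, so a vertex-only render has enough samples to read as a surface:

```python
# Sketch: sample an n x n grid of points on each of a unit cube's 6 faces,
# standing in for "subdivide the mesh so its vertices alone look like a cloud".
import numpy as np

def cube_surface_points(n=5):
    """Return points on the surface of a unit cube, n*n per face."""
    lin = np.linspace(0.0, 1.0, n)
    u, v = np.meshgrid(lin, lin)
    u, v = u.ravel(), v.ravel()
    faces = []
    for axis in range(3):                 # x, y, z
        for level in (0.0, 1.0):          # the two faces normal to this axis
            pts = np.empty((n * n, 3))
            pts[:, axis] = level          # pin the face's own coordinate
            pts[:, (axis + 1) % 3] = u    # sweep the other two coordinates
            pts[:, (axis + 2) % 3] = v
            faces.append(pts)
    return np.vstack(faces)

pts = cube_surface_points(5)
print(len(pts))   # 6 faces * 25 points = 150 (edge duplicates included)
```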

Option 3 = Simple
The simplest approach is to just have the surfaces emit particles and then render those particles.
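The particle approach is essentially random surface sampling. As a hedged illustration of what the emitter is doing under the hood (pure numpy, not Blender's particle system), uniform barycentric sampling scatters points evenly over triangles; the `sqrt` on the first random number is what keeps the density uniform rather than clumped toward one vertex:

```python
# Sketch: emulate "surfaces emit particles" by scattering random points on
# triangles with uniform barycentric sampling.
import numpy as np

def scatter_on_triangles(tris, count, rng=None):
    """tris: (T, 3, 3) triangle vertices; returns (count, 3) surface points."""
    rng = rng or np.random.default_rng(0)
    idx = rng.integers(0, len(tris), size=count)   # pick a triangle per point
    r1 = np.sqrt(rng.random(count))                # sqrt -> uniform density
    r2 = rng.random(count)
    a, b, c = tris[idx, 0], tris[idx, 1], tris[idx, 2]
    return (1 - r1)[:, None] * a + (r1 * (1 - r2))[:, None] * b \
         + (r1 * r2)[:, None] * c

# One unit triangle in the z = 0 plane
tris = np.array([[[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]]])
pts = scatter_on_triangles(tris, 1000)
print(pts.shape)      # (1000, 3), every point lying in the triangle's plane
```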


Man, thank you. One of those situations where some of that seems so obvious in retrospect, but I probably wouldn’t have stumbled upon it without fumbling around for quite a while! Thanks!

Another simple option is to use the Voronoi texture. I’m using Object coordinates because this scene from blendswap didn’t have good UVs, but it should look better with UV coordinates.
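For anyone curious what the Voronoi trick is doing: the texture returns the distance to the nearest of a set of scattered feature points, and thresholding small distances gives a field of dots. A hedged numpy sketch of that idea (Blender's Voronoi node computes this per shading point on the GPU; the 0.03 radius here is an arbitrary choice):

```python
# Sketch of a Voronoi/Worley "dots" field: distance to the nearest random
# feature point, thresholded into point-like spots.
import numpy as np

rng = np.random.default_rng(1)
feature_pts = rng.random((32, 2))               # random cell centers in [0,1]^2

res = 64
ys, xs = np.mgrid[0:res, 0:res] / res
grid = np.stack([xs, ys], axis=-1).reshape(-1, 2)

# Distance from every pixel to its nearest feature point
d = np.linalg.norm(grid[:, None, :] - feature_pts[None, :, :], axis=-1)
nearest = d.min(axis=1)
dots = (nearest < 0.03).reshape(res, res)       # small-distance mask = dots
print(dots.sum() > 0)                           # some pixels fall inside a dot
```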

In fact, I prefer the particle method that @const described.
