Workflow for 2D image -> 3D modeling

I’ve spent quite some time developing a workflow for converting a set of 2D images into a 3D model without too much manual hassle, while still keeping good control over the process.
My objective:

  • to use low-cost tech (a quadcopter plus a photo camera) to map out an area and produce a reasonable 3D approximation of it, which can then be handed to professional modelers as a starting point, so that dimensions and placements are where they really are “in the world” instead of being guessed from photographs. These results can be generated in 6–24 hours.

The 3D model starts from the result of a technology called “structure from motion”. There’s a good video explaining that process here:

[video]www.youtube.com/watch?v=i7ierVkXYa8[/video]

I first tried to import the generated point clouds directly into Blender and texture them with generated orthophotos, but that was really tedious and wasn’t going anywhere. I also ran into crashes and other problems due to the size of the dataset.

How it works:
First, I use a calibrated camera; I used Agisoft Lens to do the calibration.
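If you don’t have Agisoft Lens, the same kind of calibration can be scripted; here’s a rough sketch using OpenCV and a set of checkerboard photos (the 9x6 board size and the calib/ folder are just example assumptions, not part of my setup):

[code]
import glob
import cv2 as cv
import numpy as np

# Inner-corner count of the checkerboard (example: a 9x6 board).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):  # folder of checkerboard shots (placeholder)
    gray = cv.cvtColor(cv.imread(path), cv.COLOR_BGR2GRAY)
    found, corners = cv.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the intrinsic matrix K and the lens distortion coefficients.
ret, K, dist, rvecs, tvecs = cv.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("camera matrix:\n", K)
print("distortion coefficients:", dist.ravel())
[/code]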

Basically, I let VisualSFM calculate the camera positions and set up CMPMVS to generate my point cloud (the result is much better than the standard PMVS2 output from VisualSFM). This cloud is too detailed, so I use CloudCompare to resample it. CloudCompare doesn’t use/need normals or faces, so at this stage it’s a very basic point cloud with only x, y, z values. When I’m happy with the result and have the detail I want, the low-density cloud is exported and an excellent Poisson surface reconstruction algorithm is run over it (I found that none of Blender, MeshLab or CloudCompare could process the point cloud with the same precision and detail preservation).
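The resampling and Poisson steps can also be scripted if you want to batch several datasets. This is not the exact tool chain I use (CloudCompare plus a standalone Poisson reconstructor), just a rough sketch of the same two steps using Open3D; the file names, voxel size and octree depth are placeholder values you’d tune per dataset:

[code]
import open3d as o3d

# Dense cloud from CMPMVS (placeholder file name).
pcd = o3d.io.read_point_cloud("dense_cloud.ply")

# Spatial resampling, comparable to CloudCompare's subsample step.
pcd_down = pcd.voxel_down_sample(voxel_size=0.05)

# Poisson reconstruction needs normals; the raw x,y,z cloud has none.
pcd_down.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))

# Screened Poisson surface reconstruction; a higher depth keeps more detail.
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd_down, depth=10)
o3d.io.write_triangle_mesh("poisson_mesh.ply", mesh)
[/code]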

The resulting mesh is then reimported into MeshLab together with the camera positions, after which it’s simply a matter of reprojecting the original images onto the mesh as a texture, generating UV coordinates in the process. Since the mesh was not rotated, scaled or translated, the camera positions are still valid. That result is exported as an OBJ with UVs and imported into Blender.
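If you do this regularly, the Blender import can be scripted too; a tiny sketch (the path is a placeholder, and Blender 3.3+ exposes the newer importer as bpy.ops.wm.obj_import):

[code]
import bpy

# Textured OBJ (+ MTL and texture image) exported from MeshLab; placeholder path.
obj_path = "/path/to/reconstruction/model.obj"

# Legacy OBJ importer add-on; in Blender 3.3+ use bpy.ops.wm.obj_import(filepath=obj_path).
bpy.ops.import_scene.obj(filepath=obj_path)
[/code]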

There are some improvements required:

  • The end result in the video shows a mismatch between the texture and the geometry. I think it’s because I did this in separate runs with different parameters, so the camera positions were different (resulting in a different texture being projected onto that geometry). I’m confirming this now.
  • The mesh wireframe in Blender looks really bad. Maybe converting tris to quads in MeshLab (or directly in Blender, see the sketch below) helps.
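Tris to quads can also be done inside Blender itself, something along these lines (the object name "model" is a placeholder; in Blender 2.7x the active object is set via bpy.context.scene.objects.active instead):

[code]
import bpy

# The imported mesh; "model" is whatever name the OBJ ended up with.
obj = bpy.data.objects["model"]
bpy.context.view_layer.objects.active = obj  # Blender 2.8+

# Merge pairs of triangles into quads where they are roughly coplanar.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.tris_convert_to_quads()
bpy.ops.object.mode_set(mode='OBJECT')
[/code]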

If anyone tries this or has tips to improve this workflow, I’d love to hear about it in this thread.

The misaligned texture turned out to be caused by using point clouds generated from two different VisualSFM runs with, apparently, different parameters. Here’s the result when the same set of parameters is used in VisualSFM: