Photogrammetry workflow

My goal is to find the best photogrammetry workflow using free software. So far I’ve got:

  1. Upload photos to 123D Catch, then export the created mesh as an OBJ.
  2. Import the OBJ into Netfabb, repair the holes, then export as an STL.
  3. Import the STL into MeshLab for Poisson surface reconstruction, then export as an OBJ (a batch sketch follows this list).
  4. Import the OBJ into Blender for touch-ups and re-texturing.
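
If step 3 ends up being repeated across a lot of scans, MeshLab can also be driven headless. This is just a rough sketch, assuming meshlabserver is on your PATH and that you've saved your Poisson settings from the MeshLab GUI (Filters → Show current filter script → Save Script) as poisson.mlx; the exact flags can differ between MeshLab releases:

```python
import subprocess
from pathlib import Path

scan_dir = Path("scans")   # repaired .stl files exported from Netfabb
script = "poisson.mlx"     # Poisson filter script saved from the MeshLab GUI

for stl in scan_dir.glob("*.stl"):
    out = stl.with_suffix(".obj")
    # -i input mesh, -o output mesh, -s filter script to apply
    subprocess.run(["meshlabserver", "-i", str(stl), "-o", str(out), "-s", script],
                   check=True)
    print("wrote", out)
```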

Is this the best method? I’d love any comments or suggestions.

What kind of objects? If buildings, then I think manual modeling is the best, fastest, and most accurate approach.

I mainly want to replicate people, mostly heads, for movie special effects. I’ve seen very few manual human models that can match the accuracy of photogrammetry. Besides, I’m not that artistic! :)

Photogrammetry might not be the best solution for special effects, at least if you want to rig and animate the model. So far I haven’t seen any photogrammetry results with decent topology, which is important if the mesh is going to deform properly. It might be more trouble than it is worth?

No plans to rig or animate, just replace the actor’s head with the model for a split second and sell it with editing, motion blur, etc.

This is commercial software that will reproduce a head or anything else from photos. The topology is pretty ugly though, and it costs money.
http://www.agisoft.ru/products

You have a couple of other options.
My workflow consists of:

  1. Generate a point cloud in VSFM (free for non-commercial use) or Bundler (open source).
  2. Generate a Poisson reconstruction in MeshLab.
  3. Generate a texture using “Parameterization + texturing from registered rasters”.
  4. Export to Blender as an OBJ.
  5. Generate a denser Poisson reconstruction.
  6. Export to Blender as a PLY.
  7. Retopo in Blender or MeshLab and generate a UV map.
  8. Bake the diffuse texture from the OBJ.
  9. Bake normal, AO, displacement, etc. from the PLY (a rough bpy sketch follows this list).
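
Steps 7 to 9 are the fiddly part, so here is a minimal sketch of the Blender side. It assumes the 2.8+ Python API, that the retopologised low-poly mesh already has a UV map and a material with an image texture node selected as the bake target, and that the object names and file paths are placeholders you would swap for your own:

```python
import bpy

# Steps 4 and 6: bring the MeshLab exports into Blender.
bpy.ops.import_scene.obj(filepath="/path/to/textured_poisson.obj")  # low-density, textured
bpy.ops.import_mesh.ply(filepath="/path/to/dense_poisson.ply")      # high-density, for baking

# Steps 8-9: bake from the dense scan onto the retopologised low-poly mesh.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'

high = bpy.data.objects["dense_poisson"]   # placeholder object names
low = bpy.data.objects["head_retopo"]

bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

# Repeat with type='DIFFUSE', 'AO', etc. for the other maps.
bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True, cage_extrusion=0.1)
```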

I’d love to use VisualSFM, but I never get results as good as 123D’s. The Poisson reconstruction is always a mess.

I find I can get better results (and a lot more control) with VSFM (PMVS2) for most projects, but it takes a good deal of practice to get the photos right.

My pointers are:

  1. Make sure you have enough overlap between the photos.
  2. Use very diffuse lighting, especially if the object has reflective surfaces: either shoot on an overcast day or use diffusers with artificial lighting.
  3. If at all possible, use a prime lens with low distortion.
  4. Be careful that all your shots are sharp.
  5. Have access to a big and powerful computer. I managed to use over 40 GB of memory on a recent point cloud computation.

kayoslll, thanks for the tips! Do you find there’s an ideal number of pictures for best results? I believe for 123D, 70 is the max you can upload.

I’m attempting this workflow, but being the stupid person I am, my main OS is OS X (actually, I love Macs). I recently became obsessed with the concepts behind photogrammetry, trying to re-create real-world environments and objects from multiple photos. But blast it all, I can’t seem to get VSFM to compile on OS X, and there seems to be surprisingly little support on the subject.

My workflow is dramatically lengthened by the fact that I have to boot into Windows, run VSFM there, then boot back into OS X, where Blender and all of my projects reside.

I’ve come across other tools that look promising but end up being too cumbersome to implement. I discovered ArcheOS and the Python Photogrammetry Toolbox (which I was surprised hadn’t been ported into a Blender add-on), and I’ve even attempted futzing around with adding multiple cameras to a Blender scene and aligning specified points manually; but that turned out to be too complicated as well, since I couldn’t assign a specific background image to just one camera (it seems to always default to the active one).
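
As an aside, in newer Blender versions (2.8+) background images live on the camera datablock, so each camera can carry its own reference photo. A minimal sketch, with a placeholder camera name and file path:

```python
import bpy

cam = bpy.data.objects["Camera.001"].data        # placeholder camera name
cam.show_background_images = True

bg = cam.background_images.new()
bg.image = bpy.data.images.load("/path/to/photo_01.jpg")   # placeholder path
bg.alpha = 0.5   # keep the reference semi-transparent while aligning points
```

The image only shows when looking through that particular camera, which is the per-camera control that was missing for me.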

This seems like a topic that would have been figured out by greater minds than mine, but info on it seems sparse at best. I’ve seen simple examples of it in software like 123D and Seene, but I would like something more like VSFM or Voodoo (which I remember using long ago), running locally on my machine where I have more control.