Open-source 3D scanning with one camera + Blender + PPT + MeshLab

Hello, I don't know if this has been pointed out before, but
I found this cool tutorial on how to make a 3D scan using an open-source program called Python Photogrammetry Toolbox (PPT) together with MeshLab,
and on the same webpage there is an article about using Blender's tracking system to generate the point cloud, with PPT and MeshLab handling the meshing/texturing.

I think it's really cool :smiley:

By the way, for higher-resolution texturing than a video can provide, we could shoot a stop-motion-style sequence of high-resolution photos along a travelling path. For small objects, a good way to do this might be a table dolly with a magic arm over a flat surface (here is an eBay link to what I mean; I'm in no way related to the seller, it's just an example). I believe the camera is what should move, not the object (as opposed to rotating the object in front of a tripod), because of the lighting changes and the background tracking.

So the question arises: PPT is open source (and so is MeshLab), so perhaps it could be integrated into Blender. Even though I don't mind using both programs, it would most definitely be a cool feature to have.


Thanks for those links, it's indeed really impressive :slight_smile:


I've been looking for a Linux alternative to 123D Catch.

That just blew my mind.

Yay, thanks for those links. I've tried different 3D scanners like DAVID Laserscanner, but those aren't open source. I just gave this a try under Windows and it's working :smiley:

How did you manage to install PyQt-Py2.6-x64-gpl-xxx? I don't understand how to install it, and it is required to run PPT. Please explain this to me :slight_smile:

You must click here.

Just found another open-source program for 3D scanning.

Thanks for sharing! This looks very interesting.


Does anyone know of an explanation or tutorial about how the set of photos should be taken?

I really admire the ATOR team and follow their work closely.
I wish there was a more controlled method than the “million pictures -> billion points” workflow though.

insight3d has a very nice approach where you can "build" your own geometry based on where it appears in multiple pictures (you don't need to film the object, which can be very cumbersome). You just pick points on the photograph and it will correlate them in 3D. Then you can project the texture in the same pass.
Unfortunately, insight3d hasn't seen any development for a very long time.

I would love a good workflow for this kind of work, as good and precise reference models are very important when doing client arch work.
I guess it could be done somehow in the clip editor, if the matching engine behind Blender is capable of using different pictures…?

Nice software. I tried it with the Kermit example, but the result is not that great :frowning:

@Yafu: this info is taken from a paid program, but the same guidelines should apply for taking a good set of photos:

Photography tips: The best Cubify Captures start with the right photos. These tips will help you understand how to take photos to best advantage. We highly recommend reviewing them before capturing images to ensure the best possible outcome.

  • Take lots of images with about 50% or more overlap from one image to the next. We recommend 30-50 images from high and low angles if you want to get all the way around your object.

  • Keep the object still during the entire shoot. You should move around the object. Turning or shifting the object will likely result in a poor or failed Capture model.

  • Make sure the object fills the frame. If you want more detail, don't zoom in; get closer to the object. It is okay to get closer to the object to capture more detail as long as you also have shots from farther away.

  • Your object should have a non-repeating texture. This does not include fur or hair; those are very difficult to reconstruct digitally.

  • If the subject doesn’t have much texture, put it on a textured background (newsprint works really well). You can edit the newsprint out in your 3D model.

  • Make sure the image is sharp, not blurry. This means using a wider angle lens (not zoom) and making sure that there is plenty of light. Low light photos/videos tend to be very noisy and/or blurry and usually result in a poor or failed Capture model.

  • If you do not have a good familiarity with your camera settings, just leave it on Auto. If you’re using the non-Auto settings on your camera, keep the camera settings the same throughout your entire shoot.

  • Keep the flash off. Flash creates inconsistent shadows from photo to photo and usually results in a poor or failed Capture model.

  • Upload the original images/video to Cubify Capture. No need to resize, compress, or crop them, Cubify Capture will take care of that.
Thanks!
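As a back-of-envelope sanity check of the "50% overlap, 30-50 images" numbers above (my own estimate, not from the guide, and assuming a typical ~50° horizontal field of view): if consecutive shots on a circle around the object must overlap by a given fraction, the angular step between camera positions is roughly the field of view times (1 - overlap).

```python
import math

def shots_per_ring(fov_deg, overlap=0.5):
    """Rough estimate of how many photos are needed for one full ring
    around an object, so that consecutive photos overlap by `overlap`.
    Step angle between camera positions <= fov * (1 - overlap)."""
    step = fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / step)

n = shots_per_ring(50)   # ~50 degree FOV, 50% overlap -> 15 shots per ring
total = 2 * n            # one high ring + one low ring -> 30 shots
print(n, total)          # 15 30
```

Two rings (high and low angles) at 50% overlap lands right at the low end of the recommended 30-50 images, which is reassuring.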

That was the explanation I needed. Now, I’ll see if I can get something with these programs.

@YAFU haha, you're welcome :). So, did it work for you?

I get an error message saying:

“TabError: inconsistent use of tabs and spaces in indentation”

This software is completely broken; it does not work. I get all sorts of errors, both TabError and SyntaxError.

@philosopher - it seems unlikely that indentation errors would make it into a release, but I'm not even sure which application you are referring to?
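For reference, Python raises TabError when a script's indentation mixes tabs and spaces, which often happens after a script has been opened and re-saved with different editor settings rather than shipped that way. A minimal sketch (not PPT's actual code) that reproduces and fixes the error:

```python
# Minimal reproduction (not PPT's actual code): one line indented with a
# tab, the next with spaces, which Python 3 rejects as ambiguous.
src = "def f():\n\tx = 1\n        return x\n"

try:
    compile(src, "<example>", "exec")
    raised = False
except TabError:
    raised = True
print(raised)  # True

# Normalizing tabs to spaces makes the indentation consistent again.
fixed = src.expandtabs(8)
compile(fixed, "<example>", "exec")  # compiles without error
```

So if you edited any of the .py files before running them, converting the tabs to spaces (or vice versa) consistently should clear both the TabError and the follow-on SyntaxErrors.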