Free 3D Photogrammetry with MacOS Monterey on M1 + Intel chips

I also have to see how this works when supplementing the iPhone / DSLR photos with Drone photos too.

I would love to see how something like this handles the resulting scans:


In my application it works well if I decimate the mesh in Blender while preserving most of the needed geometry detail and the UV texturing.

I use the scans a lot as a 3D blueprint to model over.


Do you model over them by hand, or do you only use the photo match app?

Object Capture does handle outdoor scenes OK with fewer image passes.
Below is a result from just 12 images.

BTW, this was shot with a 48 MP drone camera. Good to know that it can deal with large images!

OBJ file is 150 MB :wink:


That’s a very nice result for an outdoor scene from only 12 images. It would be interesting to know how the geometry matching and merging would work on large scan projects with thousands of images.

The democratisation of photogrammetry scanning feels to me like a revolution similar to the advent of mass photography. And like a photograph, it never actually goes out of date and stays as a record of a moment in time. So although the technology will keep changing and moving, a scan never really loses its value. I am realizing this especially now with the historic archive and fine art work I have recently been involved in.

I also love how it blurs the line between CGI and reality. It allows captured reality to be taken into a 3D environment and used there with all of its advantages. So much potential there, and it is so freeing. All the time I have been involved in CGI I have tried to keep one foot planted in nature and the tactile, and never to lose touch with it. I think this sort of scanning now throws open all the doors and windows and completely breaks down these barriers.


This is the main issue with Metashape and other software, for example RealityCapture, which I personally find (and I’m not the only one) much more accurate and much faster in reconstruction. Many users have been facing the problem of fine-aligning the top and bottom passes, which in both programs has to be done manually by creating masks (a very tedious activity for huge batches of photos) or by placing control points and the like. I think RC has now added a way to create masks automatically, but I’m not sure, and in any case it doesn’t completely eliminate the problem of aligning the generated point clouds to get a correct model. This is totally game changing.


What is the processing speed like for the final reconstruction?

Are you sure this API requires an M1 chip? I checked but couldn’t find any references to that.

Edit: I’m in the process of upgrading my Intel MacBook to Monterey, so I should be able to check within a few hours…


I don’t think it requires the M1 chip. I just think it’s one of the new features in Monterey. I’m sure the M1 chips help though.

I was wrong - it does work with Intel CPUs, but it requires macOS 12.

However, I read that it runs significantly faster on M1 chips!

“You’ll have an object rendered in a USDZ file that can be shared with other iPhone and iPad users for AR interactions, or even imported into other apps like Cinema 4D. It’s worth noting that since the app is based on Apple’s API, PhotoCatch for macOS requires an Intel Mac with 16 GB RAM and an AMD GPU with at least 4 GB VRAM, or any Mac with the M1 chip.”

Bart, since you know coding: I found PhotoCatch sometimes has issues exporting as OBJ.
The save dialog does not show up, and Blender does not support USDZ.
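One possible workaround while the OBJ export is flaky: a USDZ package is, per Pixar’s USD spec, a zero-compression zip archive, so you can unpack it with Python’s standard `zipfile` module and open the contained `.usdc` scene (plus its textures) in a USD-aware tool. A minimal sketch, with a hypothetical function name:

```python
import pathlib
import zipfile

def unpack_usdz(usdz_path: str, out_dir: str) -> list:
    """Extract a .usdz package (a zero-compression zip per the USD
    spec) into out_dir, exposing the .usdc scene and its textures.
    Returns the sorted list of member names found in the package."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(usdz_path) as zf:
        zf.extractall(out)
        return sorted(zf.namelist())
```

From there the extracted `.usdc` can be opened in any tool with USD import, or converted with Pixar’s `usdcat`/`usdzip` utilities.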


Here’s my first result - I shot a very lazy video of about 10 seconds and ran it through PhotoCatch on medium settings. Processing time was a few minutes. I’ll repeat this later by shooting photographs instead - I feel the video had a lot of blur, which reduced the quality of the result. It was fast and easy though :slight_smile:
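Shooting video and letting the app pull stills is convenient, but motion blur hurts the reconstruction. If you extract frames yourself (e.g. with ffmpeg) you can at least control how many you feed in. A small sketch of an evenly-spaced frame picker - the function name and the counts are purely illustrative:

```python
def sample_frame_indices(total_frames: int, wanted: int) -> list:
    """Pick up to `wanted` evenly spaced frame indices from a clip,
    e.g. to turn a 10 s / 30 fps video (300 frames) into a modest
    photo set for photogrammetry instead of using every blurry frame."""
    if total_frames <= 0 or wanted <= 0:
        return []
    step = max(1, total_frames // wanted)
    return list(range(0, total_frames, step))[:wanted]
```

The resulting indices could then drive whatever frame extractor you use; picking the sharpest frame near each index would be a further refinement.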

Yeah I’m sure - I hear they have excellent performance.

I’m not sure what I can do about that? :slight_smile:


I shared this in case you would like to make your own app.

Here’s the result of the same object reconstructed from photographs. The geometry is quite a bit better (press I, then select ‘Matcap’ in the viewer), and the textures are a LOT sharper. The reconstruction was a little slower though.

Ah I see. I’m not planning to make my own app for this :slight_smile:


Wow, that looks really good. I think @cekuhnen mentioned that with Apple’s PG app you can flip the object, i.e. shoot the bottom, and the app will stitch it together automagically.


I did try that, but it turned the model into a mess.



I had hit-and-miss results.

Shoes and furniture worked fantastically.


Object Capture is also fine with mixing images from an iPhone and a drone.

Aerial photos taken with a DJI Mavic Air 2 (48 MP)
Ground-level photos taken with an iPhone 12 Pro (12 MP)

And aerial flyovers with close-up drone passes work fine too.
Always funny to see how people react to a drone flying :wink:

Ridiculous that you can count the roof shingles or bricks

The scan shows lots of blurry wall detail - the drone was still moving when I snapped the photo!


What CPU and GPU do you have in your MacBook? I’m trying to figure out what’s compatible. I know it’s an Intel CPU and a 4 GB AMD card, but that seems vague. Thanks!