Free 3D Photogrammetry with macOS Monterey on M1 + Intel chips

Wow, that is a few. Thanks, I will have a look.

I was planning to do a few tests too, but I think this will help me make some decisions :slight_smile:

Another interesting aspect of high-MP cameras is the digital zoom they allow.

Macro cameras on Android phones are traditionally junk, like 2 to 5 MP images, lol.

With a 108 MP main camera you can crop / digitally zoom and this way photograph a small object while still filling the frame with it!
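
Just to make the math concrete: cropping a 108 MP frame by 3x still leaves roughly a 12 MP image, with no interpolation involved. A minimal Pillow sketch of the idea (the file names, resolution, and crop factor are made up):

```python
from PIL import Image

# Hypothetical 108 MP frame, roughly 12032 x 9024 pixels.
img = Image.open("IMG_108MP.jpg")
w, h = img.size

zoom = 3                        # 3x "digital zoom" purely by cropping
cw, ch = w // zoom, h // zoom   # 1/9 of the pixels remain, ~12 MP

# Center crop so the small object fills the frame.
left = (w - cw) // 2
top = (h - ch) // 2
crop = img.crop((left, top, left + cw, top + ch))
crop.save("IMG_zoomed.jpg", quality=95)
```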

Small details tend to be very tricky. I did try a small pine tree in a pot with some flowers, and that did not work too well. That was with only 16 MP and 12 MP, however.

Just saw this, wow
https://www.guru3d.com/news-story/samsung-unveils-isocell-image-sensor-with-industry’s-smallest-56μm-pixel-(200mp).html

For that I assume I need a macro lens.

Kinda funny how Samsung announces the sensors yet does not use them.

Seems Sony is still far ahead.

I decided to buy this https://blendermarket.com/products/photogrammetry-course

He makes some interesting points about what camera to choose, some of which I expected.

To make it short: he recommends decent resolution, good “spatial resolution” (which comes down to optics, ISO, …), and shooting raw.

Oh yeah, the better the sensor and optics, the better the images the software can work with.

My main reason for the tests I am doing is that students don’t have access to that.

After years of teaching and flooding YouTube with my lecture videos, I actually want to step away from that and focus more on project-based videos for the Blender Market.

The first topic will be about photogrammetry with generic equipment: digitizing objects and spaces for industrial, game asset, and interior/architecture design modeling.

So drone photogrammetry, LiDAR-assisted photogrammetry, etc. will be part of it.

My only concern at the moment is how long a lecture will stay valid / up to date with software changing so fast.

I think we have reached the peak of Apple’s LiDAR - there isn’t any new, better sensor coming, so it is safe to use it now.

Or I go with a printed book with video tutorials included - that is easy to update.

Yes, true, good point, and everyone has a mobile phone. I am still crossing my fingers for Apple to finally up the resolution of the camera, but I have no real hope that they will change or update the LiDAR.

If you come across a good solution for using 360° photos other than Metashape - which seems interesting from what I have seen, but I don’t want to buy the Professional version - please let me know :wink:

I would not call Zephyr a solution, judging from the results I have seen.

I am curious about it, as I will probably get a 360° camera to make HDRIs.

I never realized before today that you could capture a room with photogrammetry and give it HDRI textures that then light the objects placed in it. Not sure, however, if it needs to be done with RealityCapture (perhaps they will explain that at a later point).
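
In Blender terms, the idea boils down to giving the scanned room an emissive material, which Cycles then treats as a light source. A rough Blender Python sketch, assuming the scanned mesh is the active object and a hypothetical room_hdr.exr is its baked HDR texture:

```python
import bpy

# Assumed: the scanned room mesh is the active object and
# "room_hdr.exr" is an HDR texture baked from the captures.
obj = bpy.context.active_object
mat = bpy.data.materials.new("RoomHDR")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//room_hdr.exr")  # hypothetical file

emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")

# HDR texture drives the emission color, so the room lights the scene.
links.new(tex.outputs["Color"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])

obj.data.materials.clear()
obj.data.materials.append(mat)
```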

Yeah, actually you could do a quick scan - even a lame LiDAR scan - and for reflecting the environment that will be fine.

Actually a good idea, I didn’t even think about that. Better than rebuilding the scene or photoshopping a rendering onto a photo.

But for IBL I am not sure if photogrammetry can produce the HDRI texture you need.

Yes, good idea too.

The way I understood the quick explanation: instead of taking normal photos for the photogrammetry, you take bracketed exposures, merge those into HDR images in EXR format, and then do the photogrammetry.
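
For the merge step itself, a minimal OpenCV sketch, assuming a three-shot bracket from a tripod; the file names and exposure times are made up:

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # allow EXR output in recent OpenCV builds

import cv2
import numpy as np

# Hypothetical bracket: same viewpoint, three shutter speeds.
files = ["shot_-2ev.jpg", "shot_0ev.jpg", "shot_+2ev.jpg"]
times = np.array([1/250, 1/60, 1/15], dtype=np.float32)  # exposure times in seconds

imgs = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into linear HDR radiance.
calib = cv2.createCalibrateDebevec()
response = calib.process(imgs, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(imgs, times, response)

cv2.imwrite("shot_merged.exr", hdr)  # 32-bit float EXR for the photogrammetry step
```

Each camera position then contributes one linear EXR instead of one JPG.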

What I like about your idea is that if I did that and also created an HDRI with a 360° camera, I could get a similar effect, as long as the background does not need to be perfect :thinking:

I am not sure which one is more difficult to do. I might try both, however.

Not sure if PhotoCatch or Metashape supports EXR as input, however.

@cekuhnen have you tried instant NeRFs?

I have trouble taking selfies to scan my head, and often taking even basic photogrammetry of shiny, reflective or transparent objects is near impossible.

No, I have not tried it - but I have read about it.

But when 2D image AI fails to upscale an image, I am curious how it can do 3D.

Small test: one turnaround, 25 pics at 12 megapixels, 26 pics at 48 megapixels raw; the raws converted with default settings and the Apple raw profile applied in Lightroom.

Capturing at 12 megapixels feels instant; 48 megapixels has some processing time or “save time”, not sure which one it is.

Pictures are from exactly the same positions, as the iPhone is on a tripod and the shoe on a turntable.

12 megapixel JPGs are 2-2.7 MB.
48 megapixel raws are 47.2-57.7 MB (12.2-15.8 MB for the exported Lightroom JPGs).

Their 12 megapixel mode is so good on the 14 that the difference is quite small, at least in PhotoCatch.
However, it looks like they sharpen quite a bit at 12 megapixels, perhaps a bit too much.

A strange side effect - and I did try a few photo sets - is that PhotoCatch seems to give me holes in the 48 MP sets more often.



12 megapixel texture

48 megapixel texture


This is odd

My iPhone 12 Pro Max 3D models are a lot softer than the Xiaomi Note 10 Pro 108 MP models.

48 MP is in the middle and should show more detail!

Can you share the 12 MP and 48 MP raw photos?

Sure, here you go.

The setup is not perfect; I should have tried a more uniform background.

I will see if I can change it a bit; Metashape has a horrible time with the pics.

Also, I think f/1.78 might not be the best move here; with a “real” camera I probably would have used something else to get the full shoe in focus.
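
For a rough sanity check, the standard thin-lens depth-of-field formulas back that up - the ~7 mm focal length and the circle of confusion below are assumptions for an iPhone-class main camera, not measured values:

```python
def depth_of_field(f_mm, N, s_mm, coc_mm):
    """Near/far limits of acceptable focus (thin-lens approximation)."""
    H = f_mm**2 / (N * coc_mm) + f_mm                # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# Assumed values: ~7 mm physical focal length, f/1.78,
# subject at 30 cm, ~0.003 mm circle of confusion.
near, far = depth_of_field(7.0, 1.78, 300.0, 0.003)
print(f"in focus from {near:.0f} mm to {far:.0f} mm")
```

With those assumed numbers only about 2 cm around the focus plane is acceptably sharp at 30 cm, so a whole shoe will not fit in focus.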

I also wish they had implemented 8K video… could have been interesting.

Thank you for sharing.

When the baby sleeps I will take a look at it.

I stopped using Metashape - Object Capture is fine now.

I also use the 3D Scanner App on the Mac.

Btw, the $250 Xiaomi’s 108 MP photos made better 3D models than our $1700 Canon 24 MP camera.

I find it stunning how well cheap smartphones can produce good work.

I also shoot mainly in overcast weather, or in my studio where I have three LED tracing tables set up like area lights :wink:


Good point, I will post the 2x test, which is a crop of the 48 megapixel frame (scratch that: the 2x does not make a difference, it seems, as it is still only 12 megapixels).

What focal length does that have?

LED tracing tables, the kind you can use for 2D hand-drawn animation?

Starting to think the issue might be me not using raw in PhotoCatch.

Ah well, I will leave this here for others that follow this thread.

Let’s call it user error; raw shows quite a big difference.

And the .blend file, because why not.

Update: more than one rotation angle helps, but it feels like PhotoCatch ignored some pics, as I still don’t have a sole.

Left: 48 megapixels with more photos. Starting to be decently happy with that :slight_smile:

Oh hahaha, you did not use RAW mode … :wink:

Yeah, it never made much of a difference before, so I did not think of it.

Nice light setup, thanks for the idea.

Are you happy with that turntable?
I was tempted but thought it was quite pricey.

@Tiwaz Pricey, yes - the school bought it.

It makes it easier

You can turn it in steps of whatever degrees you specify.

So my Xiaomi photographs while the iPhone controls the turntable.

Works well
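
For anyone scripting something similar, the step angle is just 360° divided by the shot count - e.g.:

```python
# Even spacing for one turnaround, e.g. 25 shots like the shoe test above.
shots = 25
step = 360 / shots                                   # 14.4 degrees per shot
angles = [round(i * step, 1) for i in range(shots)]
print(step, angles[:5])  # 14.4 [0.0, 14.4, 28.8, 43.2, 57.6]
```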

More pixels means better mesh detail, no matter how expensive or cheap the camera actually is.

Some of the shoe photos had blurry focus areas - you need to be careful about this.