Free 3D Photogrammetry with macOS Monterey on M1 + Intel chips

I’m not sure (I’m on my Linux laptop right now); it’s a 2018 MacBook Pro, so Intel/AMD I think.

Image upscaling does not always result in finer meshes.
But it does do something else: left 430 MB, right 15 MB file size, lol

Upscale done with Pixelmator Pro ML Super Res.

Interesting how the upscaled images show more of the ground.


I am curious about how many images are too many and how many are too few. I was on set and did a quick and dirty shoot of the actor, running the camera every which way around them. My understanding is that I bring it into an NLE, speed up the footage so that what is left is 1 frame out of every ###, and then export that as an image sequence. My question is, how much do I speed the footage up by? 200%? 500%? The more the footage is sped up, the fewer images I end up with in the image sequence, but obviously if I just feed the software hundreds of frames, that would be overkill as well.

Any thoughts?
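
In case it's useful, the decimation itself can also be done outside the NLE. A minimal sketch in Python with OpenCV (the clip name, output folder and `STEP` interval are all placeholder values to tune):

```python
# Sketch: grab every Nth frame from set footage for photogrammetry.
# Assumes OpenCV (pip install opencv-python); paths and STEP are placeholders.
import cv2
import os

VIDEO = "actor_orbit.mov"   # hypothetical input clip
OUT_DIR = "frames"
STEP = 12                   # keep 1 frame out of every 12 (tune to taste)

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO)
kept = index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % STEP == 0:
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{kept:04d}.png"), frame)
        kept += 1
    index += 1
cap.release()
print(f"kept {kept} of {index} frames")
```

For reference, keeping 1 frame in 12 from 24 fps footage leaves roughly two frames per second of camera movement, which is the same thing as a 1200% speed-up in the NLE.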

I don’t like the film approach because:

- motion blur
- you don’t hand-capture exactly what’s needed
- 4K frames are smaller than photos

As for the amount of images, that depends.

First, the size of the image is most important:
more pixels, finer mesh details.

Second, simple objects need fewer images;
complex details with overhangs etc. need more.

Also think of texture capture.

An ottoman you can get with 8 images from a low angle and 8 images from a top angle.

This way you see each cube face more perpendicularly, capturing textures with less distortion.
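
To put numbers on that pattern, here is a tiny sketch (plain Python; the radius and elevation angles are made-up example values) of the 8-low plus 8-high ring of camera positions, each aimed at the object's centre:

```python
# Sketch: two rings of 8 camera positions around an object at the origin.
# Radius and elevation angles are made-up example values.
import math

RADIUS = 1.5                   # distance from the object's centre, in metres
ELEVATIONS_DEG = (15.0, 45.0)  # low ring and top-angle ring
SHOTS_PER_RING = 8

for elev_deg in ELEVATIONS_DEG:
    elev = math.radians(elev_deg)
    for i in range(SHOTS_PER_RING):
        azim = 2 * math.pi * i / SHOTS_PER_RING
        # Camera position on the sphere; every camera aims back at the origin.
        x = RADIUS * math.cos(elev) * math.cos(azim)
        y = RADIUS * math.cos(elev) * math.sin(azim)
        z = RADIUS * math.sin(elev)
        print(f"elev {elev_deg:4.1f}  shot {i}: ({x:+.2f}, {y:+.2f}, {z:+.2f})")
```

Neighbouring shots end up 45° apart, which keeps plenty of overlap while still seeing each face close to head-on.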

I am actually working on a lecture for this right now

Will be on YouTube next week.


Yeah, I agree about taking photos, but there simply wasn’t enough time. Being on set and trying to carve out more than 5 minutes while everyone looks at you like a weirdo who is holding up the shoot is next to impossible.

In order to limit motion blur, I set my shutter angle to 90°, which helps; 45° might have been even better, but I would have lost too much light.
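
For anyone following along, shutter angle converts to exposure time as (angle / 360) divided by the frame rate. A quick worked example (Python, assuming 24 fps):

```python
# Worked example: shutter angle -> exposure time at a given frame rate.
def exposure_time(shutter_angle_deg: float, fps: float) -> float:
    """Seconds the shutter is open per frame."""
    return (shutter_angle_deg / 360.0) / fps

for angle in (180.0, 90.0, 45.0):
    t = exposure_time(angle, fps=24.0)
    print(f"{angle:5.1f} deg at 24 fps -> 1/{round(1 / t)} s")
# 180 deg -> 1/48 s, 90 deg -> 1/96 s, 45 deg -> 1/192 s
```

So going from 90° to 45° halves the blur but also costs a full stop of light, which is exactly the trade-off described above.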

Fundamentally, what I really need, and hope I captured well enough, is the face of the actress in makeup; anything else (body, hair) I can probably just replicate using existing CG elements.


Hi Midphase

I’ve not been using this new Mac system, but the historic archive scans I am working on will often average about 200 to 500 high-resolution images per capture session. Most of these are large composite scans made up of many separate scanning sessions, each using this sort of number of images. But it depends how precise and high-resolution you need the actual raw scan geo to be. If it’s not a historical archive or academic scan, then a lot can be captured in the texture and reapplied to the finished scan asset in various ways: either as a bump or normal map, of course, or as a projection mask for sculpting finer details directly.

It’s easier of course, and normally needs fewer images, if it is a well-lit tabletop capture. Much harder if it is outdoors or not in a controlled setting.

For a reasonably detailed close-up scan of an actor: for a face and head I would aim for a minimum of 80 to 100 images at reasonably high resolution to be fairly safe. For a whole actor, about 300. That should be a good base number to aim for.

The first experience I actually had of photogrammetry was when we were trying to get a good scan of an actor from a video and frame-grab session that sounds similar to yours. It was video, obviously, for the same reasons you describe. Results were very rough. It needed a ton of cleanup and re-sculpting in ZBrush, to the extent it couldn’t really be called a scan any more. But this was a few years back, and capturing scans from video is far more common now.

The image grabbing from sped-up video seems a good workaround. But most scanning apps have tools for this as well, including the ability when frame-grabbing to filter out images that are very close in similarity.
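
For the curious, that similarity filter can be approximated outside the scanning app too. A rough sketch using OpenCV histogram correlation (the threshold is just a made-up starting point):

```python
# Sketch: drop frame grabs that are nearly identical to the last kept one.
# Uses OpenCV histogram correlation; THRESHOLD is a made-up starting value.
import cv2
import glob

THRESHOLD = 0.98   # correlation above this counts as "too similar"

def hist(path):
    """Coarse RGB histogram as a cheap image fingerprint."""
    img = cv2.imread(path)
    h = cv2.calcHist([img], [0, 1, 2], None, [16, 16, 16],
                     [0, 256, 0, 256, 0, 256])
    return cv2.normalize(h, h).flatten()

kept = []
for path in sorted(glob.glob("frames/*.png")):
    h = hist(path)
    if kept and cv2.compareHist(kept[-1][1], h, cv2.HISTCMP_CORREL) > THRESHOLD:
        continue   # too close to the previous kept frame, skip it
    kept.append((path, h))

print(f"kept {len(kept)} frames")
```

Comparing only against the last kept frame keeps it fast, and it works because the grabs arrive in camera-path order.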


@Toka we should connect. I work on similar things with the Henry Ford Museum.

@midphase more images can specifically also be needed when mixing far-away and close-up shots.

Most of the time my usual projects are 100 to 200 images, easy.


Hey sure thing,

I’ve been working a lot on some very large sculptures in the open air, in a quite difficult, overgrown, often watery environment, and in all sorts of different weather and light. That’s a large part of why I’ve needed so many separate scanning sessions and image captures.

It’s always a continual learning process though. We are all just working it out all the time, I think. So much has just seemed to depend on getting a feel for it. And of course the tech always keeps developing too, to constantly keep us on our toes.

This has been a fascinating thread. Thank you.

So you also use drones?

I was shocked to see the difference between my DJI Spark (don’t judge me) and the DJI Mavic Air 2 with its 48MP camera.

Aerial models are so much more detailed.

I also mixed drone photos with iPhone photos

Worked well

Ultra-wide-angle lens images add issues, however.

Hey yes. Some focused drone work, of course. We used a Phantom 4 mostly, a few years back. But the majority of the time it’s been a slow and painstaking close-up, hand-held, camera-on-a-stick method. :slight_smile: I’ve been needing to get super tight and close up with all the surfaces, so I need a lot of image coverage. In the end you have to make a judgment call on how detailed it’s practical to go. I have taken it as far as I could, but there really is no limit of course, bar the truly microscopic. More recently I’ve needed to be waist-deep in freezing muddy water to capture some of these guys. Quite nerve-wracking while holding an expensive camera.

I have one video of a part of the work already up here, as I made a video and animation about it in Blender. It started as an experiment a few years ago but expanded to become this huge thing. Quite literally. A continual learning process and an all-consuming labour of love.

I’m also using scanning extensively all the time now in animation and other work too. I mentioned it in an earlier comment here. But what inspires me so much about it is how it completely breaks down the dividing lines between CGI and our lived-in reality. It opens all the doors and windows and brings back in real nature and the truly tactile. If you start making prints and CNC work from it too, then it feels a bit like magic.


The video approach is quite nice too, btw.

The PhotoCatch app gives you a frame preview.

And I think Apple’s API with its built-in auto masking is not ideal for floating objects. It often cuts off the ground, in this case removing the bottom of the ottoman.

And sometimes it works very well: the 3 shoe scans above in the thread were stitched FLAWLESSLY!!!

8 images fails, so something like 16 is a minimum start, but it shows a lack of mesh detail.
Doubling that already increases detail a lot.
Going with 150 makes a denser mesh but not much more detail from that distance.

These results look very nice. Perfect for medium- to long-distance renders. @cekuhnen Have you tried it on people yet?

48MP photogrammetry is the bomb
Mavic Air 2

Bricks and everything!


@claus Humans are very easy as long as they can stand still.

To digitize humans, very soft, even lighting is also best, so they can be illuminated well in arch viz.

That’s crazy, how big is that model? Polygons and all? Curious about the mesh this capture thingy is producing.

Photogrammetry is wonderful, impressive tech and an industry revolution. But the real human hard work comes when working with all this dense raw scan data and making it into refined, high-quality 3D assets. The sort of mesh that a photogrammetry app will generate is seldom any good for most other uses and must normally be substantially cleaned up, optimised, reworked and retopologised depending on its intended use.
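
Since this is a Blender crowd: the very first pass of that cleanup can be as simple as a decimate before any manual retopology. A minimal bpy sketch, assuming the imported scan is the active object (the 0.1 ratio is just an example):

```python
# Sketch: first-pass reduction of a dense scan mesh in Blender via bpy.
# Assumes the imported scan is the active object; 0.1 is an example ratio.
import bpy

obj = bpy.context.active_object
mod = obj.modifiers.new(name="ScanDecimate", type='DECIMATE')
mod.ratio = 0.1   # keep roughly 10% of the faces as a starting point
bpy.ops.object.modifier_apply(modifier=mod.name)
print(f"{obj.name}: {len(obj.data.polygons)} faces after decimate")
```

Decimation only thins out the triangle soup, of course; proper retopology for animation or clean UVs is still its own job on top.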


@Toka
Yeah, you can see that along the house wall on the 2nd floor.

B as in BIG. Those are often 1 GB scans.

@Toka just added it.
Here is the URL: https://drive.google.com/file/d/1sEm0sBGQZGiInba6j-c-BAthk5XCB-kS/view?usp=sharing


Interesting also how PhotoCatch trims and Metashape does not.

For aerial drone work PhotoCatch seems not to be ideal.

PhotoCatch

Metashape

Which software gives you the best results in your opinion?

In Metashape I use mesh from depth maps and not the dense point cloud.

This is a much faster process but can lack detail.
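
For anyone scripting this, the depth-maps route looks roughly like the following in Metashape’s Python API (a sketch assuming the 1.6+ API; paths and downscale values are placeholder examples):

```python
# Sketch: Metashape mesh-from-depth-maps workflow via its Python API.
# Assumes the Metashape 1.6+ module; paths and downscale values are examples.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("photos/*.jpg"))

chunk.matchPhotos(downscale=1)          # high-accuracy matching
chunk.alignCameras()
chunk.buildDepthMaps(downscale=2)       # "High" quality roughly maps to 2
chunk.buildModel(source_data=Metashape.DepthMapsData)

doc.save("scan.psx")
```

The slower, denser route would build a dense point cloud first and mesh from that instead.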

PhotoCatch RAW is similar to Metashape High Quality, while Metashape meshes are denser but also show more noise.

PhotoCatch struggles with trimming (removing parts),
but it is able to stitch the meshes perfectly together when making photos of flipped objects (shoe top down).

Metashape makes denser but noisier meshes and has no issue with background trimming.

I do have to say that because PhotoCatch is as fast as Metashape with two 1070 Tis, and silent, I prefer it. Also because it can auto-mask images, so you can photograph an object like a chair sitting normally or upside down and get one stitched mesh back.