I use a motorized turntable and record an iPhone video. Photocatch does a great job with that. You can even change the object's orientation halfway through the video, edit the clips into one video, then process it in Photocatch. Super easy and quick workflow.
Are you snapping pics with the drone or recording video? If taking pics, are you taking them manually or are the snaps automated?
Because I fly in tight spaces and there are trees, I fly manually and take the photos manually.
When digitising a house where I can fly around freely, I used to use the automated system.
There is a making-of video for The Force Awakens holographic chess set out there that shows the original models being photogrammetry-scanned by Phil Tippett's team, using a locked-down camera with the models on a potter's wheel turntable.
Regarding texture re-projection with meshes made in Instant Meshes and other types of retopo for scans:
You should easily be able to re-project textures onto an optimised version of the scan mesh made with Instant Meshes. It just needs to be UV unwrapped, of course. I use Blender to UV unwrap all of my optimised scans and UV Packmaster Pro to refine the packing, and it handles them fine. My optimised scans, intended as archive models, are still very high poly, but Blender's UV unwrap is fully capable of dealing with them on its own.
I use a ZBrush workflow for most of my scanning work, and to split the optimised mesh for UV unwrapping I use painted polygroups. This painted vertex-group splitting method can be used in Blender too. A good workflow with Instant Meshes would probably be: clean up the raw scan mesh using Blender's sculpt and modelling tools. Then run that cleaned-up version through Instant Meshes to optimise the geometry. Then split this mesh and UV unwrap it. Then import this refined, retopologised, UV-unwrapped mesh back into your original scan scene within the scanning app, and re-project the cameras to get a tidy new texture on your tidy new mesh. Most 3D scanning apps have a feature to do this.
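For the Blender part of that workflow, the optimise/unwrap/export steps can also be scripted with Blender's Python API. This is just a rough sketch run inside Blender with the cleaned-up scan selected; the decimate ratio, unwrap settings, and the `retopo_unwrapped.obj` filename are placeholder values, not anything from a specific project, and the Instant Meshes retopo itself still happens in its own tool.

```python
import bpy

# Assumes the cleaned-up scan mesh is the active object in the scene.
obj = bpy.context.active_object

# Optional quick polycount reduction inside Blender (Instant Meshes
# produces nicer quad topology, so skip this if you retopo there).
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.1  # keep ~10% of the faces; adjust to taste
bpy.ops.object.modifier_apply(modifier=mod.name)

# UV unwrap the optimised mesh with an automatic projection.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(angle_limit=1.15, island_margin=0.02)
bpy.ops.object.mode_set(mode='OBJECT')

# Export for re-import into the scanning app for texture re-projection.
# (bpy.ops.wm.obj_export is the exporter in Blender 3.2+.)
bpy.ops.wm.obj_export(filepath="retopo_unwrapped.obj")
```

From there you would refine the packing with something like UV Packmaster, then bring the OBJ back into the scanning app and re-project.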
when you say “re-import” into scanning app, which app do you mean?
I edited the end of my original message for better clarity. Hope that’s ok.
Very sorry … not "re-import".
I meant … import … the retopologised version of the mesh, with good, clear UV coordinates.
Most 3D scanning apps are set up to do this so you can re-project the images back on to texture them.
I had other things on today while writing and was a bit distracted. Guess I should have left it until later before replying. Lesson learned.
I'm using photogrammetry apps on Mac: Photocatch and 3D Scanner App. I'm not sure if they offer this functionality.
Sorry, you asked about the scanning app as well. I use Zephyr 3D.
Check to be sure they don't offer this function, as I would have thought it is basic and vital workflow functionality.
If they don't have it, they really should add it soon.
Otherwise, all is not lost. I guess it's got to be good old xNormal to the rescue, to copy the texture from the raw scan mesh to the retopo mesh.
I just saw this fascinating new Creative Shrimp course via the Ask NK YouTube channel. It seemed very applicable to this thread and many of the questions being asked: a photogrammetry asset-building and refinement course, all within Blender. A lot of it looks quite similar to the theoretical Blender working process I sketched out in the earlier post, using the sculpt tools to refine and clean up, and creating new clean topology from the result.
This is the missing link. There is very little info on this part of the process. I know because I was looking for this sort of thing when I started building photogrammetry assets a few years back, and there wasn't much out there at all at the time.
They are using Reality Capture, but of course any photogrammetry app could be used with it. Once the data is captured, it's captured. If the scanning app does not have the facility to re-project textures, then xNormal can be used to project the original texture directly between the meshes.
Creative Shrimp : Photogrammetry Course: Photoreal 3d With Blender And Reality Capture is Out!
Doesn't any serious 3D scanning app have the ability to optimize the mesh and re-project the images to rebuild the textures? I am asking because this is not even a problem I ever thought about; I never came across any issue after adjusting the mesh in Metashape.
Too bad that they did not use Metashape, which is Mac/PC, or Meshroom.
I might buy their video just to see how they deal with the textures and color accuracy.
Yes, I was like "ooh, sounds interesting" until I saw Reality Capture.
Like you, I would have preferred Metashape.
I agree. I thought all, or at least most, serious photogrammetry apps would have this feature. I was responding to an earlier post from Remyglass when suggesting the xNormal option. I don't personally know much about these apps or how fully featured they are.
It looks from the video like they use Meshroom a bit as well, so it's not all Reality Capture in the course. But surely any good-quality photogrammetry app would work with these lessons?
Anyway, great to see Blender being promoted here.
It is super for this
I have been using scanning in my work for ages.
Reality Capture can do that… well, technically it's just decimating the mesh and re-projecting. It's still a mess of triangles, not actual retopo like Instant Meshes does.
I found a nice tutorial by Peter France on re-projecting textures in Blender after Instant Meshes.
Out of curiosity, why would you use Instant Meshes to retopo a scanned object?
Would that soften the model too much?