Anyone know what method is used in this video?
What do you mean? They use SynthEyes. If you don't want to buy it, maybe look into 123D Catch or similar from Autodesk.
Yes, but how do I take them as reference points or cameras into Blender to make high-quality projection textures?
oh no no
Apps like 123D Catch measure the differences between images and calculate a 3D mesh from them. Afterwards you end up with a textured 3D model.
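The "measure differences between images" step boils down to triangulation: once matching points are found in two photos and the camera positions are estimated, each 3D point is recovered from its two 2D observations. Here is a minimal sketch of that core idea with numpy; the camera matrices and the point are made-up toy values, not output from any real app.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point seen at
    pixel x1 by camera P1 and at pixel x2 by camera P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null space of A = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])          # hypothetical scene point
x1, x2 = project(P1, X_true), project(P2, X_true)

X_est = triangulate(P1, P2, x1, x2)
print(X_est)                                 # recovers the original 3D point
```

Real photogrammetry tools do this for thousands of matched features at once (and estimate the cameras themselves), but the geometry per point is the same.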
Here is some more software of this kind.
Some free options (there are more):
- Libmv (now part of Blender, currently used for its camera motion tracking)
- There are also some mobile apps; their quality is often a bit low, but chances are you always have your phone with you.
This one has a 30-day trial and a Linux version.
Gleb Alexandrov made a little tutorial.
The method in the video has nothing to do with photogrammetry. It's a 3D track with some zero-weighted trackers used as reference for modeling, and the mesh is then modeled by hand. The textures can be projection-painted directly onto the mesh, since you have the camera inside Blender too. You can do everything directly in Blender; no need for additional software.
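For anyone curious what "projection painting from the solved camera" means mathematically: each mesh vertex is pushed through the camera to a pixel, and that pixel of the footage is what gets painted onto the mesh. A rough sketch of that mapping, with a completely hypothetical camera (800 px focal length, 1920x1080 plate, camera at the origin):

```python
import numpy as np

# Hypothetical intrinsics for a 1920x1080 plate; in Blender these come
# from the camera solve, not from hand-typed values like here.
f = 800.0                 # focal length in pixels
cx, cy = 960.0, 540.0     # principal point (image center)
K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])

R = np.eye(3)             # camera rotation (identity for simplicity)
t = np.zeros(3)           # camera translation

def vertex_to_pixel(v):
    """Map a 3D mesh vertex to the pixel where the camera sees it."""
    cam = R @ v + t               # world space -> camera space
    p = K @ cam                   # camera space -> homogeneous pixel
    return p[:2] / p[2]           # perspective divide

# A vertex straight ahead of the camera lands on the principal point.
pix = vertex_to_pixel(np.array([0.0, 0.0, 2.0]))
print(pix)
```

Blender handles all of this for you when you paint in the camera view with "Project Paint" from the clip's camera; the snippet just shows why a solved camera is enough to transfer the footage onto hand-modeled geometry.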
I think this will help you:
Using ‘Zero Weighted Tracks’ in Blender :yes:
A totally free pipeline:
- Python Photogrammetry Toolbox
- CloudCompare
It's this process that I still don't understand, and I'm looking for tutorials to practice with.
You can also look into VisualSFM for generating a sparse or dense point cloud.
I did a quick search for a tutorial but couldn't find one for this particular workflow; I'm sure there is one for each step of the process, though.
Is there a particular step that is not clear to you?