I start with a model that has nicely unwrapped UVs (a scanned model), then alter it with destructive mesh operations (such as Dyntopo or Remesh).
This destroys the original UVs, so I need a reliable way to retain the original texture mapping and transfer it to the new model.
I have several techniques in mind, but I would like some opinions about their pros and cons:
Bake from the original model to the new one: the models overlap (have similar geometry) by about 80%, so only about 20% of the textures would need correction, which is a good ratio. This approach should give the most results with the least amount of work.
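For reference, the baking approach can be scripted with Blender's Python API. This is only a minimal sketch of a selected-to-active bake; the object names "Original" and "Sculpted", the image size, and the cage extrusion value are all assumptions, and it must be run inside Blender (it assumes the sculpted copy already has its own UVs, e.g. from Smart UV Project, and a node-based material):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'  # baking requires Cycles

# Hypothetical object names: the scanned source and the remeshed target.
original = bpy.data.objects["Original"]
sculpted = bpy.data.objects["Sculpted"]

# Image the bake writes into; it must sit in an active Image Texture
# node in the target object's material.
bake_img = bpy.data.images.new("TransferBake", 2048, 2048)
mat = sculpted.active_material
tex_node = mat.node_tree.nodes.new('ShaderNodeTexImage')
tex_node.image = bake_img
mat.node_tree.nodes.active = tex_node

# Select the source, then make the target the active object.
bpy.ops.object.select_all(action='DESELECT')
original.select_set(True)
sculpted.select_set(True)
bpy.context.view_layer.objects.active = sculpted

# cage_extrusion controls how far rays search for the source surface
# where the two meshes diverge (the ~20% that differs).
bpy.ops.object.bake(type='DIFFUSE',
                    pass_filter={'COLOR'},
                    use_selected_to_active=True,
                    cage_extrusion=0.05)

bake_img.filepath_raw = "//transfer_bake.png"
bake_img.file_format = 'PNG'
bake_img.save()
```

Where the meshes differ most, increasing the cage extrusion (or using an explicit cage object) usually reduces bake artifacts.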
Environment material (???): I watched a technique where someone used HDR environment textures as blueprints and sculpted their models directly into the image. It uses no UV coordinates at all, since everything is camera-based. I wonder whether it is somehow possible to bake a reversed environment texture from the original model and then project it onto the new one. That would be the ideal solution.
Texture projection: if everything else fails, I will place 10 cameras around the scene, take a screenshot from each, load it as a texture, and project-paint it onto the new model (as a stencil), repeating for every camera. This is the worst-case scenario, with the most steps and the most labor, but at least it is guaranteed to work.
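At least the tedious camera setup for this fallback can be automated with a short bpy script. A minimal sketch, assuming the target object is named "Sculpted" (hypothetical) and an arbitrary ring radius; it must be run inside Blender:

```python
import bpy
import math

target = bpy.data.objects["Sculpted"]  # hypothetical object name
count, radius = 10, 3.0               # assumed camera count and ring radius

for i in range(count):
    angle = 2 * math.pi * i / count
    cam_data = bpy.data.cameras.new(f"ProjCam.{i:02d}")
    cam = bpy.data.objects.new(f"ProjCam.{i:02d}", cam_data)
    bpy.context.collection.objects.link(cam)
    # Place cameras evenly on a horizontal circle around the model.
    cam.location = (radius * math.cos(angle),
                    radius * math.sin(angle),
                    1.0)
    # Aim each camera at the target with a Track To constraint.
    con = cam.constraints.new('TRACK_TO')
    con.target = target
    con.track_axis = 'TRACK_NEGATIVE_Z'
    con.up_axis = 'UP_Y'
```

From each of these cameras you would then render or screenshot the view and stencil-paint it back in Texture Paint mode, as described above.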