How best to simplify a complex landscape mesh from photogrammetry?

I have a high-resolution landscape mesh that was generated in Metashape from drone footage of a localized area. It's a great model, but it includes all sorts of detail, such as trees, which I would like to remove so that I'm left with a reasonably accurate map of the landscape alone.

I have been able to produce a nice topo map using depth data (of course this also shows the high-density tree locations). Can anyone suggest a good way to reduce the density of the mesh while retaining a close contour of the landscape itself?

I've not had much luck with the standard decimation tools, but perhaps that's a matter of user ignorance. I've also tried shrink-wrapping a grid, as well as a cloth simulation that drops a cloth grid over the landscape, with the thought that it might be easier to smooth out the trees once I have a uniform mesh, but I can't seem to find the right technique. Is this really just a matter of brute force on a very high-density mesh?
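Would something like the Planar (dissolve) mode of the Decimate modifier be a better fit here than the default Collapse mode? A rough sketch of what I mean (the object name is a placeholder for my scan):

```python
import bpy
from math import radians

# Placeholder name; this would be the photogrammetry mesh.
obj = bpy.data.objects["Landscape_HiRes"]

# Planar (dissolve) decimation merges near-coplanar faces, so flat ground
# loses density while ridges and edges keep their contour.
dec = obj.modifiers.new(name="Decimate", type='DECIMATE')
dec.decimate_type = 'DISSOLVE'
dec.angle_limit = radians(5.0)   # raise this to simplify more aggressively
dec.delimit = {'UV'}             # avoid dissolving across UV seams
```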

I would also need to ensure that the orthographic UVs transfer properly, to prevent distortion of the map generated through Metashape.

My best results have come from shrink-wrapping with the project option, pushing the mesh grid up into the topography and then smoothing out any tree bumps. This does a pretty good job, but the UV alignment is a bit off.
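Roughly this setup, in case it matters (object names are placeholders for my grid and scan):

```python
import bpy

# Placeholder names: "Grid" is a subdivided plane placed under the scan,
# "Landscape_HiRes" is the photogrammetry mesh.
grid = bpy.data.objects["Grid"]
target = bpy.data.objects["Landscape_HiRes"]

# Project the grid straight up onto the landscape surface.
sw = grid.modifiers.new(name="Shrinkwrap", type='SHRINKWRAP')
sw.target = target
sw.wrap_method = 'PROJECT'
sw.use_project_z = True            # project along the grid's Z axis
sw.use_positive_direction = True   # cast rays upward only
sw.use_negative_direction = False

# A Smooth modifier afterwards knocks the tree bumps down; a vertex group
# could restrict it to the treed areas if it flattens too much terrain.
sm = grid.modifiers.new(name="Smooth", type='SMOOTH')
sm.factor = 1.0
sm.iterations = 20
```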

Is there a way to map the new mesh based on the UV data of the high-res mesh object? They are each orthographically mapped, so if I could send each vertex to its corresponding position in UV space, I think that would do the trick.
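Something like this is what I have in mind, as a rough sketch, assuming the Metashape ortho UVs span the full XY footprint of the high-res mesh (object names are placeholders):

```python
import bpy
import mathutils

# Placeholder names: the photogrammetry mesh and the shrink-wrapped grid.
hi = bpy.data.objects["Landscape_HiRes"]
lo = bpy.data.objects["Landscape_Low"]

# Use the hi-res object's world-space XY bounding box as the UV frame,
# assuming the ortho texture covers exactly that footprint.
hi_pts = [hi.matrix_world @ mathutils.Vector(c) for c in hi.bound_box]
min_x = min(p.x for p in hi_pts); max_x = max(p.x for p in hi_pts)
min_y = min(p.y for p in hi_pts); max_y = max(p.y for p in hi_pts)

mesh = lo.data
uv_layer = mesh.uv_layers.get("UVMap") or mesh.uv_layers.new(name="UVMap")

# Planar top-down projection: each face corner gets the UV of its vertex's
# world XY position, normalized into the hi-res bounding box.
for poly in mesh.polygons:
    for loop_index in poly.loop_indices:
        vert = mesh.vertices[mesh.loops[loop_index].vertex_index]
        p = lo.matrix_world @ vert.co
        u = (p.x - min_x) / (max_x - min_x)
        v = (p.y - min_y) / (max_y - min_y)
        uv_layer.data[loop_index].uv = (u, v)
```

If that bounding-box assumption doesn't hold, the Data Transfer modifier (Face Corner Data → UVs, nearest-face interpolation) can copy the UVs from the high-res mesh onto the low-res one instead.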

It is hard to give you a proper solution without seeing your mesh, but try this: create a displacement map from your high-res mesh and use the map with a simple plane.
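A minimal sketch of the plane side of that, assuming you have already baked or exported a height map (the file name and object name are placeholders):

```python
import bpy

# Placeholder names: "heightmap.png" is the depth/topo image exported earlier,
# "Terrain_Plane" is a plane scaled to the scan's footprint.
img = bpy.data.images.load("//heightmap.png")
tex = bpy.data.textures.new("Heightmap", type='IMAGE')
tex.image = img

plane = bpy.data.objects["Terrain_Plane"]

# Give the plane enough resolution to carry the displacement.
sub = plane.modifiers.new(name="Subdivision", type='SUBSURF')
sub.subdivision_type = 'SIMPLE'
sub.levels = 6
sub.render_levels = 6

# Displace using the height map; tune strength to match the real
# elevation range of the site.
disp = plane.modifiers.new(name="Displace", type='DISPLACE')
disp.texture = tex
disp.texture_coords = 'UV'
disp.strength = 10.0
disp.mid_level = 0.0
```

Because the plane is a regular grid, you can then lower the subdivision level (or decimate) to get whatever density you want, and smoothing or blurring the height map removes the trees before they ever reach the mesh.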