Creating a 3D model from synced multi-angle video?

I'm looking to produce a video of a person standing on a hilltop while the camera pans around 360°, showing off all the scenery. The person needs to be able to move.

After having a think about the multiple ways this could be filmed or achieved in post-production, I came up with an idea:
Film a person in front of a green screen, then track that image sequence into my own 3D environment (which will be camera-mapped from photos). But obviously, if I wanted to pan 360° in my 3D environment, I'd have to pan the camera around my subject in front of the green screen and match the two moves up, which I don't think would be feasible in my project.
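For the green-screen subtraction itself, my understanding is that Blender's compositor Keying node handles that part; here's a minimal sketch of what I mean (the footage path is just a placeholder, not a real file):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Load the green-screen plate (placeholder path)
plate = bpy.data.images.load("//footage/front_plate.png")

image_node = tree.nodes.new("CompositorNodeImage")
image_node.image = plate

# Keying node pulls the matte from the green background
key_node = tree.nodes.new("CompositorNodeKeying")
key_node.inputs["Key Color"].default_value = (0.0, 1.0, 0.0, 1.0)

composite = tree.nodes.new("CompositorNodeComposite")

tree.links.new(image_node.outputs["Image"], key_node.inputs["Image"])
tree.links.new(key_node.outputs["Image"], composite.inputs["Image"])
```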
Is it possible to film someone in front of a green screen from multiple static angles (say front, back and side), key out the green screen, use an add-on like BLAM to work out the geometry of the space it was filmed in, and then project the separate film streams? But instead of projecting them onto a model of a person, UV-mapping the person, rigging the character, etc., I wondered whether Blender could work out where the image projected from one camera intersects the image from another projection, and somehow automatically create a 3D model of the person that would be (more or less) rotatable, so the model could be placed in my 3D landscape and panned around as much as I fancy…
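To illustrate the projection half of the idea: I don't know whether Blender can intersect the projections automatically, but the per-camera projection itself can be set up with UV Project modifiers on a proxy mesh. A rough sketch, assuming a proxy object for the person and cameras named Cam_Front, Cam_Back and Cam_Side (all names are placeholders):

```python
import bpy

# Proxy mesh standing in for the person (active object here; in practice
# this would be a rough sculpt or scan of the subject)
person = bpy.context.object

# Static cameras matching the green-screen setup (placeholder names)
camera_names = ["Cam_Front", "Cam_Back", "Cam_Side"]

for name in camera_names:
    cam = bpy.data.objects[name]

    # One UV Project modifier per camera, so each film stream is
    # projected from its own solved viewpoint
    mod = person.modifiers.new(name=f"Project_{name}", type='UV_PROJECT')
    mod.projector_count = 1
    mod.projectors[0].object = cam

    # Separate UV layer per projection so the streams can be blended later
    uv_layer = person.data.uv_layers.new(name=f"UV_{name}")
    mod.uv_layer = uv_layer.name
```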

Does any of that make sense, or am I barking up the wrong tree? Is there a simpler way I'm missing?
Thanks in advance for any wisdom.

Have a look at Sebastian König's work; what you need is his Track, Match, Blend course: https://cloud.blender.org/p/track-match-blend-2/#5604153b044a2a00cd8ed37c
YouTube is your friend, too.