I tried to figure out which steps are needed for the process.
I tracked the moving head of a woman in a video clip and parented a "Suzanne mask" to it.
Of course this is just a test to see whether the whole idea is possible. Actually I would have to model the woman's head and parent that to the track instead of using a Suzanne mask. And this is only the workflow for a single image, not for an animation, but if it works, maybe I can write a Python script to automate it (a rough sketch of what that could look like is at the end of this post).
I unwrapped Suzanne with "Project from View" and used the same image from the video clip as the texture. I named this UV map UVMAP01.
Now the Suzanne model is textured with exactly the part of the image that it covers in the camera view.
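Roughly, I think this step would look like the following in bpy (Blender's Python API). This is only a sketch, assuming a recent Blender version; the operator calls expect to run with the 3D View as the active context, looking through the tracked camera, and with Suzanne as the active object:

```python
import bpy

obj = bpy.context.active_object
mesh = obj.data

# First UV map: will hold the straight camera projection.
uv1 = mesh.uv_layers.new(name="UVMAP01")
mesh.uv_layers.active = uv1

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# "Project from View": every vertex gets the UV of its position
# in the current view, i.e. its position in the video frame when
# looking through the tracked camera.
bpy.ops.uv.project_from_view(camera_bounds=True, correct_aspect=True)

bpy.ops.object.mode_set(mode='OBJECT')
```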
This would be the shadeless render.
Then I created a second UV map, which is "correctly" unwrapped. I named it UVMAP02.
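As a continuation of the same sketch, the second map would be something like this; 'ANGLE_BASED' is just the default unwrap method, my choice, not something I've tested for this case:

```python
import bpy

obj = bpy.context.active_object
mesh = obj.data

# Second UV map: the "correct" layout used for the final texture.
uv2 = mesh.uv_layers.new(name="UVMAP02")
mesh.uv_layers.active = uv2

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
# A regular unwrap (with seams marked beforehand) gives the clean layout.
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)
bpy.ops.object.mode_set(mode='OBJECT')
```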
Now the only problem is: how do I get the texture that I attached via UVMAP01 stretched/converted over to UVMAP02?
I'm not sure whether Blender is able to transfer a texture from one UV map to another without changing where it sits on the model.
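To make the question concrete, here is roughly what I imagine the transfer could look like as a bake, if baking is even the right tool for this. Everything in it is my guess, not a verified recipe: it assumes Cycles, a node material that already has the clip frame in an Image Texture node, and it invents the image name "rebaked" and the 1024x1024 size:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'  # assumption: baking via Cycles

obj = bpy.context.active_object
obj.select_set(True)
mesh = obj.data
mat = obj.active_material
nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Force the existing clip-frame texture to be read through UVMAP01,
# so the lookup doesn't follow the active UV map we change below.
src_tex = next(n for n in nodes if n.type == 'TEX_IMAGE')
uv_node = nodes.new('ShaderNodeUVMap')
uv_node.uv_map = "UVMAP01"
links.new(uv_node.outputs['UV'], src_tex.inputs['Vector'])

# Bake target: a fresh image in a new Image Texture node; Cycles bakes
# into the image of the material's *active* node.
baked_img = bpy.data.images.new("rebaked", width=1024, height=1024)
bake_node = nodes.new('ShaderNodeTexImage')
bake_node.image = baked_img
nodes.active = bake_node

# The output layout comes from the mesh's *active* UV map -> UVMAP02.
mesh.uv_layers.active = mesh.uv_layers["UVMAP02"]

# Diffuse color only (no lighting), which should just re-project the pixels.
bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'}, margin=8)

baked_img.filepath_raw = "//rebaked.png"
baked_img.file_format = 'PNG'
baked_img.save()
```

If that works for a single frame, the animation version would presumably just wrap the projection and bake in a loop over scene.frame_set(frame) and save one baked image per frame, which is the Python script I mentioned at the top.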