bake video texture on mesh

First, I hope I've posted this in the right part of the forum.

I'm not really sure IF it's possible, but maybe.
I tracked a modeled head to the head of a person in real video footage. Now I would like to bake the video onto the head, so that I get a head texture for every frame, roughly like this.
My plan is to manipulate these pictures and map them back onto the model.
Of course I won't get high-quality images, and certainly not the whole head; my main focus is the face.
Thank you for your time.

Unwrap your modeled head, adjust the UVs on the head texture, then animate.

I guess you didn't understand what I want to achieve; maybe I explained it badly. I have real video footage and want to manipulate the face. To do that, I need to project the face from the video onto my modeled head to GET a texture like the one shown above.

On your mesh, leave some faces with a separate material in the character's face area and apply the animated video texture to it. The rest of the head will have the UV-unwrapped texture. You will have to make some masks to blend the borders of the face with the rest of the head. I think the effect you want is something like in this funny series:

Compositing a background image with facial animation can achieve the example Roubal pointed out. A 3D face that can rotate is a different problem, though: the projected image will not have depth, you know.

It seems it's still unclear what I want to achieve.
In this video you can see someone do pretty much what I want in Nuke.
I'll explain what they do, so you don't have to watch it (even though it's interesting):

  1. They have an actual video recording of Jet Li.

  2. They track a 3D model of his head onto his head in the footage and project the video as a texture onto the model. So they get the part of the video containing his head as an animated texture.

  3. Now they alter this animated texture externally.

  4. Then they map the altered texture back onto the head model. Now he has mud coming from his eyes.
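
The per-frame data flow of those four steps can be sketched in plain Python. This is only an illustrative toy (NumPy arrays stand in for video frames; `extract_face`, `alter`, and `paste_back` are hypothetical placeholders for the track/project, edit, and re-map stages, not any Blender or Nuke API):

```python
import numpy as np

def extract_face(frame, box):
    # Stand-in for "project the tracked video onto the head model":
    # crop the tracked face region out of the frame.
    y, x, h, w = box
    return frame[y:y+h, x:x+w].copy()

def alter(face_tex):
    # Stand-in for the external manipulation (e.g. painting mud on);
    # here we just darken the texture.
    return (face_tex * 0.5).astype(face_tex.dtype)

def paste_back(frame, face_tex, box):
    # Stand-in for "map the altered texture back onto the head model".
    y, x, h, w = box
    out = frame.copy()
    out[y:y+h, x:x+w] = face_tex
    return out

def process_clip(frames, boxes):
    # One texture per frame: extracted, altered, and mapped back (steps 1-4).
    return [paste_back(f, alter(extract_face(f, b)), b)
            for f, b in zip(frames, boxes)]
```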

Of course I don't expect someone to tell me how to achieve this step by step, but it would be helpful to get a direction.

I tried to figure out which steps are needed for the process.
Of course this is just a test to see if the whole idea is possible. Properly, I would have to model the head of the woman and parent it to the track instead of using a "Suzanne mask". And this is just the workflow for a single image, not for an animation, but if it works, maybe I can write a Python script to automate it.

I tracked the moving head of a woman in a video clip and parented a "Suzanne mask" to it.

I unwrapped Suzanne with "Project from View" and used the same image from the video clip as the texture. I named this UV map UVMAP01.
UV-Image Editor

Now the Suzanne model has, as its texture, the part of the image that it covers.

This would be the shadeless render.

Then I created a second UV map, which is "correctly" unwrapped.
I named it UVMAP02.

Now the only problem is: how do I get the texture that I attached via UVMAP01 stretched/converted to UVMAP02?
I'm not sure whether Blender can transfer the texture from one UV map to another without affecting the texture's position on the model.
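
Conceptually, such a transfer is a re-bake: for every pixel of the new texture, find its barycentric coordinates in the UVMAP02 triangle it falls in, use them to look up the matching UVMAP01 position, and copy that pixel from the source image. A minimal nearest-neighbour sketch in plain Python (one triangle only, NumPy; an illustration of the idea, not Blender's actual implementation):

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def rebake(src_img, uv_src, uv_dst, size):
    """Transfer src_img (sampled via uv_src) into a new size x size
    texture laid out according to uv_dst. One triangle, nearest neighbour."""
    h, w = src_img.shape[:2]
    dst = np.zeros((size, size) + src_img.shape[2:], dtype=src_img.dtype)
    for py in range(size):
        for px in range(size):
            # Pixel centre in destination UV space (v flipped: image rows
            # run top-down, UVs bottom-up).
            p = np.array([(px + 0.5) / size, 1.0 - (py + 0.5) / size])
            bc = barycentric(p, *uv_dst)
            if (bc < 0).any():        # pixel lies outside the triangle
                continue
            u, v = bc @ uv_src        # matching point in the source UV layout
            sx = min(int(u * w), w - 1)
            sy = min(int((1.0 - v) * h), h - 1)
            dst[py, px] = src_img[sy, sx]
    return dst
```

A real implementation would loop over all faces, handle triangle edges and filtering, and pad the UV island borders; this only shows the core mapping.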

Hehe, it seems I'm the only one who posts here. However, maybe someone can use this information someday.

I found the solution:
I chose UVMAP02 (the correctly unwrapped one) as the active UV map and UVMAP01 (the "Project from View" one) as the rendered UV map, then baked the textures for the object. This is what I got from this step.

Now I have to model a real head, and then comes the fun part: learning enough Python to write a script that automates this process, so it can be used for animations. I guess I'll have to move on to another part of the forum.
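
One possible shape for such an automation script, sketched outside Blender: run Blender headlessly once per frame, passing the frame number to a bake script after the `--` separator. The file names (`head_track.blend`, `bake_one_frame.py`) and the per-frame script itself are hypothetical; `--background` and `--python` are Blender's standard command-line flags:

```python
import subprocess  # used in the commented example at the bottom

def bake_commands(blend_file, script, first, last):
    """Build one headless-Blender command per frame. The hypothetical
    bake_one_frame.py would read the frame number after '--', set the
    scene frame, and run the UVMAP01 -> UVMAP02 texture bake."""
    cmds = []
    for frame in range(first, last + 1):
        cmds.append(["blender", "--background", blend_file,
                     "--python", script, "--", str(frame)])
    return cmds

# To actually run them (requires Blender on the PATH):
# for cmd in bake_commands("head_track.blend", "bake_one_frame.py", 1, 250):
#     subprocess.run(cmd, check=True)
```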

It looks like the method I used for this video, which I made at the very beginning of my "Blender life" for the birth of my niece in 2004. But if I remember well, at that time I didn't yet know how to UV map, and the mapping was from view or sticky… It was with Blender 2.31 or something like that.

For the pilot's face, I mapped a low-resolution video of my own face onto a simple potato head:

Not bad for Blender 2.31.