I’m pretty sure at least one of you has had the same idea already. I’m planning to test it next weekend:
Imagine you have a real actor. You film his head face-on with a video camera while he has reflective markers stuck to his face: some around the mouth, eyes, nose, chin, forehead, you name it. Now he starts acting with his face while keeping his head still, and you record the performance with the markers visible.
Next, you import the video into Blender and track each and every marker. I assume the tomato branch can handle this as a plain 2D tracking task.
Now my questions:
- Do I need some markers to stabilize the video in order to compensate for unwanted head movement? (I bet I do need them, don’t I?)
- How can I translate the tracked markers to the face controllers of my 3D character?
- Will the tracking process give me empties for each tracked marker point?
- Could I parent my face controller bones to these empties?
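To make the stabilization question concrete, here’s the arithmetic I have in mind, as a plain-Python sketch (the marker names and the dict-of-coordinates format are just my assumptions for illustration, not anything Blender hands you directly): a few markers on rigid spots (forehead, temples) act as a reference, and their average motion is subtracted from every marker on each frame to cancel unwanted head translation.

```python
def stabilize(frames, ref_names):
    """Cancel head translation per frame.

    frames: list of dicts mapping marker name -> (x, y) track coordinates
    ref_names: markers on rigid spots (forehead, temples) that don't
               move with expressions -- hypothetical names, my assumption.
    Subtracts the mean reference position from every marker, so only
    expression-driven motion remains.
    """
    out = []
    for frame in frames:
        # Average position of the rigid reference markers in this frame.
        rx = sum(frame[n][0] for n in ref_names) / len(ref_names)
        ry = sum(frame[n][1] for n in ref_names) / len(ref_names)
        # Express every marker relative to that reference.
        out.append({n: (x - rx, y - ry) for n, (x, y) in frame.items()})
    return out
```

So if the whole head drifts between two frames but the expression doesn’t change, the stabilized chin position comes out the same in both frames. This only cancels in-plane translation, of course; head rotation would need more than one reference marker and a bit more math.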
You get the idea…
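And for the marker-to-controller question, the mapping I imagine is dead simple: measure a marker’s stabilized position once in a neutral pose and once in an extreme pose, then interpolate between the two and clamp. A hypothetical jaw-open example (the function name and all values are made up, just to show the idea):

```python
def jaw_open(chin_y, neutral_y, open_y):
    """Map the chin marker's stabilized vertical position to a 0..1
    jaw-open controller value.

    neutral_y: chin y with the mouth closed (calibration frame)
    open_y:    chin y with the mouth fully open (calibration frame)
    Clamped to [0, 1] so tracking jitter can't over-drive the rig.
    """
    t = (chin_y - neutral_y) / (open_y - neutral_y)
    return max(0.0, min(1.0, t))
```

In Blender that value could then feed a driver or be baked to keyframes on the controller bone; the same normalize-and-clamp idea should work for brows, eyelids, mouth corners, and so on.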
Any suggestions/ideas/comments on this? Maybe someone already has a pipeline for this kind of face tracking? Any thoughts are highly appreciated!