Facial Motion Capture / Tracking

I was very impressed with this video I just saw on YouTube. It was made by a Blenderhead and indie filmmaker/animator whose YouTube channel is called DreamingJapan.
I’ve seen a few home-baked attempts at Blender facial mocap before, but this is the best one I’ve seen so far, and it seems to work the best, too. Since a few people in the comments on this vid were begging for a tutorial, I wrote him a short note asking if he’d do one. So far I haven’t heard back, but it’s early on Sunday morning, and his vid is a month old…
So, I started this thread to show the process to you guys and to see if any of you have ideas about how it works. He goes into fairly good detail in the vid, and it doesn’t seem terribly complicated. However, I haven’t yet delved into Blender’s new Tomato tracker, so I’d rather hear your thoughts. But after watching the video, my guess is that Tomato would make this pretty easy.
The nice thing about this is that not only does it give animated characters some actual acting chops without the animator having to terrorize them manually, but it also instantly solves one of the hardest things for animators: character speech and lip-synch.
Like DreamingJapan, I’m planning my own animated sci-fi film (yes, I’m writing the script now. Yes, I AM WRITING IT. It WILL (eventually) get written. Damn it.) I’d really like to stop obsessing about whether techniques like this can work and just concentrate on writing the script. I’ve done a little character mocap before, but the idea of character lip-synching terrifies me. I’d love it if we could all put our heads together, bring in the folks who have recently gotten good at the new camera tracker, and come up with a nice, easy, simple, foolproof, and clean method for Blender facial mocap that any Blenderhead can do in their bedroom with a cam, a computer, and some little bits of DayGlo tape.
And then one of us should do a GREAT tutorial on it. :slight_smile: (I nominate Andrew Price, if no other fearless knight of the Blenderdom should step forth in his stead…)

https://youtu.be/VLrnIH4Y8Cs

http://www.blender3d.org/e-shop/images/TrackMatchBlend_Previewsite/ has a tutorial about capturing facial deformation

Thank you much, Richard. That one looks good and to the point. I’m going to have to come back and study it more deeply after I wake up. I like the mix of video and text they use.

Here’s another one I saw that blew me away. It highlights something really nice about this approach: the ability of the animator/director/filmmaker to effectively play all the characters, relying on actors only for the voice…
(Probably not done in Blender, although I have no idea what software they did use)

That looks like the work of Image Metrics to me. They use proprietary image-processing technology; the information is on their site. They do amazing work, and it’s all markerless.

Hey, MrNexy. Thanks for finding the thread and posting in here. I hope you’ll post a lot more about the technique from the OP, and update us on what you’re doing with it now and any changes you’ve made to it.
I’m really excited about doing my experiments with this.

As for that vid, it looks like they are an outfit called Pendulum Studios. I went to their site, and there was a doc on how they did some of the work in Captain America, Thor, and other films.
That’s about as much as I know.
But yeah, it all looks proprietary. Still, it’s amazing that people like you can do similar things with Blender. That’s the point of this thread: to find ways of doing, with Blender on your own desktop and for your own films, the kind of facial capture that currently only gets done by pro studios making tons of money providing this expertise to big-budget Hollywood. I think it’s an amazing world we’re living in.
And I still haven’t bothered to watch Captain America or Thor. :slight_smile:

Oh wow, you’re right! I completely forgot about them.

Here’s a sample of some of their work (Image Metrics).

And here’s a demonstration of how they do their stuff.

What would really be great is if we could get some functionality in Blender that allows a user to load two clips of the same performance and track them in real time. There are quite a range of applications out there that do this in real time; however, I think it could be done quite nicely using an adapted version of Blender’s camera tracker.
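None of that exists in Blender yet, but for the curious, here’s a minimal sketch of the core math such a two-clip feature would rest on: standard linear (DLT) triangulation of one marker from two calibrated views. This is plain Python/numpy, nothing Blender-specific, and the projection matrices P1/P2 are assumed to already be known from a camera solve:

```python
# Hypothetical sketch: linear (DLT) triangulation of one marker from two
# views. P1 and P2 are assumed 3x4 camera projection matrices; x1 and x2
# are the marker's 2D positions in each clip on the same frame.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Least-squares 3D point from two 2D projections."""
    # Each view contributes two linear constraints on the homogeneous point
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to a 3D point
```

Blender’s solver already does something along these lines internally when it reconstructs tracks, so an adapted two-witness-camera mode doesn’t seem far-fetched to me.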

What I did in my video was glue some bright orange paper dots to my face using Telesis 5 Silicone Adhesive (tons friendlier than wood glue or anything similar :wink: ), then I isolated the red channel so that the markers appear to be glowing bright white. I am only a novice programmer at the moment, so there’s no way I can even begin to approach developing an automated system. I would be more than happy to help fund the development of a feature like this, as would many others, I’m sure.
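For anyone wanting to try the same preprocessing, here’s a minimal sketch of that red-channel isolation step done with Blender’s compositor from Python. The clip path is a placeholder, and the node type names are from the 2.6x API:

```python
# Hypothetical sketch: isolate the red channel of a clip in Blender's
# compositor, so bright orange markers read as near-white for tracking.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Load the performance footage (path is a placeholder)
clip = bpy.data.movieclips.load("//face_performance.mov")

clip_node = tree.nodes.new(type='CompositorNodeMovieClip')
clip_node.clip = clip

# Split the image into R/G/B/A and send only the red channel to the output
separate = tree.nodes.new(type='CompositorNodeSepRGBA')
composite = tree.nodes.new(type='CompositorNodeComposite')

tree.links.new(clip_node.outputs['Image'], separate.inputs['Image'])
tree.links.new(separate.outputs['R'], composite.inputs['Image'])
```

Render that out and track the result. If I remember right, the tracker also has per-track RGB channel toggles in its settings, which may get you most of the way there without a separate render pass.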

MrNexy, there are more than a few programmers on BA. I would be really surprised if something couldn’t be hacked together with the available talent pool here. I’d bet some of the mods would even be interested in throwing in their two cents. Also, it seems like the new Mango project might be involved in this sort of thing. I’ve already seen them doing a little camera tracking in some of the making-of docs they’ve put out. I don’t know if they are going to have CG characters in this one, but facial mocap certainly seems like something they would want a piece of…
But I think a tutorial detailing the exact process you used for your facial tracking would be the way to go. Once that’s up, everyone would understand the nuts and bolts of how it works in Blender. Then it’s a pretty straightforward question of writing working code.
Still, I’m not sure actual programming is really mandatory at this point. If it’s an easy process (and from some of the tuts I’ve been watching recently, it isn’t exactly a moon landing), then we should just establish a quick and simple workflow, one that’s easily repeatable yet gives very good results.
Here’s the second part of a pretty good tutorial I watched the other day on facial mocap. This guy does a pretty good job of explaining all the details of the process, although he does take his time; I think the whole set is three or four vids long. As he says, he uses only a few tracking dots (I think he should use many more…), but he does get the main points across. From what I can tell, this must be pretty similar to the way you did your demo. I would have liked to see more dots, and I very much would have liked to see him apply this to a full MakeHuman character, as the mouth mesh he made was more in Muppet valley. But he does get the whole process across. Do you think this is a good example of the technique, or is there a lot more that needs to be said?
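While we’re on the subject of “writing working code,” here’s a rough, hypothetical sketch of what the rig-driving end might look like: reading the 2D positions of tracked markers out of a clip with Blender’s Python API and baking them onto empties, which a face rig could then follow via constraints or drivers. The clip name, the scale factor, and the 2.6x object-linking call are all assumptions:

```python
# Hypothetical sketch: bake 2D track positions from a tracked clip onto
# animated empties, one per facial marker.
import bpy

scene = bpy.context.scene
clip = bpy.data.movieclips["face_performance.mov"]  # placeholder name
scale = 2.0  # world units spanned by the clip width (arbitrary assumption)

for track in clip.tracking.tracks:
    # One empty per tracking dot on the face
    empty = bpy.data.objects.new("mocap_" + track.name, None)
    scene.objects.link(empty)  # 2.6x API; newer versions link via collections

    for frame in range(scene.frame_start, scene.frame_end + 1):
        marker = track.markers.find_frame(frame)
        if marker is None or marker.mute:
            continue
        # marker.co is in normalized (0..1) clip space; map it to the XZ plane
        empty.location = (marker.co[0] * scale, 0.0, marker.co[1] * scale)
        empty.keyframe_insert(data_path="location", frame=frame)
```

That’s obviously not a finished tool, just the nuts-and-bolts part: once the markers live on empties, hooking them to bones or shape keys is ordinary rigging.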