Face capture test

Watched the video tutorial finally.
I think this is the exact method they used in Avatar, calibration included, just as you explain. It seems that's the way they did it too.

I see all these huge points on the guy in your video, and also in Tron Legacy and in Avatar, and I wonder why nobody uses the typical plumber's tape (“cinta de fontanero”) that comes in red, blue, and black. You just cut little, very little squares (not circles, to track better!) and attach them to your skin; smaller markers track better than bigger ones.
Also, in Tron Legacy for example, there are no markers on the lips, just in the middle of the zone between the lip and the nose. The guy in your video also has the markers there. To me that is a mistake. I would put the small markers right on the edges of the surfaces that move most. It's like modeling: if you want to model something, you place your edges where the mountains and valleys are, not in the middle. It's the same here; the markers must go exactly where the motion is biggest. I watched Tron Legacy yesterday, the making-of included, and wondered why the mouth moved so strangely on the villain (Clu). Looking at the markers, they didn't have any on the lips.
I am going to search for images and post them here.

Avatar's movements were near perfect. I was wrong that they were not using markers on the edge of the lips. They did!
It seems they used some green powder. I think my solution of little squares of red tape would track better for faces.

And I was right about Tron Legacy. This is wrong: there are no tracking markers on the edge of the lips, and that is essential. That is why the villain's mouth never looks right when it moves:

I guess that's why they shoot with face cams mounted on head gear. The problem with a fixed witness camera (where the face may not be head on) is Y rotation: the side of the face can become occluded when the actor turns away.

A solution could use two cameras at 90° to each other, with the performer staying roughly in the centre. You then take a nose track from both, join them, and use that for the Y-axis merge. This should give a sort of UV unwrap of the face, including both sides.
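
A rough sketch of that join, just to make the idea concrete. This is my reading of it (the front camera gives X, the side camera gives depth, both see Y); the tracks are per-frame (x, y) lists and all names are made up:

```python
def merge_views(front_track, side_track, front_nose, side_nose):
    """Join a front-camera track (X, Y) with a 90-degree side-camera
    track (depth, Y) into rough (X, Y, Z) points, using the nose track
    of each view as the shared reference. Assumes both cameras see the
    face at roughly the same scale."""
    merged = []
    for (fx, fy), (sx, sy), (nfx, nfy), (nsx, nsy) in zip(
            front_track, side_track, front_nose, side_nose):
        x = fx - nfx                        # left/right, from the front view
        z = sx - nsx                        # depth, from the side view
        y = ((fy - nfy) + (sy - nsy)) / 2   # Y is seen by both; average it
        merged.append((x, y, z))
    return merged
```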

It's enough to tell the guy to look straight at the camera. Obviously the head will still move in X and Y, so this must be “calibrated” too, this way:

The nose has three markers, and after tracking the video you create a new bone in the middle of these three trackers whose movement is the average of the three. I will call this new bone the origin.
Then for each marker the real X position would be: marker_x - (current_origin_x - original_origin_x)

So imagine the marker moves 23 in X and the origin moves 10 compared with the origin's first (original) position in the video (which has to start with the face in the rest position). That means the whole face moved 10, so our marker is really only moving 13:
23 - (10 - 0)

Of course the zero above would not really be a zero but an X position; I put it that way to be clearer.
Same for Y.
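
In plain Python the whole compensation would look something like this (a minimal sketch: each track is a list of (x, y) positions per frame, the first frame is the rest pose, and all names are just illustrative):

```python
def average(points):
    """Average a set of (x, y) positions -> the 'origin' pseudo-marker."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def compensate(marker_track, nose_tracks):
    """Subtract the head translation (origin drift) from one marker track."""
    # Per-frame origin = average of the three nose markers.
    origins = [average(frame_pts) for frame_pts in zip(*nose_tracks)]
    rest_x, rest_y = origins[0]        # original origin: frame 0, rest face
    out = []
    for (mx, my), (ox, oy) in zip(marker_track, origins):
        # marker - (current_origin - original_origin), for X and Y alike
        out.append((mx - (ox - rest_x), my - (oy - rest_y)))
    return out

# e.g. compensate(lip_track, [nose_a, nose_b, nose_c])
```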

And the guy must avoid rotating his head; he must always try to face the camera and keep his eyebrows parallel to the ground. If he rotates the head, the data will not be accurate. If that happens, the video must first be rotated back before tracking. Markers on the nose and ears, where the skin never moves, would indicate such a rotation. It would be great if one day you could place these markers and Blender corrected the video automatically, rotating it so that the markers always stay in the same exact place. Meanwhile, this can be done in After Effects, for example, with its stabilization feature.
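
If you'd rather fix the tracks than the footage, the roll correction itself is small: measure the tilt from two markers that don't move with expressions (the ears, say) and rotate the tracked points back by it. A sketch, with made-up names:

```python
import math

def roll_angle(left_ear, right_ear):
    """Tilt of the line between two stable markers relative to the
    ground; zero when the ear/eyebrow line is horizontal."""
    return math.atan2(right_ear[1] - left_ear[1],
                      right_ear[0] - left_ear[0])

def unrotate(points, angle, pivot):
    """Rotate tracked 2D points by -angle around a pivot to cancel roll."""
    c, s = math.cos(-angle), math.sin(-angle)
    px, py = pivot
    return [((x - px) * c - (y - py) * s + px,
             (x - px) * s + (y - py) * c + py) for x, y in points]
```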

Hmmm, telling actors not to move. Ok.

Not to rotate, or you will be unable to use militoarg's approach and will have to start coding rotation correction instead. So the choice is: do it the easy way without coding, or let the actors move all around the room (even losing sight of the face markers for several frames), but then you will have to code a whole program instead of using militoarg's way.

I guess you could easily slap a phone camera or similar on an arm hanging from their head.

Agreed, really good results. With a slider system, markers on the edge of the lips would be useful too.

That's a really good idea!

I came up with another solution for head compensation: track three points (a triangle) on the nose and add a driver expression (trigonometry) to all bones to get real-time compensation. It is theoretical for now, but I think it could work. It is basically the same concept as Bao2's but with rotation recognition; it is complex, but I think it can be done.
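
Written as plain Python instead of an actual driver expression, the trigonometry could look like this (just the concept, not working Blender drivers: the nose triangle gives a centroid for translation and an edge angle for roll, and both get undone):

```python
import math

def triangle_pose(p1, p2, p3):
    """Centroid and orientation of the nose triangle in one frame."""
    cx = (p1[0] + p2[0] + p3[0]) / 3
    cy = (p1[1] + p2[1] + p3[1]) / 3
    # Orientation: angle of one fixed edge of the triangle (p1 -> p2).
    return (cx, cy), math.atan2(p2[1] - p1[1], p2[0] - p1[0])

def compensate_frame(markers, tri_now, tri_rest):
    """Remove the head translation AND roll seen by the nose triangle."""
    (cx, cy), ang = triangle_pose(*tri_now)
    (rx, ry), rest_ang = triangle_pose(*tri_rest)
    c, s = math.cos(rest_ang - ang), math.sin(rest_ang - ang)
    out = []
    for x, y in markers:
        # Into the triangle's frame, undo the roll, back at the rest centroid.
        x, y = x - cx, y - cy
        out.append((x * c - y * s + rx, x * s + y * c + ry))
    return out
```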

I had a great idea for more efficient track points: styrofoam balls. They are spherical and can be tracked easily. I'll try to do something with this tomorrow.

I think markers must be rectangles, not spheres, because of the way the “recognition” is done in the tracker software: it likes contrast, like corners. Imagine you are only viewing a little part of a sphere: it always looks the same. Instead, if you are looking at a corner, and then the corner disappears and you are only viewing the edge, you know there was movement. So it is much easier for the software to track movement if corners are present. We need Sergey here to explain it better, or to tell me if I am wrong at all, but I am sure that with red/blue/black tape cut into little squares the tracking would be accurate to the millimetre. And faces are like that: a change of 1 mm is noticeable and means things to the viewer!!!

Obviously the tracking point would be one corner of the square (better to always use the same corner on all the squares).
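
For what it's worth, this intuition matches the classic corner measure from the tracking literature (the Harris response); I'm not claiming this is what Blender's tracker does internally, but it shows why a corner gives a firm 2D lock while a featureless blob interior doesn't:

```python
import numpy as np

def harris_response(patch, k=0.04):
    """Harris corner response of a small grayscale patch: clearly positive
    for corners, near zero for flat areas, negative for pure edges."""
    gy, gx = np.gradient(patch.astype(float))
    ixx, iyy, ixy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    return ixx * iyy - ixy * ixy - k * (ixx + iyy) ** 2

flat = np.ones((8, 8))                            # inside of a blob: no signal
corner = np.zeros((8, 8)); corner[:4, :4] = 1.0   # a square's corner
print(harris_response(flat), harris_response(corner))  # corner >> flat
```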

Your opinion is well received, great information. I'll try with spheres and little black squares too; it's the only way to know which one is the best option. I'll publish some tests as soon as possible.

Try doing very small movements, like when a person is in front of someone he doesn't like and makes a tiny facial movement that shows the dislike, but one subtle enough that the other person doesn't notice what he is thinking. I think those subtle movements will not be registered using spheres.

You were dead right! I tested black rhombus shapes with the LocScale tracker setting and got much more precision in the input data. I'll publish the test after I get the head movement compensation applied; I'm on that :wink: see you around!!!

Now in the next movie you will see in the “making of” all these actors with rhomboidal shapes, and you and I will know who really deserves the Oscar for breakthrough innovations that push the envelope :stuck_out_tongue: (and the Oscar goes to: millito!)

I am having some trouble sending location from empties to bones. If this works, the pipeline will change a lot, because you have to bake the empties to get the location data for compensating the head movement.
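
For reference, the bake itself can be done in a few lines of bpy. This is only a sketch (every object and bone name is a placeholder), and it assumes the empties already live in a space that matches the bones' rest space, which is exactly the part giving me trouble:

```python
import bpy

rig = bpy.data.objects["FaceRig"]                  # placeholder armature
pairs = [("Empty.LipCorner.L", "lip_corner.L"),    # (empty, bone) placeholders
         ("Empty.LipCorner.R", "lip_corner.R")]
scene = bpy.context.scene

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    for empty_name, bone_name in pairs:
        bone = rig.pose.bones[bone_name]
        # Copy the empty's location onto the bone and keyframe it; if the
        # spaces don't match, convert through rig.matrix_world first.
        bone.location = bpy.data.objects[empty_name].location
        bone.keyframe_insert(data_path="location", frame=frame)
```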

Head compensation test:

Seems to be working well. This method of tracking the face to drive a CG face is meant to be used with no rotations or translations of the actor's head. If he does rotate or translate, he will need to eliminate those movements himself, so it is up to him. Doing it automatically will never be perfect; there is a lot of math, camera distortion, and so on involved. Just keep the face unrotated, and the slight translations can be eliminated with this method. If you make big movements with the head, you had better start paying a developer to create new software that does magic they didn't even do in Avatar, where a camera fixed in front of the face was used to get rid of head rotations and translations.

I see you are placing the track point in the middle of the rhombus. I think the correct thing would be to track one corner of the rhombus.

So I'll try the corner, I think :slight_smile:

About the head compensation, I think I am going to stop with that for now. I would love to produce a little animation with Sintel and this stuff. I was thinking about something fun, like a ridiculous little video clip with some dancing and singing “Like a Virgin” or “I Say a Little Prayer for You”. Suggestions accepted, haha.

Something not copyright-protected, or your video will get pulled down. Great idea though. Can you get decent micro-movements?

Thanks for the explanation.

So are the white and dark track points just a form of makeup on your face before you shoot the footage?