Position-only Face (or Body) Rigging


TL;DR: I’d like to animate a mesh from a list of known 3D landmark/joint positions.

I have an ordered list of 3D points: either facial landmarks extracted from a human head, or joints captured from a human body. These point positions therefore encode facial expressions or body poses. (See images [1] and [2] for illustration.)
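To make the input concrete, here is a sketch of the kind of data I have. The joint names and coordinates below are invented for illustration (my real data follows a fixed layout, something like the 25-joint Kinect skeleton in [2]):

```python
# Hypothetical sketch of the input data (joint names and coordinates are
# made up for illustration). Each capture is an ordered list of (name, xyz)
# pairs; the ordering is fixed, so index i always refers to the same joint.
JOINT_NAMES = ["pelvis", "spine", "neck", "head", "shoulder_l", "elbow_l", "wrist_l"]

frame = [
    ("pelvis",     (0.00, 0.00, 1.00)),
    ("spine",      (0.00, 0.02, 1.25)),
    ("neck",       (0.00, 0.03, 1.50)),
    ("head",       (0.00, 0.05, 1.65)),
    ("shoulder_l", (0.18, 0.03, 1.45)),
    ("elbow_l",    (0.35, 0.05, 1.20)),
    ("wrist_l",    (0.40, 0.30, 1.05)),
]

# Convenience lookup from joint name to 3D position.
positions = dict(frame)
```

For an animation, I would have one such list per captured frame, always in the same order.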

I also already have a human face mesh with a neutral expression, and a human body mesh in a “T” pose. Both are of good quality (clean topology and so on).

What I’d like to do is animate the face/body using the reference landmarks/joints, in order to produce a face mesh showing a given expression, or a body mesh in a given pose (e.g. sitting).

I suppose I have to rig both meshes, so that I can later deform them using the 3D landmark/joint positions I have. However, I don’t know how to deform a mesh in a consistent way from positions alone (with potentially variable-length bones), and the tutorials I’ve found (and followed) turned out to be less relevant to my particular case than I had hoped.
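If I understand correctly, the position-only part boils down to this: for each bone, take the captured parent→child direction and rotate the bone’s rest-pose axis onto it (the captured distance could then either stretch the bone or be ignored, keeping the rigged length). Here is a sketch of that direction-to-rotation step in plain Python, with invented rest/live directions — please correct me if this is the wrong way to think about it:

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def rotation_between(a, b):
    """Quaternion (w, x, y, z) rotating unit vector a onto unit vector b.
    Sketch only: the antiparallel fallback picks an arbitrary axis and is
    wrong if a happens to lie along it."""
    dot = sum(x * y for x, y in zip(a, b))
    if dot < -0.999999:
        return (0.0, 1.0, 0.0, 0.0)  # 180 degrees about x (see docstring)
    cross = (a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0])
    w = 1.0 + dot
    n = math.sqrt(w * w + sum(c * c for c in cross))
    return (w / n, cross[0] / n, cross[1] / n, cross[2] / n)

# Invented example: the upper-arm bone points along +X in the rest pose,
# while the captured shoulder->elbow direction is +Z. The resulting
# quaternion is what I would expect to assign to the pose bone.
rest_dir = normalize((1.0, 0.0, 0.0))
live_dir = normalize((0.0, 0.0, 1.0))
q = rotation_between(rest_dir, live_dir)
```

My uncertainty is whether driving each bone independently like this stays consistent over a whole skeleton (parent rotations accumulate), and how to handle the variable bone lengths cleanly.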

I know what I’m looking for is possible in Blender, even in real time (see [3] for a demo), but I don’t know how to do it properly, or at all. What should I do?

Thanks for your help! :slight_smile:

(I can provide the meshes I created/bought, if needed.)

Note: if possible, I’d even like to produce a full animation from this (rather than a single frame, though a single frame is still very valuable to me). I’m fluent in Python, and I knew most of the important Blender functions before the 2.8 interface update.

[1] http://wct-inc.com/wp/wp-content/uploads/2014/04/face-recognition-2.jpg
[2] https://www.researchgate.net/profile/Minh_Nguyen202/publication/321892920/figure/fig3/AS:[email protected]/25-joints-of-MS-Kinect-human-skeleton-The-left-and-right-side-of-skeleton-are-swapped.png
[3] https://twitter.com/hellorokoko/status/1227472439212478464