As the comment below the video indicates, this is my first attempt at facial motion capture in Blender. It is a test to get to grips with the concept and workflow - and is therefore quite rough. I specifically wanted to test transformations, i.e. not duplicating the original character, but projecting it onto a face of different proportions.
There is definitely a lot of room for improvement, but I am satisfied with the test results for now. I have learnt a lot and will work on improving the quality next. At least I now know how to do it.
Please don’t comment on the quality, as that was not the focus at this stage.
(PS: I originally started this thread in the WIP forum, but I think that was incorrect. The test is complete anyway. When I post another one, it will be for the sake of improvement and optimising the pipeline.)