Question about how blend shapes are calculated

I'm used to doing my own animations using morph targets. I've been working on a head animation and now want to turn it into a full-figure animation.

So when I was animating just the head/face, I would (in C#) traverse the vertices and work out which ones had to move.
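Roughly, what I was doing looked like this, a minimal sketch of per-vertex morph blending (the function and data names are just illustrative, not my actual project code):

```python
# Toy sketch of per-vertex morph-target blending: every vertex is linearly
# interpolated from the base shape toward the target shape by a weight.
# Names here (apply_morph, base, target) are invented for illustration.

def apply_morph(base, target, weight):
    """Blend every vertex from base toward target by weight (0..1)."""
    return [
        tuple(b + weight * (t - b) for b, t in zip(bv, tv))
        for bv, tv in zip(base, target)
    ]

# Two-vertex example: only the second vertex differs between the shapes.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
target = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(apply_morph(base, target, 0.5))  # [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0)]
```

Note that this loop visits every vertex of the mesh, moved or not, which is exactly what I'm hoping to avoid on a full body.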

I’ve come back to Blender now and am moving my project over to Unity. Blend shapes seem to travel well from Blender to Unity: I animated a cube, and also managed to get my character to open and close her mouth using that method.

I might be coming at Blender the wrong way. For efficiency, I thought I would build the model from two groups, face and body, with the blend shapes affecting only the face/head.

I’ve also managed to rig the body, but I’m having difficulty incorporating the head/face (the separate group) into the rig; it’s just not working. I was selecting the body plus the head, then the armature, and pressing Ctrl-P to parent them to the rig. The only trouble is that the body moves but the face doesn’t. It remains static.

If I join the head and the body beforehand, I get an error, and nothing moves at all when I move the bones.

Then I wondered whether it would be simpler to work from a single model in the first place: start with a neutral model and add a blend shape for each face shape (mouth, eyes, etc.). So my question is whether that is an efficient use of blend shapes in Blender.
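To show what I have in mind with the single-model approach, here is a rough sketch of mixing several face shapes additively on top of one neutral mesh; the shape names and helper function are made up purely for illustration:

```python
# Toy sketch of mixing multiple blend shapes on one neutral mesh:
# each shape contributes a weighted per-vertex offset from the neutral pose.
# All names (mix_shapes, mouth_open, eyes_closed) are invented for this example.

def mix_shapes(neutral, shapes, weights):
    """Sum weighted per-vertex offsets from each shape onto the neutral mesh."""
    out = []
    for i, nv in enumerate(neutral):
        v = list(nv)
        for name, shape in shapes.items():
            w = weights.get(name, 0.0)
            if w:
                for axis in range(3):
                    v[axis] += w * (shape[i][axis] - nv[axis])
        out.append(tuple(v))
    return out

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
shapes = {
    "mouth_open": [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)],
    "eyes_closed": [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)],
}
print(mix_shapes(neutral, shapes, {"mouth_open": 0.5, "eyes_closed": 1.0}))
# [(0.0, 0.0, 1.0), (1.0, 0.5, 0.0)]
```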

Does the blend shape algorithm ignore vertices that do not change, for example? I don’t want to waste processing power if it traverses the whole model. If it does ignore them, that seems the simplest way to go; i.e., would there actually be any benefit to animating the head separately, as I have been trying to do?
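To make the question concrete: if shape keys were stored as sparse per-vertex deltas, only the vertices that actually move would be stored and processed. This toy sketch (all names invented; I have no idea whether Blender or Unity really works this way internally, which is what I'm asking) shows the idea:

```python
# Toy sketch of the sparse-delta idea I'm asking about: keep only the
# vertices that differ from the basis shape, so applying a shape key
# touches just those vertices instead of traversing the whole mesh.
# make_sparse_deltas / apply_sparse are invented names, not a real API.

def make_sparse_deltas(basis, shape):
    """Map vertex index -> delta, keeping only vertices that moved."""
    return {
        i: tuple(s - b for b, s in zip(bv, sv))
        for i, (bv, sv) in enumerate(zip(basis, shape))
        if bv != sv
    }

def apply_sparse(basis, deltas, weight):
    """Apply weighted deltas; untouched vertices are copied as-is."""
    out = list(basis)
    for i, d in deltas.items():
        out[i] = tuple(b + weight * dd for b, dd in zip(basis[i], d))
    return out

basis = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
mouth_open = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.0, 0.0)]
deltas = make_sparse_deltas(basis, mouth_open)
print(len(deltas))                      # 1 -- only the moved vertex is stored
print(apply_sparse(basis, deltas, 1.0))  # [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.0, 0.0)]
```

If the real implementation does something like this, splitting the head into a separate object would presumably buy me very little.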

I hope that made sense. Thanks for looking; I’d really love some advice :smiley:

A lot of views. Anyone have any thoughts?