I am trying to animate a gauge for a GUI, partly as an experiment to weigh the pros and cons of animating things like this versus coding them.
I have 2 bones which have assigned weights that add up to 1. When I try to pose this, I get a garbled mess. Did I not set it up correctly, or is the math of weighted vertex animation simply not intended for things like this?
You are trying to use a bone rotation to scale a mesh along a circle. Your bone has to make the same movement as your target mesh.
One not-so-simple solution is bones with a Follow Path constraint; another (I think the simplest one) is a Curve modifier plus one bone for the mesh scaling, or you could also use two bones for the scaling.
Anyway, here is a file for testing; just scale the bone.
Another method is to use one bone in the armature and an invisible plane as the cutter for a Boolean modifier on the gauge mesh, like so: zyl_gauge.blend (496 KB)
So, there is an actual mix factor per vertex per bone, equal to weight / totalWeight. The position of the vertex under the influence of each participating bone is calculated, those positions are blended using the mix factors, and the result is added to the original position. What I expected was for the bone rotations to be interpolated instead. Ergo the problem lies within the maths.
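To illustrate why blending positions collapses a circular shape while blending rotations would not, here is a minimal 2D sketch (my own function names, not Blender's actual implementation): with two bones at 0° and 90° and equal weights, position blending pulls the vertex onto the chord (radius ≈ 0.707), while rotation blending keeps it on the circle.

```python
import numpy as np

def rot(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def lbs(v, thetas, weights):
    """Linear blend skinning: blend the *transformed positions*.
    v' = sum_i w_i * (R_i @ v)"""
    return sum(w * rot(t) @ v for t, w in zip(thetas, weights))

def rotation_lerp(v, thetas, weights):
    """What I expected instead: blend the *angles*, then rotate once.
    This keeps v at its original distance from the pivot."""
    return rot(np.dot(weights, thetas)) @ v

v = np.array([1.0, 0.0])        # a vertex on the unit circle
thetas = [0.0, np.pi / 2]       # two bones: 0 deg and 90 deg
weights = [0.5, 0.5]            # weights summing to 1

p_lbs  = lbs(v, thetas, weights)           # (0.5, 0.5), radius ~0.707
p_lerp = rotation_lerp(v, thetas, weights) # (0.707, 0.707), radius 1.0
```

That shrinking radius is exactly the "garbled mess" on a mesh that is supposed to sweep along a circle.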
I guess the reason for this is to save instructions in vertex shaders, as otherwise you’d have to reconstruct one quaternion per vertex per bone.
EDIT: Thinking about this some more, I realize that another downside of interpolating rotations directly would be that a vertex could only be weighted between two bones at most, because composing rotations is not commutative. But one can search for dual quaternion skinning and find that this is in fact being done.
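For completeness, here is a sketch of the rotation-blending idea using quaternions (my own helper names, not any library's API). For pure rotations about a shared pivot, dual quaternion skinning reduces to a normalized weighted quaternion blend, and the vertex keeps its distance from the axis instead of collapsing onto the chord:

```python
import numpy as np

def quat_about_z(theta):
    """Unit quaternion (w, x, y, z) for a rotation of theta about +Z."""
    return np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])

def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def blend_rotations(quats, weights, v):
    """Weighted quaternion blend (nlerp), then a single rotation of v.
    The renormalization step is what keeps the result a pure rotation."""
    q = sum(w * q_i for q_i, w in zip(quats, weights))
    q = q / np.linalg.norm(q)
    return quat_rotate(q, v)

v = np.array([1.0, 0.0, 0.0])                  # vertex on the unit circle
quats = [quat_about_z(0.0), quat_about_z(np.pi / 2)]
p = blend_rotations(quats, [0.5, 0.5], v)      # radius stays 1.0
```

Note that this handles any number of bones, since the blend happens in quaternion space before a single rotation is applied.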