Morphing between rigged characters

I’m looking for some advice on morphing rigged characters. A similar question was asked on this site in 2010 ("figure mixing" or morphing between rigged characters), so I’m wondering whether there have been any developments since then, or whether anyone has any new insights/ suggestions.

To recap the problem with a hypothetical example: let’s say that I have two characters - a child and an adult. Their mesh surfaces are in the same pose and are already in correspondence, so they have identical topology and I can simply apply one mesh as a shape key to morph between them.

Now let’s say that both characters are rigged with armatures that are identical in structure (e.g. number of bones), but differ in bone lengths/ joint locations. Do I need to morph the armature as well as the mesh in order to be able to pose/ animate intermediate morph characters? If so, how can I apply such transformations to the armature?

The easiest way to specify pose independently of the armature’s bone lengths is to specify the angle between connected bones at each joint rather than the location of each joint, since the latter is affected by both pose and bone lengths. I believe this is how mocap data is processed, so that, for example, Andy Serkis’s mocap data can be used to animate Gollum/ King Kong/ Caesar - characters with different skeletal proportions to his.

Is there a straightforward way to achieve this in Blender? Writing a Python script/ add-on to achieve this is going to involve a lot of trig! Any suggestions welcome. Thanks!

You’re overthinking it.

You can create a “variable bone” by making a control bone, duplicating it to get a deform bone parented to the control, and giving the deform bone a Copy Location constraint targeting the control bone, set to Local space on both ends with all axes inverted. The deform bone will then pivot around wherever you move the control bone to, rather than moving with the control bone.
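
Here’s a minimal sketch of that setup in Python, assuming an armature object named "Armature" containing a control bone "upper_arm_ctrl" and a deform bone "upper_arm_def" already parented to it (all names are hypothetical):

```python
import bpy

# Sketch of the "variable bone" setup described above; names are made up.
arm = bpy.data.objects["Armature"]
deform = arm.pose.bones["upper_arm_def"]

con = deform.constraints.new('COPY_LOCATION')
con.target = arm
con.subtarget = "upper_arm_ctrl"
con.target_space = 'LOCAL'
con.owner_space = 'LOCAL'
# Inverting every axis makes the deform bone cancel out the control's
# translation: it stays put, but now pivots around the moved control.
con.invert_x = con.invert_y = con.invert_z = True
```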

So you can have a child pose-- which is just the translation channels of the bones-- and an adult pose. You can lay this over the rotation channels of an animation in any fashion you’d like.
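
To illustrate, a sketch of blending those translation channels with a single factor (the bone names and offsets below are made up):

```python
import bpy

# Hypothetical per-bone translation offsets captured from the "child" and
# "adult" poses of the control bones.
CHILD = {"upper_arm_ctrl": (0.0, -0.05, 0.0), "thigh_ctrl": (0.0, -0.10, 0.0)}
ADULT = {"upper_arm_ctrl": (0.0, 0.08, 0.0), "thigh_ctrl": (0.0, 0.15, 0.0)}

def set_growth(arm, factor):
    """Blend control-bone translation between the child (0.0) and adult (1.0)
    poses, leaving the rotation channels free for the actual animation."""
    for name, child_loc in CHILD.items():
        adult_loc = ADULT[name]
        arm.pose.bones[name].location = [
            c + factor * (a - c) for c, a in zip(child_loc, adult_loc)
        ]

set_growth(bpy.data.objects["Armature"], 0.5)  # halfway between the two
```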

The actual lengths of the bones don’t matter. If you scale the mesh, you don’t want to scale the bones too, or else you’ll be double dipping.


Ok, well that sounds like good news! I haven’t set up any rigging before, so it seems I’ve misunderstood some basics. What is the difference between a control bone and a deform bone?

Also, do you know if there are any online demos/ examples of the type of rig that you’ve described here? I was considering using Rigify for this task (since I haven’t created the rigs yet) - would I be able to create “variable bones” for such a rig?

Deform bones are used to deform the mesh-- meshes are weighted to them. A control bone is used only to manipulate other bones. A quick way to prevent a bone from deforming meshes is to uncheck “Deform” in the Bone tab of the Properties editor.
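
The same toggle from Python, if you end up scripting the setup (names hypothetical):

```python
import bpy

# Equivalent of unchecking "Deform" in the Bone properties tab.
arm = bpy.data.objects["Armature"]
arm.data.bones["upper_arm_ctrl"].use_deform = False
```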

Implementing this with Rigify would be painful, because there are a lot of constraints and bones on an existing Rigify rig. If you’ve never set up any rigging before, this entire project is, frankly, too advanced for you. Crawl before you walk, walk before you run.

Do you really need that feature?
Do you need an animation where, for example, the child is walking and growing into an adult?

If not, and your character needs 2 or 3 states, it’s much better to rig them as separate characters.
Anyway, you can look into this rig, which allows making several characters with the same rig: https://www.blendswap.com/blend/7417

I’d agree with bandages here that it’s quite a complicated task, because you’ll have to make all the features of the rig compatible with the change in scale.
And both states should be equally easy to animate. All this is kind of easy in theory, quite tedious in practice. But if you really need it, it’s doable.

Prior to that, try to break down your needs and see if there isn’t a simpler way (like having separate characters). Another solution is to make a rig just for the transformation part: you’d have a child rig, an adult rig, and another, simplified rig that allows you to pose the character and make the transformation, without much in the way of facial controls, IK/FK switching, etc.
At animation time, you just use the transform rig when it’s needed and switch back to whatever character is needed.

It really depends on the case; if it’s for one shot, maybe you just need to cheat in that particular shot and use regular rigs elsewhere.

My actual use situation is a bit different from the example I gave. Although I don’t currently need to morph between characters during an animated sequence of changing pose, it seems like that should be possible. I initially just need to be able to apply the same animation (e.g. walking) to characters with different body proportions but the same structure.

To give a bit more detail: this task is for a scientific project rather than an artistic one. In addition to generating anatomically accurate surface meshes of different body shapes, I’m able to analyse skeletal structure to accurately determine the relative joint locations / bone lengths for each individual. I was wondering whether this more accurate data could be useful for constructing and/or morphing armatures?

The rigging task is certainly beyond my skill level, so I’ll most likely be outsourcing it. I just want to know the best conceptual approach to this problem, one that will give me the most anatomically accurate result and maximum flexibility in terms of later animation.

I’m looking to achieve something like described here: http://www.arishapiro.com/AvatarRiggingAndReshaping_FengCasasShapiro.pdf. However, unlike the 3D surface scan data used in that paper, I don’t have to guess where the joints are inside each body - I already have that data.
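
For what it’s worth, getting the measured joints into an armature seems scriptable; something like this sketch is roughly what I had in mind (the coordinates here are made up - mine would come from the skeletal analysis):

```python
import bpy

# Each bone runs from its parent joint (head) to its child joint (tail),
# in object space. Hypothetical data for illustration only.
JOINTS = {
    "femur.L": ((0.09, 0.00, 0.93), (0.10, 0.02, 0.48)),
    "tibia.L": ((0.10, 0.02, 0.48), (0.10, 0.04, 0.06)),
}

arm_data = bpy.data.armatures.new("subject_skeleton")
arm_obj = bpy.data.objects.new("subject_skeleton", arm_data)
bpy.context.collection.objects.link(arm_obj)
bpy.context.view_layer.objects.active = arm_obj

bpy.ops.object.mode_set(mode='EDIT')
for name, (head, tail) in JOINTS.items():
    eb = arm_data.edit_bones.new(name)
    eb.head, eb.tail = head, tail
bpy.ops.object.mode_set(mode='OBJECT')
```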

All that paper is talking about is placing joints. It is not about animating.

A typical human rig is animated only via rotation data anyway. There is no need to do any translation based on scale. The one exception to this would be bones that are animated by translation, like IK targets, where it would sometimes be appropriate to rescale their motion based on leg length-- but it’s not a problem that can be automated.

The problem with sharing (rotational) animation data is that in real life, behaviors have purposes. If you take a short-armed character and tell them to touch their nose, then apply that same animation to a character with long arms, their arms will pass through their head. But if you ask them to flail their arms about wildly, the animations will roughly match. So how do you know if a particular animation is wild flailing, or directed motion? This information is not contained in any animation format, and animators would be irritated at having to include it-- their jobs are hard enough as it is.

Or consider the example of IK foot targets compared between a character with long legs and a character with short legs. You can scale the translation of these targets to leg length and maintain leg rotation. But if you play the short-legged animation next to the long-legged animation, the long-legged animation will traverse a longer distance. What counts as matching animation in this case? There are three variables-- cadence, stride, and distance traversed. You can control for any two that you want (although leaving cadence as the free variable is tough; most animation packages aren’t built for that).
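
The mechanical part of that scaling is easy enough to script; a rough sketch of rescaling an IK target’s translation keys by a leg-length ratio (the action and bone names are hypothetical) might look like this:

```python
import bpy

def rescale_ik_translation(action, bone_name, ratio):
    """Scale the location F-curves of an IK target by a leg-length ratio.
    Assumes the action keys pose.bones["<bone_name>"].location."""
    data_path = 'pose.bones["%s"].location' % bone_name
    for fcu in action.fcurves:
        if fcu.data_path == data_path:
            for kp in fcu.keyframe_points:
                kp.co.y *= ratio            # key value
                kp.handle_left.y *= ratio   # keep the curve shape proportional
                kp.handle_right.y *= ratio

# e.g. the long-legged character's legs are 1.25x the source character's:
rescale_ik_translation(bpy.data.actions["walk"], "foot_ik.L", 1.25)
```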

Ok!

Thanks for adding more information on the type of project you’re working on! This helps to give better answers.

I’m not sure I understand everything, but:
- You may not need an advanced rig (like the ones made by Rigify), because those are needed when you animate by hand. In your case, you may want to use motion capture data, where the needed rig is much simpler (only deforming bones). For the arm you may need only 3 bones (upper_arm, lower_arm, hand), whereas on a Rigify rig you may have something like 20 bones that add extra controls for animators.

Sounds like a yes. As said, you may want to deal only with deforming bones, and it looks like with your method you can place bones accurately.
“Morphing” the rig is then just a matter of scaling bones. Things can get a bit more complicated; for example, in a walk animation you may want to keep the feet on the ground, but it may be possible to calculate the needed offset.
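
As a sketch of that scaling step (bone names and factors are hypothetical):

```python
import bpy

# Lengthen the child's limbs toward adult proportions by scaling pose bones
# along their local Y (length) axis; the mesh follows via the weights.
arm = bpy.data.objects["Armature"]
for name, y_scale in {"upper_arm.L": 1.3, "forearm.L": 1.25, "thigh.L": 1.4}.items():
    arm.pose.bones[name].scale = (1.0, y_scale, 1.0)
```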

Taking one animation and applying it to a different rig is called animation retargeting; it’s a common task, especially when dealing with motion capture (mocap) data. As bandages pointed out, there are many issues when doing so, like his example where a small character touches their face: the same animation applied to a bigger character leads to the hand going through the face.
But you may want to look for information on that, because it seems along the lines of what you’re trying to achieve.
I can’t help you more because I’ve got very little knowledge of mocap and animation retargeting; I’m more into regular, hand-made animation.
Hope that can help you anyway. From what I understand this shouldn’t be too complicated, especially if you use the simplest rig possible and let motion capture do the animation work for you.


A while ago I was looking to do something similar to the cases in this thread: adjusting a character’s armature to match the proportions formed by various shape keys while running a fluid animation. The problem is that while shape keys can be animated gradually, the switch to a different armature modifier seems to be an on/off state. I’m not experienced enough with f-curves and all that to know if there’s some trick to smoothly blend between two modifiers’ visibility.

While looking for ready-made solutions, I found this add-on which basically does the opposite of automatic weighting, namely creating an armature for existing weight groups: https://adared.ch/blender-automatic-armature-generation/
However, it requires that each vertex belongs to only one group, so it failed for my character (one imported from MakeHuman). There was also some other error even after running the Limit Total operation in weight paint mode, maybe a version incompatibility.

But now, for Blender 2.81 there has been a recent commit which makes the length of bones an edit-mode UI property and, as such, more easily scriptable: https://developer.blender.org/rBd4f8bc80
Maybe someone with enough scripting skills could achieve a morphable armature with this. The commit says “This allows accessing it from drivers and using it in UI”, but in the length property pop-up menu the Copy Data Path option is grayed out, so I don’t know…?
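
For reference, bone length has been settable from edit mode via Python for a while; a sketch of feeding measured lengths in (the data here is hypothetical):

```python
import bpy

# Hypothetical measured bone lengths, in metres.
MEASURED = {"femur.L": 0.46, "tibia.L": 0.38}

arm = bpy.data.objects["Armature"]
bpy.context.view_layer.objects.active = arm
bpy.ops.object.mode_set(mode='EDIT')
for name, length in MEASURED.items():
    # Setting EditBone.length moves the tail along the bone's axis.
    arm.data.edit_bones[name].length = length
bpy.ops.object.mode_set(mode='OBJECT')
```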

You can use multiple armature modifiers, but you’re going to end up with volume loss from that. Better to stick with a single morphable armature, which, like I said, is entirely possible.

Absolutely no scripting is required, although implementation of a script could certainly simplify the creation of bones and constraints needed.

Actually I think Rigify could even be the most straightforward solution here. All the main controllers can be scaled up and down, and the respective body parts will reflect that properly. Given you’ve enabled stretch ’n squash in your Rigify rig (on by default), you can abuse those “tweak” bones to stretch or compress limbs to match the desired proportions. All in all that allows a lot of character morphing. The tradeoff is, of course, that those controllers can’t be used for “regular” animation anymore; they’re now redirected to be morph controllers.
That can be remedied though.
In fact, I did use Rigify in a similar way a while ago in order to create a kind of character creation system for stand-in background characters. I used a simple second armature to control the morphs. It consisted of only a handful of disconnected bones, whose scale and translation would be transferred to the respective controller bones of the character’s main armature using Copy Scale and Copy Location constraints, set to Local space with “Offset” checked.
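
A sketch of that transfer in Python (object and bone names are hypothetical):

```python
import bpy

# Each main-rig controller copies scale and location from the same-named
# bone on a small "morph" armature, in local space, with Offset on so
# normal animation still layers on top.
rig = bpy.data.objects["rig"]          # the Rigify character rig
morph = bpy.data.objects["morph_rig"]  # the handful of disconnected bones

for name in ("hand_ik.L", "foot_ik.L", "torso"):
    pbone = rig.pose.bones[name]
    for ctype in ('COPY_SCALE', 'COPY_LOCATION'):
        con = pbone.constraints.new(ctype)
        con.target = morph
        con.subtarget = name           # morph rig uses matching bone names
        con.target_space = 'LOCAL'
        con.owner_space = 'LOCAL'
        con.use_offset = True          # the "Offset" checkbox
```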

The approach has its limits concerning detail and accuracy, but is very simple to implement, and I wanted a quick solution …

PS:
I was a bit quick writing that response; I’ve now read the rest of this thread, including all the additional info you gave. That’s a scenario very different from the one I had in mind, so no idea whether the approach I described is of any use here; that requires more thought… :)

Ah! I couldn’t understand this at first, but indeed it looks like the initial location of a bone’s tail has little if any effect on posing.

Your variable bone method works really well. Though it’s a bit of work to set up for multiple bones without a script, it’s robust and I imagine it could be driven for any number of differently proportioned shapes with just one set of control bones. Many thanks!