Using multiple rigs for the same character?

I have a general question about animation workflow.

I like to do a mixture of motion capture (I have ipisoft) and traditional keyframe animation. It seems that most of the common rigs (rigify, pitchipoy, mhx, blenrig) are very hit-and-miss when it comes to having mocap data applied to them (I generally use makewalk). Using makewalk generally requires creating a custom target file for these rigs, and even then the results are often not as good as I would hope (lots of foot sliding to clean up, even when the motion capture itself is quite clean).

What does work great(!!) for mocap is the good old-fashioned Blender human meta-rig. This creates quite a dilemma. Is it common practice to have multiple copies of your characters, each using a different rig, throughout a given project? Maybe one rigged with the meta-rig for a wide-angle motion capture shot, and a BlenRig version when you need a detailed close-up? What is the typical workflow here? It feels insane to have multiple copies of the same character, but I’m coming to the conclusion that it would make the most sense.

I say go with whatever works, why not? If it takes two rigs, then use two rigs. Why would that be insane? It’s what’s needed.

Thinking through the workflow, the only problem I see is blending from one rig to the other. If the first 300 frames are mocap animation and the next 300 frames are keyframe animation, then on the last frame of the mocap, frame 300, the other rig would have to match the same pose so there’s no ‘jump’ in the movement.

You mention the human meta-rig; if by that you mean the default human meta-rig that comes with rigify, then that is the route I would go, and I would use the generated rigify rig as the keyframe animation rig. This will make the switching much easier, since you are basically using two rigs with a matching bone layout. On the last frame of the mocap animation, frame 300, you pose the rigify rig to match the pose of the mocap rig as closely as possible. Then, on the meta-rig, you could add copy loc/rot/scale constraints to all of its bones, targeting the matching bones on the rigify rig. The constraints could all be controlled by the same driver, so you could slowly increase the influence of all the constraints over the last 10 frames of the mocap animation to blend the poses.
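If you’d rather script that setup than click it together by hand, here is a rough bpy sketch of the idea. The object names (“metarig” and “rig”), the 290–300 blend range, and the assumption that the two rigs share bone names are all placeholders to adjust for your own scene, and a single Copy Transforms constraint stands in for the separate loc/rot/scale constraints mentioned above.

```python
import bpy

# Assumed names and frame range -- adjust for your scene.
MOCAP_RIG = "metarig"        # armature carrying the mocap action
KEYFRAME_RIG = "rig"         # rig used for the keyframe animation
BLEND_START, BLEND_END = 290, 300   # last 10 frames of the mocap

mocap = bpy.data.objects[MOCAP_RIG]
keyrig = bpy.data.objects[KEYFRAME_RIG]

# Shared custom property that will drive every constraint's influence.
mocap["blend"] = 0.0

for pbone in mocap.pose.bones:
    # Skip bones that don't exist on the keyframe rig (map names here
    # if your second rig uses prefixed bone names).
    if pbone.name not in keyrig.pose.bones:
        continue

    # Copy Transforms = copy loc/rot/scale in one constraint.
    con = pbone.constraints.new('COPY_TRANSFORMS')
    con.target = keyrig
    con.subtarget = pbone.name
    con.name = "blend_to_keyframe_rig"
    con.influence = 0.0

    # Drive the influence from the shared "blend" property.
    drv = con.driver_add("influence").driver
    drv.type = 'AVERAGE'
    var = drv.variables.new()
    var.name = "blend"
    var.type = 'SINGLE_PROP'
    var.targets[0].id = mocap
    var.targets[0].data_path = '["blend"]'

# Keyframe the shared property so all the constraints fade in together
# over the last ten frames of the mocap.
mocap["blend"] = 0.0
mocap.keyframe_insert(data_path='["blend"]', frame=BLEND_START)
mocap["blend"] = 1.0
mocap.keyframe_insert(data_path='["blend"]', frame=BLEND_END)
```

From frame 300 onward the constraints keep the meta-rig locked to the rigify rig, so the keyframe animation you do on the rigify controls carries on without a jump.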

Well, now that I’ve thought it through, yeah, it would be insane, but it’s what’s needed…

Randy