How to create a "Get Pose" rig?

I was just watching this talk from BCON 2023, “Bringing Cube Creative's characters to life”, and at 22:55 the presenter introduces what she calls a “Get Pose” rig, used in specific scenarios. What it does is as follows:
You have a model that needs to be rigged while it's straight, for better deformations, but whose default pose is different (a curl, for example).
They animate the first 10 frames of this rig, and after that they create another, animatable, rig. This new rig is the one used to animate the character.

The broad process seems well explained, but maybe I'm too smooth-brained, because I don't get how they start a new rig from a posed one.

Is it as simple as creating a new rig, parenting the model to it, and rigging the model with the first Armature modifier fully active above the new one in the modifier stack? Or am I missing something?

Does anyone have experience with similar setups?

Thanks for your time!

I am not sure why the roundabout; it seems to me like you could just store the curled tail as a pose asset in your pose library and get on with your day. If you want to “bake” the bone transforms into the rest pose, you can always Ctrl+A → “Apply as Rest Pose”. Perhaps I'm missing something, because I don't see a problem.
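For intuition, here is a minimal single-bone sketch of what “Apply as Rest Pose” amounts to mathematically (plain 2D rotations, not Blender's API; the straight/curled setup is the tail example from the talk): a skinned vertex is transformed by `pose * inverse(rest)`, so making the current pose the new rest, while baking the posed vertex positions, leaves the deformed mesh exactly where it is.

```python
import math

def rot(a):
    # 2x2 rotation matrix (rows as tuples)
    c, s = math.cos(a), math.sin(a)
    return ((c, -s), (s, c))

def mul(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def inv(m):
    # the inverse of a pure rotation is its transpose
    return ((m[0][0], m[1][0]), (m[0][1], m[1][1]))

def skin(v, pose, rest):
    # single-bone linear blend skinning: v' = pose * rest^-1 * v
    return mul(pose, mul(inv(rest), v))

v = (1.0, 0.0)             # vertex bound while the tail is straight
straight = rot(0.0)        # rest (binding) orientation
curled = rot(math.pi / 2)  # the intended "default" pose

deformed = skin(v, curled, straight)  # vertex follows the curled bone

# "Apply as Rest Pose": bake the posed positions and rebind at the pose
v_baked = deformed
new_rest = curled
unchanged = skin(v_baked, curled, new_rest)  # identical to `deformed`
```

This also shows the catch the thread keeps circling around: after rebinding, all the weights act relative to the curled shape, which is exactly why people prefer to do the skinning work on the straight mesh.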


Hi, this is an extremely important domain for Organic Rigging.

I haven't watched it yet, so I have no idea if that's a vanilla Blender thing or a simple setup to achieve from scratch.

The main problem revolves around the brutal artifacts that conventional CG skinning methods/algorithms produce on natural/organic/realistic deformations, usually caused by rotations around any axis.

In Blender at least, I know there can be only one rest position at a time. By rest position I mean the 3D model's mesh in Edit Mode, or more rigorously, its skeleton/armature object set to Rest Position: the binding pose state.

Shape keys do create mesh keys, or morphs, but only the “Basis” shape key represents the true rest position. All the other shape keys morph the mesh while bypassing the effects of the skinning algorithm, and vice-versa. So deforms (from the skinning method, i.e. Armature deform) and morphs (from shape keys) are independent processes. But you probably already know that.
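As a sanity check on that independence, here is a toy sketch of the evaluation order (one vertex, one bone, 2D, hypothetical numbers; this is not Blender's evaluation code): the shape key offsets the rest mesh first, and the armature deform then acts on the morphed result.

```python
import math

def rotate(v, angle):
    # rotate a 2D point around the origin (stands in for a bone deform)
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

# Basis (rest) position of one vertex, and a shape-key target for it
basis = (1.0, 0.0)
key_target = (1.0, 0.5)

def evaluate(key_value, bone_angle):
    # 1) the shape key morphs the rest mesh (offset from Basis, scaled by value)
    morphed = (basis[0] + key_value * (key_target[0] - basis[0]),
               basis[1] + key_value * (key_target[1] - basis[1]))
    # 2) the armature deform is then applied on top of the morphed result
    return rotate(morphed, bone_angle)
```

Note that the morph itself is never aware of the bone rotation, and the skinning is never aware of the morph: each only sees the other's output, which is the independence described above.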

If we could have multiple rest positions on a 3D model in Blender, that would presumably require some sort of interpolation between, say, two radically different rest positions, computed by a new “composed” skinning algorithm (as if the model were rigged twice in one). For example, a 3D character could have a T-pose rest position plus a second rest position with the limbs strongly bent in a more useful direction. That alone wouldn't be sufficient, of course, because limbs bend in all sorts of orientations, including opposite directions; but even two rest positions would already be ludicrously good in this conception. Hypothetically, interpolating the skinning calculations across additional, simultaneous rest positions on the same rigged model would smooth out the artifacts generated by Armature deform, more or less preventing them from occurring, and hence offer a far more realistic/organic/intuitive way of skinning.

In other words… if we could promote additional shape keys to additional rest positions, shape keys would be the ultimate rigging method. But that is more of a dream. :sweat_smile: I don't believe it's all that complicated; there just aren't enough people interested (and resources involved) in making organic rigging really good. Out in the CG world at large, not just in Blender, we are still in the “Neolithic Age” of rigging at most; at least that's my opinion.

I made a few experiments some time ago which relate to this subject.
I thought that, instead of having a single mesh for a 3D character, I could have multiple mesh objects for it, each offering a different rest position. Theoretically, IF these frankenstein mesh objects could be regularly and smoothly swapped in realtime during animation without some sort of shaky/warping/blinking result, they could substitute for a lot of corrective methods. Unfortunately (or fortunately), the experiment didn't show enough potential for me.

Once I watch and (hopefully) understand the video's propositions, I might be able to think further. But for now, these are my thoughts on the subject.

PS: I’ve just watched the clip concerning “Get Pose”.

I think every time she mentions a separate or hidden “armature”, it is no mere armature object; it is a whole new rigged model, i.e. an armature object + mesh object (or at least an additional mesh object, because I don't see the need for two armature objects). If that is the case, it might have something to do with the frankenstein-like experiment I mentioned before. There would be two different mesh objects representing the pig-tail: one with a straight rest position, which might be the same as the rest of the body, and an additional one with the complex coiled rest position. Each of them would have its own rig, that is, its own skinning and deform bones, but they would be integrated by some sort of automation mechanism, such as drivers (so that as one mesh is masked away, the other appears seamlessly), and their deformations would be smoothed toward each other as the pose of one starts to match the pose of the other (how to achieve that adequately is the limitation I wasn't able to overcome). I could be completely wrong about the reality of their presented rig, and @Hadriscus's idea may be closer to the truth; but whichever the case, the rigging issues I pointed out are real.
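For what it's worth, the driver-based handoff imagined above could compute something like a crossfade weight from how far the pose has travelled between the two rest shapes. Everything here (the function names, the smoothstep shaping, the thresholds) is a hypothetical sketch, not anything shown in the talk; in Blender the result would feed, e.g., two Mask modifiers or visibility properties via a driver expression.

```python
def smoothstep(x):
    # clamp to [0, 1], then ease in/out
    x = max(0.0, min(1.0, x))
    return x * x * (3.0 - 2.0 * x)

def coil_visibility(pose_factor, start=0.2, end=0.8):
    """Crossfade weight between the two tail meshes.

    pose_factor: 0.0 = fully straight pose, 1.0 = fully coiled pose.
    Returns 0.0 while the straight mesh should show, 1.0 once the
    coiled mesh has fully taken over, easing between start and end.
    """
    return smoothstep((pose_factor - start) / (end - start))
```

The eased ramp avoids a hard pop at the swap frame, but it does nothing about the harder problem noted above: making the two meshes' deformations actually converge to the same shape at the moment of the handoff.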

I saw the video when it came out and I still don't get the need to create another rig with the first deformation applied. It's much easier and cleaner to just model/rig everything straight (tails, hair, accessories, etc.) and save a pose with the intended “default” deformation; now, with the asset libraries, it's even easier to organize than before.

At work we do this for things like hair and tails, but also for the eyes: all the characters are modeled with the eyelids closed (because it's way easier to rig and weight paint that way), and then we save a pose with the eyes open, so when the animators link a character into a scene they already have the intended “default” pose saved.


Yeah, this is what baffles me a bit. They mention that this method “blocks” the animator from messing with the default pose, but I don't understand how they achieve that (unless they use some scripting?).

I have only limited experience with Maya, but I read that you can create a rig in a “bind pose” and then have a default “rest pose” for easier use.

When I watched the video I assumed something like this was achieved, but now I’m not sure. :thinking::sweat_smile:


If I remember correctly, it's possible to have several bind poses in Maya. If they switched from Maya to Blender, maybe that's the reason they came up with a similar workflow…?

I agree with @Hadriscus post.

And I agree with @julperado 's first post.

Create the pig's tail mesh straight (or the eyelid closed), rig and weight it. Then create a new pose (curled tail / eyelids open) and set it as the rest pose, or save it as a pose in a pose library. (BTW, I haven't played with the pose library since the 2.76 days, so I don't know what's new there.)

So I backed the video up to 20:55 and started watching. She talks about facial shape keys, and at 21:40 about connecting the shape keys with drivers, explaining that “those tools are pretty new to us…”

That statement kinda shocked me a bit. I mean, drivers are kinda the backbone of rigging in Blender…

I’d take parts of that video with a grain of salt…

Randy


I think there must be more to it; these guys were classmates of mine, and they know their shit. But right now it's not obvious why they chose to do it the way they did.


Yeah, I'm just guessing here, but I'd say it has to do with the fact that they're talking about building a pipeline around Blender while coming from other software. Being used to handling things a certain way in a pipeline based on another program, it makes sense they'd try to replicate basically the same workflow. And they started with version 2.79 if I recall correctly; that must have been difficult, to say the least hehe.
