Shoulder deformations rotated by multiple axes

I’ve been having trouble figuring out shoulder deformation caused by an armature. I’ve been using hook modifiers instead of shape keys, but I wrote a Python script so I can use them as if they were shape keys. I use hooks instead of shape keys because vertices move by a weighted average of the bones that control them, but by a sum of the hooks affecting them. I tried attaching a .blend file but it said it could not be uploaded (it is 2.5 MB). I attached a .blend from a previous thread which shows the armatures but not much of a mesh. The roll value for Shoulder.R should be -76.46 degrees, to resolve another problem.

The problem is when the upper arm bone is rotated by more than one axis. What I’ve been doing is defining hook modifiers for an upper arm rotation of 90, 135, and 180 degrees (for both the X and Z axes). Do I need to create modifiers for every combination of these angles on the X and Z axes as well?

Searching online I found a thread from several years ago that included a link to a tutorial but it is based on an old version of Blender. Do I need to study the Python script to find a solution? Is there a simpler way? I actually found that link on a thread I started, but it was about deforming a shoulder when the upper arm is rotated 180 degrees on the axis that runs between the shoulders (for my armature the X axis). My issue now is how to correct shoulder deformation with a combination of X, Y, and Z rotations.

This might be complicated by my use of Eulers. I use Eulers because I have two armatures: one for me to control directly and the other to deform the mesh. The bones in the deforming armature have constraints to copy rotations from the bones in the other armature. When I use quaternions or axis/angle, the copy rotation constraints result in the armatures not matching (some Euler orders, I think XYZ and ZYX, also result in the armatures not matching). I use YZX Eulers because the deforming armature has separate bones for Y rotations and for XZ rotations.

I’ve gotten the impression that Eulers are not easy to work with. Is that true? How exactly do copy rotation constraints work when the rotation is set as a quaternion or axis/angle? Does it simply copy the unit vector’s value for that coordinate (or the unit vector’s value times sine(angle/2) for quaternions)?
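
From what I can gather, a quaternion stores w = cos(angle/2) and a vector part equal to the unit axis scaled by sin(angle/2), and a Copy Rotation constraint copies the resulting orientation as a whole (converting between rotation modes) rather than individual channel values, though I’m not certain of the latter. A quick pure-Python check of the component math (my own sketch, not Blender code):

```python
import math

def axis_angle_to_quaternion(axis, angle_deg):
    """Quaternion (w, x, y, z) for a rotation of angle_deg about axis.

    w = cos(angle/2); the vector part is the unit axis scaled by
    sin(angle/2).
    """
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    half = math.radians(angle_deg) / 2.0
    s = math.sin(half) / n
    return (math.cos(half), ax * s, ay * s, az * s)

# A 90-degree rotation about Z: both w and z come out to about 0.7071.
q = axis_angle_to_quaternion((0.0, 0.0, 1.0), 90.0)
```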


bbone_twist_20160908.blend (1.23 MB)

I'm not quite sure what you're saying; however, you can use multiple shape keys in sequence. You can set up a shape key to run from, let's say, 90 to 120 degrees, using drivers.

The problem with sequential/stacked modifiers being driven by bone rotation values (which is what I assume drives your Hook modifiers) is that in the shoulder, the order of operations is much more important because of the many degrees of freedom on all axes this joint has. A particular arm position can be achieved using different rotation values for each axis depending on which axis is rotated first, second and third. So your hooks may be trying to compensate as if only one axis has been rotated, when in fact all three may have, leading to odd results. I had to deal with this problem when using pydriver scripts on the rig I used making Kata, and never quite resolved it perfectly. I’m now getting much better results using Transform constraints with Euler angles as the defining values, but I’m driving Bones with them, not Hooks. But Transformation constraints can also convert a rotation of a bone to a Location transform of the object using the constraint, so perhaps it will work with Hooks as well.

I have a bunch of bones in the armature that are driven by the bones that control the body, like the upper arm. I have six bones for each deformation: one for each of the positive and negative directions of the X, Y, and Z axes. Each bone is the parent of one hook modifier. So far, I have 84 hooks/bones. Each of the 84 bones is driven by the rotation on only one axis. There is also a unique vertex group (where most vertices in the group have different weights) for each hook.

I don’t think the hooks would compensate as if one axis has been rotated in my case since the driven bones which are parents of the hooks only change location and don’t rotate. I don’t think the order of hook modifiers on the stack matters when they don’t rotate or scale.

I wrote a Python function and put it in the driver namespace so I can enter interval(var,[(0,0),(0.15,90),(0,135)],options=1) as the scripted expression for the driver (var is the rotation value of the upper arm around one axis in local space). In the second argument, each tuple in the list gives, as its first value, the amount the driven bone should move at the rotation value given by the second value in the tuple. It uses a weighted average for intermediate angles. For example, when rotated 60 degrees, it will calculate (1/3)*0 + (2/3)*0.15 = 0.1. The options=1 means to convert var from radians to degrees.
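
A simplified, reconstructed sketch of what interval() does (in Blender the real function is registered in bpy.app.driver_namespace so driver expressions can call it):

```python
import math

def interval(var, points, options=0):
    """Piecewise-linear map from a rotation value to a bone offset.

    `points` is a list of (offset, angle) tuples sorted by angle;
    between two angles the offset is linearly interpolated, and outside
    the range it clamps to the end values.  options=1 converts `var`
    from radians to degrees first.  (A reconstructed sketch of the
    function described above, not the original.)
    """
    if options == 1:
        var = math.degrees(var)
    if var <= points[0][1]:
        return points[0][0]
    if var >= points[-1][1]:
        return points[-1][0]
    for (v0, a0), (v1, a1) in zip(points, points[1:]):
        if a0 <= var <= a1:
            t = (var - a0) / (a1 - a0)
            return (1.0 - t) * v0 + t * v1

# At 60 degrees: (1/3)*0 + (2/3)*0.15 = 0.1
interval(math.radians(60), [(0, 0), (0.15, 90), (0, 135)], options=1)
```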

This was the problem I faced with my pydrivers expressions: they used a bone’s Rotation to proportionally change the Location of helper bones that tugged the mesh into a corrective shape, much as your hooks seem to be doing. But I found that the order in which the bone rotations were evaluated did make a difference in the way the cumulative correction was applied, because achieving a certain arm position could be done with a number of different 3-axis bone rotation combinations, causing different Location values to be evaluated. It could very well be specific to how I was doing this, but I figured it might be something for you to look at as well, since the methods seem somewhat similar. Again, this was specific to the shoulder, where the 3-axis degrees of freedom are high.

I think I misunderstood. At first I thought you meant the order the hook modifiers are placed in the stack matters instead of the order a bone could rotate on its three axes to produce a specific rotation. A problem I have figuring this out is that I don’t know how X, Y, and Z rotation values are determined for a rotation when I use a rotation value of a bone as a variable for a driver or a copy rotation constraint. I just tested all six types of Eulers, and the copy rotation constraints only matched the two armatures for YZX and YXZ Eulers.

I looked up Euler angles on Wikipedia (the alternative definitions section) and it says (I believe for proper z-x-z Euler angles) that the values are determined by what angles would be necessary to rotate the object into position when it first rotates around the (local) z-axis, then the x-axis, then the z-axis again. Wikipedia says Euler values are not unique for a rotation (and I recall you said there are multiple ways for an object to rotate into a position).

Wikipedia says that when the second angle is 0, only the sum of the first and third is unique, but when I tried leaving the X value at 0 for a YXZ Euler, the bone was not rotated the same when I adjusted the Y and Z values to be different but have the same sum. I tried a YXZ Euler of (0, 60, 30), then set it to axis/angle, and the angle was 66.452 degrees with a unit vector of (-0.236, 0.881, 0.409). When I set it back to YXZ with the values (0, 30, 60), after switching to axis/angle again the angle was 66.452 again, but the unit vector was (-0.236, 0.409, 0.881), which is the same except the Y and Z values are transposed.

Wikipedia’s statement makes sense for z-x-z Eulers because when the second rotation is 0, it would be a first rotation about the z-axis, then no rotation, then another rotation around the z-axis, so the total rotation around the z-axis would basically be the sum of the first and third rotations. A YXZ Euler rotates around three different axes, though, so the sum rule wouldn’t apply.
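
To check this, I reproduced the numbers with plain quaternion math, assuming Blender applies a YXZ Euler as Y first, then X, then Z (my own sketch, not Blender code):

```python
import math

def quat_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def axis_quat(axis_index, deg):
    # Quaternion for a rotation of `deg` degrees about X (0), Y (1) or Z (2)
    half = math.radians(deg) / 2.0
    v = [0.0, 0.0, 0.0]
    v[axis_index] = math.sin(half)
    return (math.cos(half), v[0], v[1], v[2])

def yxz_to_axis_angle(x_deg, y_deg, z_deg):
    # YXZ order: Y applied first, then X, then Z, so q = qz * qx * qy
    q = quat_mul(axis_quat(2, z_deg),
                 quat_mul(axis_quat(0, x_deg), axis_quat(1, y_deg)))
    w, x, y, z = q
    angle = 2.0 * math.degrees(math.acos(max(-1.0, min(1.0, w))))
    s = math.sqrt(max(1e-12, 1.0 - w * w))
    return angle, (x / s, y / s, z / s)

# (X, Y, Z) = (0, 60, 30) vs (0, 30, 60): same angle, axis Y/Z swapped
a1, v1 = yxz_to_axis_angle(0, 60, 30)   # ~66.45 deg, ~(-0.236, 0.881, 0.409)
a2, v2 = yxz_to_axis_angle(0, 30, 60)   # ~66.45 deg, ~(-0.236, 0.409, 0.881)
```

This matches the values Blender showed me, so at least the convention seems consistent.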

I wonder if Blender is using a different definition of Eulers from Wikipedia’s definition.

Yep, it was the order of shoulder rotations I was referring to, and, more specifically, the different rotation values that could be used to achieve a near-identical arm position but produced different helper bone shifts. That’s why I was so pleased to see Transformation constraints introduced, as they do the same thing (converting a rotation to a shift in location) but seem much more reliable in their results. Plus I don’t have to delve into the API and try to figure out how to use the different Spaces (World, Local, Pose, etc.); it can be done quickly by simply experimenting with the constraint settings. Everything I’ve managed to do with automated correctives and muscle emulation has arisen from the advent of the Transformation constraint and a few others that premiered with it.

I tried experimenting with Transform constraints but still have problems. For most helper bones, I have two constraints to make the bone peak at a certain angle instead of remaining in place at greater angles. For example, a bone that controls a hook when the arm is rotated 90 degrees has a constraint with its source range 0-90 degrees, and another constraint with its source range 90-135. The first constraint has a destination motion of 0 to 0.35 and the second has 0 to -0.35. The reason for this is that the first constraint moves the bone forward as the arm is rotated from 0 to 90, and the second constraint moves the bone back as it rotates from 90 to 135. This meant making the max lower than the min, which I needed so the bone would move backward instead of continuing forward from 90 to 135.

A problem is that when the armature is at its resting state, the bones with a source range of 135 to 180 are not in their resting positions. I tried changing the 180 to 170 (in case 180 is interpreted as equivalent to -180), but that had no effect. Other than that, the bones affected by the X rotation of the arm seem to be working, but not those affected by the Z rotation. The bones affected by Z rotation maintain a constant location within the ranges of 0-90, 90-135, and 135-180. This might be because the arm rotates in the negative direction on the Z axis.

I have attached a .blend file of my issue but I had to delete most of the mesh to get the file size low enough to post. Although there are helper bones on the left and right side, I have only done things with the right side so far.


body_20160929at10.blend (1.49 MB)

I’ve had to handle a few situations where dual source ranges and two constraints are needed, and I think I see a problem with your setup, if I read it right:

Trans Constraint 1
Source: Min 0 Max +90
Destin: Min 0 Max 0.35

Trans Constraint 2
Source: Min +90 Max +135
Destin: Min 0 Max -0.35

The problem here is that you are defining two different offsets for the same rotation angle (90): Const 1 gives 0.35 and Const 2 gives 0. This will glitch things up badly. Remember that all offsets are relative to the bone at rest, at an angle of 0. So for Constraint 2, the Source is OK but the Destin should be Min 0.35 (matching the Const 1 value for the 90 Source) and Max 0 (moving back relative to the 90 source value).

This is confusing, I know, I struggled with it also. The problem is assuming Min & Max mean the same thing for both Source & Destin values. What is actually the case is, Min & Max refer to the Source rotation ranges as expected (correct as you have used them), but for Destin value, Min means the value to be used for the Min Source, and Max the value for the Max Source. Thus for the Destin values, Min can be larger than Max with no problems.
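
In pseudo-Python, my mental model of one channel of the constraint is something like this (a sketch, not Blender’s actual code):

```python
def transform_constraint(source, src_min, src_max, dst_min, dst_max):
    """One destination channel of a Transformation constraint: clamp the
    source value to [src_min, src_max], then map it linearly so that
    src_min -> dst_min and src_max -> dst_max.  dst_min being larger
    than dst_max is fine and simply produces retrograde motion."""
    clamped = max(src_min, min(src_max, source))
    t = (clamped - src_min) / (src_max - src_min)
    return dst_min + t * (dst_max - dst_min)

# Constraint 2 above: source 90-135 mapped to destination 0.35 -> 0
transform_constraint(112.5, 90, 135, 0.35, 0.0)   # halfway: 0.175
# Below the source range the output clamps to dst_min:
transform_constraint(0, 90, 135, 0.35, 0.0)       # 0.35
```

Note the clamping at the ends of the source range; that matters when stacking two of these.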

Also, in Rest Position, all constraints are disabled, so any bones under constraint should return to their default resting positions/rotations, etc. However, if you mean that with all Transforms zeroed (not the same as the Armature in Rest Position) some bones are offset, that is normal, as the constraints are in effect. But at times you have to balance the Source/Destination values so that zeroed transforms = zeroed constraint effects. That’s another dialogue entirely and is very specific to a particular situation.

I can’t look at your .blend right now (I’m rendering something I can’t interrupt or make render slower by opening 2 Blender instances) but will when I get the chance. Hope this info helps.

I had the impression that when there are two transform constraints on a bone, the values will be added. The local locations I want the bone to have based on the arm rotation are:

Arm rotation: 0; Bone location: 0
Arm rotation: 45; Bone location: 0.175
Arm rotation: 90; Bone location: 0.35
Arm rotation: 112.5; Bone location: 0.175
Arm rotation: 135; Bone location: 0

It seemed that when there was only one constraint with source rotations from 0 to 90, then for any angle greater than 90, the bone’s location would be the maximum destination value of the transform constraint. I created the second constraint with the negative of the destination of the first constraint to counter that effect. For example, for an arm rotation of 112.5 degrees, I expected the first constraint to result in a location of 0.35 and the second to result in -0.175. I thought the two values would be added, for 0.35 + (-0.175) = 0.175, to set the bone’s location. I also thought at 90 degrees the first constraint would result in 0.35 and the second in 0, so the bone’s location would be 0.35 + 0 = 0.35.

Also, if you later look at the .blend file, that example is for the bone “AdjShoulder90+Z.R” (in the grid of bones to the right, in the top row, second from the mesh and armature). The source is for the X values and the destination for the Z values, and I mapped:

Z >> X
Y >> Y
X >> Z

The Y and Z source and X and Y destination are all 0.

Constraint results are not added, they are evaluated sequentially based on the order of the stack of constraints, topmost first, bottom last. This in some cases can be a determining factor, but good planning can prevent that in most cases. All the computed values are offsets from the zero transform of the constrained object (in the case of Scale that means the zero value = 1.0).

Example: Const 1 Source = 90, Destin = 0.50 (a rotation-to-location transformation). When the source = 90, the constrained object moves +0.50 BU from its zero (unconstrained) location. If Const 2 Source = 90, Destin = 0, it will immediately return the constrained object to its zero point for that transform channel, as it is the most recently evaluated constraint.

Expressed as a narrative the dual constraint I described in post #9 above would do this: Const#1-- If target rotation is 0 - 90 deg, move the object 0 - 0.35 BU offset proportionally. If >90 degrees hold @ 0.35BU; Const #2 – If target rotation is 90 to 135, move the object from 0.35 BU to 0 BU offset proportionally. If >135 hold @ 0 BU offset. Thus the non-linear (out & back) motion desired is programmed between the two constraints, which can only deal by themselves with linear (proportional) solutions. In this case order of evaluation is not critical but it’s likely best to do it logically, so Const #2 follows Const #1. When doing this it’s really important to avoid overlapping Source ranges for the same transform channel, as they can introduce glitches by returning conflicting values, and only the last evaluated will be used.

I’ve modified my bone constraints so the second constraint has its min/max destination values reversed from the first one instead of negated. That is, for the first constraint min = 0, max = 0.35, and for the second min = 0.35, max = 0. This still doesn’t work, and I think it is because the constraint uses the min destination value when the source is rotated less than the min source value (and the max destination value when the source is rotated past the max source value). For the constraint with source min = 90, source max = 135, destination min = 0.35 and destination max = 0, I want it to evaluate to 0 for all source rotations less than 90 or greater than 135. However, when the source is rotated by 0 degrees, the constraint evaluates to 0.35 instead of 0.

Try switching the constraint order, I did not think it through as thoroughly as I should have. And if your Source’s range of possible rotations is <0 or >135 you may need other constraints to handle those situations. I always set Transform constraints to handle practical Max and Min source values, so any “fringe” values are covered, and with fully linear offsets as much as possible, but it may not be the best situation for how you have set up your hooks and corrective bones. Again, I’ll look at your .blend when I have a chance, it’s always best to deal in specifics if possible.

EDIT: Switching constraint order probably will not work either for the same reason, given the constraints as described; I know I did something like this at one point to compensate for a retrograde motion problem, but would have to dig the file out & study it, it was a while ago.

Switching the constraint order ended up having no effect. When I think about what you said about constraints being added sequentially, I think it can’t work with two transform constraints because the first will always be overridden by the second.
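
To spell it out, here’s how I now picture the evaluation, as a plain-Python sketch (my reading of it, not Blender’s actual code): each constraint computes its own clamped result, and the later one on the stack replaces the earlier one.

```python
def transform_constraint(source, src_min, src_max, dst_min, dst_max):
    # One channel of a Transformation constraint: clamp the source to
    # its range, then map src_min -> dst_min and src_max -> dst_max.
    t = (max(src_min, min(src_max, source)) - src_min) / (src_max - src_min)
    return dst_min + t * (dst_max - dst_min)

def stacked_result(arm_rotation):
    # If the later constraint overwrites the channel, only it matters.
    first = transform_constraint(arm_rotation, 0, 90, 0.0, 0.35)
    second = transform_constraint(arm_rotation, 90, 135, 0.35, 0.0)
    return second  # "first" never survives to the final pose

# At rest (0 degrees) the second constraint clamps to its destination
# min, so the bone sits at 0.35 instead of 0.
offset_at_rest = stacked_result(0)
```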

I’m thinking now maybe I should try creating a bone for the source bone with a transform constraint that copies the source bone’s rotation of each axis to the destination location of the same axis. This way I could use a driver to get the rotation values determined by the transform constraint.

Actually I said they were evaluated sequentially, not added, but given the current circumstances, you’re right, two constraints will not work because there are outlying source values and retrograde motions needed, but I’m sure I did something similar at one point, I just don’t recall exactly the configuration I used.

I finally got a chance to look over your file, and man, that is one complex rig you’ve got going there. To be honest, and I in no way seek to discount your effort, which is admirable, I think the rig may be too complex for what you are wanting to do. It looks to me like you are trying to use many, many helper bones to correct for deformation problems that are better solved by more efficient rig design and better weight painting. Keeping things as simple as possible is always a good rule to abide by when designing a rig, so starting with a bones-only, no-helpers design and getting it to perform at its very best possible level is the first step. That means studying rigs that perform well and learning why they do. One of my primary “learning tools” was the Sintel rig (still available on BlendSwap, I think); not a perfect design, but quite useful, and it can be stripped down to bare bones if need be and built back up to customize it, which is what I did for my current rig.

Once the basic rig (from whatever source, borrowed or DIY) can do decent deformations all on its own, you can take a look at where it’s having probs and plan on helper mechanisms to correct for the remaining issues, probably mostly in the shoulder region, but also possibly the elbow and knee joints, where correcting for mesh pinching and maintaining volume are often concerns. This is what I did with my rig for Kata, which I built from the ground up, and used pydriver scripts (now obsolete tech) to drive its corrective bones.

In terms of your current design, some things to consider:

  1. Two armatures may be overkill – all the bones can belong to the same rig and still function as you intend them, by making them either Deforming or not. As an example, the Sintel rig has a number of such control/deformer layers all in the same rig, divided up by bone layers for ease of working with them.

  2. Consider putting your helper bones near the places they are going to be working on the mesh, it may clarify exactly what kind of motion is needed to accomplish the corrections and/or other deformations (such as muscle emulation). It can also make identifying which helpers do what a lot easier, as they are already in a positional context.

I hope what I’ve written here isn’t a downer. I can see what you’re attempting and respect your efforts so far; I just wonder if some streamlining would make the task a lot easier to accomplish.

It looks to me like you are trying to use many, many helper bones to correct for deformation problems that are better solved by more efficient rig design and better weight painting.

The reason I created a large number of helper bones was to replace shape keys. I wrote a Python script that first stores the coordinates of all vertices of a mesh in a Blender text block. Then I edit the mesh to where I want the vertices to be in a deformation. When it looks good, I push a button and the Python script creates six vertex groups (one for each axis and positive/negative direction) and resets the mesh’s vertices back to the coordinates stored in the text file.

When the Python script creates the vertex groups, it stores the change in location of each vertex from the coordinates stored in the text file to where I moved it in edit mode. It finds the maximum and minimum amounts the vertices have moved along each axis. Then, for each axis and direction, it adds each vertex that has moved in that direction to the respective vertex group, weighting each vertex by the amount it moved divided by the maximum (for groups representing the positive direction) or the minimum (for groups representing the negative direction). I had to use six bones for each deformation because the weights are different for each axis, and since I can’t use negative weights I needed separate groups for the positive and negative directions.
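
As a plain-Python sketch of that weighting logic (stripped of the bpy parts, with made-up example coordinates):

```python
def directional_weights(rest, deformed):
    """Split per-vertex displacements into six weight groups (+X, -X,
    +Y, -Y, +Z, -Z), each weight being the displacement along that axis
    divided by the largest displacement seen in that direction.  A
    pure-Python sketch of the weighting described above; in the actual
    script these weights would go into Blender vertex groups.
    """
    deltas = [[d[i] - r[i] for i in range(3)] for r, d in zip(rest, deformed)]
    groups = {}
    for axis, name in enumerate("XYZ"):
        moved = [delta[axis] for delta in deltas]
        pos_max = max(max(moved), 0.0)   # largest positive displacement
        neg_min = min(min(moved), 0.0)   # largest negative displacement
        groups["+" + name] = [m / pos_max if m > 0 and pos_max else 0.0
                              for m in moved]
        groups["-" + name] = [m / neg_min if m < 0 and neg_min else 0.0
                              for m in moved]
    return groups

# Two made-up vertices: rest positions and where I dragged them to
rest = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
deformed = [(0.2, 0.0, -0.1), (1.1, 0.0, 0.0)]
g = directional_weights(rest, deformed)
# g["+X"] is about [1.0, 0.5]; g["-Z"] is about [1.0, 0.0]
```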

It didn’t seem complicated to me because the 42 helper bones move based on constraints or drivers, so I don’t have to do anything with them while I am posing my armature.

  1. Two armatures may be overkill – all the bones can belong to the same rig and still function as you intend them, by making them either Deforming or not. As an example, the Sintel rig has a number of such control/deformer layers all in the same rig, divided up by bone layers for ease of working with them.

I have two armatures so I won’t keep accidentally clicking helper bones instead of the bones I control directly, and also to do things like separate Y rotations from X and Z rotations (for the arms and legs).

One of my primary “learning tools” was the Sintel rig (still available on BlendSwap, I think) not a perfect design but quite useful, and it can be stripped down to bare bones if need be and built back up to customize it, which is what I did for my current rig.

I have downloaded .blend files with rigs in the past that I’ve found on the web and had trouble figuring them out. For example, when I was trying to figure out how arm rotations can be handled, I looked at the Proog rig from Elephant’s Dream, but when I rotate the lower arm, only the hand rotates and not the lower arm.

I’ll try to look for the Sintel rig.

I hope what I’ve written here isn’t a downer. I can see what you’re attempting and respect your efforts so far; I just wonder if some streamlining would make the task a lot easier to accomplish.

I appreciate the time you’ve taken to give me feedback. What I find a downer is when I can’t make something work in Blender and don’t know how to improve it. I don’t find it a downer for my work to be criticized because criticism is the only way I’ll know how to fix it.

Exactly why I did the pydrivers scripts for my Kata rig, which shows how different paths can lead to the same destination. No absolute answers.

You certainly seem to have thought your approach through and I wish you well in your efforts to bring it to fruition. BTW I prefer the word “critique” to “criticism” as it denotes positive feedback, which was my intent. It’s good to know you took my comments in the spirit in which they were offered. :smiley: