Are Blender's constraints fundamentally flawed?

It seems to me that the basic philosophy and architecture of Blender’s constraints is flawed. This leads to counterintuitive behavior that confounds “beginners” (really, anybody without a relatively advanced understanding of 3D math) and creates unnecessary limits even on people who do have that understanding. The problems it creates are often pernicious: easily missed until you inspect your interpolations carefully.

Let’s start by looking at how Blender currently handles constraints. I haven’t inspected the code; this understanding comes only from observing what Blender does, so I could easily be making some mistakes.

We start with f-curve values, which are modified or replaced by drivers, and those are turned into a 4x4 transformation matrix. When a constraint is evaluated, that matrix is first converted into a different space as necessary (world, bone local, etc.) and is then decomposed into Euler rotation, translation, and scale triplets. Blender then does some simple addition on these components (or recently, in the case of some scaling operations, multiplication), then converts the updated rotation/translation/scale triplets back into a new matrix. Then we look at the next constraint and repeat.

There are two main problems with handling things in this fashion. The first is aliasing: the fact that for any particular orientation, there exist multiple Euler rotation triplets. This isn’t just a matter of values outside the (-180°, 180°) range, and it isn’t fixed by the angle evaluation order. It means that when we convert Euler rotations to matrices and then back to Euler rotations, we don’t get the same values that we put in. In many cases, the individual components of the Euler rotation are not at all what we expect, even though the complete orientations, composed of the entire triplet, are exactly the same. (And, particularly, when we scale these individual components by some influence multiplier, and take into account funky Euler-angle interpolation, we don’t get anything remotely like what we wanted or expected.)
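To make the aliasing concrete, here is a minimal sketch in plain Python (no Blender required). The conversion formulas assume XYZ Euler order, which is one of the orders Blender offers; the input angles are made up for illustration:

```python
import math

def euler_to_matrix(x, y, z):
    # XYZ Euler: rotate about X first, then Y, then Z (R = Rz @ Ry @ Rx)
    cx, sx = math.cos(x), math.sin(x)
    cy, sy = math.cos(y), math.sin(y)
    cz, sz = math.cos(z), math.sin(z)
    return [[cz*cy, cz*sy*sx - sz*cx, cz*sy*cx + sz*sx],
            [sz*cy, sz*sy*sx + cz*cx, sz*sy*cx - cz*sx],
            [-sy,   cy*sx,            cy*cx]]

def matrix_to_euler(m):
    # Extracts ONE of the many valid triplets: Y is forced into [-90, 90] degrees
    y = math.asin(max(-1.0, min(1.0, -m[2][0])))
    return math.atan2(m[2][1], m[2][2]), y, math.atan2(m[1][0], m[0][0])

orig = tuple(math.radians(a) for a in (10, 100, 20))
rt = matrix_to_euler(euler_to_matrix(*orig))
print([round(math.degrees(a), 1) for a in rt])  # [-170.0, 80.0, -160.0]
```

Both triplets describe exactly the same orientation (rebuilding a matrix from either gives the same matrix), but scaling the channels of (10, 100, 20) by an influence factor gives a completely different result than scaling the channels of (-170, 80, -160).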

The second problem is that Blender is just replacing the matrix. In some cases, many cases, that’s fine. Where it breaks down is when we have skew: an object or bone with local rotation that has inherited non-uniform scale. Skewed transformation matrices do not have distinct scale vs. rotation components. When we create new matrices out of decomposed scale + rotation, parts of the original transformation are inappropriately discarded.
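The skew loss is easy to demonstrate outside Blender. Here is a minimal sketch in plain Python; the decomposition below is a simple Gram-Schmidt split, my stand-in for whatever decomposition Blender actually uses. The exact method doesn’t matter, because no pure rotation + scale pair can represent a skewed matrix:

```python
import math

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def matvec(m, v):
    return [sum(m[i][k]*v[k] for k in range(3)) for i in range(3)]

# Parent's non-uniform scale inherited by a locally rotated child: a skewed matrix
c = math.sqrt(2) / 2  # cos(45 deg) == sin(45 deg)
scale = [[2, 0, 0], [0, 1, 0], [0, 0, 1]]
rot45 = [[c, -c, 0], [c, c, 0], [0, 0, 1]]
skewed = matmul(scale, rot45)

def decompose_recompose(m):
    # Split into "rotation" (orthonormalized columns, via Gram-Schmidt) and
    # "scale" (column lengths), then recombine; any skew is silently discarded
    cols = [[m[r][i] for r in range(3)] for i in range(3)]
    lengths = [math.sqrt(sum(x*x for x in col)) for col in cols]
    u0 = [x / lengths[0] for x in cols[0]]
    d = sum(a*b for a, b in zip(cols[1], u0))
    w = [a - d*b for a, b in zip(cols[1], u0)]
    wl = math.sqrt(sum(x*x for x in w))
    u1 = [x / wl for x in w]
    u2 = [u0[1]*u1[2] - u0[2]*u1[1],
          u0[2]*u1[0] - u0[0]*u1[2],
          u0[0]*u1[1] - u0[1]*u1[0]]
    basis = [u0, u1, u2]
    return [[basis[i][r] * lengths[i] for i in range(3)] for r in range(3)]

p = [0.0, 1.0, 0.0]
print(matvec(skewed, p))                       # ~[-1.414, 0.707, 0.0]
print(matvec(decompose_recompose(skewed), p))  # ~[-0.707, 1.414, 0.0]
```

The round-tripped matrix moves the test point somewhere visibly different, even though nothing was deliberately changed in between.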

How should Blender be doing this? It should never decompose matrices into components, and constraints should not produce numbers that get added to those components. Instead, constraints should produce transformations that are matrix-multiplied into the existing transformation matrices.
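To illustrate, here is a minimal sketch in plain Python (3x3 matrices, a made-up 30-degree constraint delta, and a made-up non-uniform scale): applying the constraint as a matrix product preserves the whole transformation, skew included, and there is no decomposition step anywhere.

```python
import math

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

scale = [[2, 0, 0], [0, 1, 0], [0, 0, 1]]
m = matmul(scale, rot_z(math.radians(45)))  # skewed: non-uniform scale * rotation

# Constraint evaluated as a matrix product in local space: no decomposition
constrained = matmul(m, rot_z(math.radians(30)))

# The result is exactly scale * rot(75 deg): the skew survives intact
expected = matmul(scale, rot_z(math.radians(75)))
print(all(abs(constrained[i][j] - expected[i][j]) < 1e-12
          for i in range(3) for j in range(3)))  # True
```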

Rather than addressing this fundamental problem with how constraints are handled, Blender releases have applied band-aid fixes to individual issues on a case-by-case basis. The naive initial approach is the reason scale needed a multiplicative version added, and the reason the Copy Rotation constraint’s offset mode needed to be redone; none of that would have been an issue if Blender applied constraints as matrix multiplications. It’s responsible for continuing constraint issues, some of which would qualify as bugs, some as feature requests, and most of which fall into the wide, vague area between the two.

It’s entirely possible I’ve made some mistakes in thinking about this. I’m far from an expert myself. Any kind of response is welcome. Honestly, when it comes to constraints, I’ve just been holding my breath until we get node-based rigging, figuring that given sufficient tools, I can fix these problems myself. At the same time, I keep seeing more work done on these constraints, but the fixes never seem to be made with an understanding of the underlying problem, and so I’ve been increasingly concerned that developers don’t recognize the real issue.

I could certainly go into detail on how these problems impact nearly any (bone) constraint, and offer suggested changes to turn these constraints into matrix transformation operations. I think that in many cases, it’s pretty obvious to anybody conversant enough with 3D math, and if I tried to describe all of the impacts and fixes, it might end up being a short book.

4 Likes

Yes, you haven’t looked at the code, but since you’ve inferred a potential flaw in the constraint design, it might be worth bringing this up on the devtalk forum. The forum is meant for developers, but since your questions concern how the constraint system works internally, they probably won’t have a problem with you sharing your concerns there.

I don’t think anyone other than a developer who has worked on the animation system could really give you a good answer about how the constraints do, or should, work.

1 Like

I wouldn’t be surprised if, in the nearish future, the constraints get reworked for the ‘everything nodes’ project. Definitely worth discussing this in the user feedback section of the devtalk forum, especially before any significant work on constraints has begun.

And yeah, the current design is essentially from the mid-to-late noughties/2000s. So to my mind it should be no surprise if it’s showing its age.

I couldn’t understand some of what was talked about. If you could please provide a Blender scene that shows one of these problems, I think I might better grasp what you are talking about.

If you’re not experienced with 3D math, it’s not going to make sense. Sorry, that’s just the way it is.

If you’re experienced with 3D math, and there’s some particular stumbling point for you, I can explain further (if you say where you’re having trouble.)

If you want a single example of problems created by this, take a look at attached file:

squasheyes.blend (171.9 KB)

It demonstrates the inability of a Damped Track constraint to deal with skew in a realistic situation (a squash-and-stretch eye). Compare moving the track target bone on the constrained armature to manually adjusting the rotation on the unconstrained armature. The unconstrained armature was created just by duplicating, applying visual transform, and removing the constraint.

But it would be a mistake to think that a single example demonstrates the problem. What I’m talking about here is an entire class of problems with a common root.

To others: thanks, I may post on Blender dev later. I don’t like bugging them, and criticism like this is always tricky-- better to offer it someplace that’s not “home”-- and I’ve seen Brecht here before. I’ll give it a day or two and then consider reposting there.

I wouldn’t be surprised to see a plan for node-based constraints, as it is up there on the list of features that would benefit the most from such a setup (with the others being particles and modifiers).

It might be a couple of years out though, as it is taking a while just to get all of the base code reviewed and committed (ie. without actually adding function visible to users).

Yes, but a couple of years out is something that I would consider to be the nearish future. Then again, that’s from my biased viewpoint as someone who has followed Blender’s development for the better part of two decades now.

There is a lot going on there. There is a parent bone with a child bone. The child bone has all the influence over the geometry, plus a damped track constraint that wants to take over all the geometry the child controls.

One would think it goes like situation 1: parent influences child > child influences geometry > constraint influences child. What actually happens is situation 2: parent influences child > child influences geometry > constraint takes over the child’s geometry while ignoring the parent, because the constraint has no parent. It also appears to influence the child bone even though it does not: the child bone does not actually rotate at all if we check it as we move the target bone.

It’s a matter of who controls what. The reason it behaves like situation 2 is that constraints only affect the geometry, without affecting the rotation of the object’s axes in any situation. Even with a constraint on an object, the constraint only affects the geometry of the object and not the object’s axes. That is just the way constraints work. I’m guessing they work like this because it makes things easier for constraints that deform stuff, like Maintain Volume does.

Situation 1 could be offered on the individual constraint as a checkbox, something like an “Affect Bone Axis” toggle, for more control per constraint. That would be a good suggestion to make, as it would expand what Blender can do.

As a side note, it is cool the way it is now: the child bone with the constraint can still be rotated (even though the rotation is not seen) to deform the geometry while it keeps staring at the target bone.

I’d go a bit further and say that a lot of Blender’s settings for constraints, as well as other stuff, needs to be translated from engineer/mathematician jargon into artist-oriented language.

1 Like

The field showing you transforms doesn’t show you post-constraint transforms. It shows you pre-constraint transforms that are then operated on by the constraints. Post-constraint transforms are not directly visible from Blender, but you can access them by using drivers from rotational difference, transform channel, etc.

It would be a waste to independently calculate the effects of the constraint on everything affected by the constrained bone. Part of the reason that bones work is because they give you a single, easy to calculate transform that you can then apply to any number of objects, vertices, bones, whatever. In other words-- the geometry doesn’t know about the constraint. It doesn’t need to. It only needs to know what the bone’s final (post-constraint) transform is.

The actual order of operations is: 1) parent influences child; 2) child is modified by its f-curve/driven transform; 3) child is modified by constraints; 4) geometry is modified by child. The constraint never directly touches the geometry.
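As a sketch of that order in plain Python (3x3 rotation matrices standing in for the full 4x4 transforms, with made-up angles, and the constraint applied as a local-space delta for simplicity):

```python
import math

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

parent = rot_z(math.radians(20))                 # 1) parent influences child
child = matmul(parent, rot_z(math.radians(15)))  # 2) child's f-curve transform
child = matmul(child, rot_z(math.radians(10)))   # 3) constraint modifies the child

# 4) geometry is deformed by the child's final matrix only; the constraint
#    itself is invisible to the geometry
vertex = [1.0, 0.0, 0.0]
deformed = [sum(child[i][k]*vertex[k] for k in range(3)) for i in range(3)]
print([round(v, 3) for v in deformed])  # [0.707, 0.707, 0.0]
```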

However, even if you were to independently calculate the constraint for everything, it wouldn’t change anything about the behavior.

Constraints do affect objects’ axes. Armature modifiers, however, do not. If you want a constraint to affect an object’s axis, you’ll either need to bone parent something or you’ll need to use an object constraint. If you do either of those, you’ll clearly see constraints affecting objects’ axes.

What’s happening in that file does require some understanding of 3D math; you didn’t say how comfortable you are with that. The child bone is inheriting the parent’s scale in both armatures. In the constrained armature, this scale is then being separated out while the rotation is recalculated, and then location, rotation, and scale are recombined into a new 4x4 matrix. This leaves it with improperly inherited scale from its parent: it’s basically ignoring the orientation of its parent’s scale, adopting that scale in its pre-constraint transformation as if it were local scale, and losing all skew.

I did a test with object constraints. It doesn’t seem to affect the axes of the object.
Check it with the Transform panel under the Item tab when you hit N: the rotation of the object does not change with an object constraint.

The transform panel is the pre-constraint transform. Check the axes by using modifiers acting in local space, or by using object coordinates in materials, or any number of other ways.

If it operated like that, it would behave like my situation 1. It does not, so it does not work like that.

As for getting the post-constraint axes, I don’t know how to do that; you’re going to have to guide me step by step, as I’m not that familiar with drivers. Modifiers don’t display a rotation, so I’m not sure how to get it from them. Materials should rotate with the geometry, so no surprise there: even if it can be checked, it should rotate with the constraint that is affecting the geometry. Between local, global, and maybe a third geometry axis (I haven’t looked at the code, there could be even more), maybe you are getting the axes mixed up? The axis that seems to matter in the case of bones is the one that shows up in the Transform panel. If that doesn’t change, they are not rotating.

We’re getting way off topic here, watercycles. If you want to understand some of that stuff, feel free to get my attention in a different thread in one of the support forums. I can demonstrate any of those principles, but this isn’t the right thread for it.

Edit: the other advantage to making a new thread is that you’d have more people chiming in. If I was wrong, I’d get corrected, and if I was right, you’d see people echoing me and you’d have more confidence that what I was saying was correct.

This is showing up in the general forums so everyone can already see it.

Your post was confusing, and I’m sure our discussion is clearing it up for anyone who reads it. The more details you give, the better even the developers can understand the problem; the more we talk, the better it will be for everyone who reads.

To fully demonstrate a fundamental problem in the way Blender calculates things, there should be a minimum of three examples where this fundamental flaw comes into play, each showing the same flaw repeating. You gave no examples in your first post, which makes it hard even for a developer to know what you are talking about.

1 Like

If anybody’s interested, I’ve had an interesting discussion with Brecht at https://devtalk.blender.org/t/fundamental-flaws-in-blenders-constraint-philosophy/13921 . I would be surprised if there was a better source for how Blender works than Brecht, so if anybody wants to learn anything… (And there are more than 3 examples in there for anyone interested.)

I don’t think Brecht agrees with me that Blender’s approach is inherently flawed, but I’m confident that he’ll be on the lookout for problems similar to the ones I identify as related to this design philosophy.

1 Like