Ok, I guess I should explain how the animation system seems to work, according to my rummaging through the source and my Blender experience:
object final transform = parent transform + user-defined transforms + constraint transforms
I’m not sure whether the parent transform here is the parent’s final transform, or just its user-defined transform.
User-defined transforms are things like the transforms in 3D space that appear in the N-key panel, or the IPO/Action/NLA values for the object at a given frame. IPO drivers sort of come under here as well.
Constraint transforms: all the previous transforms are rolled into one lump and given to the constraints. Each constraint in turn is given this data, with whatever changes the constraints before it have added; it then does its thing, its changes are scaled back by the influence factor, and the result is integrated back into the final transform. All of this work is done on the matrices of the object.
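To make that order concrete, here’s a minimal sketch of the evaluation loop as I understand it. This is a numpy stand-in, not Blender’s actual code; the evaluate() method and influence attribute are names of my own invention, and the per-element blend is a simplification (the real code is more careful about rotations):

```python
import numpy as np

def evaluate_object(parent_matrix, user_matrix, constraints):
    # 1. parent and user-defined transforms are rolled into one lump...
    matrix = parent_matrix @ user_matrix

    # 2. ...which is handed to each constraint in turn, each one seeing
    #    the changes the constraints before it have made.
    for con in constraints:
        target = con.evaluate(matrix)  # the constraint "does its thing"
        # 3. its changes are scaled back by the influence factor and
        #    folded back into the final transform.
        matrix = matrix + con.influence * (target - matrix)

    return matrix
```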
If we look at the situation like this, we notice that the N-key panel only displays the USER-DEFINED transforms, not those of constraints OR parents.
User-defined transforms are relative to the parent (if any), or to the world/origin. I’m not sure how inverse-parents work, as I have rarely (if ever) used them.
Refer to the blender.org documentation on how these work…
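For what it’s worth, my rough guess at inverse-parenting (an assumption on my part, not something I’ve verified against the source) is that Blender stores the inverse of the parent’s matrix at parenting time, so the child doesn’t jump when the parent is assigned:

```python
import numpy as np

def make_parent_inverse(parent_world):
    # captured once, at parenting time: cancels out the parent's
    # transform as it was at that moment
    return np.linalg.inv(parent_world)

def child_world_matrix(parent_world, parent_inverse, child_local):
    # evaluated every frame afterwards: only *subsequent* motion of
    # the parent carries over to the child
    return parent_world @ parent_inverse @ child_local
```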
For your information: constraints on bones which have Local turned on (except Action, for some reason) are evaluated differently from the normal constraint evaluation code. This means that Copy Location, Copy Rotation, and my constraints Limit Location and Limit Rotation live in a separate function. There, instead of working on the matrices of the bone(s), the location constraints directly access the location variables of the bone(s) that the N-key panel uses. Hence why they behave that way.
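In sketch form, that local path looks something like this (the names are mine, not the ones from the source); the key difference is that it pokes the loc channels themselves rather than a matrix, which is why the result shows up in the N-key panel values:

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    # the location channels the N-key panel displays
    loc: list = field(default_factory=lambda: [0.0, 0.0, 0.0])

def copy_location_local(bone, target, influence):
    # blend the channel values directly, no matrices involved
    bone.loc = [l + influence * (t - l)
                for l, t in zip(bone.loc, target.loc)]
```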
… but, wait…
While writing these constraints, I started pondering the possibility of adding a new mode to the nodes. In this mode, users would be able to set up chains of transforms however they liked. Imagine it… Node Based Rigging!
IPO drivers and Action constraints could be superseded by an exciting new alternative: the ability to link either one transform to another, or a whole set of transforms to others (see the toy sketch after these examples)…
Shape keys could have their values strung together so that they influenced each other…
You could define what the constraints work on, and even what they don’t work on…
With parenting, you could selectively choose which elements of an object’s transform were affected by the parent…
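To make the idea a bit more concrete, here’s a toy sketch of what such transform-node wiring could look like. This is pure speculation on my part; nothing like it exists in Blender:

```python
class Channel:
    """One transform channel (a loc/rot/scale value, a shape key, ...)."""
    def __init__(self, value=0.0):
        self.value = value
        self.inputs = []  # (source_channel, weight) pairs

    def link(self, source, weight=1.0):
        # wire another channel in as a driver of this one
        self.inputs.append((source, weight))

    def evaluate(self):
        # a channel's final value is its own value plus weighted inputs
        return self.value + sum(src.evaluate() * w
                                for src, w in self.inputs)

# e.g. drive a shape key from a bone's rotation, no IPO driver needed
rot_x = Channel(0.5)
smile = Channel()
smile.link(rot_x, weight=2.0)
print(smile.evaluate())  # -> 1.0
```

The point being that any channel could become an input to any other: what IPO drivers do now, but generalised to whole sets of transforms.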
This sort of thing could make Blender even more powerful than Maya (I think ;)), but it would come at a cost. Firstly, users would need to be acutely aware of how all of these things interact, which means there would need to be good documentation that makes it clear how the whole thing works. Secondly, speed is a major consideration: customisability usually means reduced speed, as instead of going along a single black-and-white path, each step needs to be looked up (like reading a map). Thirdly, the amount of changes required to the inner workings of Blender would be so great that whoever implements this (ton, me, any other daring soul, or nobody at all) would probably have a hard time doing so.