The bone (or object) wouldn’t need a target, but in practice I don’t think the pyconstraint scripts allow a script to find out any other info about the object except its provided matrix. So I think the constrained object could target itself, and that way we would have access to the target_object data.
I can think of a bunch of ways of implementing this, but I’m really math-handicapped and a noob with Python.
I only have loose clues from reading through the Python API:
-There’s this evaluatePose(frame) method for objects, which I guess opens a way to inquire about pose bones at different times.
-The idea is that bones with this constraint are inertial. When their parent/armature moves or rotates, the bone tries to stay behind, then accelerates, then decelerates, overshoots the target point, and accelerates back.
-The tips and roots of the involved bones can be used to calculate velocities and accelerations.
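A minimal sketch of that last point, assuming we can sample a bone tip’s world position at arbitrary frames. Here the sampler is just a hypothetical function of the frame number; in a pyconstraint it would presumably be fed from something like evaluatePose:

```python
# Sketch: estimate velocity and acceleration of a bone tip with backward
# finite differences over per-frame position samples. `sample` is a
# stand-in for whatever returns the tip's world position at a frame.

def velocity(sample, frame):
    """First difference: p[f] - p[f-1] (units: distance per frame)."""
    p1, p0 = sample(frame), sample(frame - 1)
    return [a - b for a, b in zip(p1, p0)]

def acceleration(sample, frame):
    """Second difference: p[f] - 2*p[f-1] + p[f-2]."""
    p2, p1, p0 = sample(frame), sample(frame - 1), sample(frame - 2)
    return [a - 2.0 * b + c for a, b, c in zip(p2, p1, p0)]

# Hypothetical motion: the tip moves along x as frame**2,
# so its acceleration along x is a constant 2.0 per frame^2.
tip = lambda f: (float(f * f), 0.0, 0.0)
velocity(tip, 10)      # -> [19.0, 0.0, 0.0]
acceleration(tip, 10)  # -> [2.0, 0.0, 0.0]
```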
I don’t know if this can be implemented with no target, or with the bone targeting itself - but a second idea would be to use a second bone as the target. The target bone moves normally (stiff) and the constrained object lags behind it.
The guts for this are already in Blender. What we need is a coder to link up the soft and rigid body physics systems directly to the armature system. Just to be clear, I am not that coder.
(this code does nothing; it just demonstrates the error)
If the BONE to which the constraint is applied is part of the same ARMATURE as the target bone, then evaluatePose(frame) produces an infinite doTarget loop, resulting in a ‘Memory Error’ in the console.
For now, I am testing with a second Armature.001, which is a complete copy but uses the same pose Ipos. I managed to get a bone to act as a ‘bunny ear’, oscillating after a lagged target acceleration using evaluatePose. But this inability to evaluate another bone within the same armature over time is frustrating.
I wrote some crude code. I feel like I’m in diapers coding in Python, so I prefer to explain what it does with an image:
Basically, doTarget calculates the target bone’s acceleration over the last N frames, along with its current velocity. The best option is to use the parent bone as the target and calculate the motion of its TAIL, which is a vector independent of constraints or Ipo curves - it’s just a final position, so it makes things easier.
Then doConstraint bends the bone in the direction opposite to the acceleration vector while in motion. When [the target, parent…] stops, the bone oscillates a little until it comes to rest. The math is very simple, but the code is a piece of horror right now. I need to clean it up before showing it.
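A toy one-dimensional version of that bend-and-settle behavior can be written as a damped spring; the function name and the constants below are mine, not from the actual script, and the real constraint would apply this per axis to the bone’s bend:

```python
# Toy 1-D damped spring: the lagged value chases a target, overshoots,
# oscillates, and settles -- the bend-and-settle behavior described above.
# stiffness pulls toward the target; damping < 1.0 bleeds off velocity.

def lag_step(pos, vel, target, stiffness=0.2, damping=0.85):
    vel = (vel + (target - pos) * stiffness) * damping
    return pos + vel, vel

pos, vel = 0.0, 0.0
history = []
for frame in range(60):
    # The target jumps to 1.0 at frame 0 and then holds still.
    pos, vel = lag_step(pos, vel, 1.0)
    history.append(pos)
# The lagged value overshoots past 1.0, oscillates, and ends up near 1.0.
```

Tuning stiffness changes how far the bone lags; tuning damping changes how long it wobbles after the target stops.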
The problem is that this won’t work in a comfortable way: as I said before, if the target bone is in the same armature object as the constrained bone, the evaluatePose method creates an infinite doTarget loop and a memory error, and the constraint won’t work.
That’s why in this example, the target bone is the bone’s parent in ANOTHER armature object (see image). But the armature datablock and the action are the same.
I am not sure I will get help in this forum. Perhaps I should report this as a bug. I wonder if this is fixable.
The other way is using softbodies and parenting bones to vertices, but that’s too much of a mess for such a simple effect, IMO.
I think that a GHOST armature object could be created just once, not linked to the scene.
How do I code this?
-> Query whether an object named ‘myarmature_ghost’ exists
-> If it doesn’t, create ‘myarmature_ghost’
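Those two steps are a plain get-or-create pattern. Here is a sketch with the lookup and creation passed in as callables, demonstrated against a stand-in dict “scene”; in Blender the two callbacks would wrap the real API calls (which I’m deliberately not guessing at here), but the control flow would be the same:

```python
# Generic get-or-create: look up an object by name, build and register
# it on a miss, and return the same object on every later call.
# The dict-backed scene below is a stand-in so this runs outside Blender.

def get_or_create(name, lookup, create, registry):
    obj = lookup(name)
    if obj is None:
        obj = create(name)
        registry[name] = obj
    return obj

scene = {}                     # stand-in for the real object database
lookup = scene.get             # returns None when the name is absent
create = lambda name: {'name': name, 'type': 'Armature'}

ghost = get_or_create('myarmature_ghost', lookup, create, scene)
again = get_or_create('myarmature_ghost', lookup, create, scene)
# `again` is the very same object: the ghost is only created once.
```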
Here’s a demo of using softbodies with armatures that’s very similar to the example you provided. Maybe you can somehow automate this setup with python?
Thanks, Kernod. Yes, I’d seen that method before. I was trying to avoid the mess of using real physics for such a simple effect, but now I have a Python mess anyway.