2.x Rigging refresh discussion

Hi all, Hadrien here, how's everybody doing?

Remember when Ton mentioned he wanted to upgrade the rigging system? It got me thinking… since Blender development is so open to feedback, I thought: why don't we, as riggers and animators (and really everyone who has an opinion), start gathering ideas, listing pitfalls of the current system, and try to imagine what Blender two-point-something could look like?

I’ve come up with a document containing my personal thoughts about this system. It is by no means exhaustive or even completely correct - I tried to gather all the stuff I knew of, all of my pet peeves, my own dreams. I made it open to comments, and I encourage everybody to add to the list here on this forum.

It’s meant to be an ongoing discussion - whether we choose to contact the developers at some point to present our thoughts more formally is up to us. I know right now they’re all flying ten thousand feet up in a plane bound for Amsterdam, and the rigging system might be the least of their worries. Then again, they’re all gathered, and… they may very well be tempted to discuss it. :eyebrowlift2:

Head over here for the document.

Cheers,

Hadrien

Great initiative!

Not entirely sure this question fits in here, but since I figured it was too vague to make a whole new topic about, I’ll ask: is there a way to make a sequence of bones in a rig switch between multiple different IK solvers (for example, if you wanted a leg in a model to be able to move in both a digitigrade and a plantigrade fashion)?

As far as I know, you have to build two different chains, one with two IK segments, the other with three, and finally a third bone chain copying the transforms of the first two.
Alternatively, you can play with the stiffness values in the bone properties; that works quite well in my experience (on the eoraptor example, lift the leg then move the corresponding slider).
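To make that two-chain setup concrete, here’s a minimal bpy sketch (all rig and bone names are made up) that blends between the two IK results with a custom property driving a Copy Transforms constraint’s influence:

```python
import bpy

# Hypothetical names: armature "Rig" with result bones "foot_2seg" and
# "foot_3seg" (tips of the two IK chains), plus a final bone
# "foot_final" that copies whichever result is active.
rig = bpy.data.objects["Rig"]
final = rig.pose.bones["foot_final"]

# One Copy Transforms constraint per source chain.
for name in ("foot_2seg", "foot_3seg"):
    con = final.constraints.new('COPY_TRANSFORMS')
    con.name = "copy_" + name
    con.target = rig
    con.subtarget = name

# A custom property on the final bone acts as the switch (0.0 to 1.0).
final["ik_mode"] = 0.0

# Drive the second constraint's influence with that property; since a
# later constraint overrides earlier ones at full influence, sliding
# ik_mode blends from the 2-segment result to the 3-segment one.
fcu = final.constraints["copy_foot_3seg"].driver_add("influence")
drv = fcu.driver
drv.type = 'SUM'
var = drv.variables.new()
var.name = "mode"
var.type = 'SINGLE_PROP'
var.targets[0].id = rig
var.targets[0].data_path = 'pose.bones["foot_final"]["ik_mode"]'
```

Sliding ik_mode in the sidebar then fades the final chain from one solver’s result to the other, so the animator gets a single switch instead of juggling two chains by hand.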

Rest assured, your question is totally relevant, because one thing I would like to talk about is deduplication, which can play a big role in remedying the increasing complexity of character rigs.

There are no developers behind this, so… I’m moving right along. This place is a graveyard.

My dream list includes stealing a lot of ideas from R&H’s in-house software Voodoo. Their rigging tools were something else. Extending envelopes to scale on the X and Y axes, and extending heat weighting to take envelope volume into account, would go a long way.

https://vimeo.com/96958591

Ahhh, nice one! I remember all the videos that popped up when R&H sank; that’s when I discovered delta mush… soon after, it was in Blender. By the way, I notice you were involved, cheers!
Checking out this article about Voodoo on FxGuide… and Matt Derksen’s reel is a joy to watch too. :o

Nice to know that I was not completely off base in asking that question :slight_smile: I’m very much a novice in the field of 3DCG animation, and although I do find the rigging aspect very interesting and rewarding, some things do strike me as quite tedious and/or discouragingly complex to accomplish.

It would be nice for a single IK chain to have multiple pole targets that one could switch between freely (setting one as active), in order to allow for different movement behaviours. Perhaps this is something that could be solved by a node-based system in the far future?

Hope we get more opinions, to generate further interest in this topic. I do believe it’s just as important as the other, more actively discussed areas of Blender.

I am still wary of these kinds of initiatives, especially if they are started without the devs showing an interest from the beginning. But having actually taken a look at this one, I would say most of these points have already been mentioned by the devs as things they want to work on.

Joshua Leung (Aligorith) mentioned pose sculpting some time back, but I don’t know if he is going to the Code Quest just for Grease Pencil or can work on other things.

Face maps were on for 2.8, then they weren’t; I think Campbell was working on that.

As for character pickers, and doing that through custom editors: I think now would really be a good time to poke the devs and get them to do it, because they are reworking the interface.

Tyrant Monkey, as you said, those are things that have been mentioned before, and very explicitly by Ton himself quite recently (cf. above). I don’t think anyone has anything to lose in that endeavour, except perhaps a little bit of energy.
Personally I don’t see that happening during the Code Quest (face maps, and especially custom editors, sound like big projects), seeing how much they already have on their plate… I wouldn’t hold my breath, but who knows?
You’re welcome to participate if anything else comes to mind!

There’s more, too: I contacted Joshua yesterday, sent him my little document, and this was the bulk of his reply:

I’ve had a quick skim through. You raise lots of good points about many areas. All I’ll say about this now is that in the doc I’m preparing, I take a slightly broader view of some things that touch on quite a few of the same problems you mentioned. Some key things here include:

  • “the role/nature of rigs” (e.g. mixins / bidirectionality / editing configurations / widgets+sculpting / visibility+flexibility of controls),
  • “editing across time” (ie. in short, ZBRUSH for animation × PSCULPT × multiframe edit × onionskins+paths × VR anim tool inspired stuff)

Personally, “editing across time” made me think of ChronoSculpt… as for multiframe editing, this is reminiscent of what we can see right now in the grease pencil branch, and it would go really well with onion skinning (to tell the truth, it would probably be rather clunky to use without it).

Yamanote, agreed. Lots of stuff could be made simpler, yet at the same time it’s important to keep some low-level control over things. Houdini does that very well with its digital assets (which are basically Blender’s node groups) and shelf tools.
For your question: it’s always possible to have several objects/bones influencing a pole vector, so that you have only one pole vector switching from parent to parent (using Child Of constraints).
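Here’s a minimal sketch of that parent-switching setup in bpy, with made-up names; only one Child Of constraint gets full influence at a time:

```python
import bpy

# Hypothetical setup: one pole target bone ("pole") that can follow
# either of two parent bones through Child Of constraints.
rig = bpy.data.objects["Rig"]
pole = rig.pose.bones["pole"]

for parent in ("hand_ctrl", "root_ctrl"):
    con = pole.constraints.new('CHILD_OF')
    con.name = "follow_" + parent
    con.target = rig
    con.subtarget = parent
    con.influence = 0.0
    # In practice you'd also apply "Set Inverse" on each constraint so
    # the pole doesn't jump when the switch happens.

# Activate one parent at a time; this influence could also be keyed,
# or driven by a custom property as in the IK example above.
pole.constraints["follow_hand_ctrl"].influence = 1.0
```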

One thing I really want is Bone/Surface Collision.

The goal is that, while posing, Blender would perform quick collision checks to prevent characters from intersecting, rather than having to carefully tweak bones to make sure nothing goes through anything.
Not an automated after-the-fact simulation that removes all control (like ragdolls).

The Limit Distance constraint only works with an invisible sphere of a given radius.
The Floor constraint limits against only one square face, and globally.
Shrinkwrap will only stick bones to a surface, and its Project mode is limited to one axis and one mesh (and I believe doesn’t account for deformations).
I’ve also tried setting up multiple physics meshes, rigid body joints, and locking bones to them one way or another, but the convoluted steps still remove a lot of control, and the time, complications and limitations make it far faster/easier to just continue tweaking manually.

The idea would be to use a simple, dedicated collision shape (like B-Bones) or a separate mesh; then, when that shape collides with something else that has collisions enabled, the bone locks its transforms, like with the Floor constraint.

Another step could be to also rotate/lock bone rotation on collision at the head/tail, to enable more physical collision response rather than just location (I’d assume this second step would require more performance/effort).

Another thing could be a global collisions property: either disable it for performance, so Blender isn’t constantly performing collision checks, or enable collisions when you’re not actually posing anything.
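To illustrate what’s being proposed, here is some pure pseudocode of that posing-time check; every function and attribute in it is hypothetical, nothing like this exists in Blender today:

```python
# Pure pseudocode for the proposed collision lock; all names are
# hypothetical, this is not Blender API.
def apply_pose_transform(bone, new_transform, scene_colliders):
    shape = collision_shape(bone, new_transform)  # e.g. a B-Bone-like capsule
    for mesh in scene_colliders:                  # meshes with collisions enabled
        if intersects(shape, mesh):
            # Like the Floor constraint: refuse to go further, keep the
            # last non-colliding transform instead of sinking in.
            return bone.last_valid_transform
    bone.last_valid_transform = new_transform
    return new_transform
```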

@Hadriscus
In your document at #4: there’s already an editable motion paths add-on, which also borrows from the Maya feature of the same name: Motion Trails. It’s just forgotten/abandoned/ignored/etc., and doesn’t work for bones.

It’s faster to preview curves with it, but it doesn’t work with proxy rigs, so you can’t really use it even for that with linked characters.

It works well enough with objects, but it doesn’t account for Delta Transforms and doesn’t work with/account for constraints. (Motion Paths doesn’t work correctly with constraints either, so ¯\_(ツ)_/¯ )

Although I am in favour of always improving the accessibility and handling of any tool, I would never want it to come at the cost of restricted actual usability or a diminished range of application.

Not compromising on any of these aspects could certainly increase the cost and/or time of developing and maintaining said tool, but if something has to give, then perhaps it is better to simply wait for times to change rather than to manage compromised offerings that ultimately don’t end up serving anybody?

As mentioned earlier, having just begun to look into the different tools available within Blender for animation, my questions will most likely arise from my lack of awareness of the multitude of alternative solutions already present in Blender; it was just a matter of me not having connected all the dots before jumping the gun on very specific issues arising from my daily usage and consecutive thought processes!

I will make sure to look into the different constraint options. It is but one area that I have yet to actually sit down and take in as a whole.

I know that I know very little about C++, but from what I’ve seen of Blender’s code for NLA tracks (the closest thing to layers available), it basically iterates through the tracks/layers of all the strips, then adds together the values from the curves in each action.

It should be really easy to, say, add a track/layer into actions.
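For illustration, here’s a rough Python sketch of that evaluation scheme (the real NLA code is C; all names here are made up and heavily simplified):

```python
# Rough sketch of additive layer evaluation for one animated channel.
def evaluate_channel(layers, frame):
    """Each hypothetical layer holds an F-Curve plus mute/influence settings."""
    value = 0.0
    for layer in layers:                    # bottom to top, like NLA tracks
        if layer.muted:
            continue
        raw = layer.fcurve.evaluate(frame)
        value += raw * layer.influence      # blend_in/out would modulate this
    return value
```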

It could look something like this, where only the active layer’s curves are adjusted, reusing basically the same behaviour as NLA tracks (such as start/blend_in and end/blend_out), or only applying within the available keyframe range.


The other layers’ curves could be grayed out, or given a UI colour preference similar to ghost curves, to avoid overlapping clutter.

It could also stay compatible with animations made before layers by defaulting existing curves to layer 0 if they don’t have a layer value assigned.

This would be separate from additive layers, but it would still be layers, and all I really think it needs is a developer willing to do it.

my 2 cents

I think bendy bones are great, though I would like to be able to set them over existing bones (for only the frames I want).

I don’t like that bones have the same color as the mesh in X-ray mode; if they had a slight transparent tint it would be handier.

Auto IK… I’d rather have something simpler: being able to stick a bone in place while moving the rest, e.g. a right-click lock (optionally with a bendy-bone effect).

A path-walking solver (sure, animation is fun, but walking curves from A to B (long distance) isn’t fun; sure, there are ways around it, but no… no fun).

Even though it’s bones that move meshes around, I think some way of having a more flesh-like reaction of the skin (e.g. one character throws a punch, and that punch dents the other character in some area) would be welcome. These things are quite hard to set up, but would be quite useful for character animation (falling, fighting, etc.).

Facial rigs are… well, quite complex… is that really the way to go? How about skull and muscle dial wheels or something?
Maybe select which parts are the eyelids, which are the eyes, and then have something else drive them (it might be a lot of bones, but I’d rather not see them in the GUI).

How about dials/scroll wheels for posing, like in MakeHuman, e.g. to easily adjust the X axis? Now I type R X 30, or R X and move the mouse; I don’t like the typing here, I’d rather see some sliders.

The graph editor might optionally be placed over the main view, increasing precision (larger view).

“Editing across time” sounds amazing indeed. It would be cool if it were possible, for example, to edit and modify water/smoke simulations and other stuff (before polygonization), where next to sculpting you could then also use things like Dyntopo to add resolution where it’s needed.

Before I update the doc properly, just chiming in again to add a little something that got me frustrated very recently: although Copy Location constraints can follow along a B-Bone’s curved shape, Copy Rotation constraints cannot. I reckon local rotation is probably more difficult to derive than location, but the setup I had to come up with to get rotation is so much more complicated (on this gig we can’t use curves for pipeline reasons) that I wish we were able to.

Hello Hadriscus! I’ll hook my topic up with this one. If I understand correctly, most advanced rigging is based on expressions and the API. That’s powerful, yes, but Maya has the same feature and also has nodes that can build complex things without dealing with math expressions or API coding. I think it would be nice to have the possibility to rig in a nodal way in Blender, with some nodes based on this logic.

The actual problem is the node interface, and having a “sandbox” side where you can connect about any value to any other value. For example, it’s difficult to extract a position from an object, get the tangents and normalized normals, and have all that information regrouped in a single matrix node. Rigging in Blender is too abstract. I use Blender a lot for modeling and texturing, the renderer is getting better and better, and each step in Blender is huge, but I can’t find any stable workflow that allows me to reach AAA features in rigging.

Hi ssky, yes, that would be nice. There are some built-in driver variable types that kind of mimic some Maya nodes, which you can select from the driver interface: distance, rotational difference, etc. Then there are some high-level constraints too, like Shrinkwrap or Floor. That’s about everything you can do without code. Hopefully the upcoming ‘everything nodes’ project will allow pulling arbitrary info from objects, like curve tangents or the closest point on a surface.
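For example, here’s what the distance variable looks like when set up from Python, similar in spirit to Maya’s distanceBetween node (object names are made up):

```python
import bpy

# Sketch: drive an object's Z location by the distance between two
# other objects, using the built-in LOC_DIFF driver variable.
obj = bpy.data.objects["Cube"]
fcu = obj.driver_add("location", 2)   # index 2 = Z
drv = fcu.driver
drv.type = 'SUM'                      # driver value = sum of variables

var = drv.variables.new()
var.name = "dist"
var.type = 'LOC_DIFF'                 # distance between two targets
var.targets[0].id = bpy.data.objects["Empty_A"]
var.targets[1].id = bpy.data.objects["Empty_B"]
# 'ROTATION_DIFF' works the same way for the rotational difference.
```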


A commit by Alexander Gavrilov adds an “armature constraint” : https://developer.blender.org/rB798cdaeeb6927cb9ca42597fa23845eac04c02b2

If I understand correctly, this allows constraining a bone as if it were a skinned mesh, so that it follows along the actual deformed mesh without creating a dependency cycle (which would occur when parenting said bone to a set of three vertices). This effectively allows adding surface-following controllers to a rig, but I’m not sure how one would specify how they’re influenced by surrounding bones. Can we specify a subset of bones (a bone group?) to get the blended matrix from?

In any case, this is fantastic news.


Hello. Nice to read this. Yes, in fact it could be used to add an additional layer of deformation. In Maya, for example, it’s a joint or cluster that is attached to the deformed geometry, and we play with some matrix “tricks” to avoid double transformations, etc. If I understand well, that’s what this constraint is supposed to do.

Can’t wait to see ‘everything nodes’ applied to rigging in Blender.
If we get object-type interactions plus nodes, Blender could reach that goal. The rigging pipeline is a huge factor in the choice of package in production.

visuals for the interested:

The rig has four bones, one armature-constrained to the other three, all positioned at the centre. The constrained bone will float in the middle of the three target bones, plus the offset you add when you drag it manually. As far as I understand, this was previously quite difficult to do (sans Python).
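From reading the commit, the setup above should boil down to something like this bpy sketch (bone names made up; the Armature constraint keeps its own list of weighted bone targets):

```python
import bpy

# "float" gets an Armature constraint targeting the three other bones
# with equal weights, so it hovers at their blended position.
rig = bpy.data.objects["Rig"]
con = rig.pose.bones["float"].constraints.new('ARMATURE')

for name in ("target_a", "target_b", "target_c"):
    t = con.targets.new()     # the constraint's own target list
    t.target = rig
    t.subtarget = name
    t.weight = 1.0            # weights are blended like skin weights
```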
