Long(er) time users of Blender, can someone explain the philosophy of the Pose Mode system for animating a rig?
I still find it a very cumbersome way of working, especially compared to other 3D applications.
The going back and forth between rigging, parenting, tweaking, animating etc. is so different from other rigging & animating solutions out there…
I still feel I’m jumping through too many hoops just to get some things done. This is not a rant against Blender, but I am seriously curious why they implemented this system as it is.
Motionbuilder, at least the version I used, which is primarily an animation system, had edit separate from pose. For me, the separation means it’s less likely you end up editing your rig instead of posing it. Surely the other software must also have some distinction between editing a rig and posing? What if you move / rotate a bone? How does the software know you are editing as opposed to posing?
“Normally” you would build your rig with controllers, and animate -only- those. So most of the rig is non-selectable or hidden, and hard to mess up.
It also makes it quite easy to transfer animation between rigs that are similar in setup, as you have a defined set of controls.
And yes, most apps have tools for tweaking your rig, but those apps do not have a distinct animation mode like Blender’s Pose Mode.
I just would like to know the reasons why this is the way it is.
If you’d really like to not do pose mode, and animate via non-bone controllers, you can. Make an empty for every controlling bone, in the same axes as that bone. (This can be done with scripting, or by using a copy transforms followed by apply visual transform followed by clear constraints.) Then give the controlling bones copy transform constraints targeting the appropriate empty. Would also be wise to set deltas for your empties. Boom, no more pose mode.
Note that empties and bones have different transformation modes by default. If you want quaternions, set the empty to quaternion.
The main downside is that it’s a pain to set up, and totally unnecessary. Just offered as an option, it’s not something I would ever realistically do myself.
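For anyone curious what that setup looks like in practice, here is a minimal bpy sketch of the idea, assuming the armature is the active object and run from Blender’s scripting workspace. The bone names in the list are hypothetical; substitute your own controller bones.

```python
import bpy

arm = bpy.context.object  # assumes the active object is the armature
controller_names = ["ctrl_root", "ctrl_hand.L"]  # hypothetical controller bones

for name in controller_names:
    pbone = arm.pose.bones[name]

    # Create an empty and link it to the current collection.
    empty = bpy.data.objects.new(f"CTL_{name}", None)
    bpy.context.collection.objects.link(empty)

    # Place the empty at the bone's rest matrix (armature space -> world space),
    # storing the placement in the deltas so the empty's own transform stays zero,
    # just like a bone at rest.
    loc, rot, _scale = (arm.matrix_world @ pbone.bone.matrix_local).decompose()
    empty.rotation_mode = 'QUATERNION'  # match the default bone rotation mode
    empty.delta_location = loc
    empty.delta_rotation_quaternion = rot

    # Drive the bone from the empty.
    con = pbone.constraints.new('COPY_TRANSFORMS')
    con.target = empty
```

After this, keyframing the empties in Object Mode drives the rig, and Pose Mode is no longer needed for animating.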
There are some differences in how an empty and a bone handle transformations: a bone at rest has no transform, but an empty at its delta does have one. There are probably some situations where that would lead to different kinds of behavior, like when using f-curve modifiers or something, but I don’t think it’s a situation of bad/good, just different.
There are also drawbacks with regards to UI. Bones have some nice UI associated with them: bone layers, custom shapes if you like them, bone groups, etc. Some of that could probably be recreated for empties, but it’s not there by default. (And it’s kinda nice to be able to adjust your armature whenever you have a bone selected.)
Maya with MGear for rigging mostly, Softimage and a bit of Houdini. Although Houdini rigging is like the rest of the app… Hard to set up.
All these apps follow more or less the same ‘rules’ for rigging, but Blender has the extra steps. I do have to say certain things are quite simple to do in Blender though, like adding bendy bones for instance. Less hassle.
@bandages basically explains how it is mostly done in other apps. Set up a rig, add controllers for the animatable parts. Animate them like any other object.
Like I said, it’s not ranting against Blender, but I was wondering about the why…
In my opinion, from an “Animation and Rigging” point of view, the developers at Blender are far more concerned with still images and realism. And, from what I’ve seen, they do a hell of a job and GIVE IT AWAY. FREE. So, they are free to make their own ‘rules’.
Me? As a hobby, I like to animate short stories - and I chose Blender because they GIVE IT AWAY. FREE.
[I’m a software engineer, and tried to write my own app, once. Then I found Blender and the rest is history]