Muscle painting *Images/ explanation inside*

So I was thinking about rigging and how tedious it can be to create the complex system of bones and constraints required for a good quality facial rig.

Then I got thinking: why not have a tool that lets the user effectively paint on layers of muscle, layer and influence those muscles (muscle groups), and finally add user-defined anchor points to those muscles, so a complex character can be rigged quickly?

This is just an idea, something I was thinking about. I don’t possess the talent to code this, I wish I did, but anyway I’m posting it here in the hopes that a developer/script writer may see it as a good idea.

Basic usage:
(I’ve omitted how this would tie in with the current system, i.e. add an armature first, make it a modifier, make it an item in the context drop-down for the 3D window along with weight paint, etc.)

  • User enters the mode / applies the modifier for muscle drawing (something along those lines)
  • User creates muscle layer one, called muscle.001
  • User renames layer to R.Cheek_muscle.001
  • User paints a streak from the corner of the mouth to a point just in front of the character’s ear; a blue line is drawn across/over the mesh
  • Two anchor points are created, one at the start and one at the end
  • The user wants an anchor point in the middle to allow more flexible control of the cheek area, so they Ctrl+click on that area over the muscle
  • An anchor point is created and added under the R.Cheek_muscle.001 layer
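
The steps above could be backed by a very small data model. This is just a sketch; the names `MuscleLayer` and `Anchor` are made up for illustration and are not part of any Blender API.

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    position: tuple          # location on the mesh surface
    influence: float = 1.0   # 0.0..1.0, how strongly this anchor pulls the muscle

@dataclass
class MuscleLayer:
    name: str
    anchors: list = field(default_factory=list)

    def paint_stroke(self, start, end):
        """Painting a streak creates anchors at both ends of the stroke."""
        self.anchors.append(Anchor(start))
        self.anchors.append(Anchor(end))

    def add_anchor(self, position):
        """Ctrl+click inserts an extra anchor along the painted muscle."""
        a = Anchor(position)
        self.anchors.append(a)
        return a

# Usage mirroring the steps above:
layer = MuscleLayer("muscle.001")
layer.name = "R.Cheek_muscle.001"           # user renames the layer
layer.paint_stroke((0.0, 0.0), (4.0, 1.0))  # corner of the mouth -> ear
layer.add_anchor((2.0, 0.5))                # extra control in the cheek area
```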

Now the layer is done. (A layer is essentially a muscle, but it’s stored as a layer in the UI; this is needed because the user can layer muscles. During this process anchor points and influences can be changed, i.e. anchor points can be merged if several anchor points fall in one place.)

The core:
(This is how I see it working, or how it could possibly work.)

When a user is drawing a muscle, Blender is drawing a group of bones. The user has an option to make these viewable, but they’re hidden by default, so the user only sees a muscle being drawn on screen.

These bones are given an IK setup, with a root and a target. The root is the bone that cannot be moved from the position it is placed in; in this case it would be the bone that lands next to the ear, and the user would have to select this as the root anchor point.

The target is the bone that has control over the group. It was placed at the corner of the mouth, so moving this anchor point would move the corner of the lip, some of the lip area, the cheek area, and the parts around the root (but only very slightly).
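
The root/target behaviour could be sketched with a simple IK solver such as FABRIK. This is just one possible stand-in for whatever IK setup Blender would build internally; the chain and target coordinates here are made up for illustration.

```python
import math

def fabrik(joints, target, iterations=20):
    """Solve IK for a bone chain: joints[0] is the fixed root,
    joints[-1] chases the target anchor."""
    root = joints[0]
    lengths = [math.dist(joints[i], joints[i + 1]) for i in range(len(joints) - 1)]
    for _ in range(iterations):
        # Backward pass: pin the tip to the target, walk toward the root.
        joints[-1] = target
        for i in range(len(joints) - 2, -1, -1):
            t = lengths[i] / math.dist(joints[i], joints[i + 1])
            joints[i] = tuple(joints[i + 1][k] + (joints[i][k] - joints[i + 1][k]) * t
                              for k in range(2))
        # Forward pass: pin the root back in place, walk toward the tip.
        joints[0] = root
        for i in range(len(joints) - 1):
            t = lengths[i] / math.dist(joints[i], joints[i + 1])
            joints[i + 1] = tuple(joints[i][k] + (joints[i + 1][k] - joints[i][k]) * t
                                  for k in range(2))
    return joints

# Chain rooted at the "ear", tip pulled toward the "mouth corner" anchor:
chain = fabrik([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)], target=(2.0, 1.5))
```

The root stays exactly where it was placed, while every bone between it and the target moves a little, which matches the "moves the surrounding area only slightly" behaviour described above.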

The user would be able to weight/influence paint onto the muscle directly, like weight painting. The higher the influence, the more the area moves; the lower, the less it moves. The user can also choose the falloff type (with a curve?).

Now, the anchor points can also have an influence value. The main reason I think this should exist is that a user may layer several muscles over one another to create a complex system, and some of these anchor points may fall over one another, which means the user would need to control how much each anchor influences the muscle, and the surrounding muscles (if possible). This would just be a number from 0.0 to 1.0, plus a curve to control how sharp/smooth the falloff is.
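
The 0.0-1.0 influence plus a falloff curve could be computed per vertex roughly like this. The exponent-based curve and the normalisation scheme for overlapping anchors are just one guess among several.

```python
def anchor_weight(dist, radius, influence, sharpness=2.0):
    """Weight of one anchor at a given distance from it.
    sharpness > 1 gives a sharper falloff, < 1 a smoother one."""
    if dist >= radius:
        return 0.0
    return influence * (1.0 - dist / radius) ** sharpness

def blended_weights(dists, anchors):
    """Blend several (radius, influence, sharpness) anchors whose areas
    overlap, normalising so the total never exceeds 1.0."""
    raw = [anchor_weight(d, *a) for d, a in zip(dists, anchors)]
    total = sum(raw)
    if total <= 1.0:
        return raw
    return [w / total for w in raw]
```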

Also, the merging of anchor points would require the user to set influences per muscle, per anchor.
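
One possible take on that merging: anchors from different muscle layers that land within some tolerance collapse into one shared anchor that keeps a per-muscle influence value. The function and its tuple layout are purely illustrative.

```python
import math

def merge_anchors(anchors, tolerance=0.1):
    """anchors: list of (muscle_name, position, influence) tuples.
    Returns merged anchors as (position, {muscle: influence}) pairs."""
    merged = []
    for muscle, pos, infl in anchors:
        for mpos, infls in merged:
            if math.dist(pos, mpos) <= tolerance:
                infls[muscle] = infl   # shared anchor, per-muscle influence
                break
        else:
            merged.append((pos, {muscle: infl}))
    return merged

shared = merge_anchors([
    ("R.Cheek_muscle.001", (1.0, 2.0), 1.0),
    ("R.Lip_muscle.001",   (1.02, 2.01), 0.5),  # falls on the same spot
    ("R.Brow_muscle.001",  (4.0, 5.0), 1.0),
])
```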

Here is a very very rough screenshot of how this would look with one muscle/ layer and 3 anchor points drawn onto a model.

I believe we could start with the actual deformers and just make them work better (or even work as a user would expect).

One solution is to have better Curve deformer functionality (the “muscle” in your proposal). First we would define the shape of the 3D curve and weight paint its effect; the deformation of the mesh (face) would then follow the deforming of the curve, but NOT relative to its original shape at the time we set up the Curve modifier.

Also, hooks with a weight-painted effect (something similar to proportional edit falloff) would help a lot. Or we could have an option for different predefined styles of hook falloff, like we have for proportional edit falloff. Yes, something similar to a cluster in Maya, but even better.
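
Those proportional-edit-style presets could simply be a table of shape functions mapping normalised distance t in [0, 1] to a weight. The formulas below are common approximations for illustration, not Blender's exact internal ones.

```python
import math

FALLOFF_STYLES = {
    "linear": lambda t: 1.0 - t,
    "sharp":  lambda t: (1.0 - t) ** 2,
    "sphere": lambda t: math.sqrt(max(0.0, 1.0 - t * t)),
    "smooth": lambda t: 1.0 - (3.0 * t * t - 2.0 * t ** 3),  # inverted smoothstep
}

def hook_falloff(dist, radius, style="smooth"):
    """Weight-painted hook effect: full strength at the hook, fading to
    zero at the edge of its radius, shaped by the chosen preset."""
    if dist >= radius:
        return 0.0
    return FALLOFF_STYLES[style](dist / radius)
```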

This should probably not be too complicated to code, maybe even during the 2.49 to 2.50 porting? Concerning curves, maybe the Z-twist problem has to be solved first (but I believe it was mentioned that 2.50 would solve this issue anyway).

Great examples (from Maya) of fast facial rigging (at the same time applicable to multiple meshes) for inspiration are here:

And of course, Face Robot uses this kind of approach for some face areas as well:

Edit: I believe Angela Guenette and Nathan Vegdahl from the Durian team may push for such functionality as well. If they have to rig dozens of characters (a crowd of monsters running down the hill :) ), such transferable tools would be really nice.

Isn’t this what etch-a-ton is supposed to achieve, quick rigging through a ‘painting’ interface?

I assume at least because I watched the demo video and was just like, “Huh?”

Somebody has to make the initial rig that can be painted on though.

Better to have an actual muscle deformer than an IK chain of bones. Painting the muscles on could be good, but I could easily see extending Martin’s etch-a-ton tool for muscle sketching, allowing snapping of muscles to surface/volume/bone anchor points.
IK chains might work for some situations, but using them as a general surface deform tool “as is” would be potentially artifact-ridden.

Daniel 8488, I created a rig called BlenRig, which has a true muscle system based on shrinkwrap. Currently I use the muscles exclusively for the body, not for the face. The whole rig is mainly controlled by two Mesh Deform modifiers, one for the body and another one for the face. This last aspect makes the whole rig totally transferable with ease.

For the facial rigging, I used a Mesh Deform Cage, which you can transfer to any character you want in no time.

I think that using Mesh Deform rigging is the best way to go for rig transferability.

You can download BlenRig 3.0 (which is still in a prerelease state but is totally functional) at

Here’s the WIP page in the forums.

And Here is the Vimeo link for a quick description on how the rig works:

I have been experimenting with a softbody-simulated mesh deformer. I have not done a face yet, and the character is not exactly muscular. But Blender’s Mesh Deform has a lesser-known “feature”: only faces in the deform cage are used by the Mesh Deform modifier. You can add edges without faces which will be used by the softbody simulation, but will not appear in the animation. These non-face vertices can act as hooks, physically simulated hooks.

These hook vertices (I call them “inner verts” because they are usually extruded inside the deform cage) are also set as the softbody goal vertex group, and they are also driven by the armature as normal. The outer verts, which form the faces that actually deform the geometry, are completely simulated by the softbody (the Softbody modifier must be below the Armature in the stack). The outer vertices follow the inner verts according to the softbody simulation, making for a very convincing layer of adiposity.

But I am convinced that it would be possible to do muscles as well as fat using softbody hooks (and not just mimetic muscles). The current system is just not very friendly towards it. For example, it is very difficult to weight paint vertices with no faces extruded inside a closed shell. Also, there is very little that you can tweak in the current softbody modifier: there is one weight for all vertices, one edge stiffness for all edges. The only things that can be tweaked at vertex level are the length of the “hook” edges, and the goal weight.

In real muscle, tension and relaxation are driven by the nerves. To put that in terms of the softbody simulation, the spring stiffness would be animated. That would be more realistic, but it would probably impact the solver, slowing it down.
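
A toy version of that idea, assuming a single soft-body vertex on a damped spring whose stiffness is keyed per frame (all numbers here are arbitrary, and a real solver would of course be far more involved):

```python
def simulate(goal, frames, stiffness_curve, dt=0.1, damping=0.8):
    """Integrate one soft-body vertex toward its goal position.
    stiffness_curve(f) returns the animated spring stiffness at frame f."""
    x, v = 0.0, 0.0
    for f in range(frames):
        k = stiffness_curve(f)
        v += k * (goal - x) * dt   # spring force toward the goal
        v *= damping               # crude velocity damping
        x += v * dt
    return x

# "Relaxed" muscle: constant low stiffness, lags behind the goal.
relaxed = simulate(1.0, 120, lambda f: 0.5)
# "Tensing" muscle: stiffness ramps up over 30 frames, then holds.
tensed = simulate(1.0, 120, lambda f: min(f / 30.0, 1.0) * 20.0)
```

The tensed vertex snaps close to its goal while the relaxed one sags, which is the tension/relaxation behaviour the animated stiffness would give.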

As a comment to the first post, I think what is described is (or will be) possible with a python script.

Have you seen a preview of what they’re adding for ZSpheres? It’s more of a modeling implementation of such a tech, but it looks like a similar philosophy.