Mutatis (customizable human model, like MB-Lab and Makehuman)

Hi, I am new here, and new-ish to Blender (I started learning when 2.80 came out).

I am sharing my progress on a project that I started a couple of months ago, and I also have a few questions regarding licensing.

  1. Mutatis (description and free download link: https://www.patreon.com/posts/32711739)

I call my project Mutatis, and it is a Blender file that combines the data from Makehuman 1.1.1 and some inspiration from MB-Lab.

Current features include:

  • All the shape keys (targets) from Makehuman 1.1.1’s base model (I tried to guess the formulas for the drivers, so the result doesn’t look identical to the official Makehuman app).
  • Full set of morphable male and female genitals (0.0: vulva, 0.5: nothing, 1.0: penis).
  • Fully procedural skin materials (skin includes freckles and detailed nipples; eye shape can be customized for cat eyes, heart shapes, and more).
  • A barycentric armature (the main skeleton’s rest pose is computed using special vertex groups and a script).
  • A “proxyrig” mechanism to help deal with assets such as clothing and hair (it’s a special armature where each bone is parented to a vertex of the base mesh).
  • Full set of body hair (fur, beard, brows, etc).
  • Procedural tight clothing (by configuring the various “Clothing_XXX” nodes, it is possible to make shirts, pants, gloves, masks, bras, and even basic jewelry such as rings and bracelets).
  • For the clothing, I included some textures and materials from CC0 Textures and Blendermada, although I had to modify the Blendermada materials because I don’t use shader mixing (instead I created a few nodes to mix shader data, and everything ends up in a Principled BSDF).

Current limitations include:

  • Cycles only, as it seems that Eevee doesn’t have enough precision to decode the vector gradient datamaps (not bitmap) which I used for some skin elements.
  • No detailed teeth (just the basic white things included in the base Makehuman model).
  • Only one hairstyle (I gave up on trying to use the horrendous Blender API for hair scripting; in the future I will just use manual combing and simulation caching).
  • There is a huge 4k EXR map for the clothing system (in the future I will try to replace it with a datamap like I do for the skin).
  • Slow. Veeeery Slooow.
  • Crashy. Veeeery Crashy.
  • Extremely limited anime support (for now, I only included a shape key to mimic the design of MB-Lab’s female anime head).
  • No documentation yet (I will add that on my Patreon page as soon as I can).
  • Some unexpected deformations in the mesh (genitals).
  2. Questions

I added a text file in mutatis.blend that reads as follows:
START OF FILE

Mutatis © Errol Marnett
CC-BY 4.0, https://creativecommons.org/licenses/by/4.0/

This package uses elements from:
- Makehuman 1.1.1 (AGPL)
- Makehuman community assets (CC0)
- MB-Lab 1.7.7 (AGPL)
- CC0Textures (CC0)
- Blendermada (CC0)

END OF FILE

Does this seem acceptable? Do I need to include more information?

  3. Conclusion

I am releasing it for free and you can use it in any way you want, including in commercial games.

I did this initially as a hobby, but I got into it and now I really want to do more. I apologize if this seems a bit rushed (I tried to have something ready before the end of the year) and unprofessional (I have very low artistic skill). Hopefully the next version will be better.

Cheers

EM


I’ll check this out when I get home… can’t access Patreon at work.

For those who can’t access Patreon, here is an alternative link to the files: https://anonfile.com/R4j5SdJ7nf/mutatis-1.281.1_7z
(SHA-256: c10491ae30cca9042f7ece9fa68ab64993e3aaab56f5ff64e2584483df8b6110)
It is a 100 MB 7z file which contains, among other things, the main mutatis.blend file (400 MB) and the clothing gradient map (200 MB).

Here is a render of the default female setup with hair and clothing:

I made some animated gifs to illustrate some of the features of Mutatis 1.281.1. (1/3)

[GIF: mutatis-1.281.1_tutorial_clothing_anim_cropped]

I made some animated gifs to illustrate some of the features of Mutatis 1.281.1. (2/3)

For this second gif, I customized the lighting and the skin shaders, but it’s all done with the same file I released earlier.

[GIF: mutatis-1.281.1_tutorial_clothing_anim_cropped]

I made some animated gifs to illustrate some of the features of Mutatis 1.281.1. (3/3)

[GIF: mutatis-1.281.1_tutorial_eyes_anim_cropped]

Gif 2/3.
(I posted the wrong picture earlier)

[GIF: mutatis-1.281.1_tutorial_clothing_anim2_cropped]

Technical Report: Gradient Datamaps (Part I)

I am working on the next version of Mutatis (1.281.2) which, if all goes well, should be ready by the end of the month.

This series of posts constitutes a technical report on a specific technique that I use to compute the gradient maps used in my procedural materials, especially in the clothing generator nodes. Part I (this post) explains the basic principles; part II (next post) will focus on the clothing generator and some of the improvements for Mutatis 1.281.2.

Since I don’t know what kind of readership to expect, I will try to strike a balance between being descriptive and succinct. Please let me know if this is too technical or, on the contrary, too simplistic.

There is going to be some math (homogeneous coordinates, dot product, cross product) but with plenty of illustrations, so hopefully it won’t be too difficult to follow.

Part I: Simple gradient datamap

I.1. The idea

When I started working on the skin shader for Mutatis, I quickly ran into a problem: how to create regions with soft borders for things such as flushing (skin redness) and freckle distribution.

I saw that MB-Lab used a gradient map (a grayscale texture), presumably hand-drawn, and I tried to do the same but was not really satisfied with the result (my drawing skills are bad).

Instead, I opted for a more mechanical approach: defining elliptical regions in UVMap space. What I mean by that is:

  1. Open the UV editor, with UVMap selected (UVMap in this case being the Makehuman 1.1.1 UV map, with some modifications).
  2. Pick a point (coordinates: u, v).
  3. In the shader editor, compute the distance to this point in UVMap space.
  4. Optionally apply a space transformation to get an elliptical shape.
  5. Use value transformations to get various border types (soft, hard).
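
To make the math concrete, here is a small numpy sketch of steps 2 to 5 (the center, radii and softness values are made up for illustration):

    import numpy as np

    def elliptical_mask(u, v, center=(0.5, 0.5), radii=(0.1, 0.05), softness=0.2):
        # Steps 3-4: distance in UV space, scaled per axis so the ellipse
        # becomes a unit circle (the "space transformation").
        du = (u - center[0]) / radii[0]
        dv = (v - center[1]) / radii[1]
        distance = np.sqrt(du * du + dv * dv)
        # Step 5: remap the border band [1, 1 + softness] to [1, 0]
        # (smaller softness = harder border).
        return np.clip((1.0 + softness - distance) / softness, 0.0, 1.0)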

While this approach works great for eyes, nipples, freckles, and flushing, it doesn’t work so well for lips and palms/soles because their shapes aren’t really elliptical.


One possibility could have been to modify the UVMap to make the shapes elliptical. This might have worked for the lips, but would have been a lot more difficult for the palms and soles.

Instead, I needed a more general way to define regions with potentially fuzzy borders, without having to draw anything by hand, and this is why I decided to use datamaps.

The idea was to compute a mathematical function for each polygon (the Makehuman mesh is made of quads only). This function would take a UVMap coordinate pair as input, and return a value between 0 and 1 indicating the “strength” of the region, so that:

  • Inside the region the value would be a constant 1.
  • At the borders, the value would go from 1 to 0 (outside).

To make this work in practice, I decided to consider a piecewise linear approximation of the borders so that each function would be a simple linear gradient.

The general equation for a linear gradient is:

value(x, y) = a x + b y + c

Where (a, b, c) are the coefficients of the gradient, and (x, y) are the coordinates in the space that we are interested in (could be geometry or UV).

Example with (a, b, c) = (1, 1, -1):


Applying a grayscale linear gradient then is quite straightforward:

  1. Determine the gradient coefficients (a, b, c)
  2. At each point (u, v) in UV space, switch to homogeneous coordinates: (u, v) → (u, v, 1)
  3. Compute the dot product (a, b, c) . (u, v, 1)
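
In numpy form, these three steps boil down to the following sketch, evaluated over a whole UV grid and using the same coefficients as the example below (the clip to [0, 1] is just so the result can be displayed as a grayscale image):

    import numpy as np

    a, b, c = 1.0, 1.0, -1.0  # gradient coefficients
    u, v = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
    # Homogeneous coordinates (u, v, 1) dotted with (a, b, c).
    value = a * u + b * v + c * np.ones_like(u)
    gray = np.clip(value, 0.0, 1.0)  # grayscale gradient image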

Example with (a, b, c) = (1, 1, -1):


We can overlay the gradient image to verify:

In this context, the datamaps are just a practical way to attach a set of coefficients to each quad (or even subquad by focusing on each individual vertex).

I.2. Implementation

I.2.1. Computing the coefficients

For the skin regions, I define various vertex groups (nails, lips, genitals, etc.) and use the weight as gradient value (for each polygon, each vertex associates a UV location with a weight).

The idea here is to solve a system of equations where each equation has the form a u + b v + c = weight:

  • (a, b, c) are the unknowns
  • (u, v) and weight are defined for each vertex

To that end, I use numpy in a script to compute the coefficients:

(a, b, c) = numpy.linalg.lstsq(homogeneous_uvs, weights)[0]
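
Expanded a bit, with made-up values for a single quad (4 equations, 3 unknowns), the call looks roughly like this:

    import numpy as np

    # One quad: per-vertex UV coordinates and vertex group weights.
    uvs = np.array([[0.10, 0.20], [0.30, 0.20], [0.30, 0.45], [0.10, 0.45]])
    weights = np.array([0.0, 1.0, 1.0, 0.0])

    # Homogeneous form (u, v, 1): one equation a*u + b*v + c = weight per vertex.
    homogeneous_uvs = np.column_stack([uvs, np.ones(len(uvs))])
    (a, b, c) = np.linalg.lstsq(homogeneous_uvs, weights, rcond=None)[0]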

I.2.2. Storing and using the coefficients

I initially wanted to use vertex colors to store the coefficients, but there were too many limitations, especially:

  • The lack of precision (8 bits per channel).
  • The complicated transformation automatically applied by Blender (sRGB I think).

So instead I created a new UV map named “Data”:


Because there are only about 14000 quads, in theory a 128x128 RGB image could be enough to store the coefficients (a, b, c) for each quad (128x128=16384).

And by doubling the resolution (256x256) it would even be possible to store different coefficients for each vertex, and then use an Image Texture node’s linear interpolation to get a nonlinear gradient.
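For illustration, allocating and filling such an image from Python could look like this (a sketch, not the exact Mutatis script; the name “datamap” and the row-major quad-to-pixel mapping are my choices here, and the values must be encoded first, as explained further down):

    import bpy

    SIZE = 128  # 128 x 128 = 16384 texels, enough for ~14000 quads

    img = bpy.data.images.new("datamap", SIZE, SIZE,
                              alpha=True, float_buffer=True, is_data=True)
    pixels = [0.0] * (SIZE * SIZE * 4)  # flat RGBA buffer

    def write_texel(quad_index, rgba):
        # Quad i goes to pixel (i % SIZE, i // SIZE), row by row.
        pixels[quad_index * 4:quad_index * 4 + 4] = rgba

    # ... write the encoded (R, G, B, A) values for each quad here ...
    img.pixels = pixels
    img.file_format = 'OPEN_EXR'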

There are a couple of practical issues though:

  • The precision has to be high enough (which is why I use the EXR format with 32 bits float channels).
  • Each channel value is limited to being between 0 and 1 (this is a problem because the coefficients can be any numbers, positive or negative).

To get around that last issue, I use an encoding/decoding mechanism that works as follows:

Encoding:

  1. Compute the scale
    s = max(|a|, |b|, |c|).
  2. Compute the encoded coefficients
    (R, G, B) = (((a, b, c) / s) + (1, 1, 1)) / 2.
  3. Compute the encoded scale
    A = 1 / s.
  4. Store the 4 values (R, G, B, A).

Notes:

  • The scale s is used to bring the coefficients between -1 and +1.
  • The addition of (1, 1, 1) and division by 2 bring the coefficients between 0 and 1.
  • As long as s is positive, 1 / s is between 0 and 1.

Decoding:

  1. Read the 4 values (R, G, B, A).
  2. Compute the decoded coefficients
    (a, b, c) = ((R, G, B) * 2 - (1, 1, 1)) / A.

Applying the gradient then is simply a matter of computing the dot product (a, b, c) . (u, v, 1) as explained previously:


By putting together several linear gradients, it is possible to get an approximation of a nonlinear gradient:
[GIF: datamaps_05_anim]

ERRATUM

I made a mistake in the previous post. For the encoding, the correct formula for the scale s is:
s = max(1, |a|, |b|, |c|)

Values below 1 have to be excluded so that 1 / s can be between 0 and 1.
In the case of small coefficients, the encoding and decoding work correctly because dividing the coefficients by 1 keeps them as they are.

(The script in Mutatis 1.281.1 uses the correct formula.)
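
For reference, the whole encode/decode round trip with the corrected formula fits in a few lines of numpy:

    import numpy as np

    def encode(coeffs):
        # (a, b, c) -> (R, G, B, A), all channels in [0, 1].
        s = max(1.0, np.max(np.abs(coeffs)))  # corrected scale
        r, g, b = (np.asarray(coeffs) / s + 1.0) / 2.0
        return (r, g, b, 1.0 / s)

    def decode(r, g, b, a):
        # (R, G, B, A) -> (a, b, c).
        return (np.array([r, g, b]) * 2.0 - 1.0) / a

    assert np.allclose(decode(*encode((1.0, 1.0, -1.0))), (1.0, 1.0, -1.0))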

Technical Report: Gradient Datamaps (Part II)
Part II: Clothing generator

2 comments before starting:

  • I am changing the numbering scheme for Mutatis. Next version will be 1.2.281 instead of 1.281.2.
  • This post is about a method to use gradient datamaps for clothing generation in Mutatis. In the following picture, the current version is on the left, and the improvement on the right.

I will now explain in detail how to do that.

II.1. Basics and first version

II.1.1. The idea

I wanted a simple way to generate simple clothing items (like underwear) that would always be sized and positioned properly on the base model. I remembered seeing a Youtube tutorial where the artist would duplicate the mesh to create a shirt (on a male model), and I figured that I could have something similar by keeping a duplicate mesh synchronized with the base mesh, and then use a procedural material with customizable transparency to create various shapes.

In Mutatis, the “synchronized with the base mesh” part is accomplished using what I call a proxyrig mechanism: there is a special proxyrig armature in which each bone is a child of a vertex of the base mesh. Unlike the Shrinkwrap modifier, the proxyrig bones stay at a predictable place relative to the unsubdivided* base mesh, regardless of what deformation is applied (shape key or armature).

* The Subdivision modifier causes the mesh surface to move unintuitively (up or down) relative to the unsubdivided surface. I will try to find a solution to that in the future.

To use this setup, I create:

  • A linked duplicate of the proxyrig (pictured above: clothing proxyrig with small offset enabled).
  • A duplicate of the base mesh without the shape keys and with the armature modifier set to the proxyrig linked duplicate.

I call this second object a proxymesh, and this technique is the basis for hair and clothing in Mutatis (it can also be extended to do other stuff, but more on that in future posts).

There are some apparent limitations with this method (the geometry of the clothing has to follow the geometry of the base mesh) but I thought it would be better than nothing as a starting point.

The hard part then was the customizable material. The relationship between long sleeves and short sleeves is obvious, and it doesn’t take long to notice that panties are kinda like very short trousers, and that bras are kinda like very skinny sleeveless shirts. In short, there might be a way to morph one clothing item into another based on the distance from the edges to some kind of wiry core.

I fiddled quite a bit with different techniques (the code is still buried in Mutatis 1.281.1, but I will clean it up for the new release). In the end, I settled on the following method:

  1. Create a shell, that is, a simplified mesh where vertex weights help define a global gradient from the extremities to various “cores”.
  2. Position the vertices of the shell with respect to the clothing proxymesh, so that the core edges of the shell define natural separations between clothing items (not all the edges are core edges).
  3. Bake the interpolated weights into a texture appropriate for the clothing proxymesh (the picture below isn’t corrupted, the datamaps just look like that).

In the next sections I will focus on step 3 (generation of the gradient and region data + node setup to use the data).

II.1.2. Implementation in Mutatis 1.281.1 (without datamaps)

II.1.2.1. Generating the data

I don’t know if Blender allows baking vertex group weights into textures, so I decided to do the baking myself using the following steps:

  1. Use a dummy VertexWeightMix modifier on the shell (for some reason, this seems necessary for the script to access the subdivision-interpolated weights; see picture in previous section for the parameters).
  2. Use a Subdivision modifier on the clothing proxymesh.
  3. Rasterize the polygons (quads) of the clothing proxymesh into a 4k bitmap image.
    1. For each texel, compute the corresponding 3D point (the rasterization functions that I use allow for some kind of approximation; you can take a look at the script in Mutatis 1.281.1 if you are interested in the fine technical details).
    2. Use Blender’s Object.closest_point_on_mesh to get the closest point on the shell. This is used to determine both the interpolated weight (see below) and the clothing region id (face map index + 1).
    3. Use Blender’s mathutils.interpolate.poly_3d_calc to get the coefficients needed to compute the interpolated point weight (see the sketch after this list).
    4. Combine the weight and encoded region id into the texel color (red channel: weight, blue channel: region id).
  4. Apply multiple dilations to the image in order to:
    • Fill the holes left by the rasterization (the rasterization functions are a bit buggy).
    • Add a margin around the islands (otherwise there are visual flaws when zooming out, perhaps because of lower texture LODs).
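
In simplified form, steps 3.2 and 3.3 look something like this (a sketch, not the exact Mutatis code; the object name "Shell" and the vertex group index are placeholders, and the query point is assumed to be in the shell’s local space):

    import bpy
    from mathutils.interpolate import poly_3d_calc

    depsgraph = bpy.context.evaluated_depsgraph_get()
    shell = bpy.data.objects["Shell"].evaluated_get(depsgraph)
    mesh = shell.data  # evaluated mesh, modifiers applied

    def interpolated_weight(point, group_index=0):
        hit, location, normal, face_index = shell.closest_point_on_mesh(point)
        if not hit:
            return 0.0
        poly = mesh.polygons[face_index]
        corners = [mesh.vertices[i].co for i in poly.vertices]
        # Barycentric-style coefficients of 'location' wrt the polygon corners.
        coeffs = poly_3d_calc(corners, location)
        weights = [next((g.weight for g in mesh.vertices[i].groups
                         if g.group == group_index), 0.0)
                   for i in poly.vertices]
        return sum(c * w for c, w in zip(coeffs, weights))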

II.1.2.2. Node setup

The node setup to use the image is straightforward: use the red channel to get the weight, and the blue channel to get the region.

Or at least it should be like that, but if you look at the ClothingRegions node group in Mutatis 1.281.1, you will see that I had to insert a UV quantization step before reading the region id; otherwise the borders between regions are not rendered properly in Cycles, but only when viewed from a distance (close-ups are fine).

I don’t know why this is necessary for Cycles but not Eevee. It could be a bug in Cycles, or it could be that I am doing something wrong but I don’t know what.

Either way, it won’t matter for 1.2.281 because I will probably be using datamaps as I explain in the next sections.

II.2. Planned improvements in Mutatis 1.2.281

II.2.1. Computation space and equation form

In Mutatis 1.281.1, some skin subregions are defined using simple datamaps (see previous post). The UV map “Data” is used for per-vertex/per-quad information storage, and the gradient computations are done using the UV map “UVMap”.

However, I defined the weights by hand with some intuitive expectation about the outcome, and this has implications which I did not realize at the time.

Consider for example the following configuration:
[Image: quad]

The 2 vertices on the left have weight 0, and the 2 vertices on the right have weight 1. Defining the weights this way, I expect to see a gradient from 0 to 1 that is going from left to right while still matching the trapezoidal shape of the quad.

But this means that the gradient in geometric space is non-linear (it is bilinear in this example).

In UVMap space it would be linear if the UV quad is a parallelogram, otherwise it would be non-linear.

This is important because if the gradient is non-linear, then there won’t be an exact solution to the system of equations (described in part I), which will result in visual deformations, as can be seen by looking closely at the last picture in part I.

There are several ways to deal with this, for example:

  1. Perform the computations in a space where the quads are guaranteed to be parallelograms.
  2. Use more complicated equations.

For option 1, the obvious candidate is UV Data space. I will probably use this for the skin in Mutatis 1.2.281.


Note: In the picture above, the remaining sharp angles near the corner of the mouth are due to me not defining the weights carefully enough, but I will try to fix this in the upcoming release.

However, this alone is not sufficient for the clothing because the gradient is more complicated, regardless of the computation space. So I will also use slightly more complicated equations.

In part I, I talked about linear gradients, which can be expressed with the simple formula a x + b y + c. It is possible to turn things up a notch by considering bilinear gradients.


In the picture above, disregarding the exact symbolic coefficients, the overall form of the formula is:

A uv + B v + C u + D

This is convenient because it can be expressed as a dot product:

(C, B, D, A) . (u, v, 1, uv)

In short, I just have to append the product u*v to the homogeneous form (u, v, 1), and then let numpy.linalg.lstsq do the rest. The solution will simply have 4 coefficients instead of 3. Of course, the shader also has to be upgraded to take into account this additional element (more on that later).

Theoretically, it should be possible to keep extending the list with u^2, u*v^2, and so on, but based on my observations I don’t have to go that far just yet.
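
On the numpy side, the only change is an extra column in the system (a sketch with made-up values, as before):

    import numpy as np

    uvs = np.array([[0.10, 0.20], [0.30, 0.20], [0.30, 0.45], [0.10, 0.45]])
    weights = np.array([0.0, 1.0, 1.0, 0.2])

    # Extended homogeneous form (u, v, 1, u*v) for a bilinear gradient.
    u, v = uvs[:, 0], uvs[:, 1]
    extended = np.column_stack([u, v, np.ones(len(uvs)), u * v])
    (c_u, c_v, c_1, c_uv) = np.linalg.lstsq(extended, weights, rcond=None)[0]

    def gradient(pu, pv):
        return c_u * pu + c_v * pv + c_1 + c_uv * pu * pv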

II.2.2. Partitioned gradient datamaps

The main reason why I didn’t use gradient datamaps for the clothing in Mutatis 1.281.1 is that simple datamaps don’t provide a way to fit multiple regions inside a quad.

This wasn’t an issue for the skin because lips, palms, soles and genitals are all well separated.

However, the clothing is made of adjacent regions with hard borders over a global gradient that can suddenly change direction inside the quads (typically in the chest area).

Before proceeding, I simplify the problem by assuming that:

  • At least one and at most two regions can be found inside a quad (this is not generally true, but the shell can be adjusted to limit the problematic cases).
  • When there are more than one region inside a quad, the separation is a straight line splitting the quad in 2 parts (again, not generally true, but close enough).
  • In each part, the gradient is approximately bilinear in UV Data space (same caveat as before).

With that in mind, the idea is to compute and store, at most, 2 gradients for each quad (one gradient for each region), and use the linear separation as a switch to select the correct region for the point being rendered in the shader.

II.2.2.1. Generating the data

This is very similar to the previous version, except that:

  • The rasterization only produces one texel for each quad (disregarding subdivision, that means producing ~14000 texels, which fit inside a 128x128 image).
  • The actual data is not a simple value but a more complicated structure requiring multiple images.

More precisely, I will use 4 RGBA images:

  • The first 2 will hold the encoded coefficients for the (approx) bilinear gradients, one image for each region.
  • The third will hold the encoded scales (see encoding in part I) and the encoded region ids (encoding/decoding a region id is simply dividing/multiplying by 32).
  • The fourth will hold the encoded coefficients and scale for the linear separator.

Thus, for each quad, the following algorithm will produce 4 RGBA “texels” (in UV Data space):

  1. For each “loop” (Blender terminology for topological half-edge or dart), determine a splitting point (I use dichotomic search with 16 steps). If there is no split, use the other end of the loop as default splitting point.

  2. Pick 2 splitting points (prioritize real splits over default splits).

  3. Use these 2 points to compute a linear separator in UV Data space.
    This is easily accomplished with projective geometry by computing the cross product of the homogeneous coordinates of the 2 points:

     sep_coeffs = (split1.u, split1.v, 1) x (split2.u, split2.v, 1)
    
  4. Apply the separator (dot product) to place the vertices and splitting points into 2 groups:

    • Group 1 contains both splitting points and all vertices with a positive dot product (vert.u, vert.v, 1) . sep_coeffs > 0
    • Group 2 contains both splitting points and all vertices with a negative dot product (vert.u, vert.v, 1) . sep_coeffs < 0

A linear separator is identical in form to a linear gradient (3 coefficients), and the separation line can be visualized as the region of space where the corresponding gradient is 0, while each of the surrounding half-planes holds either positive or negative gradient values.

  5. Attribute a region id to each group (I just use the region id of the first vertex in each group).
  6. Compute the gradient coefficients of each group using the method described in part I with extended homogeneous coordinates: (u, v, 1, u*v)
  7. Encode the 3 sets of coefficients (gradients for both groups and separator) and store the encoded coefficients, scales and region ids as described earlier.
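
In code, steps 3 and 4 are compact (a numpy sketch; the 2 splitting points are assumed to be already computed):

    import numpy as np

    def separator_coeffs(split1, split2):
        # Line through 2 UV points, as homogeneous coefficients (a, b, c).
        return np.cross([split1[0], split1[1], 1.0],
                        [split2[0], split2[1], 1.0])

    def side(point, sep):
        # > 0: group 1, < 0: group 2, ~0: on the separation line.
        return np.dot(sep, [point[0], point[1], 1.0])

    sep = separator_coeffs((0.2, 0.0), (0.2, 1.0))  # vertical line u = 0.2
    print(side((0.5, 0.5), sep))  # negative: this vertex goes in group 2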

II.2.2.2. Node setup

The node setup does the following:

  1. Decode the 3 sets of coefficients (2 bilinear gradients, 1 linear separator).
  2. Apply each of these sets to the current location in Data space (dot product with extended homogeneous coordinates).
  3. Decode the 2 region ids and create a region mask for each (the mask here is the binary value of the equality with a region id provided as user input).
  4. Use the output of the linear separator as a switch to select a pair, either (gradient 1, region 1) or (gradient 2, region 2).
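
In simplified form, here is a plain-Python equivalent of what the node setup computes (the dictionary layout and names are just for illustration; the decoding itself is the function from part I, extended to 4 coefficients):

    def sample_clothing(u, v, texels, wanted_region):
        # texels: the decoded data for the current quad.
        def bilinear(c):  # dot product with (u, v, 1, u*v)
            return c[0] * u + c[1] * v + c[2] + c[3] * u * v

        def linear(c):    # dot product with (u, v, 1)
            return c[0] * u + c[1] * v + c[2]

        # Step 4: the separator's sign switches between the 2 pairs.
        if linear(texels["sep"]) > 0.0:
            gradient, region = bilinear(texels["grad1"]), texels["region1"]
        else:
            gradient, region = bilinear(texels["grad2"]), texels["region2"]
        # Step 3: binary mask from equality with the user-provided region id.
        mask = 1.0 if region == wanted_region else 0.0
        return gradient, mask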

Steps 1 to 3 can be seen in the following picture:

Step 4 can be seen in the following picture:

The following pictures illustrate the various concepts that I have just presented on a part of the clothing proxymesh where a shell border between regions crosses through multiple quads.

Here is an isolated view of the shell:

In this color-coded view of the regions selected by the linear separator, the masking region id used as input is the one for the right arm, and region 1 and region 2 are defined independently for each quad as explained previously:

In this next picture, I overlay the band containing the upper values of the gradient in this region:

Finally, adding the textures and the other regions:

If you look closely, the supposedly linear separator seems “broken” in some of the quads. I am not completely sure, but this could be due to Blender’s triangulation:

II.3. Conclusion

In these last posts (part I and part II) I tried to explain the datamap technique that I use to approximate gradients in Mutatis’s shaders.

The advantages compared to a traditional high-precision 4k bitmap gradient map are:

  • No aliasing (staircase effect) regardless of the viewing distance.
  • Reduced data file size (200 MB → 500 KB).

The disadvantages are:

  • Complexity of data generation and of node setup.
  • Doesn’t work in Eevee (I don’t know why).

In addition to the improvements presented here, I am putting the finishing touches on a new clothing shell that should be able to hold 2 independent gradients. This will add a couple of (small) data files but it should then be possible to add more details to the clothing (like seams).