Human Progress

Thanks @shalom

Other than to use Cursor to Selected and look at the 3D Cursor Location in the “N” panel, I actually don’t know myself how to access the world coordinates of a bone (there are still some glaring gaps in my Blender knowledge). @Anyone…?

You can easily export an .obj from Blender.

[quote=“ChrisJones, post:1104, topic:1143224”]
Other than to use Cursor to Selected and look at the 3D Cursor Location in the “N” panel, I actually don’t know myself how to access the world coordinates of a bone (there are still some glaring gaps in my Blender knowledge). @Anyone…?
[/quote] @particular

Select the bone in Edit Mode…
N-panel, under Item.

2 Likes

Ah yes forgot about that. I assume @particular wants the posed bone coords though (?).

Yeah, I figured that too… the only way is your solution, or snapping an empty to the bone and reading its location, etc.
There were some old Python scripts, but nothing updated to 2.8+.
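For reference, getting a posed bone's world coordinates just means applying the armature object's world matrix to the bone's pose-space head — in Blender 2.8+ that's `armature.matrix_world @ pose_bone.head` in the Python console. The underlying operation is an ordinary 4×4 affine transform; here's a minimal sketch without bpy (the matrix and point values are made up for illustration):

```python
# World position of a pose bone's head is the armature's world matrix
# applied to the bone's pose-space head. In Blender 2.8+:
#   world_head = armature.matrix_world @ pose_bone.head
# The operation itself is a plain 4x4 affine transform:

def transform_point(mat, p):
    """Apply a 4x4 row-major affine matrix to a 3D point."""
    x, y, z = p
    return tuple(
        mat[i][0] * x + mat[i][1] * y + mat[i][2] * z + mat[i][3]
        for i in range(3)
    )

# Example: armature translated +2 on X, bone head at (0, 1, 0) in pose space
matrix_world = [
    [1, 0, 0, 2],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
print(transform_point(matrix_world, (0, 1, 0)))  # (2, 1, 0)
```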

@ChrisJones have you revisited this by any chance? [I didn’t follow this thread for a long while, so I don’t know if you did]

It’s for the tension maps; it seems to have been updated.

1 Like

I don’t think it’s addressed the issues I mentioned here since I last looked, so still not really viable for face wrinkles. Apparently Animation Nodes does some form of tension mapping but I haven’t had a chance to look into that.

5 Likes

Yeah, I ended up giving up on waiting for an add-on to come together and hooked up masks to drivers myself. I think even if there were official support for vert-calculated tension, that might still be the best way to go.

1 Like

As mentioned (somewhere, I think…), the base mesh is based on front and profile composite photos, in an effort to arrive at something of a neutral average. Inevitably something was lost in the translation to 3D, and this has been throwing off my ability to discern whether the textures and surface details are holding up.

Since I don’t have access to scanners or a multitude of willing subjects to scan, the only alternative I could think of was to make a multitude of individual likenesses, each hand-matched to photos from multiple angles, and blended together into a true 3D composite. To get a relatively unbiased result, I determined this would require at least three males and three females for each of the three races. Without access to camera information, pupillary distance or any other measurements, even one likeness would be a chore let alone 18 of them. Only a nutcase would put themselves through that kind of gruelling ordeal.

Several weeks later…

Despite models and celebrities being the most logical candidates due to the abundance of photos available, I spent half the time hunting for decent close-ups (profiles in particular) with the same relaxed, neutral expression. A preference for ones with close to “ideal” proportions reduced my options even further.

Once I had found enough photos to proceed with a likeness, I attached them as translucent Image empties to cameras, and positioned them in the ballpark of where the photo might have been taken, using the eyes for calibration. Then I’d start sculpting, moving from camera to camera as matching progressed, correcting their positions and focal lengths as the accuracy improved.

Once they were all done I had to figure out a formula for blending them, which turned out to be as simple as mixing the first two 50/50, then each consecutive one by the amount they contribute to the overall pool (0.333 for the third, 0.25 for the fourth and so on).
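That 50/50-then-0.333-then-0.25 scheme is, mathematically, a running average — mixing the k-th face in at weight 1/k gives every face an equal share of the final blend. A quick sketch with made-up numbers to show the equivalence:

```python
# Blending n likenesses with weights 1/2, 1/3, 1/4, ... (each new face
# mixed in by its share of the pool) is equivalent to an unweighted average.

def running_blend(values):
    """Mix values pairwise: first two 50/50, then the k-th by 1/k."""
    blend = values[0]
    for k, v in enumerate(values[1:], start=2):
        w = 1.0 / k          # contribution of the newest face
        blend = blend * (1 - w) + v * w
    return blend

faces = [10.0, 20.0, 60.0]   # e.g. one coordinate from three likenesses
print(running_blend(faces))  # 30.0 == (10 + 20 + 60) / 3
```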

It was indeed an arduous, frustrating task, but for movie/fashion projects in particular I think this might actually have been an even better idea than using scans, since the base mesh now has top actors and models in its DNA… :wink:

25 Likes

That’s amazing! I love the idea of a neutral face. Perhaps the key to studying human anatomy also lies in reconstructing it…

1 Like

God, this is amazing work.

Also, you poor insane bastard.

Hands down the finest example of realism I’ve seen of a CG human, and that video of the eye was especially insane. This is exactly the sort of thing I wish Blender had by default, something to take the crown away from MetaHuman Creator.

Absolutely incredible. I’ve the deepest respect for your obvious obsession with quality and detail :+1:

5 Likes

Hi Chris, found myself in this thread after your latest update and wanted to express my appreciation for your work; your dedication is inspiring :slight_smile:

I also wanted to check on the issues you mentioned with the tension maps that you believe haven’t been addressed (I rewrote and extended the add-on over in the other thread). I believe the issues in your post pertained to creasing and sensitivity. The creasing was reported to be fixed in 2.91, and I added a feature for spreading stretch and compression independently that might partially address the other problem. I’m going to try to maintain the add-on, so it would be good to know if you’re referring to any other issues. Thanks!

8 Likes

Hey Chris, I wanted to ask how you managed to get the specular highlights in the “closerest” shot to look so realistic! Any insight? :smiley:

Thanks all. :slight_smile:

@calanir I keep losing track of this, but last I remember it seemed like the linear falloff between weights was responsible for the creases. Did something change in Blender along those lines (or was it not that after all)? Good to hear the add-on is still being worked on, I’ll check it out next time I get a chance.
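For anyone following along: the crease issue with linear falloff comes down to the derivative. A linear ramp keeps a constant slope right up to its endpoints, so it meets the flat regions outside the ramp with a sharp kink, whereas an eased (smoothstep-style) falloff has zero slope at both ends. A small illustration of the general idea (not the add-on’s actual code):

```python
# A linear weight ramp has slope 1 right up to its endpoints, so it meets
# the flat regions outside the ramp with a sharp kink (a visible crease).
# Smoothstep eases in and out, with zero slope at both ends.

def linear_falloff(t):
    return t

def smoothstep_falloff(t):
    return t * t * (3.0 - 2.0 * t)  # slope is 0 at t=0 and t=1

# Slope at the start of the ramp, via a finite difference:
eps = 1e-4
linear_slope = (linear_falloff(eps) - linear_falloff(0.0)) / eps
smooth_slope = (smoothstep_falloff(eps) - smoothstep_falloff(0.0)) / eps
print(linear_slope)  # 1.0 -> kink where the ramp begins
print(smooth_slope)  # near 0 -> blends smoothly into the flat region
```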

@AlanIhsan I don’t think I did anything special there, although I’m using Roughness in Principled BSDF for reflectivity rather than Specular. Otherwise it’s probably more to do with having a light on the opposite side of the object pointing towards the camera, and the shallow DOF.

Hey @ChrisJones, do you know when the tear line will be added to the Gumroad model? Also, is there any update on the face textures you could sell for the model?

Oh, and I also have a question: did you put some thickness on the tear line, or is it just a plane (I mean, an infinitely thin surface)?

I just thought of something: what if you made the displacement, roughness and bump textures yourself for the realism, and matched the UVs to the Daz/CC3 UVs? Then people who use your rig + textures could just use a diffuse map from Daz/CC3. That would make it easier to commercialize, wouldn’t it? It would also be easier for you, since you wouldn’t have to make different diffuse maps for different skin colors.
We could still use custom maps for further modifications anyway.

I’m afraid I haven’t made a lot of progress with the textures since my last report, as I’ve mainly been preparing the base mesh for new UVs, which need to be locked in before I can lay another hand on the texture maps.

The tear line is waiting until I’ve decided how to distribute it, which will probably be after I’ve decided how to distribute the textures.

It’s infinitely thin.

I’ve considered the possibility of alternative UV layouts for third party textures, but recently discovered that each additional UV map affects viewport performance quite a bit (strangely enough). Aside from that, I’m trying not to get too caught up in spending time on things specifically to make it more marketable, but rather on just making the thing as good as it can be in its own right (which necessitates custom textures). It might be nice to have a place where others could contribute those sorts of expansions though.

You might want to just render out some diffuse passes of your Daz/CC3 character of choice and look up “blender projection painting” on YouTube to make textures yourself. Or use the Daz/CC3 textures as a “stencil” and texture in Blender normally.

Or you could try to make the uv layouts match Daz/CC3 yourself, but if you do, definitely don’t try to use blender’s uv tools to do it because they’re awful, and it’ll be a nightmare (but if you do want Daz/CC3 UVs for this character, Rizom3D has uv symmetry and pinning that is actually useful).

I think that wouldn’t be an issue though, since it could easily be solved by instructing users to delete the UV maps they’re not using, right? Taking the time to match the UVs perfectly to someone else’s would be hellish and slow even in software that’s good at UVs (definitely don’t try it in Blender!), so I’m not suggesting you actually do that. I only bring it up because I made two UV sets for my own project (with my own model) so I can choose between UDIMs or a traditional layout.