Wrinkle maps are possible, kind of...

Wrinkle maps can be done by mixing materials with nodes and using the lamp data node as an intensity factor. Watch in HD:

I did this by animating each control lamp and then triggering the animations with keyboard sensors, so obviously not the best setup. I can turn the sun lamp's energy and color on and off using Python, but that is not gradual. I would need the control lamps' energy and color to be tied to something else.

I think the simplest solution would be to read the values of the shape key drivers I use, which are tied to bone movements in the face, and then increase or decrease the lamp values based on the driver values. If driver values can't be read, then perhaps bone locations can? If not bone locations, then the locations of bone-parented objects.

I’m not nearly good enough at python right now to figure this out on my own, so I’m asking the community for help on this.

I’m an artist have mercy!

Simple nodes:

I use very intense normals; otherwise they don't show well. The lamp intensity only goes to 0.5 so that no one normal map will overcome the others; that way they mix well. Hope that makes sense.
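To illustrate the 0.5 cap numerically: a Mix node blends linearly between its two inputs by the factor, so a factor that never exceeds 0.5 means a wrinkle normal map can contribute at most half of the final result, leaving room for the others. A minimal sketch of that linear mix in plain Python (the node math is assumed to be a straight linear interpolation, which is what Blender's Mix node does in Mix mode):

```python
def mix(base, overlay, fac, fac_cap=0.5):
    """Linear mix of two per-channel values, like a Mix node.

    fac is clamped to fac_cap so no single overlay (wrinkle normal map)
    can fully replace the base, mirroring the 0.5 lamp-energy cap.
    """
    fac = max(0.0, min(fac_cap, fac))
    return base * (1.0 - fac) + overlay * fac

# One channel of a normal map: base 0.5 (flat), wrinkle overlay 1.0.
flat = mix(0.5, 1.0, 0.0)   # lamp off: stays flat, 0.5
half = mix(0.5, 1.0, 0.5)   # lamp at the 0.5 cap: 0.75
over = mix(0.5, 1.0, 2.0)   # factor clamped, same as 0.5: 0.75
```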



OMG ! it looks F-ing Amazing!

I understand shape keys and Python, but I have no idea where the nodes fit into this setup. I can't do much without understanding what is going on, but I'd advise against mapping your bone positions straight onto your lamp unless some factor accounts for the distance between your objects.

Since a shape key can be controlled with a property, mapping that keyframe value (an integer) to your lamp.energy (a float) would be much more straightforward than working with a vector.

The lamp color is used to mix in the different normal maps, guaramarx.

Using object color might be better than lamps…
But anyway, how about making a fully wrinkled texture and then using masks to mix it with a fully unwrinkled texture?

I think he is mixing between several wrinkle patterns like surprise, scowl and something else…

Cool!!! keep up the good work!!
By the way the face is creepy :stuck_out_tongue:

In that case I don’t think the lower performance of multiple materials will be a problem. I assume the game is going to be zoomed in close during any event which would make wrinkles evident, so everything else will be excluded from the shot.

I updated the first post.

Shape key values don't always seem to match the way wrinkles appear, in my opinion; e.g., 50% cheek up won't show 50% of the crow's feet wrinkles. It might take 80% cheek up to get the crow's feet showing at 50%. But if I could get it working with just a property tied to the shape keys, that would be great, as long as it works with my setup/pipeline, which is:

Collect mocap data > tie facial bones to mocap data > drive shape keys with bones (shape key drivers) > simple logic trigger to play the armature animation in game
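The nonlinear relationship described above (80% cheek up giving roughly 50% crow's feet) could be approximated with a simple power curve applied to the shape key value before it drives the lamp. This is only a sketch, not part of the setup above; the exponent 3.1 is chosen so that an input of 0.8 maps to roughly 0.5, and it would need tuning per shape key:

```python
def wrinkle_intensity(key_value, gamma=3.1):
    """Map a 0..1 shape key value to a 0..1 wrinkle intensity.

    gamma > 1 delays the wrinkles: with gamma = 3.1, an 80% shape key
    gives roughly 50% wrinkle intensity (0.8 ** 3.1 ~= 0.50).
    """
    v = max(0.0, min(1.0, key_value))
    return v ** gamma
```

The endpoints stay fixed (0 stays 0, 1 stays 1), so fully-on and fully-off expressions behave as before; only the ramp in between changes.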

I can only switch lights on and off with my code, so it's not much good.

import bge

scene = bge.logic.getCurrentScene()
cont = bge.logic.getCurrentController()
Face_Control = cont.owner
Light_1 = scene.objects['Light_1']
Trigger = Face_Control.sensors["Trigger"]

if Trigger.positive:
    Light_1.color = [1.0, 1.0, 1.0]
    Light_1.energy = 1.0
else:
    Light_1.color = [0.0, 0.0, 0.0]
    Light_1.energy = 0.0
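A gradual version of the script above could read a float game property (here a hypothetical `wrinkle_amount` on the controller's owner) instead of a hard on/off sensor, and scale both energy and color by it. The helpers below are plain Python; the lines that would use them are shown as comments, since they only run inside the game engine:

```python
def fade_energy(amount, max_energy=1.0):
    """Scale lamp energy by a 0..1 amount, clamped to that range."""
    a = max(0.0, min(1.0, amount))
    return a * max_energy

def fade_color(amount, on_color=(1.0, 1.0, 1.0)):
    """Fade a lamp color from black toward on_color."""
    a = max(0.0, min(1.0, amount))
    return [c * a for c in on_color]

# Inside the BGE script, something like:
# amount = Face_Control['wrinkle_amount']   # hypothetical float property
# Light_1.energy = fade_energy(amount)
# Light_1.color = fade_color(amount)
```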

Can someone tell me how to link a property with a shape key?

This was like the creepiest test video I’ve seen, but really good job! Details like this go into modern games.

If you make an animation where the keyframes control how much a shape key is on, you do this:
animate the shape key so it's in an 'action' on the armature

make an Action actuator controlling that action

go to the Action actuator, change it to property-controlled, and then whatever number is in the property will set the animation to that frame. I think that's what you're asking for.

so if you say
property_1 = 5

and the Action actuator's property is property_1,
the animation will go to frame 5

on input from sensor or function:

logic.sendMessage('happy', '', 'name_of_head_object_that_can_express')

on the head object’s script:

if message.positive and 'happy' in message.subjects:
    property = 5  # or wherever it starts

    turn_on_light()
    cont.activate(face_action_actuator)

1000h - I don’t think that will work. The character’s shape keys are driven by individual bone movements in the z and x locations. In the video there is no action actuator on the mesh head, the armature is the only one with the action actuator.

I want to put markers on my face (one for the left cheek, one for the right eyebrow, etc.) and capture a long video of me talking through all my facial expressions. Then I want to take the motion tracking data from Blender and tie it to individual bones in the face (these bones are only shape key controllers; the mesh is not weighted to them). I've tried this technique (not in the video) and it works; the process means I don't have to set up actions for "happy" or "sad", and can just do a performance on camera. It would take forever to make shape key animations for all the facial movements tied to dialogue in a game.

I'm looking for a way to get an individual bone's location in local space, take those coordinates, and tie them to the energy and color of the lamp controller (which controls the Lamp Data node in the material nodes that mixes in the normal maps) with a Python script. That way one script will take care of any and all expressions when it comes to wrinkles.

But if it’s possible to get the actual shape key driver values from the mesh head and tie those values to the energy and color of the control lamp that would probably be best.
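If the driver values turn out to be unreachable, the bone-location route is doable in the BGE: a KX_ArmatureObject exposes its pose bones via `.channels`, and each BL_ArmatureChannel has a `.location`. The mapping helper below is plain Python; the armature and bone names are hypothetical, and the commented lines show where the BGE calls would go. The 0.5 ceiling matches the lamp-energy cap mentioned earlier in the thread:

```python
def bone_to_energy(z, z_max=0.1, max_energy=0.5):
    """Map a control bone's local Z offset (0..z_max) to lamp energy.

    max_energy defaults to 0.5, matching the cap that keeps one
    normal map from overpowering the others.
    """
    t = max(0.0, min(1.0, z / z_max))
    return t * max_energy

# Inside the BGE script, something like:
# armature = scene.objects['FaceRig']           # hypothetical name
# channel = armature.channels['cheek_up_L']     # hypothetical bone
# Light_1.energy = bone_to_energy(channel.location[2])
```

One script could loop over a dict of bone-name-to-lamp-name pairs and apply the same mapping to every wrinkle lamp each frame.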

If I’m not making sense someone tell me and I’ll try to explain in a different way.

Dude, that is freaking awesome! : D I’ve been wanting to see this level of detail in games for years!
Kudos!
Keep at it!